Title: Contribution of spicules to solar coronal emission

Abstract: Recent high-resolution imaging and spectroscopic observations have generated renewed interest in spicules' role in explaining the hot corona. Some studies suggest that some spicules, often classified as type II, may provide significant mass and energy to the corona. Here we use numerical simulations to investigate whether such spicules can produce the observed coronal emission without any additional coronal heating agent. Model spicules consisting of a cold body and hot tip are injected into the base of a warm ($0.5$ MK) equilibrium loop with different tip temperatures and injection velocities. Both piston- and pressure-driven shocks are produced. We find that the hot tip cools rapidly and disappears from coronal emission lines such as Fe XII $195$ and Fe XIV $274$. Prolonged hot emission is produced by pre-existing loop material heated by the shock and by thermal conduction from the shock. However, the shapes and Doppler shifts of synthetic line profiles show significant discrepancies with observations. Furthermore, spatially and temporally averaged intensities are extremely low, suggesting that if the observed intensities from the quiet Sun and active regions were solely due to type II spicules, one to several orders of magnitude more spicules would be required than have been reported in the literature. This conclusion applies strictly to the ejected spicular material. We make no claims about emissions connected with waves or coronal currents that may be generated during the ejection process and heat the surrounding area.

PDF: https://export.arxiv.org/pdf/2208.05240
\title{Contribution of spicules to solar coronal emission}
\correspondingauthor{Shanwlee Sow Mondal}
\email{[email protected]\\
[email protected]}
\author[0000-0003-4225-8520]{Shanwlee Sow Mondal}
\affil{Astronomy and Astrophysics Division, Physical Research Laboratory, Ahmedabad 380009, India}
\affil{Indian Institute of Technology, Gandhinagar, Gujarat 382355, India}
\author[0000-0003-2255-0305]{James A. Klimchuk}
\affil{Heliophysics Science Division, NASA Goddard Space Flight Center, 8800 Greenbelt Rd., Greenbelt, MD 20771, USA}
\author[0000-0002-4781-5798]{Aveek Sarkar}
\affil{Astronomy and Astrophysics Division, Physical Research Laboratory, Ahmedabad 380009, India}
\keywords{methods: numerical, Sun: corona, Sun: chromosphere, Sun: atmosphere, Sun: magnetic fields, Sun: UV radiation}
\section{Introduction}\label{sec:introduction}
Despite decades of sustained effort, many questions about coronal heating remain unanswered~\citep{Klimchuk_2006, Klimchuk_2015, Viall_2021}. Even the basic mechanism is a matter of debate. Although all coronal mass originates in the chromosphere, no agreement has been reached on how chromospheric mass is heated and transported up to the corona. An early observation of the solar chromosphere reported the existence of numerous small jet-like features~\citep{Secchi1877}, which were later named~\textit{spicules}~\citep{Roberts45}. With improved observations, spicules were seen to propagate upwards~\citep{Pneuman_1977, Pneuman_1978} with speeds of $20$--$50$ km s$^{-1}$. They were also seen to survive for about $5$ to $10$ minutes and to carry almost $100$ times the mass needed to balance the mass lost from the corona through the solar wind. Further studies of spicules~\citep{Athay_1982} suggested that they play a pivotal role in transferring energy from the inner layers of the solar atmosphere to the lower corona. However, the proposal was not pursued further because these traditional spicules lack emission in transition region (TR) and coronal lines~\citep{Withbroe_1983}.
About a decade ago, using high-resolution imaging and spectroscopic observations from the Hinode and Solar Dynamics Observatory missions, \cite{Pontieu_2007, Pontieu_2011} discovered jet-like features traveling from the chromosphere to the corona. These features appear all over the Sun, with lifetimes of $10$--$150$ s and velocities of $50$--$150$ km s$^{-1}$. \cite{Pontieu_2007} termed them type II spicules and suggested that they are capable of connecting the relatively cool solar chromosphere with the hot corona.
Since their discovery, multiple observations have identified type II spicules and reported on their characteristics. However, nothing conclusive has yet been established about their origin. Only recently,~\cite{samanta_19} identified the near-simultaneous appearance of spicules and the emergence of photospheric magnetic bipoles. The tips of these spicules eventually appear in coronal passbands, suggesting that the plasma is heated to coronal temperatures. A 2.5D radiative MHD simulation of type II spicules~\citep{Martinez_2017} has reproduced many of their observed features. This simulation also suggests that ambipolar diffusion in the partially ionized chromosphere may play a crucial role in the origin of type II spicules. On the other hand, recent work~\citep{Dey_22} based on radiative MHD simulations and laboratory experiments suggests that quasi-periodic photospheric driving in the presence of vertical magnetic fields can readily generate spicules in the solar atmosphere. Their model, devoid of any chromospheric physics, can still account for the wide variety of spicules seen in observations.
The evolution of spicules during their propagation is understood through multi-wavelength studies~\citep[e.g.,][]{Pontieu_2011, Skogsrud_2015}. Observations by \cite{Pontieu_2011} suggest that spicule plasma emanating from the chromosphere is heated to transition region (TR) temperatures and even up to coronal temperatures. Such heating may happen for two reasons:
\begin{enumerate}
\item[(a)] Spicule propagation can produce a shock that compresses the material lying ahead of it. In this scenario, it is not the ejected spicule material but the pre-existing coronal material in front of it, compressed by the shock, that contributes to the hot emission~\citep{Klimchuk_2012, Petralia_2014};
\item[(b)] The tip of the spicule may be heated on-site during the ejection process, through impulsive heating, and produce emission in coronal lines. In this scenario, the emission indeed comes from the ejected spicule material~\citep{Pontieu_2007}.
\end{enumerate}
The radiative MHD simulations of \cite{Martinez_2018} suggest that spicules and surrounding areas get heated by ohmic dissipation of newly created currents and by waves. Note, however, that the currents in the simulations are relatively large-scale volume currents and would not be dissipated efficiently at the many orders of magnitude smaller resistivity of the real corona. Heating in the real corona involves magnetic reconnection at thin current sheets, of which there are at least $100,000$ in a single active region \citep{Klimchuk_2015}. It is not known whether the ohmic heating in the simulations is a good proxy for the actual reconnection-based heating.
\cite{Klimchuk_2012} considered a simple analytical model for the evolution of spicules with a hot tip. He argued that if a majority of the observed coronal emission were from such hot tips, it would be inconsistent with several observational features (see also \citealt{Tripathi_2013, Patsourakos_2014}). The result was further supported by the hydrodynamic simulations of \cite{Klimchuk_2014}, which studied the response of a static loop to impulsive heating in the upper chromosphere; such heating produces localized hot material that rapidly expands upward and might represent the hot tip of a spicule. Noticing the inability of a single hot spicule tip to explain the observations, \cite{Bradshaw_2015} further explored the role of frequently recurring chromospheric nanoflares. The study was motivated by the suggestion that rapidly repeating type II spicules might accumulate enough hot plasma to explain the coronal observations \citep{Pontieu_2011}. However, the simulations were still inconsistent with observations.
In both the analytical model and the simulations, the dynamics of the hot material is due entirely to an explosive expansion from the locally enhanced pressure. There is no additional imposed force to bodily eject the material. The consequences of such a force were investigated by \cite{Petralia_2014}, who injected cold, dense chromospheric material into the corona with an initial velocity. Their results indicate that the production of a shock can give rise to coronal emission. However, the emission comes from the pre-existing coronal material rather than the spicule itself; the injected material has no hot component.
The studies mentioned above have investigated the dynamics of either the hot tip of a spicule without any initial velocity or a spicule with a cold tip and a finite injection velocity. Our work combines these two effects. The spicule is injected into a stratified flux tube at high velocity and consists of both a hot tip and a cold body ($T = 2 \times 10^{4}$~K). We investigate whether most of the observed hot emission from the corona can be explained by such spicules. Through forward modelling, we quantitatively compare the simulations with observations to answer this question.
The rest of this paper is organized as follows. The numerical setup is described in Section~\ref{sec:setup}. We report on the simulation results in Section~\ref{sec:result}. Finally we summarize and discuss our results in Section~\ref{sec:summary}.
\section{Numerical Setup}\label{sec:setup}
Spicules are seen to follow magnetic field lines. To simulate their dynamics, we consider a straight 2D magnetic flux tube with a uniform $10$~G magnetic field. We impose a gravity corresponding to a semi-circular loop, such that the vertical component of the gravitational force is maximum at both ends and smoothly becomes zero in the middle of the tube. Both ends of the tube are embedded in the chromosphere. The loop is symmetric about the center, which corresponds to the apex. We use Cartesian coordinates, and therefore the loop actually corresponds to an infinite slab. This is a reasonable approximation because we are interested in how the plasma evolves within an effectively rigid magnetic field, appropriate to the low-$\beta$ corona. The slab dimension corresponding to the loop length is $100$~Mm; the other dimension is $0.42$~Mm, but this is not relevant. Rigid-wall boundary conditions are imposed at the sides, and the evolution is essentially equivalent to 1D hydrodynamics, as discussed below. The first $2$~Mm at both ends of the loop are resolved with a fine uniform grid of $10$~km cells, while the coronal part is resolved with a stretched grid containing $1500$ cells on each side. The fine grid close to the footpoints allows us to resolve the steep transition region more accurately.
The spicule simulation begins with an initial static equilibrium atmosphere obtained with the double relaxation method described in Appendix~\ref{append:steady_state}. We choose a relatively low temperature and low density loop because we wish to test the hypothesis that the observed coronal emission comes primarily from spicules. The apex temperature of the loop is $0.5$ MK. Figure~\ref{initial_density_temp} shows the background loop profile that is used in most of our simulations. The chromosphere is $470$ km deep, approximately half a gravitational scale height, and merely acts as a mass reservoir. Detailed chromospheric physics, such as partial ionization and optically thick radiation, is not implemented in the code, as we are solely interested in coronal emission. We use a modified radiative loss function to maintain a chromospheric temperature near $2 \times 10^{4}$~K, as described in Appendix~\ref{append:steady_state}.
The propagation of a spicule in the loop is emulated through an injection of dense material from the left footpoint. The injected material follows specified density, velocity and temperature profiles in time which are described below. At this injection boundary, all plasma parameters, except the density and pressure, are set to their initial values once the injection is over. The density is set to have the prescribed value at the end of the injection phase, and pressure is determined from the ideal gas equation of state. On the other hand, at the right footpoint, all the plasma parameters maintain the initially prescribed values throughout the entire simulation.
We solve the compressible MHD equations inside our simulation domain using the PLUTO code \citep{2007ApJS..170..228M} with an ideal-gas equation of state. Plasma inside the domain is cooled by radiation and by field-aligned thermal conduction. The CHIANTI \citep{chianti} radiative loss function for coronal abundances is used to model the radiative cooling. For anisotropic conduction, the thermal conductivity $\kappa_{\parallel} = 5.6 \times 10^{-7}\, T^{5/2}$ erg s$^{-1}$ K$^{-1}$ cm$^{-1}$ is applied along the magnetic field lines, whereas $\kappa_{\perp}$ is taken to be zero. The saturated conductive flux used in PLUTO is $F_{sat} = 5\phi \rho C_{s}^{3}$, where $C_{s}$ is the isothermal sound speed and the free parameter $\phi$ is set to $0.9$, which represents effective thermal conduction in the system. The MHD equations are solved in Cartesian coordinates.
The photospheric magnetic flux is observed to be localized and clumpy, whereas in the corona it fills space uniformly. This behavior of the magnetic flux at different layers of the solar atmosphere dictates that flux tubes expand laterally at the junction of the chromosphere and corona, where the plasma $\beta$ changes from being greater than one to less than one. This type of flux-tube expansion is realized in 2D MHD simulations of coronal loops~\citep[e.g.,][]{Guarrasi_14}, and it has also been incorporated in 1D and 0D models through an area expansion factor \citep{Mikic_13, Cargill_22}. We do not include expansion in our model because we are interested in the spicule dynamics in the corona, and this simplification should not affect our results significantly. We note that the plasma $\beta$ is less than unity throughout the evolution, so no expansion from the spicule injection would be expected. Additionally, the initial atmosphere and injection profile are uniform along the horizontal (cross-field) axis. Hence the plasma remains nearly uniform in the lateral direction, effectively making our simulations similar to 1D hydrodynamic simulations. Nevertheless, we ran all our computations with the 2D MHD setup because of our familiarity with the powerful PLUTO code. The limited number of grid points in the cross-field direction keeps the computational demands relatively low.
The two main components of our simulations are: (a) a background loop in hydrostatic and energy equilibrium representing a tenuous coronal atmosphere, and (b) the propagation of injected material resembling spicule propagation along the loop. Our experimental spicule consists of a hot dense tip followed by cold dense material injected from the base of the model. Here we investigate how changing the temperature of the hot tip and the injection speed alters the intensities and profiles of the Fe XII ($195$ \AA) and Fe XIV ($274$ \AA) coronal spectral lines.
We have performed six simulations with spicule tip temperatures of $2$, $1$, and $0.02$ MK, in each case followed by cold material with a temperature of $0.02$ MK. Each tip temperature is run with two injection velocities: $50$ and $150$ km s$^{-1}$ (see Table~\ref{tab:table1}). Since we assume that spicules may be generated deep inside the chromosphere, we inject high-density material into the loop to emulate the spicule. The density scale height of the spicule is chosen to be six times the gravitational scale height at the base of the equilibrium loop. To impose these conditions on the ejected spicule, its density follows a time profile given by,
\begin{equation}
\label{rho_profile}
\rho(t)=
\begin{cases}
\rho_{0}\exp\Big[\frac{v(t) t}{6H}\Big], & \ 0 < t \le t_{5} \\
\rho(t_{5}), & \ t_{5} < t \\
\end{cases}~,
\end{equation}
where $\rho(t)$ and $\rho_{0}$ are the injected density at time $t$ and the base density of the equilibrium loop, respectively. The time profile of the injection velocity is given by,
\begin{equation}
\label{vel_profile}
v(t)=
\begin{cases}
v_{inj} \times \Big(\frac{t}{t_{1}}\Big), & \ 0 < t \le t_{1} \\
v_{inj}, & \ t_{1} < t \le t_{4}\\
v_{inj} \times \Big(\frac{t_{5}-t}{t_{5}-t_{4}}\Big), & \ t_{4} < t \le t_{5} \\
0, & \ t_{5} < t \\
\end{cases}~,
\end{equation}
where $v_{inj}$ corresponds to $50$ or $150$ km s$^{-1}$ (depending on the simulation). $H$ represents the gravitational scale height given by
\begin{equation}
H = \frac{k_{B}T_{base}}{\mu m_{H} g_{\odot}}~,
\end{equation}
where $T_{base}=0.02$~MK is the base temperature of the loop, $k_{B}$ is the Boltzmann constant, $m_{H}$ and $g_{\odot}$ are the mass of the hydrogen atom and the solar surface gravity, respectively, and $\mu = 0.67$ is the mean molecular weight of the plasma.
The temperature of the ejected spicule also follows a time profile given by
\begin{equation}
\label{tmp_profile}
T(t) =
\begin{cases}
T_{base} + (T_{tip}-T_{base})\times\Big(\frac{t}{t_{1}}\Big), & \ 0 < t \le t_{1} \\
T_{tip}, & \ t_{1} < t \le t_{2}\\
T_{tip} + (T_{base}-T_{tip})\times\Big(\frac{t-t_{2}}{t_{3}-t_{2}}\Big), & \ t_{2} < t \le t_{3} \\
T_{base}, & \ t_{3} < t \\
\end{cases}~,
\end{equation}
where $T_{base}$ is the temperature of the cold material (bottom part) of the spicule ($=0.02$ MK) and $T_{tip}$ is the spicule tip temperature which can take values $2$, $1$, or $0.02$~MK depending on the run being performed. In all the above equations, times $t_{1}$, $t_{2}$, $t_{3}$, $t_{4}$ and $t_{5}$ are chosen to be $2$, $10$, $12$, $90$ and $100$ s, respectively. Times are chosen so that the top $10\%$ of the spicule's body emits in coronal lines as is generally observed~\citep{Pontieu_2011}. The total injection duration is also motivated by the observed lifetime of type II spicules \citep{Pontieu_2011}.
The ramping up of velocity, density, and temperature ensures a smooth entry of the spicules into the simulation domain. Similarly, the ramping down at the end of the injection avoids any spurious effects. Figure~\ref{pulse} shows one such example of velocity, density, and temperature profiles when the spicule is ejected with velocity $150$ km s$^{-1}$, and its hot tip is at $2$~MK. Likewise, different injection time profiles have been used for other injection velocities and temperatures. The initial equilibrium loop remains the same in all cases, unless specified. \\
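The driver profiles above can be evaluated directly. The Python sketch below, an illustration rather than the actual PLUTO boundary driver, implements equations~\eqref{rho_profile}--\eqref{tmp_profile} for the Run1 parameters ($T_{tip}=2$~MK, $v_{inj}=150$ km s$^{-1}$) in cgs units; the physical constants are standard values, not taken from the paper.

```python
import numpy as np

# Physical constants (cgs); standard values, not from the paper
k_B = 1.380649e-16    # Boltzmann constant [erg K^-1]
m_H = 1.6726e-24      # hydrogen mass [g]
g_sun = 2.74e4        # solar surface gravity [cm s^-2]

# Model parameters quoted in the text (Run1)
mu = 0.67             # mean molecular weight
T_base = 2.0e4        # cold-body / base temperature [K]
T_tip = 2.0e6         # hot-tip temperature [K]
v_inj = 150.0e5       # injection speed [cm s^-1]
t1, t2, t3, t4, t5 = 2.0, 10.0, 12.0, 90.0, 100.0  # ramp times [s]

# Gravitational scale height at the loop base
H = k_B * T_base / (mu * m_H * g_sun)  # ~9e7 cm, i.e. ~900 km

def v(t):
    """Injection velocity time profile (equation for v(t))."""
    if t <= 0.0:
        return 0.0
    if t <= t1:
        return v_inj * t / t1
    if t <= t4:
        return v_inj
    if t <= t5:
        return v_inj * (t5 - t) / (t5 - t4)
    return 0.0

def rho(t, rho0=1.0):
    """Injected density relative to the base density rho0."""
    tt = min(t, t5)  # density is held at rho(t5) after the injection ends
    return rho0 * np.exp(v(tt) * tt / (6.0 * H))

def T(t):
    """Injected temperature time profile."""
    if t <= 0.0:
        return T_base
    if t <= t1:
        return T_base + (T_tip - T_base) * t / t1
    if t <= t2:
        return T_tip
    if t <= t3:
        return T_tip + (T_base - T_tip) * (t - t2) / (t3 - t2)
    return T_base
```

For these parameters $H \approx 900$ km, consistent with the statement above that the $470$ km deep chromosphere is roughly half a gravitational scale height.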
\section{Results}\label{sec:result}
The large velocity of the spicule and its high pressure compared to the ambient medium give rise to a shock, which propagates along the loop and heats the material ahead of it. Depending on the sound speed of the ambient medium (i.e., the pre-existing loop plasma) and the temperature of the injected spicule material, the shock is of one of two kinds: (a) a piston-driven shock, in which case the shock speed is nearly equal to the injection speed (e.g., the simulations with $T_{tip} = 0.02$~MK), and (b) a pressure-driven shock, in which case the shock speed exceeds the injection speed (e.g., when $T_{tip} = 2$ or $1$~MK). Emission from the shock-heated plasma differs depending on the nature of the shock.
We compare different simulations to understand the coronal response to spicules with different injection parameters. Our discussion starts with the results from Run1 where the hot tip of the injected spicule has a temperature $T_{tip} = 2$~MK and injection velocity $v = 150$ km s$^{-1}$. The injection profiles are those already shown in Figure~\ref{pulse}.
\subsection{Dynamics and heating}
Insertion of dense, high-temperature plasma ($T_{tip} = 2$ MK) with high velocity ($v = 150$ km s$^{-1}$, Run1) into the warm loop produces a shock. Figure~\ref{shock} shows the temperature, density and plasma velocity along the loop at $t = 70$~s. The dashed lines mark the location of the shock front. It is evident from the figure that the compression ratio exceeds that of an adiabatic shock, which is always $\leq 4$. To understand the nature of the shock, we test it against the Rankine-Hugoniot (RH) conditions, which read
\begin{equation}\label{RH_rho_vel}
\frac{\rho_{2}}{\rho_{1}} = \frac{\gamma + 1}{\frac{2}{M^{2}} + (\gamma - 1)} = \frac{v_{1}}{v_{2}}~.
\end{equation}
Here $\rho_{1}$ and $\rho_{2}$ are the pre- and post-shock plasma mass densities respectively, and $v_{1}$ and $v_{2}$ are likewise the pre- and post-shock plasma velocities in the shock rest frame. Furthermore, $\gamma$ is the ratio of the specific heats, $c_{s} = \sqrt{\frac{\gamma P_{1}}{\rho_{1}}}$ is the upstream sound speed, where $P_{1}$ is the upstream pressure, and finally $M=v_{1}/c_{s}$ is the upstream Mach number in the shock reference frame. Injection of high temperature plasma accelerates the shock with a speed much larger than the injection speed of the spicule material giving rise to a pressure driven shock front.
Figure~\ref{shock} demonstrates an abrupt change in the plasma variables at the shock. The shock speed at this instant is $562$ km s$^{-1}$. The figure also shows that at the discontinuity ($s = 36.9$~Mm, $s$ being the coordinate along the loop), the density and velocity ratios are $10.7$ and $0.094$, respectively, in the shock rest frame. The inverse relationship of these ratios indicates a constant mass flux across the shock front, in accordance with equation~\eqref{RH_rho_vel}. The Mach number in the shock frame at the same location is $3.37$. With this Mach number, the RH condition reproduces the density and velocity ratios of the simulation ($10.5$ and $0.095$) when $\gamma = 1.015$; in other words, consistency is achieved with this value of $\gamma$. Being close to unity, it implies a nearly isothermal shock. Efficient thermal conduction carries a large heat flux from the shock front to its surroundings, giving rise to the locally smooth, near-isothermal temperature profile in Figure~\ref{shock}. It is worth mentioning here that the RH jump conditions do not account for heat losses or gains, such as thermal conduction or radiation. However, our system includes these sink terms in the energy equation, and it is because of these losses that the shock jump is larger. Limited thermal conduction would bring the jump closer to the adiabatic approximation, but would also affect the thermal profile ahead of and behind the shock. Our result is consistent with that of~\cite{Petralia_2014}, where the signature of shocks in front of the spicule was reported. As we show later, the initially hot material in the spicule tip cools dramatically. Only ambient material heated by the shock is hot enough to produce significant coronal emission.
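This consistency check is simple to reproduce. A minimal Python sketch, using only the jump condition of equation~\eqref{RH_rho_vel} and the Mach number quoted in the text:

```python
def rh_density_ratio(M, gamma):
    """Rankine-Hugoniot density jump: rho2/rho1 = (gamma + 1) / (2/M^2 + (gamma - 1))."""
    return (gamma + 1.0) / (2.0 / M**2 + (gamma - 1.0))

M = 3.37            # upstream Mach number in the shock frame (Run1, t = 70 s)
gamma_eff = 1.015   # effective ratio of specific heats inferred in the text

r = rh_density_ratio(M, gamma_eff)  # ~10.5, close to the simulated jump of 10.7
v_ratio = 1.0 / r                   # ~0.095, by mass-flux conservation

# For comparison, an adiabatic shock (gamma = 5/3) at the same Mach number
# stays below the classical limit of 4:
r_ad = rh_density_ratio(M, 5.0 / 3.0)  # ~3.2
```

The near-unity effective $\gamma$ is what allows the compression to exceed the adiabatic limit of $4$.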
Interestingly, the high compression ratio at the shock front depends more on the temperature difference, and the corresponding pressure difference, between the injected and ambient plasma than on the injection velocity. Table~\ref{tab:table1} shows how the compression ratio (i.e., the shock strength) varies when the tips of the spicules are at different temperatures and are injected with different velocities. As mentioned earlier, the injection conditions give rise to two different types of shocks. When the injected plasma temperature is high (e.g., spicule tips with temperatures of $2$ and $1$~MK), the excess pressure gives rise to a pressure-driven shock. On the other hand, injection of cold material (tip temperature equal to that at the loop footpoint, i.e., $0.02$~MK) produces a piston-driven shock. Our runs exhibit both kinds of shocks. For example, when we inject spicules with a fixed injection velocity of $150$~km s$^{-1}$ but with different tip temperatures (viz.\,$2$, $1$ and $0.02$~MK), the average shock speeds are $520$, $400$ and $210$ km s$^{-1}$, respectively (see Figure~\ref{shock_speed}). The first two shocks are pressure-driven, as the average shock speeds exceed the injection speed by a wide margin. The third shock maintains a speed close to the injection speed and can be categorized as piston-driven. The shock speed depends not only on the injected tip temperature, but also on the properties of the ambient material through which it propagates, which vary along the loop. This is discussed further in Appendix~\ref{append:shock_speed}.
\begin{deluxetable}{cccc}
\tablenum{1}
\tablecaption{Dependence of compression ratio on the injected hot tip temperature and speed.\label{tab:table1}}
\tablehead{
\colhead{Run} & \colhead{\hspace{1cm}$T_{tip}$\hspace{1cm}} & \colhead{\hspace{1cm}$v$\hspace{1cm}} & \colhead{\hspace{1cm}Compression\hspace{1cm}}\\
\colhead{} & \colhead{\hspace{1cm}(MK)\hspace{1cm}} & \colhead{\hspace{1cm}(km s$^{-1}$)\hspace{1cm}} & \colhead{\hspace{1cm}ratio\hspace{1cm}}
}
\startdata
1 & \hspace{1cm} 2 & \hspace{1cm}150 & \hspace{1cm}11.2\\
2 & \hspace{1cm}2 & \hspace{1cm}50 & \hspace{1cm}8.9\\
3 & \hspace{1cm}1 & \hspace{1cm}150 & \hspace{1cm}8.7\\
4 & \hspace{1cm}1 & \hspace{1cm}50 & \hspace{1cm}6.2\\
5 & \hspace{1cm}0.02 & \hspace{1cm}150 & \hspace{1cm}3.6\\
6 & \hspace{1cm}0.02 & \hspace{1cm}50 & \hspace{1cm}1.7\\
\enddata
\end{deluxetable}
\subsection{Loop emission}\label{sec:forward}
Thermally conducted energy from the shock front heats the material lying ahead of it. Therefore, a magnetic flux tube subjected to spicule activity could produce hot emission from the newly ejected material at the spicule's hot tip and from pre-existing coronal material in both the pre- and post-shock regions. We now examine the contributions from these three different sources. We identify the leading edge of the hot spicule tip by finding the location in the loop where the column mass integrated from the right footpoint equals the initial column mass of the loop. Recall that the spicule is injected from the left footpoint. The spicule compresses the material in the loop but does not change its column mass. We identify the trailing edge of the hot material in a similar manner, but using the column mass at time $t=10$~s, when the injection of hot material ceases and the injection of cold material begins.
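The column-mass bookkeeping described above can be sketched as follows; the function name and grid are illustrative, not taken from our analysis code.

```python
import numpy as np

def edge_position(s, rho, target_column_mass):
    """Return the coordinate along the loop where the column mass integrated
    from the RIGHT footpoint first drops to `target_column_mass`.

    s   : coordinates along the loop [cm], increasing from left to right
    rho : mass density on the same grid [g cm^-3]

    Because the spicule is injected from the left and only compresses the
    pre-existing plasma, the column mass to the right of a material
    interface is conserved, so this locates a Lagrangian edge.
    """
    ds = np.gradient(s)
    # column mass integrated from the right end, as a function of position
    col = np.cumsum((rho * ds)[::-1])[::-1]
    # first grid point (from the left) where the remaining column mass
    # has fallen to the target value
    idx = np.argmax(col <= target_column_mass)
    return s[idx]
```

For the leading edge of the hot tip, the target is the initial column mass of the whole loop; for the trailing edge, it is the column mass at $t = 10$~s, when the hot injection ends.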
Figure~\ref{fe12_10_70} shows the emission along the loop in the Fe XII and Fe XIV lines at $t = 10$ and $70$~s, evaluated from Run1. The orange region is the hot spicule tip, while the red region is the shock-heated material ahead of it. The shock front is the dot-dashed black vertical line. The dark orange curve is the temperature in units of $10^5$ K, with the scale on the left. The blue curve is the logarithm of density, with the scale on the right. The yellow and green curves are the logarithms of the Fe XII and Fe XIV intensities, respectively, with the scale on the left. The variation of intensity is enormous; a difference of $10$ on this logarithmic scale corresponds to $10$ orders of magnitude. The intensity is what would be observed by the Extreme-ultraviolet Imaging Spectrometer (EIS; \citealt{Culhane_2007}) onboard Hinode \citep{Kosugi_2007} if the emitting plasma had a line-of-sight depth equal to the EIS pixel dimension, i.e., if observing an EIS pixel cube. This can be interpreted as a normalized emissivity.
At $t=10$~s, the emission in both lines comes primarily from the injected hot plasma (orange region). On the other hand, at $t=70$~s it comes primarily from the shock heated plasma (red region). The transition happens very early on. Shortly after the injection of the hot material stops ($t = 10$ s), emission from the shock heated material starts dominating the total emission from the loop. This is evident in the time evolution plot of the loop-integrated emission in Figure~\ref{emission_evolution_fe12_fe14}. Shown are the intensities that would be observed by EIS, assuming that the loop has a cross section equal to the pixel area and that all of the loop plasma is contained within a single pixel. This corresponds to a loop that has been straightened along the line of sight and crudely represents a line of sight passing through an arcade of similar, out of phase loops. The black curve shows the evolution of the total emission contributed by the spicule and pre-existing plasma. Subtracting the spicule component (red curve) from the total gives the evolution of the emission coming solely from the pre-existing (non-spicule) loop material (green curve). Soon after the hot tip of the spicule completes its entry into the loop (at $t=10$~s), the emission from the spicule falls off rapidly. This is because the hot material at the spicule tip cools rapidly as it expands in the absence of any external heating. It is far too faint to make a significant contribution to the observed coronal emission, as emphasized earlier by \cite{Klimchuk_2012} and \cite{Klimchuk_2014}.
For better comparison with observations, we construct synthetic spectral line profiles. The methodology is explained in Appendix~\ref{append:Forward_Modelling}. To construct these profiles, we imagine that the loop lies in a vertical plane and is observed from above. We account for the semi-circular shape when converting velocities to Doppler shifts. We then integrate the emission over the entire loop and distribute it uniformly along the projection of the loop onto the solar surface. We assume a cross section corresponding to an EIS pixel, and thereby obtain a spatially averaged EIS line profile for the loop. Finally, a temporal average is taken over the time required for the shock to travel to the other end of the loop ($\approx 190$~s in this case). Such a spatially and temporally averaged line profile from a single loop (e.g., Figure~\ref{spectral_line_fe12_fe14}) is equivalent to an observation of many unresolved loops of a similar nature but at different stages of their evolution \citep{Patsourakos_2006,Klimchuk_2014}.
Asymmetric coronal line profiles with blue-wing enhancement are manifestations of mass transport in the solar corona. Type II spicules are often suggested to be associated with such a mass transport mechanism~\citep{Pontieu_2009, Pontieu_2011, Martinez_2017}. However, the extreme non-Gaussian shapes of the simulated Fe XII and Fe XIV line profiles (Figure~\ref{spectral_line_fe12_fe14}) are significantly different from observed shapes~\citep{Pontieu_2009, Tian_2011, Tripathi_2013}. Also, the very large blue shifts are inconsistent with observations. Observed Doppler shifts of coronal lines tend to be smaller than $5$ km s$^{-1}$ in both active regions~\citep{Doschek_2012,Tripathi_2012} and the quiet Sun~\citep{Chae_1998,Peter_1999}. In contrast, a shift of $150$ km s$^{-1}$ is evident in the simulated spectral lines (Figure~\ref{spectral_line_fe12_fe14}).
Our simulation is not reliable after the shock reaches the right boundary of the model. Because of rigid wall boundary conditions, it reflects in an unphysical manner. One might question whether the emission after this time could dramatically alter the predicted line profiles. We estimate the brightness of this neglected emission using the loop temperature profile shortly before the shock reaches the chromosphere at $t = 190$~s. The temperature peaks at the shock, and there is strong cooling from thermal conduction both to the left (up the loop leg) and, especially, to the right (down the loop leg). We estimate the cooling timescale according to:
\begin{equation}
\tau_{cond}= \frac{21}{2}\frac{k_{B}n_{e}l^{2}}{\kappa_{0\parallel}T^{5/2}}~,
\end{equation}
where $k_{B}$ is the Boltzmann constant, $\kappa_{0\parallel}$ is the coefficient of thermal conductivity along the field lines, $T$ is the temperature at the shock, $n_{e}$ is the electron number density behind the shock, and $l$ is the temperature scale length. We do this separately using the scale lengths on both sides of the shock, obtaining $\tau_{cond}=1290$~s and $7$~s for the left and right sides, respectively. Radiative cooling is much weaker and can be safely ignored. We estimate the integrated emission after $t = 190$~s by multiplying the count rate at that time by the longer of the two timescales, thereby obtaining an upper limit on the neglected emission in our synthetic line profiles. The result is $10565$ DN pix$^{-1}$ for Fe XII and $2206$ DN pix$^{-1}$ for Fe XIV. These are about $0.97$ and $2.76$ times the temporally integrated emission before this time, for Fe XII and Fe XIV, respectively.
The factors are much smaller if the shorter cooling timescale is used. Even the larger factors do not qualitatively alter our conclusions: the profile shapes and Doppler shifts would still differ greatly from those observed. The conclusions we draw below are likewise unaffected by neglecting the emission after the shock reaches the right footpoint.
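As a rough numerical illustration of this estimate, the conduction timescale above can be evaluated directly. The Spitzer coefficient and the post-shock values used below are illustrative assumptions, not the exact inputs of the simulation:

```python
import math

K_B = 1.380649e-16      # Boltzmann constant [erg/K]
KAPPA0 = 1.0e-6         # Spitzer conductivity coefficient [erg s^-1 cm^-1 K^-7/2] (assumed)

def tau_cond(n_e, T, l):
    """Conductive cooling timescale: tau = (21/2) k_B n_e l^2 / (kappa0 T^{5/2})."""
    return 10.5 * K_B * n_e * l**2 / (KAPPA0 * T**2.5)

# Illustrative post-shock values: n_e ~ 1e9 cm^-3, T ~ 2 MK, and temperature
# scale lengths of ~22 Mm (up the leg) and ~1.7 Mm (down the leg)
print(tau_cond(1.0e9, 2.0e6, 2.2e9))   # long timescale, left of the shock
print(tau_cond(1.0e9, 2.0e6, 1.7e8))   # short timescale, right of the shock
```

The quadratic dependence on the scale length $l$ is what makes the two sides of the shock differ by more than two orders of magnitude.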
\subsection{Comparison with observations}
We now estimate the spicule occurrence rate that would be required to explain the observed coronal intensities from active regions and the quiet Sun. We have already seen that, in the absence of any external (coronal) heating, the hot material at the tip of the spicule cools down rapidly. However, we are concerned here with the total emission, including that from pre-existing material that is heated as the spicule propagates along the loop. Consider a region of area $\mathcal{A}_{reg}$ on the solar surface, large enough to include many spicules. If the spatially averaged occurrence rate of spicules in this region is $\mathcal{R}$ (cm$^{-2}$ s$^{-1}$), then one may expect $\mathcal{N}_{reg} = \mathcal{R}\tau\mathcal{A}_{reg}$ spicules to be present at any moment, where $\tau$ is the typical spicule lifetime. Since we are averaging over large areas, the orientations of the spicule loops do not matter, and we can treat the loops as straightened along the line of sight, as done for Figure 5. If $\mathcal{I}_{sp}$ (DN s$^{-1}$ pix$^{-1}$) is the temporally averaged intensity of such a loop (the full loop intensity divided by 190 s in Figure 5), then the expected intensity from a corona that contains only spicule loops is $\mathcal{I}_{obs} = \mathcal{N}_{reg}\mathcal{I}_{sp}\mathcal{A}_{sp}/\mathcal{A}_{reg} = \mathcal{I}_{sp}\mathcal{R}\tau\mathcal{A}_{sp}$,
where $\mathcal{A}_{sp}$ is the cross-sectional area of the loop.
The typical intensities ($\mathcal{I}_{obs}$) observed by EIS in active regions and quiet Sun are, respectively, $162$ and $34$ DN s$^{-1}$ pix$^{-1}$ in Fe XII (195 \AA) and $35$ and $4$ DN s$^{-1}$ pix$^{-1}$ in Fe XIV (274 \AA) ~\citep{Brown_2008}. On the other hand, the temporally averaged intensities from our simulation ($\mathcal{I}_{sp}$) are $56.36$ and $4.22$ DN s$^{-1}$ pix$^{-1}$ for Fe XII and Fe XIV, respectively. Considering $\tau$ to be $190$~s, the time it takes for the shock to travel across the loop, we derive an occurrence rate ($\mathcal{R}$) of spicules as a function of their cross-sectional area ($\mathcal{A}_{sp}$). Results are shown in Figure~\ref{count_fe12_fe14} for the two lines.
Following our earlier logic, we may also argue that at any given time there are $\mathcal{N}_{\odot}=\mathcal{R}\tau \mathcal{A}_{\odot}$ spicules on the solar disk, where $\mathcal{A}_{\odot}$ is the area of the solar disk. Using the estimated occurrence rate of spicules ($\mathcal{R}$), and taking $\tau$ to be $190$~s as before, the number of spicules on the solar disk is given by $\mathcal{N}_{\odot} = (\mathcal{I}_{obs}/\mathcal{I}_{sp})(\mathcal{A}_{\odot}/\mathcal{A}_{sp})$. This formula represents $\mathcal{N}_{\odot}$ as a function of the spicule cross-sectional area $\mathcal{A}_{sp}$ (Figure~\ref{count_QS_AR}). Given that the typical observed widths of spicules lie between $200-400$~km~\citep{Pereira_2011}, we find that the full-disk equivalent number of spicules required to explain the observed intensities exceeds $10^7$ in the quiet Sun and $10^8$ in active regions, as indicated by the green shaded region in Figure~\ref{count_QS_AR}. Observational estimates of the number of spicules on the disk, however, vary between $10^5$~\citep{Sterling_2016} and $2 \times 10^7$~\citep{Judge_2010}. The discrepancy is large: far more spicules than observed would be required to produce all the observed coronal emission, about $100$ times more for the quiet Sun and $10-10^3$ times more for active regions. These are lower limits based on Run1; our other simulations imply even greater numbers of spicules (see Table~\ref{tab:table2}).
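The scaling $\mathcal{N}_{\odot} = (\mathcal{I}_{obs}/\mathcal{I}_{sp})(\mathcal{A}_{\odot}/\mathcal{A}_{sp})$ can be sketched numerically. The circular spicule cross section assumed below is an illustrative choice, not necessarily the one used for the figures:

```python
import math

R_SUN = 6.96e10                    # solar radius [cm]
A_SUN = math.pi * R_SUN**2         # area of the solar disk [cm^2]

def n_disk(i_obs, i_sp, width):
    """Full-disk spicule number N = (I_obs / I_sp) * (A_sun / A_sp),
    with A_sp taken as a circular cross section of the given width [cm]."""
    a_sp = math.pi * (width / 2.0)**2
    return (i_obs / i_sp) * (A_SUN / a_sp)

# EIS Fe XII 195 A intensities and the simulated loop average [DN s^-1 pix^-1]
print(n_disk(34.0, 56.36, 3.0e7))    # quiet Sun, 300 km wide spicules
print(n_disk(162.0, 56.36, 3.0e7))   # active region
```

With these inputs the quiet-Sun number already exceeds $10^7$, consistent with the green shaded region discussed above.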
We should mention here that the greater the height to which a spicule rises, the longer it compresses the ambient material, and thus the brighter the time-averaged emission. The spicules in our simulations with $150$ km s$^{-1}$ injection speed reach a height of about $23$~Mm, much larger than the typically observed spicule height ($\sim 10$~Mm). We are therefore likely to overestimate the emission coming from spicule loops, making the discrepancy between the required and observed numbers of spicules even greater. It should also be noted that the values estimated by \cite{Sterling_2016} and \cite{Judge_2010} include both type I and type II spicules. The discrepancy thus increases further if one considers type II spicules alone.
Analysis of our simulated observations thus suggests that spicules contribute relatively little to the emission and thermal energy of the corona. Through the generation of shocks, they may heat the local plasma, but that plasma, too, cools rapidly due to expansion and thermal conduction. Consequently, synthetic spectra derived from our simulation differ strongly from observed spectra. This does not rule out the possibility of spicules contributing significantly to the coronal mass: the ejected spicule material may still be heated in the corona by some other mechanism -- a source that exceeds the initial thermal and kinetic energy of the spicule. However, observational evidence of such a process is still lacking. Analyzing the excess blue wing emission of multiple spectral lines hotter than $0.6$~MK, \cite{Tripathi_2013} concluded that the upward mass flux is too small to explain the mass of the active region corona. Their observations indicate that spicules hotter than $0.6$~MK cannot provide sufficient mass to the corona.
So far, we have allowed our spicules to propagate within a warm ($T =0.5$ MK), relatively low density loop in order to determine whether they, by themselves, can explain the observed hot emission. Our simulations indicate that this is not viable. Therefore, there must be some other heating mechanisms at play that produce the hot, dense plasma. Setting aside the issue of heating mechanisms, in the following section we simply test the response of a spicule in a hot and dense flux tube.
\subsection{Spicule propagation in a hot loop}
We have considered a static equilibrium loop with apex and footpoint temperatures of approximately $2$ and $0.02$ MK, respectively. A spicule with a tip temperature of $2$ MK, followed by cold, dense material at $0.02$ MK, is injected with a velocity of $150$ km s$^{-1}$ from the bottom boundary, similar to our previous spicules. The velocity profile of the injected spicule is the same as shown in Figure~\ref{pulse}. The injected spicule generates a shock that takes about $180$~s to traverse the loop.
The spatially and temporally averaged spectral line profiles are obtained following the method described in Appendix~\ref{append:Forward_Modelling}. In this case, however, because of the high background temperature, the loop itself emits significantly in the Fe XII and Fe XIV coronal lines. We consider the situation where the line of sight passes through many loops, some containing spicules and some maintained in the hot equilibrium state. We adjust the relative proportions to determine what combination reproduces the observed red-blue (RB) profile asymmetries, which are generally $< 0.05$ \citep{Hara_2008, Pontieu_2009, Tian_2011}. For an asymmetry of $\approx 0.04$, we find spicule to non-spicule strand ratios of $1:150$ for the Fe XII line and $1:72$ for the Fe XIV line. Again the conclusion is that spicules are a relatively minor contributor to the corona overall, though they are important for the loops in which they occur.
\begin{deluxetable*}{ccccccccc}
\tablenum{2}
\tablecaption{Summarizing the number of spicules (width $\sim 300$ km) required to explain the quiet Sun and active region intensities as predicted from the test runs.}\label{tab:table2}
\tablecolumns{9}
\tablewidth{0pt}
\tablehead{
\colhead{Run} & \colhead{T$_{tip}$} & \colhead{v} & \multicolumn2c{Loop integrated counts} & \multicolumn2c{Quiet Sun} & \multicolumn2c{Active Region} \\
\colhead{} &\colhead{(MK)} & \colhead{(km s$^{-1}$)} & \multicolumn2c{(DN s$^{-1}$ pix$^{-1}$)} &
\multicolumn2c{(Required number of spicules)} & \multicolumn2c{(Required number of spicules)} \\
\colhead{} & \colhead{} & \colhead{} & \colhead{Fe XII} & \colhead{Fe XIV} & \colhead{Fe XII} & \colhead{Fe XIV} & \colhead{Fe XII} & \colhead{Fe XIV}
}
\startdata
1&2 & 150 & 0.6405 & $4.8\times10^{-2}$ & $4.02\times10^{7}$ & $6.31\times10^{7}$ & $1.92\times10^{8}$ & $5.52\times10^{8}$\\
2&2 & 50 & 0.2336 & $1.47\times10^{-2}$ & $1.1\times10^{8}$ & $2.06\times10^{8}$ & $5.25\times10^{8}$ & $1.8\times10^{9}$ \\
3&1 & 150 & $6.7\times10^{-2}$ & $1.3\times10^{-3}$ & $3.86\times10^{8}$ & $2.26\times10^{9}$ & $1.84\times10^{9}$ & $1.98\times10^{10}$\\
4&1 & 50 & $5.9\times10^{-3}$ & $7.81\times10^{-6}$ & $4.3\times10^{9}$ & $3.8\times10^{11}$ & $2.06\times10^{10}$ & $3.4\times10^{12}$ \\
5&0.02 & 150 & $4.4\times10^{-4}$ & $1.5\times10^{-7}$ & $5.89\times10^{10}$ & $2.02\times10^{13}$ & $2.8\times10^{11}$ & $1.77\times10^{14}$ \\
6&0.02 & 50 & $1.02\times10^{-7}$ & $1.1\times10^{-13}$ & $2.5\times10^{14}$ & $2.7\times10^{19}$ & $1.2\times10^{15}$ & $2.4\times10^{20}$ \\
\enddata
\end{deluxetable*}
\section{Summary and discussion}\label{sec:summary}
The solar atmosphere displays a wide variety of spicules with different temperatures and velocities. It has been suggested that type II spicules are a major source of coronal mass and energy~\citep{Pontieu_2007,Pontieu_2009,Pontieu_2011}. In this work, we numerically investigate the role of spicules in producing the observed coronal emission. In particular, we examine whether, in the absence of any external heating, the hot tips of the spicules and the shock-heated ambient plasma can explain the observed coronal emission. For this, we inject spicules with different temperatures and velocities into a coronal loop in static equilibrium. We choose a relatively cool equilibrium so that the loop does not itself produce appreciable emission in the absence of a spicule. Each of our injected spicules consists of a hot tip followed by a cold body. We consider three different temperatures for the hot tips, viz., $2$, $1$ and $0.02$~MK, while the cold, dense chromospheric plasma that follows the tip has a temperature of $0.02$~MK. Six different simulations are run by injecting each of these spicules with an initial velocity of either $50$ km s$^{-1}$ or $150$ km s$^{-1}$ (see Table~\ref{tab:table1}). We have also constructed spectral line profiles and estimated the spicule occurrence rate required to explain the observed intensities from the quiet Sun and active regions. Our main results are summarized as follows.
\paragraph{Shock formation during spicule propagation} All six runs described above suggest the formation of shocks due to the injection of spicule material into the coronal flux tubes. The shocks are stronger when the temperature differences and therefore pressure differences with the ambient plasma are higher. Table~\ref{tab:table1} shows the variation of the compression ratio (measure of shock strength) with changing temperature of the spicule tip. The nature of the shock depends on the tip temperature. Spicules with a hotter tip produce a pressure-driven shock that propagates with a speed larger than the injection speed. Spicules with a cold tip (i.e., $T_{tip} = 0.02$ MK) produce a piston-driven shock which propagates with a speed close to the injection speed. The intensities and shapes of spectral line profiles depend on the nature of the shock. The formation of shocks during spicule injection agrees well with previous studies~\citep{Petralia_2014, Martinez_2018}.
\paragraph{Rapid cooling of the hot spicule tip} Our simulations show that, in the absence of any external heating, the hot tip of the spicule cools rapidly before reaching a substantial coronal height. Consequently, the tip emission from coronal lines like Fe XII (195 \AA) and Fe XIV (274 \AA) is short lived (Figure~\ref{emission_evolution_fe12_fe14}) and confined to low altitudes. The result is consistent with earlier studies by~\cite{Klimchuk_2012} and \cite{Klimchuk_2014}.
\paragraph{Relative emission contributions of hot tip and shock heated plasma} Our simulations show that the pre-existing material in the loop gets heated through shock compression and thermal conduction. However, the time-integrated emission from this heated pre-existing material is less than that from the hot tip, as shown in Figure~\ref{emission_evolution_fe12_fe14}. The tip plasma is hot for a much shorter time, but it is inherently much brighter because of the greater densities (it is injected in a dense state).
\paragraph{Line profile discrepancies} The shapes of our synthetic spectral line profiles show significant discrepancies with observations. The simulated profiles are highly non-Gaussian and far more asymmetric than observed. A strong blue shift ($\sim 150$ km s$^{-1}$) of the synthetic lines is also inconsistent with the mild Doppler shifts ($< 5$ km s$^{-1}$) observed in the quiet Sun and active regions.
\paragraph{Excessive number of spicules required to explain observed intensities} The spatially and temporally averaged intensities from our simulations (Figure~\ref{spectral_line_fe12_fe14}) imply that far more spicules are required to reproduce the observed emission from the solar disk than are observed (Figure~\ref{count_QS_AR}). The discrepancies are up to a factor of $100$ for the quiet Sun and factors of $10-10^3$ for active regions. These factors apply specifically to Run1, where a spicule with a $2$ MK tip is ejected at a velocity of $150$ km s$^{-1}$. As listed in Table~\ref{tab:table2}, the loops in our other simulations with different combinations of tip temperature and ejection velocity are fainter, and therefore more of them would be required to reproduce the observed disk emission, exacerbating the discrepancy.
\paragraph{Ratio of loops with and without spicules} Under the assumption that the corona is comprised of hot loops with and without spicule ejections, red-blue spectral line asymmetries similar to those observed ($\approx 0.04$) require far more loops without spicules than with them. The spicule to non-spicule loop number ratio is $1:150$ for the Fe XII line and $1:72$ for the Fe XIV line.
Our simulations indicate that spicules contribute a relatively minor amount to the mass and energy of the corona. Such a claim was already made by \cite{Klimchuk_2012}, where it was shown that hot tip material rapidly expanding into the corona is unable to explain the observed coronal emission. However, a bodily ejection of the spicule was not considered, and the emission from ambient material affected by the expansion was not rigorously investigated (though see Appendix B in that paper). Later, \cite{Petralia_2014} argued that the shock-heated material in front of an ejected cold spicule might be erroneously interpreted as ejected hot material, but they did not compare the brightness of the shock-heated material with coronal observations. Our numerical simulations improve on both of these studies. We show that neither the expanding hot tip nor the shock-heated ambient material of a bodily ejected spicule can reproduce coronal observations; a number of discrepancies exist. The existence of some coronal heating mechanism -- operating in the corona itself -- is required to explain the hot corona. It is not sufficient to eject hot (or cold) material into the corona from below.
We emphasize that our conclusion does not rule out the possibility that waves may be launched into the corona as part of the spicule ejection process, or that new coronal currents may be created outside the flux tube in which the ejected material resides, as suggested by ~\cite{Martinez_2018}. Such waves and currents would lead to coronal heating and could explain at least some non-spicule loops. It seems doubtful, however, that this could explain the many non-spicule loops implied by observed line profile asymmetries. It seems that some type of heating unrelated to spicules must play the primary role in explaining hot coronal plasma.
\begin{acknowledgments}
We thank the anonymous referee for comments that improved the clarity of the paper. SSM \& AS thank Dr. Jishnu Bhattacharyya for many useful discussions. Computations were carried out on the Physical Research Laboratory's VIKRAM cluster. JAK was supported by the Internal Scientist Funding Model (competed work package program) at Goddard Space Flight Center. \\
\end{acknowledgments}
\appendix
\section{Static equilibrium configuration from double relaxation method}\label{append:steady_state}
We inject spicules into a magnetic structure that is in static equilibrium. Such an equilibrium is achieved recursively, with the final profile obtained through two stages of relaxation. First, we obtain the density and temperature profiles by solving the hydrostatic and energy balance equations~\citep{Aschwanden_2002}, assuming a steady and uniform background heating $Q_{bg}$. The CHIANTI radiative loss function $\Lambda(T)$ is used to describe the loop's radiation in the energy balance equation. The desired looptop temperature is achieved by adjusting the value of $Q_{bg}$. However, because the energy balance is not exact, the temperature and density profiles derived in this way are not in perfect equilibrium. Rather, these derived profiles are then used to calculate the final equilibrium loop profile, such that the resulting temperature never drops below the chromospheric temperature $T_{ch}$ ($2 \times 10^4$~K) and the system does not generate any spurious velocities. In the following, we explain these two stages in detail.
\subsection{Heating and cooling in Relaxation-I:}
Starting with the initial profiles described above, the loop is allowed to relax under gravity with the constant background heating $Q_{bg}$. To avoid numerical artifacts, from this stage onward, we smoothly reduce the radiative cooling of the chromosphere to zero over a narrow temperature range between $T_{ch}$ and $T_{min}$, where $T_{min} = 1.95 \times 10^4$~K is a conveniently chosen temperature slightly less than $T_{ch}$. This is achieved by the radiative loss function $\lambda(T)$, defined as
\begin{equation}\label{rad_relax1}
\lambda(T) =
\begin{cases}
\Lambda(T), & \text{if}\, T \ge T_{ch} \\
\left(\frac{T - T_{min}}{T_{ch} - T_{min}}\right) \Lambda(T_{ch}), & \text{if}\, T_{min} < T < T_{ch} \\
0, & \text{if}\, T \le T_{min} \\
\end{cases}~.
\end{equation}
Here $\Lambda(T)$ denotes the optically thin radiative loss function from CHIANTI. The modified function $\lambda(T)$ is plotted in Figure~\ref{chrom_heat_cool}. As the loop relaxes, material drains from the corona and accumulates at the footpoints. The resulting high density at the loop footpoints gives rise to excessive cooling, bringing the footpoint temperatures below $T_{min}$ and generating short-lived velocities. However, the loop eventually achieves a steady state, and we use the enhanced footpoint density at that time ($n_{base}$) to estimate the additional heating required to keep the chromospheric temperature above $T_{min}$. This is carried out in the next relaxation stage.
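A minimal sketch of this ramped loss function, with a hypothetical power law standing in for the CHIANTI $\Lambda(T)$:

```python
T_CH, T_MIN = 2.0e4, 1.95e4   # chromospheric and cutoff temperatures [K]

def Lambda(T):
    """Hypothetical stand-in for the CHIANTI loss function [erg cm^3 s^-1]."""
    return 1.0e-22 * (T / 1.0e6) ** -0.5

def rad_loss(T):
    """Modified loss lambda(T): full Lambda above T_ch, linear ramp to zero
    between T_ch and T_min, and zero at or below T_min."""
    if T >= T_CH:
        return Lambda(T)
    if T > T_MIN:
        return (T - T_MIN) / (T_CH - T_MIN) * Lambda(T_CH)
    return 0.0
```

The linear ramp keeps the loss function continuous at $T_{ch}$ while switching radiation off smoothly below $T_{min}$.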
\subsection{Heating and cooling in Relaxation-II:}
Once again, we start with the initial density and temperature profiles from the beginning of the first stage. However, this time we apply additional heating in the chromosphere above the constant background heating $Q_{bg}$. This prevents the plasma from cooling below $T_{min}$ and instead lets it hover between $T_{ch}$ and $T_{min}$. The total heating function $Q$ is given by
\begin{equation}\label{heat_relax2}
Q =
\begin{cases}
Q_{bg}, & \text{if}\, T \ge T_{ch} \\
\left(\frac{n}{n_{base}}\right)^{2} Q_{ch} \left(\frac{T_{ch}-T}{T_{ch}-T_{min}}\right) + Q_{bg}, & \text{if}\, T_{min} < T < T_{ch} \\
\left(\frac{n}{n_{base}}\right)^{2} Q_{ch} + Q_{bg}, & \text{if}\, T \le T_{min}
\end{cases}~,
\end{equation}
where $Q_{ch} = n_{ch}^{2} \Lambda (T_{ch})$ is the heat required to balance the radiative losses from the footpoint plasma of the initial loop profile at temperature $T_{ch}$ and density $n_{ch}$. Figure~\ref{chrom_heat_cool} graphically depicts the radiative loss and heating functions that are maintained throughout the simulation.
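The piecewise heating function above can likewise be sketched in a few lines; the numerical inputs here are placeholders, not the simulation's actual values:

```python
T_CH, T_MIN = 2.0e4, 1.95e4   # chromospheric and cutoff temperatures [K]

def heating(T, n, q_bg, q_ch, n_base):
    """Total heating Q: background q_bg everywhere, plus a density-weighted
    chromospheric term that ramps on linearly as T falls from T_ch to T_min."""
    if T >= T_CH:
        return q_bg
    extra = (n / n_base) ** 2 * q_ch
    if T > T_MIN:
        return extra * (T_CH - T) / (T_CH - T_MIN) + q_bg
    return extra + q_bg
```

The $(n/n_{base})^2$ weighting scales the chromospheric heating with the local radiative losses, which are proportional to density squared.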
\section{Variation of shock speed with height}\label{append:shock_speed}
For a pressure-driven shock, the shock's speed primarily depends on the pressure difference between the spicule's tip and the ambient medium through which it propagates. The lower pressure near the loop apex provides less resistance, so the shock speed increases there; the higher pressure near the footpoints provides greater resistance, so the shock speed decreases. For a better understanding, we track the shock front along the loop and derive its speed during propagation. The shock front at any instant can be identified from the density jump moving ahead of the injected spicule material; this jump is also associated with the maximum temperature of the loop. Once the locations of the shock front along the loop are identified, differentiating them gives the instantaneous shock speed as a function of loop coordinate. Figure~\ref{shock_speed} shows the variation of shock speed along the loop for three shocks, all ejected with velocity $150$ km s$^{-1}$ but with three different tip temperatures, viz. $2$, $1$ and $0.02$ MK. Though the shock speed increases at the loop apex for all three shocks, its amplitude depends on the injection temperature and thus pressure: the higher the tip temperature, the higher the spicule tip pressure, and hence the larger the shock speed.
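The tracking procedure can be sketched as follows; the synthetic moving density step below stands in for actual simulation output:

```python
import numpy as np

def track_shock(density, s, t):
    """Locate the shock front at each time step as the steepest density
    gradient along the loop, then differentiate its position to get the speed."""
    grad = np.gradient(density, s, axis=1)        # d(rho)/ds at each time
    front = s[np.argmax(np.abs(grad), axis=1)]    # shock position vs. time
    return front, np.gradient(front, t)           # position [cm], speed [cm/s]

# Synthetic check: a density jump moving at a constant 50 km/s
s = np.linspace(0.0, 5.0e9, 2000)                 # loop coordinate [cm]
t = np.linspace(0.0, 100.0, 51)                   # time [s]
rho = 1.0 + (s[None, :] < 1.0e8 + 5.0e6 * t[:, None])
front, speed = track_shock(rho, s, t)
```

For real simulation data the density jump can be cross-checked against the temperature maximum, as described above, to avoid locking onto other gradients.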
\section{Forward Modelling}\label{append:Forward_Modelling}
Spectral profiles provide a wealth of information about the plasma dynamics along the line of sight (LOS). Adapting the method outlined in~\cite{Patsourakos_2006}, synthetic spectral line profiles are constructed at each numerical grid cell using the cell's density, velocity and temperature.
At any given time, $t$, and location along the loop, $s$, the line profile is
\begin{equation}
I(s,t) = \frac{I_{0}}{\sqrt{\pi} v_{\text{width}}}\exp\left[\frac{-(v - v_\text{shift})^{2}}{v_\text{width}^{2}}\right]~,
\end{equation}
where $I_{0}$ is the amplitude, $v_\text{shift}$ is the Doppler shift, and $v_\text{width}$ is the thermal line width. The amplitude is given by
\begin{equation}
I_{0}(s,t) = n_{e}^{2} G(T)ds~,
\end{equation}
where $n_{e}$, $T$ and $ds$ denote the electron number density, temperature, and length of the cell. The contribution function $G(T)$ for the line is taken from the CHIANTI atomic data base \citep{chianti}. The Doppler shift equals the line of sight velocity of the cell,
\begin{equation}
v_\text{shift} = v_\text{los}~
\end{equation}
in wavelength units, and the thermal width is given by
\begin{equation}
v_\text{width} = \sqrt{\frac{2k_{B}T}{m_{ion}}}~,
\end{equation}
where $m_\text{ion}$ is the mass of the ion.
Once the line profile at each grid point is constructed, spatial averaging is performed by summing the profiles along the loop and dividing by its projected length assuming that it lies in a vertical plane and is viewed from above:
\begin{equation}
\langle I(t) \rangle_{\text{spatial}} = \frac{\pi}{2L}\sum_{s} I(s,t) \times d
\end{equation}
where $L$ is the loop length and $d$ is the pixel dimension. The loop is assumed to have a cross section of $d^2$. Finally, the spatially averaged line profiles are temporally averaged over a time $\tau$, taken to be the travel time of the shock along the loop; this yields
\begin{equation}
\langle I \rangle_{\text{spatial, temporal}} = \frac{1}{\tau}\sum_{t}\langle I(t) \rangle_{\text{spatial}}
\end{equation}
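As an illustrative sketch of the single-cell profile above (with arbitrary inputs; the constant $G$ here is a placeholder, not the CHIANTI contribution function):

```python
import numpy as np

K_B = 1.380649e-16              # Boltzmann constant [erg/K]
M_ION = 55.845 * 1.67262e-24    # Fe ion mass [g]

def cell_profile(v, n_e, T, v_los, ds, G):
    """Gaussian line profile of one grid cell on a velocity grid v [cm/s]."""
    I0 = n_e**2 * G * ds                          # amplitude n_e^2 G(T) ds
    w = np.sqrt(2.0 * K_B * T / M_ION)            # thermal width
    return I0 / (np.sqrt(np.pi) * w) * np.exp(-((v - v_los) / w) ** 2)

# With this normalization the profile integrates to I0 over velocity
v = np.linspace(-2.0e7, 2.0e7, 40001)
p = cell_profile(v, 1.0e9, 2.0e6, 0.0, 1.0e8, 1.0e-24)
```

Summing such profiles over all cells, then over time, reproduces the spatial and temporal averaging of the two preceding equations.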
\bibliography{spicule}
Title: Pushchino multibeams pulsar search. First results

Abstract: Since the discovery of pulsars, dozens of surveys have already been conducted with their searches. In the course of surveys in the sky, areas from thousands to tens of thousands of square degrees are explored. Despite repeated observations of the same areas, new pulsars are constantly being discovered. We present Pushchino Multibeam Pulsar Search (PUMPS), having a sensitivity that is an order of magnitude higher than the sensitivity of all previously made surveys on pulsar search. In PUMPS daily round-the-clock observations are carried out of the area located on declinations $-9^o < \delta < +42^o$. The survey is carried out on 96 beams of a Large Phased Array (LPA) at a frequency of 111 MHz. During the observation period of August 2014 - August 2022, the survey was repeated approximately 3,000 times. The expected sensitivity in the survey reaches up to 0.1 mJy. The paper considers some tasks that can be solved when processing the received data.

https://export.arxiv.org/pdf/2208.04578
\section{Introduction}
Surveys searching for pulsars have been conducted since their discovery in 1967 \cite{Hewish1968}, and they have already led to the discovery of more than 3,300 pulsars (https://www.atnf.csiro.au/research/pulsar/psrcat/, \cite{Manchester2005}). The expected number of observable radio pulsars with a luminosity above 0.1 mJy kpc$^2$ is 30,000 (\citeauthor{Lorimer2006}, \citeyear{Lorimer2006}), while the number of pulsars expected to be accessible to the Square Kilometer Array (SKA) radio telescope under construction is 20,000 (\citeauthor{Cordes2004}, \citeyear{Cordes2004}). That is, only about 10-15\% of the radio pulsars available for observation have been discovered so far.
In the paper \citeauthor{Wilkinson2007} (\citeyear{Wilkinson2007}), it was shown that the number of major discoveries in pulsar searches grows as the natural logarithm of the number of discovered pulsars: each subsequent discovery requires many times more known pulsars than existed at the time of the previous discovery. A natural question therefore arises about the point of conducting new surveys. As the time and financial cost of carrying out and processing the observations keeps growing, the experimental scientist has an ever smaller chance of a major discovery.
Surveys conducted with high sensitivity make it possible not only to discover new types of pulsars (\citeauthor{McLaughlin2006} (\citeyear{McLaughlin2006}), \citeauthor{Caleb2022} (\citeyear{Caleb2022})), but also to study pulsar populations in detail and to explore the interstellar medium both in the Galactic plane and in the halo. Therefore, as long as the expenditure of time and other resources is acceptable, surveys should be carried out, because they improve our knowledge of pulsar evolution and properties.
Surveys are conducted, as a rule, over the whole area of the sky accessible to the telescope, that is, over roughly half of the celestial sphere. Given the typically small beam size and the limited number of simultaneously available beams, such a survey is extremely long and, considering the cost of an hour of observations, extremely expensive for almost all large telescopes. However, for the Large Phased Array (LPA) radio telescope of the Pushchino Radio Astronomy Observatory (PRAO), a survey of the entire sky is a daily routine task. In this paper, we describe the Pushchino Multibeam Pulsar Search (PUMPS) and some of the problems that can be solved along the way in this survey.
\section{Survey and tasks}
The survey on the LPA transit radio telescope (observing frequency 111 MHz) is carried out daily, around the clock: since August 2014 on 96 beams, and since January 2022 on 128 beams, aligned in the meridian plane and covering declinations $-9^o < \delta < +55^o$. The data are recorded in a 2.5 MHz band, in 32 frequency channels of 78 kHz width, with a sampling time of 12.5 ms. Almost 45 terabytes of data are recorded per year. Work on the pulsar search started in 2015, and to date 42 pulsars and 46 rotating radio transients (RRATs) have been discovered (see the site https://bsa-analytics.prao.ru/en/ and references therein). Because we have lacked a sufficiently powerful computation server, processing all of the data has so far been impossible. We expect that the new server being purchased, with a terabyte of RAM and 128 full-fledged cores, will begin operation this year and will allow the accumulated data, about 300 terabytes in volume, to be processed in a reasonable time.
Based on PUMPS data, searches are carried out for both classical pulsars with second-scale periods and transients. As shown in \citeauthor{Tyulbashev2022} (\citeyear{Tyulbashev2022}), the sensitivity of the LPA in a single 3.5-minute observing session, the time a source takes to transit the meridian at half power of the radiation pattern, is inferior to that of the surveys conducted on the Low Frequency Array (LOFAR) aperture synthesis system and on the Five-hundred-meter Aperture Spherical Telescope (FAST).
Accumulating the signal by summing power spectra and periodograms makes it possible to improve the sensitivity by tens of times at dispersion measures $DM<100$ pc/cm$^3$. For the transient search, instantaneous sensitivity is of primary importance, and it is provided by the large effective area of the LPA, equal to 45,000 sq.m.
In the paper \citeauthor{Tyulbashev2022} (\citeyear{Tyulbashev2022}), we considered the sensitivity of the search for pulsars with second-scale periods and limited ourselves to PUMPS sensitivity estimates for $DM \le 200$ pc/cm$^3$. However, the discovery of a pulsar with a period of 76 seconds and a pulse half-width of 300 ms (\citeauthor{Caleb2022}, \citeyear{Caleb2022}) makes it worth seriously considering the search for pulsars at significantly larger $DM$. For such $DM$, the main factor reducing the sensitivity of the search is interstellar scattering ($\tau_s$). Experimental dependences $\tau_s(DM)$ show that, for the same $DM$, the scattering can differ by three orders of magnitude (\citeauthor{Cordes2002} (\citeyear{Cordes2002}), \citeauthor{Bhat2004} (\citeyear{Bhat2004}), \citeauthor{Kuzmin2007} (\citeyear{Kuzmin2007}), \citeauthor{Pynzar2008} (\citeyear{Pynzar2008})). Thus, in observations at the frequency 111 MHz the scattering can range from ten seconds to ten minutes. We have recalculated the sensitivity curves up to $DM=1,000$ pc/cm$^3$, using the following formula for estimating the scattering (\citeauthor{Cordes2002}, \citeyear{Cordes2002}):
\begin{equation}
\log(\tau_s) = -3.59 + 0.129\log(DM) + 1.02\left[\log(DM)\right]^2 - 4.4\log(f),
\label{eq:1}
\end{equation}
where $\tau_s$ is obtained in microseconds, $DM$ is expressed in pc/cm$^3$, and $f$ is the central frequency of observations in GHz. Since actual values of $\tau_s$ may be significantly higher than those obtained from formula~\ref{eq:1}, the sensitivity estimates may deteriorate both in single observing sessions ($S_{min}$) and when power spectra and periodograms are summed ($S_{min-sum}$).
Fig.~\ref{fig:fig1} shows sensitivity estimates for pulsars with different periods, with $\tau_s$ evaluated using formula~\ref{eq:1}. The sensitivities shown in the figure for different periods and dispersion measures differ slightly from those shown in Fig.~4 of \citeauthor{Tyulbashev2022} (\citeyear{Tyulbashev2022}). These differences arise because in that paper the scattering was estimated with the empirical formula from \citeauthor{Kuzmin2007} (\citeyear{Kuzmin2007}).
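The scattering estimate can be sketched as follows; we assume the Cordes \& Lazio (2002) coefficients with a leading constant of $-3.59$, which yields $\tau_s$ in microseconds and scattering times of seconds to minutes at 111 MHz for moderate $DM$, consistent with the range quoted above:

```python
import math

def tau_scatter_us(dm, f_ghz):
    """Scattering time tau_s [microseconds], assuming
    log10(tau) = -3.59 + 0.129*log10(DM) + 1.02*log10(DM)**2 - 4.4*log10(f)."""
    x = math.log10(dm)
    return 10.0 ** (-3.59 + 0.129 * x + 1.02 * x**2 - 4.4 * math.log10(f_ghz))

# At 111 MHz the estimate grows steeply with DM:
for dm in (100.0, 300.0, 1000.0):
    print(dm, tau_scatter_us(dm, 0.111) / 1.0e6, "s")
```

The strong $f^{-4.4}$ frequency scaling is what makes scattering the dominant sensitivity limit at meter wavelengths.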
The PUMPS sensitivity of 0.2-0.3 mJy is about 16 times better than the 3-4 mJy sensitivity of the LOTAAS survey made on LOFAR (\citeauthor{Sanidas2019} (\citeyear{Sanidas2019})) when recalculated to the frequency 111 MHz (\citeauthor{Tyulbashev2022} (\citeyear{Tyulbashev2022})). This means that, all other things being equal, pulsars 16 times weaker can be detected on LPA than on LOFAR. In addition, there is a possibility in principle of finding pulsars with long ($>10$-$20$ s) periods at high DM. Since the observed flux density decreases in proportion to the square of the distance, a factor of 16 in sensitivity corresponds to a factor of 4 in distance, and since the accessible volume grows as the cube of the distance, the number of pulsars available to the survey can grow by up to 64 times. There are 73 pulsars discovered in the LOTAAS survey, so the possible number of new pulsars in the PUMPS survey may reach $64 \times 73 \approx 4,500$. This fantastic assessment is most likely very far from reality: the sensitivities indicated in Fig.~\ref{fig:fig1} are achieved only for pulsars without pulse gaps (nulling) and with very stable (non-variable) radiation. Let us note, however, that although the sensitivity of the searches conducted so far is several times lower than that expected for the accumulation of 8 years of observations (see the site https://bsa-analytics.prao.ru/en/, references therein, and the paper \citeauthor{Tyulbashev2022} (\citeyear{Tyulbashev2022})), in the same sky areas we detect \textbf{all} pulsars discovered on LOFAR and almost the same number of new pulsars that were not detected in the LOTAAS survey (see Fig.~6 in \citeauthor{Tyulbashev2022} (\citeyear{Tyulbashev2022})). Our conservative scenario is the discovery of 200-300 new pulsars in PUMPS; the optimistic scenario is 1,000-1,500 new pulsars.
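The chain of estimates above can be reproduced in a few lines (the factor of 16 in limiting flux density and the 73 LOTAAS pulsars are the values quoted in the text):

```python
# PUMPS (0.2-0.3 mJy) vs LOTAAS recalculated to 111 MHz (3-4 mJy):
# roughly a factor of 16 in limiting flux density.
flux_ratio = 16.0
distance_ratio = flux_ratio ** 0.5   # observed flux falls as 1/d^2
volume_ratio = distance_ratio ** 3   # accessible volume grows as d^3

lotaas_pulsars = 73
upper_estimate = volume_ratio * lotaas_pulsars
print(f"distance ratio ~{distance_ratio:.0f}, volume ratio ~{volume_ratio:.0f}")
print(f"upper estimate of new pulsars: ~{upper_estimate:.0f}")
```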
The sensitivity of the LPA radio telescope in the search for pulsed radiation from transients is fixed: we can change neither the antenna area, nor the system temperature, nor the band. However, over a year of observations, taking into account the 3.5-minute meridian transit at half power, approximately 20 hours of recording accumulate for each point in the sky. The survey started in August 2014 and is planned to continue at least until December 2024. This means an accumulation of approximately 8.5 days of data for each point in the observation area. In the papers \citeauthor{Logvinenko2020} (\citeyear{Logvinenko2020}) and \citeauthor{Tyulbashev2022a} (\citeyear{Tyulbashev2022a}), the existence of RRATs whose pulses can be separated by 10 or more hours is shown. To detect and study such transients, very long-term observations are necessary, and these arise naturally during monitoring.
The field of view of the LPA with 128 antenna beams is approximately 50 sq. deg. Field-of-view estimates for other large telescopes used for RRAT detection are: for the 64-meter Parkes dish (Australia), 0.7 sq. deg. with 13 beams at the frequency 1.4 GHz; for the 100-meter Green Bank dish (USA), 0.35 sq. deg. with one beam at 350 MHz; for the 300-meter Arecibo dish (USA), 1 sq. deg. with 7 beams at 327 MHz; for the 500-meter FAST dish (China), 0.16 sq. deg. with 19 beams at 1.2 GHz. The instantaneous sensitivity of FAST (\citeauthor{Han2021}, \citeyear{Han2021}), after recalculation from 1.2 GHz to 111 MHz with an assumed spectral index of 1.7 ($S\sim\nu^{-\alpha}$), exceeds that of the LPA by about an order of magnitude. However, for RRATs, for which hours can pass between successive pulses, the second main factor in a search, after instantaneous sensitivity, is the observing time spent on each point of the sky. If the average time between RRAT pulses is one hour, then to view half of the sky once the FAST radio telescope would need [20,000 sq. deg. (half of the celestial sphere) / 0.16 sq. deg. (FAST field of view)] $\times$ 1 hour = 125,000 hours, or more than 14 years of round-the-clock observations. Given the limited availability of FAST time, even a single examination of the sky looks hardly realizable.
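The sky-coverage estimate can be sketched as follows, using the FAST field of view of 0.16 sq. deg. quoted above and the illustrative one-hour mean interval between RRAT pulses; the LPA value of 50 sq. deg. is included for contrast:

```python
# Time for a single pass over half the sky, assuming one hour of dwell
# time per pointing (the mean RRAT pulse interval adopted in the text).
HALF_SKY_SQ_DEG = 20_000
DWELL_HOURS = 1.0

def survey_years(fov_sq_deg):
    hours = HALF_SKY_SQ_DEG / fov_sq_deg * DWELL_HOURS
    return hours / (24 * 365)

print(f"FAST (0.16 sq. deg.): {survey_years(0.16):.1f} yr of continuous observing")
print(f"LPA  (50 sq. deg.):   {survey_years(50):.3f} yr")
```

The contrast in field of view, not raw sensitivity, is what makes a monitoring instrument like the LPA competitive for RRAT searches.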
Thus, both for the search for second-duration pulsars and for the search for RRATs, the LPA radio telescope has turned out to be surprisingly suitable, despite all its obvious disadvantages: observations in a single linear polarization (whole classes of tasks are excluded, plus a loss of sensitivity by a factor of $2^{1/2}$); a narrow full band (leading to low accuracy of DM estimation and degraded sensitivity compared to modern broadband recorders); the lack of beam steering in right ascension (leading to low accuracy in determining the pulsar period on a 3.5-minute interval, problems in obtaining timing, and making it impossible or very difficult to investigate weak sources); and the large size of a single LPA beam, $0.5 \times 1$ deg. (leading to low coordinate accuracy for detected objects).
Despite the mentioned disadvantages of the instrument, the data obtained with the LPA can be used for many scientific tasks. We list some of the planned PUMPS tasks without going into the details of their solutions. Search tasks: the search for pulsars with periods from 25 ms up to minutes, the search for RRATs and Fast Radio Bursts (FRBs), the search for pulsars in nearby galaxies, the search for pulsars with small DM down to 0 pc/cm$^3$, and the search for pulsars with sporadic radiation.
Research on the interplanetary, interstellar, and intergalactic media: pulsar variability induced by scintillations (diffraction and refraction of radio emission in different media), pulse scattering of pulsars and FRBs, Faraday rotation. Research on sources of pulsed and periodic radiation: the nature of RRATs and FRBs, statistics of pulsars with interpulses, interpulse radiation of pulsars, pulsars with fading radiation, intrinsic variability, a targeted search for gamma-ray, X-ray and other radio-quiet pulsars, the spatial distribution of pulsars in the Galaxy as a whole and for different samples, the pulse energy distribution at a frequency of 111 MHz, timing, and others.
Let us look at three examples of using the monitoring data:
- there are opposing hypotheses about the evolution of the pulsar's ``magnetic axis'' relative to its rotation axis. According to some hypotheses, over time the pulsar's ``magnetic axis'' and its rotation axis become perpendicular to each other (an orthogonal rotator); according to others, the directions of the axes converge with time (a coaxial rotator) (see \citeauthor{Arzamasskiy2017} (\citeyear{Arzamasskiy2017}) and references therein). For an orthogonal rotator the pulse occupies the smallest possible fraction of the period; for a coaxial rotator, on the contrary, the pulse occupies the largest possible fraction of the period. Pulsars with long periods are old pulsars (see, for example, the handbook by \citeauthor{Lorimer2004} (\citeyear{Lorimer2004})). Their evolution has taken longer than that of ordinary second-duration pulsars. Therefore, a simple comparison of the relative pulse duration (duty cycle), that is, the fraction of the period occupied by the pulse, between ordinary second-duration pulsars and the pulsars with extra-long periods ($>10$-$20$ s) that will be discovered in the survey should answer the question of whether radio pulsars become coaxial or orthogonal rotators by the end of their life in the active phase;
- since pulsars are formed in supernova explosions, and the exploding stars lie in the plane of the Galaxy, pulsars at birth should be located there as well. As a result of the explosion, a pulsar can acquire a velocity component perpendicular to the plane of the Galaxy and move into the halo. The pulsar lifetime in the active phase (as a radio pulsar) can be from millions to tens of millions of years, during which the pulsar can move some distance away from the Galactic plane. However, the farther from the Galactic plane, the fewer pulsars should be detected. Fig.~\ref{fig:fig2} presents histograms showing the number of pulsars and RRATs at different Galactic latitudes (at different elevations above the plane of the Galaxy). It is evident that the distribution of second-duration pulsars with small DM located in the same area where all Pushchino RRATs were detected (see https://bsa-analytics.prao.ru/en/transients/rrat/) and the distribution of the Pushchino RRATs differ in appearance. We do not discuss this difference, which can be explained within the framework of hypotheses ranging from insufficient RRAT statistics and selection effects to the discovery of relic pulsars inherited from the previous Universe (\citeauthor{Gorkavyi2021}, \citeyear{Gorkavyi2021}). We only present here one of the problems associated with the strange dependence of the RRAT distribution on Galactic latitude;
- daily observations allow us to obtain an average profile for hundreds of pulsars. The peak or integrated flux density estimated from the average profile allows us to construct the dependence of the observed flux density on time. The observed variability may be related to scintillations in the interstellar plasma. If the variability cannot be explained by the interstellar medium, it is related to internal factors. Fig.~\ref{fig:fig3} shows the ``light curve'' of the pulsar J0323+3944 (B0320+39), i.e., the changes in its flux density over time. We do not investigate here the causes of the apparent variability of J0323+3944; we only show that the task of studying variability can be solved with the PUMPS data.
The present paper does not solve the problems discussed in the examples above; that is a matter for the future. We only show the fundamental possibility of carrying out various tasks based on the data obtained in PUMPS.
\section{Conclusion}
To date, 88 pulsars have been discovered in the PUMPS survey (see https://bsa-analytics.prao.ru/en/). Having a sensitivity an order of magnitude higher than in the surveys conducted to date, we can expect the detection of more than 1,000 new pulsars. The main thing, in our opinion, is that with a radical increase in sensitivity we begin to explore the area that is called ``unknown-unknown'' in the paper \citeauthor{Wilkinson2015} (\citeyear{Wilkinson2015}). When working in this area there is no guarantee that major discoveries will be made; however, we believe that this kind of work is real academic science.
\acknowledgments
The study was supported by Russian Science Foundation grant 22-12-00236, https://rscf.ru/project/22-12-00236/.
\title{Structure and evolution of ultra-massive white dwarfs in general relativity
\thanks{The cooling sequences are publicly available at
\url{http://evolgroup.fcaglp.unlp.edu.ar/TRACKS/tracks.html}}}
\author{Leandro G. Althaus\inst{1,2}, Mar\'ia E. Camisassa\inst{3}, Santiago Torres\inst{4,5}, Tiara Battich\inst{6}, Alejandro H. C\'orsico\inst{1,2}, Alberto Rebassa-Mansergas\inst{4,5}, Roberto Raddi\inst{4,5} }
\institute{Grupo de Evoluci\'on Estelar y Pulsaciones.
Facultad de Ciencias Astron\'omicas y Geof\'{\i}sicas,
Universidad Nacional de La Plata,
Paseo del Bosque s/n, 1900
La Plata,
Argentina
\and
IALP-CCT - CONICET
\and
Applied Mathematics Department, University of Colorado, Boulder, CO 80309-0526, USA \and
Departament de F\'\i sica,
Universitat Polit\`ecnica de Catalunya,
c/Esteve Terrades 5,
08860 Castelldefels,
Spain
\and
Institute for Space Studies of Catalonia,
c/Gran Capita 2--4,
Edif. Nexus 104,
08034 Barcelona,
Spain
\and
Max-Planck-Institut f\"{u}r Astrophysik, Karl-Schwarzschild-Str. 1, 85748, Garching, Germany
}
\date{Received ; accepted }
\abstract{Ultra-massive white dwarfs ($M_{\star} \gtrsim 1.05 M_{\sun}$)
are of utmost importance in view of the role they play in type Ia
supernovae explosions, merger events, the existence of high magnetic
field white dwarfs, and the physical processes in the Super
Asymptotic Giant Branch phase.} {We present the first set of
constant rest-mass ultra-massive oxygen/neon white dwarf cooling
tracks with masses $M_{\star} > 1.29 M_{\sun}$ which fully take into
account the effects of general relativity on their structural and
evolutionary properties.}
{We have computed the full evolution sequences of 1.29, 1.31, 1.33,
1.35, and 1.369 $M_{\sun}$ white dwarfs with the La Plata stellar
evolution code, {\tt LPCODE}. For this work, the standard equations
of stellar structure and evolution have been modified to include the
effects of general relativity. Specifically, the fully general
relativistic partial differential equations governing the evolution
of a spherically symmetric star are solved in a form that resembles
the standard Newtonian equations of stellar structure. For
comparison purposes, the same sequences have been computed but for
the Newtonian case.}
{According to our calculations, the evolutionary properties of the most
massive white dwarfs are strongly modified by general relativity
effects. In particular, the resulting stellar radius is markedly
smaller in the general relativistic case, being up to 25$\%$ smaller
than predicted by the Newtonian treatment for the more massive
ones. We find that oxygen/neon white dwarfs more massive than 1.369
$M_{\sun}$ become gravitationally unstable with respect to general
relativity effects. When core chemical distribution due to phase
separation on crystallization is considered, such instability occurs
at somewhat lower stellar masses, $\gtrsim 1.36 M_{\sun}$. In
addition, cooling times for the most massive white dwarf sequences
result in about a factor of two smaller than in the Newtonian case
at advanced stages of evolution. Finally, a sample of white dwarfs
has been identified as ideal candidates to test these general
relativistic effects.}
{%
We conclude
that the general relativity effects should be taken into account for an accurate assessment of the
structural and evolutionary properties of the most massive white dwarfs. These new ultra-massive white dwarf models constitute a considerable improvement over those computed in the framework of the standard Newtonian theory of stellar interiors. }
\keywords{stars: evolution --- stars: interiors --- stars: white
dwarfs --- stars: oscillations (including pulsations) --- Physical data and processes: Relativistic processes}
\titlerunning{Relativistic ultra-massive white dwarfs}
\authorrunning{Althaus et al.}
\section{Introduction}
\label{introduction}
White dwarf stars are the most common end point of stellar
evolution. Therefore, these old stellar remnants contain valuable
information on the stellar evolution theory, the kinematics and the
star formation history of our Galaxy, and the ultimate fate of
planetary systems \citep[see][for
reviews]{2008ARA&A..46..157W,2010A&ARv..18..471A,2016NewAR..72....1G,
2019A&ARv..27....7C}. Furthermore, given the large densities that
characterize the white dwarf interiors, these compact objects are
considered reliable cosmic laboratories to study the properties of
baryonic matter under extreme physical conditions
\citep{2022FrASS...9....6I}. Among all the white dwarfs, of special
interest are the so-called ultra-massive white dwarfs, defined as
those with masses larger than $\sim 1.05 M_\odot$. Ultra-massive white
dwarfs play a key role in constraining the threshold above which stars
explode as supernovae to create neutron stars, and they are involved in
extreme astrophysical phenomena, such as type Ia supernovae,
micronovae explosions, radio transients via an
accretion-induced collapse \citep{2019MNRAS.490.1166M} as well as
stellar mergers. Ultra-massive white dwarfs constitute
also powerful tools to study the theory of high density
plasmas and general relativity.
The theoretical evolution of ultra-massive white dwarfs with masses up
to $1.29\, M_\odot$ has been studied in detail in
\cite{2019A&A...625A..87C,2022MNRAS.511.5198C}. These studies provide
white dwarf evolutionary sequences with oxygen-neon (O/Ne) and carbon-oxygen
(C/O) core-chemical composition, considering realistic initial chemical
profiles that are the result of the full progenitor evolution
calculated in \cite{2010A&A...512A..10S} and \cite{ALTUMCO2021},
respectively.
This set of ultra-massive white dwarf
evolutionary models provides an appropriate tool to study the
ultra-massive white dwarf population in our Galaxy, subject to the
condition that white dwarf masses do not exceed $1.29\, M_\odot$.
In recent years, observations of ultra-massive white dwarfs have
been reported in several studies
\citep{2004ApJ...607..982M,2016IAUFM..29B.493N,2011ApJ...743..138G,2013ApJS..204....5K,2015MNRAS.450.3966B,2016MNRAS.455.3413K,2017MNRAS.468..239C,2021MNRAS.503.5397K,Hollands2020,2021Natur.595...39C,2022MNRAS.511.5462T}. In
particular, \cite{2018ApJ...861L..13G} derived a mass of
$1.28\pm0.08\,M_{\odot}$ for the long known white dwarf GD 50. The
number of ultra-massive white dwarfs with mass determinations beyond
$1.29\, M_\odot$ is steadily increasing with recent observations.
\cite{2020MNRAS.499L..21P} discovered a rapidly-rotating ultra-massive
white dwarf, WDJ183202.83+085636.24, with $M=1.33\pm0.01\,M_{\odot}$, while
\cite{2021Natur.595...39C} reported the existence of a
highly-magnetized, rapidly-rotating ultra-massive white dwarf, ZTF
J190132.9+145808.7, with a mass of $\sim 1.327 - 1.365 \,
M_\odot$. \cite{2021MNRAS.503.5397K} studied the most massive white
dwarfs in the solar neighborhood and concluded that another 22 white
dwarfs could also have masses larger than $1.29\, M_\odot$ if they
had pure H envelopes and C/O cores. Furthermore,
\cite{2022RNAAS...6...36S} has confirmed the existence of a branch of
faint blue white dwarfs in the {\it Gaia} color magnitude diagram,
some of them also reported in \cite{Kilic2020},
which is mainly composed of
ultra-massive white dwarfs more massive than $1.29\, M_\odot$.
In addition to all these observations, gravity($g$)-mode pulsations
have been detected at least in four ultra-massive white dwarfs
\citep{1992ApJ...390L..89K,2013ApJ...771L...2H,2017MNRAS.468..239C,2019MNRAS.486.4574R}. Although
these stars have masses slightly below $1.29~M_\odot$, we expect that
more massive pulsating white dwarfs will be identified in the coming
years with the advent of huge volumes of high-quality photometric data
collected by space missions such as the ongoing {\sl TESS} mission
\citep{2015JATIS...1a4003R} and {\sl Cheops}
\citep{2018A&A...620A.203M} mission, and the future {\sl Plato} space
telescope \citep{2018EPSC...12..969P}. This wealth of photometric
data is expected to make asteroseismology a promising tool to study
the structure and chemical composition of ultra-massive white dwarfs
\citep{2019A&A...621A.100D, 2019A&A...632A.119C}. In fact, several
successful asteroseismological analyses of white dwarfs have been
carried out employing data from space thanks to the {\sl Kepler/K2}
mission \citep{2010Sci...327..977B,
2014PASP..126..398H,2020FrASS...7...47C} and {\sl TESS}
\citep{2022arXiv220303769C}.
The increasing number of detected ultra-massive white dwarfs with masses beyond
$1.29\,M_{\odot}$, as well as the immediate prospect of detecting
pulsating white dwarfs with such masses, demands appropriate new
theoretical evolutionary models to analyze them. Recently,
\cite{2021ApJ...916..119S} has studied the evolution of white dwarfs
more massive than $1.29\, M_\odot$ with the focus on neutrino cooling
via the Urca process, showing that this process is important for age
determination of O/Ne-core white dwarf stars. These models were
calculated employing the set of standard equations to solve the
stellar structure and evolution under the assumption of Newtonian
gravity. However, the importance of general relativity for the
structure of the most massive white dwarfs cannot be completely
disregarded. This was recently assessed by \cite{2018GReGr..50...38C},
who solved the general relativistic hydrostatic equilibrium equation
for a completely degenerate ideal Fermi electron gas. They demonstrate
that for fixed values of total mass, large deviations (up to 50$\%$)
in the Newtonian white dwarf radius are expected, as compared with the
general relativistic white dwarf radius. The impact of a non-ideal
treatment of the electron gas on the equilibrium structure of
relativistic white dwarfs was studied by \cite{2011PhRvD..84h4007R}
and \cite{2017RAA....17...61M}, who derived the mass-radius relations
and critical masses in the general relativity framework for white
dwarfs of different core chemical compositions. These studies conclude
that general relativistic effects are relevant for the determination
of the radius of massive white dwarfs. \cite{2014PhRvC..89a5801D} and,
more recently, \cite{2021ApJ...921..138N} have investigated the
general relativity effects in static white dwarf structures of
non-ideal matter in the case of finite temperature. While
\cite{2014PhRvC..89a5801D} focused their work on the effects of finite
temperature on extremely low-mass white dwarfs,
\cite{2021ApJ...921..138N} studied the stability of massive hot white
dwarfs against radial oscillations, inverse $\beta-$decay and
pycnonuclear reactions. They find that the effect of the temperature
is still important for determining the radius of very massive white
dwarfs.
Although several works have been devoted to the study of the effects of
general relativity on the structure of white dwarfs, none of these
works has calculated the evolution of such structures. Moreover, in
all of the works mentioned above, the white dwarf models are assumed
to be composed of a single chemical element. The exact chemical
composition determines both the mass limit of white dwarfs and the
nature of the instability (due to general-relativity effects or to
$\beta$-decays, e.g. \citealt{2011PhRvD..84h4007R}). In this paper we
compute the first set of constant rest-mass ultra-massive O/Ne white
dwarf evolutionary models which fully take into account the effects of
general relativity on their structural and evolutionary
properties. Furthermore, we consider realistic initial chemical
profiles as predicted by the progenitor evolutionary history. We
employ the La Plata stellar evolution code, {\tt LPCODE}, to compute
the full evolutionary sequence of 1.29, 1.31, 1.33, 1.35, and 1.369
$M_{\sun}$ white dwarfs. The standard equations of stellar structure
and evolution solved in this code have been modified to include the
effects of general relativity. For comparison purposes, the same
sequences have been computed but for the Newtonian gravity case.
We assess the resulting cooling
times and provide precise time dependent mass-radius relations for
relativistic ultra-massive white dwarfs. We also provide magnitudes
in Gaia, Sloan Digital Sky Survey and Pan-STARRS passbands, using the
model atmospheres of \cite{2010MmSAI..81..921K,2019A&A...628A.102K}.
This set of cooling sequences, together with the models calculated in
\cite{2019A&A...625A..87C} and \cite{2022MNRAS.511.5198C}, provide a
solid theoretical framework to study the most massive white dwarfs in
our Galaxy.
This paper is organized as follows. In Sect. \ref{equations}
we describe the modifications to our code to incorporate the effects of general relativity.
In Sect. \ref{models} we detail the main constitutive physics
of our white dwarf sequences. Sect. \ref{results} is devoted to describing the impact of general relativistic effects on the relevant evolutionary properties of our massive white dwarfs. In this section we also compare and discuss the predictions of our new white dwarf sequences with observational data on ultra-massive white dwarfs, in particular with the recently reported faint blue branch of ultra-cool, ultra-massive objects revealed by the {\it Gaia} space mission.
Finally, in Sect. \ref{conclusions} we summarize the main findings of the paper.
\section{The equations of stellar structure and evolution in general relativity}
\label{equations}
Our set of ultra-massive O/Ne white dwarf evolutionary sequences has been computed with
the stellar evolution code {\tt LPCODE} developed by La Plata group, which has been widely used
and tested in numerous stellar evolution contexts of low-mass stars and particularly in white dwarf stars \citep[see][for details]
{2003A&A...404..593A,2005A&A...435..631A, 2013A&A...555A..96S, 2015A&A...576A...9A,
2016A&A...588A..25M,2020A&A...635A.164S,
2020A&A...635A.165C}. For this work, the stellar structure and evolution equations have
been modified to include the effects of general relativity, following the formalism given in \cite{1977ApJ...212..825T}. Within this formalism, the fully general relativistic partial differential equations governing the evolution of a spherically symmetric star are presented in a way that resembles the standard Newtonian equations of stellar structure \citep{2012sse..book.....K}. Specifically, the structure and evolution of the star are specified by the Tolman-Oppenheimer-Volkoff (TOV) equation of hydrostatic equilibrium, the equation of mass distribution, the luminosity equation, and the energy transport equation:
\begin{equation}
\frac{\partial P}{\partial m}= -\frac{G m}{4 \pi r^4}\ \mathscr{H G V} \ ,
\label{TOV}
\end{equation}
\begin{equation}
\frac{\partial r}{\partial m}= (4 \pi r^2 \varrho\ \mathscr{V})^{-1} \ ,
\label{MD}
\end{equation}
\begin{equation}
\frac{1}{\mathscr{R}^2}\frac{\partial (L \mathscr{R}^2)}{\partial m}= -\varepsilon_\nu - \frac{1}{\mathscr{R}}
{\frac{\partial u} {\partial t}} + \frac{1}{\mathscr{R}} {\frac{P} {\varrho^2}\frac{\partial \varrho} {\partial t}}\ ,
\label{lumistandard}
\end{equation}
\begin{equation}
\frac{\partial (T \mathscr{R})}{\partial m}= -\frac{3}{64 \pi^2 ac}\frac{\kappa L}{r^4 T^3}\mathscr{R} \qquad {\rm if} \quad \nabla_{\rm rad} \leq \nabla_{\rm ad} \ ,
\label{T_rad}
\end{equation}
\begin{equation}
\frac{\partial \rm{ln} T}{\partial m}= \nabla\ \frac{\partial \rm {ln} P}{\partial m} \qquad {\rm if} \quad \nabla_{\rm rad} > \nabla_{\rm ad} \ ,
\label{T_conv}
\end{equation}
where $t$ is the Schwarzschild time coordinate, $m$ is the rest mass inside a radius $r$ or baryonic mass, i.e., the mass of one hydrogen atom in its ground state
multiplied by the
total number of baryons inside $r$, and $\varrho$ is the density of rest mass. During the entire cooling process, the
total baryonic mass remains constant. $c$ is the speed of light, $u$ is the internal energy per unit mass, and $\varepsilon_\nu$ is the energy lost by neutrino emission per unit mass. $\mathscr{H, G, V,}$ and $\mathscr{R}$ are the dimensionless general relativistic correction factors, which turn to unity in the Newtonian limit. These factors correspond, respectively, to
the enthalpy, gravitational acceleration, volume, and redshift correction factors, and are given by
\begin{align}
\mathscr{H} &= \frac{\varrho^t}{\varrho} + \frac{P}{\varrho c^2},\\
\mathscr{G} &= \frac{ m^t + 4 \pi r^3 P/c^2} {m}, \\
\mathscr{V} &= \left(1 - \frac{ 2 G m^t}{ r c^2}\right)^{-1/2}, \\
\mathscr{R} &= e^{\Phi/c^2},
\label{R-fact}
\end{align}
\noindent where $m^t$ is the mass-energy inside a radius $r$ and includes contributions from the
rest-mass energy, the internal energy, and the gravitational potential energy, which is negative. $\varrho^t$
is the density of total nongravitational mass-energy, and includes the density of rest mass plus contributions from
kinetic and potential energy density due to particle interactions (it does not include the gravitational potential energy density), that is $\varrho^t= \varrho + (u \varrho)/ c^2 $.
Since the internal and
gravitational potential energy change during the course of evolution, the stellar
mass-energy is not a conserved quantity. $\Phi$ is the general
relativistic gravitational potential related to the temporal metric coefficient. At variance with
the Newtonian case, the gravitational potential
appears explicitly in the evolution equations. We note that
the TOV hydrostatic equilibrium equation differs
markedly from its Newtonian counterpart, providing a steeper
pressure gradient. We also note that the presence of $\mathscr{V}$ in that equation prevents $m^t$ from exceeding $rc^2/2G$.
The radiative gradient $\nabla_{\rm rad}$ is given by
\begin{equation}
\nabla_{\rm rad} = \frac{3}{16 \pi ac G}\frac{\kappa L P}{m T^4}\frac{1}{\mathscr{H G V}} + \left(1 - \frac{\varrho^t/\varrho}
{\mathscr{H}} \right) \ .
\end{equation}
In Eq. (\ref{T_conv}), $\nabla$ is the convective temperature gradient, which, in the present work, is given by the solution of the mixing length
theory. We mention that in ultra-massive white dwarfs the occurrence of convection is restricted exclusively to a very narrow
outer layer\footnote{This may not be true if neutrino cooling via the
Urca process is considered, in which case an inner convection zone is expected, see \cite{2021ApJ...916..119S}.}, being mostly adiabatic. We follow \cite{1977ApJ...212..825T} to generalize the mixing length
theory to general relativity. In Eq. (\ref{lumistandard}) we have omitted the energy generation by nuclear reactions, since no nuclear burning occurs in our models. However, this term should be added when Urca processes are taken into account.%
To solve Eqs. (\ref{TOV})-(\ref{T_conv}) we need two additional equations that relate $m^t$ and $\Phi$ with $m$. These two equations, which
are not required in the Newtonian case, have to be solved simultaneously with Eqs. (\ref{TOV})-(\ref{T_conv}). These extra equations are given
by \citep[see][]{1977ApJ...212..825T}
\begin{equation}
\frac{\partial m^t }{\partial m}= \frac{\varrho^t}{\varrho} \frac{1}{\mathscr{V}}\ ,
\label{grav_mass}
\end{equation}
\begin{equation}
\frac{\partial \Phi}{\partial m}= \frac{G m}{4 \pi r^4 \varrho}\ \mathscr{G V} \ .
\label{phi}
\end{equation}
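To make the contrast between Eq.~(\ref{TOV}) and its Newtonian limit concrete, the following sketch Euler-integrates both forms of hydrostatic equilibrium for an ideal, fully degenerate electron gas with $\mu_e = 2$. This is an illustration only, not the paper's method: {\tt LPCODE} solves the full set of equations above with detailed microphysics, whereas here the rest-mass density also stands in for the total mass-energy density (a cold-gas simplification).

```python
import math

# CGS constants (assumed illustrative values)
G, C, MSUN = 6.674e-8, 2.998e10, 1.989e33
A = 6.003e22   # pressure scale of an ideal degenerate electron gas [dyn cm^-2]
B = 1.948e6    # rest-mass density rho = B * x^3 for mu_e = 2 [g cm^-3]

def pressure(x):
    """Chandrasekhar pressure; x = p_F / (m_e c)."""
    return A * (x * (2 * x**2 - 3) * math.sqrt(1 + x**2) + 3 * math.asinh(x))

def dpressure_dx(x):
    return 8 * A * x**4 / math.sqrt(1 + x**2)

def structure(rho_c, tov, dr=1e5):
    """Euler-integrate hydrostatic equilibrium outward from the center.

    Returns (mass in Msun, radius in km); tov=False gives the Newtonian
    limit, i.e. the correction factors H = G = V = 1.
    """
    x = (rho_c / B) ** (1 / 3)
    r = dr
    m = 4 / 3 * math.pi * r**3 * rho_c
    while x > 1e-3:
        rho, P = B * x**3, pressure(x)
        if tov:
            # TOV pressure gradient; rest-mass density approximates the
            # total mass-energy density (cold-gas simplification)
            dPdr = (-G * (m + 4 * math.pi * r**3 * P / C**2)
                    * (rho + P / C**2)
                    / (r**2 * (1 - 2 * G * m / (r * C**2))))
        else:
            dPdr = -G * m * rho / r**2
        x += dPdr / dpressure_dx(x) * dr
        m += 4 * math.pi * r**2 * rho * dr
        r += dr
    return m / MSUN, r / 1e5

m_n, r_n = structure(1e10, tov=False)
m_t, r_t = structure(1e10, tov=True)
print(f"Newtonian: M = {m_n:.3f} Msun, R = {r_n:.0f} km")
print(f"TOV:       M = {m_t:.3f} Msun, R = {r_t:.0f} km")
```

At a fixed central density near $10^{10}$ g cm$^{-3}$, the steeper TOV pressure gradient yields a smaller radius and mass than the Newtonian integration, in line with the radius reductions discussed in this paper.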
\subsection{Boundary conditions}
The rest mass, total mass-energy, and radius of the star correspond, respectively, to the values of $m$, $m^t$, and $r$ at the surface of the star. We denote them by
\begin{equation}
M_{\rm WD}=m \ , \qquad M_{\rm G}=m^t \ , \qquad R=r \qquad {\rm at \ the \ surface} \ .
\end{equation}
$M_G$ is the total gravitational mass, i.e., the stellar mass that would be measured by a distant observer, which turns out to be
less than the total baryonic mass of the white dwarf.
Outer boundary conditions for our evolving models are provided
by the integration of
\begin{equation}
\frac{d P}{d \tau}= \frac{g^t}{\kappa} \ ,
\label{atm}
\end{equation}
\noindent and assuming a gray model atmosphere. Here, $\tau$ is the optical depth and $g^t$ is the ``proper'' surface gravity of the star (as measured at the
surface), corrected by general relativistic effects and given by
\begin{equation}
g^t= \frac{G M_G}{R^2} \mathscr{V} \ .
\label{grav}
\end{equation}
In addition, the general relativistic metric for spacetime in the star interior must match to the metric outside created by the star (Schwarzschild
metric). The match requires that $\Phi$ satisfies the surface boundary condition
\begin{equation}
\Phi= \frac{1}{2} c^2 \ln \left(1-\frac{2 G M_G }{R c^2} \right)\qquad {\rm at} \quad m=M_{\rm WD} \ .
\label{phi_sup}
\end{equation}
At the stellar center, $m=0$, we have $m^t=0$, $r=0$, and $L=0$.
\section{Initial models and input physics}
\label{models}
We have computed the full evolution of 1.29, 1.31, 1.33, 1.35, and
1.369 $M_{\sun}$ white dwarfs assuming the same O/Ne core abundance
distribution for all of them. The adopted core composition
corresponds to that of the 1.29 $M_{\sun}$ hydrogen-rich white dwarf
sequence considered in \cite{2019A&A...625A..87C}, which has been
derived from the evolutionary history of a 10.5 $M_{\sun}$ progenitor
star \citep{2010A&A...512A..10S}. In this work, we
restrict ourselves to O/Ne-core massive white dwarfs, thus extending
the range of O/Ne white dwarf sequences already computed in
\cite{2019A&A...625A..87C} in the framework of the Newtonian theory of
stellar interiors. O/Ne-core white dwarfs are expected to form as a result of
semi-degenerate carbon burning during the single-star evolution of
progenitor stars that evolve to the Super Asymptotic Giant Branch
\citep{1997ApJ...485..765G,2005A&A...433.1037G,2006A&A...448..717S,2010MNRAS.401.1453D,2011MNRAS.410.2760V}.
Recent calculations of the remnant of a double white dwarf merger also
predict O/Ne core composition as a result of off-center carbon burning
in the merged remnant, when the remnant mass is larger than 1.05
M$_\sun$ \citep[see][]{2021ApJ...906...53S}. In particular, it is
thought that a considerable fraction of the massive white dwarf
population is formed as a result of stellar mergers
\citep{2020A&A...636A..31T,2020ApJ...891..160C,2022MNRAS.511.5462T}.
We note, however, that the existence of ultra-massive white dwarfs with
C/O cores resulting from single-star evolution cannot be
ruled out \citep[see][]{2021A&A...646A..30A, 2022arXiv220202040W}.
The adopted input physics for our relativistic white dwarf models is the
same as that in \cite{2019A&A...625A..87C}. In brief, the equation of
state for the low-density regime is that of
\cite{1979A&A....72..134M}, and that of \cite{1994ApJ...434..641S} for
the high-density regime, which takes into account all the important
contributions for both the solid and liquid phases. We include
neutrino emission for pair, photo, and Bremsstrahlung processes using
the rates of \cite{1996ApJS..102..411I}, and of \cite{1994ApJ...425..222H}
for plasma processes. The energetics
resulting from crystallization processes in the core have been
included as in \cite{2019A&A...625A..87C}, based on the
two-component phase diagram of dense O/Ne mixtures appropriate for
massive white dwarf interiors \citep{2010PhRvE..81c6107M}. As shown
by \cite{2021ApJ...919...87B}, $^{23}$Na and $^{24}$Mg impurities have
only a negligible impact on the O/Ne phase diagram and the
two-component O/Ne phase diagram can be safely used to assess the
energetics resulting from crystallization. We have not
considered the energy released by the $^{22}$Ne sedimentation process,
since it is negligible in O/Ne white dwarfs \citep{2021A&A...649L...7C}.
\section{General relativity effects on the evolution of massive white dwarfs}
\label{results}
\begin{table*}[t]
\centering
\begin{tabular}{lccccccc}
\hline
\hline \\[-4pt]
$M_{\rm WD}$ & $M_{\rm G}$ & $R^{\rm Newt}$ & $R^{\rm GR}$ & log g$^{\rm Newt}$ & log g$^{\rm GR}$ & $\varrho_c^{\rm Newt}$ & $\varrho_c^{\rm GR}$\\
$M_\odot$ & $M_\odot$ & km & km & cm s$^{-2}$ & cm s$^{-2}$ & g cm$^{-3}$ & g cm$^{-3}$ \\
\hline \\[-4pt]
1.29 & 1.28977 & 2685.40 & 2608.86 & 9.375 & 9.401 & $ 6.71\times 10^{8}$ & $7.51 \times 10^{8}$ \\
1.31 & 1.30976 & 2426.04 & 2326.17 & 9.470 & 9.507 & $ 9.98 \times 10^{8}$ & $1.17 \times 10^{9}$ \\
1.33 & 1.32974 & 2156.90 & 2004.60 & 9.579 & 9.643 & $ 1.57 \times 10^{9}$ & $2.06 \times 10^{9}$ \\
1.35 & 1.34972 & 1829.29 & 1542.51 & 9.728 & 9.878 & $ 2.90 \times 10^{9}$ & $5.36 \times 10^{9}$ \\
1.369 & 1.36871 & 1408.77 & 1051.16 & 9.961 & 10.217 & $7.42 \times 10^{9}$ & $2.11 \times 10^{10}$ \\
\hline
\\
\end{tabular}
\caption{Relevant characteristics of our sequences at $T_{\rm eff}=10{,}000$ K. $M_{\rm WD}$: total baryonic
mass. $M_{\rm G}$: total gravitational mass. $R^{\rm Newt}$: stellar radius in the Newtonian case. $R^{\rm GR}$:
stellar radius in the general relativity case. g$^{\rm Newt}$: surface gravity in the Newtonian case. g$^{\rm GR}$: surface
gravity in the general relativity case. $\varrho_c^{\rm Newt}$: central density of rest mass in the Newtonian case. $\varrho_c^{\rm GR}$:
central density of rest mass in the general relativity case. }
\label{table1}
\end{table*}
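As a quick consistency check of Table \ref{table1} (our own illustration, not part of {\tt LPCODE}), the tabulated surface gravities of the 1.369 $M_{\sun}$ sequence can be recomputed from $M_{\rm G}$ and $R$ using $g=GM_{\rm G}/R^2$ in the Newtonian case and Eq. (\ref{grav}), $g^t=(GM_{\rm G}/R^2)\,\mathscr{V}$ with $\mathscr{V}=(1-2GM_{\rm G}/Rc^2)^{-1/2}$, in the general relativity case; the constants below are standard cgs values, not necessarily the exact ones adopted in the code:

```python
import math

# Recompute log g for the 1.369 Msun row of Table 1 from M_G and R.
G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33   # standard cgs values (assumed)
M_G = 1.36871 * M_sun                        # gravitational mass (Table 1)
R_newt = 1408.77e5                           # Newtonian radius [cm]
R_gr = 1051.16e5                             # general relativity radius [cm]

log_g_newt = math.log10(G * M_G / R_newt**2)
V = (1.0 - 2.0 * G * M_G / (R_gr * c**2))**-0.5   # surface value of factor V
log_g_gr = math.log10(G * M_G / R_gr**2 * V)
```

Both values reproduce the tabulated 9.961 and 10.217 to within 0.01 dex; note that the factor $\mathscr{V}\simeq 1.002$ contributes only $\sim 0.001$ dex, so most of the difference in $\log g$ comes from the smaller relativistic radius.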
Here, we describe the impact of general relativity effects on the
relevant properties of our constant rest-mass evolutionary tracks. We
begin by examining Fig. \ref{factors}, which displays the general
relativistic correction factors $\mathscr{H, G, V}$, and $\mathscr{R}$
(black, blue, red, and pink lines, respectively) in terms of the
fractional radius for the 1.29, 1.33, 1.35, and 1.369$M_{\sun}$ white
dwarf models at log $L/L_{\sun}=-3$. Dashed lines in the
bottom right panel illustrate
the run of the same factors for a 1.369$M_{\sun}$ white dwarf model at
log $L/L_{\sun}=-0.4$ (log $T_{\rm eff}=5$).
We recall that these
factors are unity in the Newtonian limit. As expected, the importance
of general relativistic effects increases as the stellar mass is
increased. We note that $\mathscr{V}$ is unity at the center and
attains a maximum value at some inner point in the star. The
relativistic factor $\mathscr{R}$ decreases towards the center,
departing even more from unity, while the other factors,
$\mathscr{G}$ and $\mathscr{H}$, increase towards the center of the
star. The behavior of the relativistic correction factors can be
traced back to curvature effects, as well as to the fact that the
pressure and the internal energy act as sources of gravity in general
relativity. To maintain hydrostatic equilibrium, both the density and
pressure gradients are therefore steeper than in Newtonian
gravity. This causes the factors $\mathscr{G}$ and $\mathscr{H}$,
which depend directly on density and pressure, to increase towards the
center of the star. The relativistic factor $\mathscr{V}$, which can
be interpreted as a correction to the volume, is unity at the center
of the star, where the volume vanishes, and grows outward because the
density is higher in general relativity than in Newtonian
gravity. However, since the departures from the Newtonian case
decrease towards the surface of the star, $\mathscr{V}$ decreases
outward there, reaching a maximum somewhere in between. We note that
the relativistic factors depend only slightly on the effective
temperature.
The impact of relativistic effects on the mass-radius relation
at two different effective temperatures
can be appreciated in Fig.\,\ref{mr}. We note that for
the most massive white dwarfs, at a given
gravitational mass, the radius is markedly smaller in the case that
the general relativity effects are taken into account.
At a stellar mass of 1.369 $M_{\sun}$ the
stellar radius becomes only 1050 km, 25\% smaller than predicted by the
Newtonian treatment (see Table \ref{table1}). As in the Newtonian
case, the effect of finite temperature on the stellar radius is still
relevant in very massive white dwarfs. We mention that general relativistic
corrections become negligible for stellar masses smaller than
$\approx 1.29\, M_{\sun}$. In particular, for stellar masses below that value,
the stellar radius is less than 2\% smaller
when general relativity effects are taken into account.
In our calculations, O/Ne white
dwarfs more massive than 1.369 $M_{\sun}$ become gravitationally
unstable (which occurs at a given finite central density) with respect
to general relativity effects, in agreement with the findings for
zero-temperature models reported in \cite{2011PhRvD..84h4007R} for a
pure-oxygen white dwarf (1.38024 $M_{\sun}$) and
\cite{2017RAA....17...61M} for white dwarfs composed of oxygen (1.3849
$M_{\sun}$) or of neon (1.3788 $M_{\sun}$), although their values are
slightly higher\footnote{Preliminary computations we performed for
oxygen-rich core white dwarfs show that they become
unstable at 1.382 $M_{\sun}$.}. We mention that for the 1.369
$M_{\sun}$ white dwarf model, the central density in the
general relativity case reaches $2.11 \times 10^{10}$ g cm$^{-3}$ (see
Table \ref{table1}). Such a density is close to the threshold for
inverse $\beta$-decays. We have not considered the possibility that
matter inside our white dwarf models becomes unstable against inverse
$\beta$-decay. O or Ne white dwarfs are expected to become unstable
against the inverse $\beta$-decay process at a stellar mass near the
critical mass resulting from general relativity effects, of the order
of 1.37 $M_{\sun}$ \citep[see][]{2011PhRvD..84h4007R,
2017RAA....17...61M}.
The inner profiles of the rest mass and of the rest-mass density for
the 1.369$M_{\sun}$ white dwarf model in the general relativity and
Newtonian cases are shown in the upper and bottom panels of
Fig. \ref{mass-density}, respectively. For such a massive white dwarf
model, general relativity effects strongly alter the stellar
structure, causing matter to be much more concentrated toward the
center of the star and the central density to be larger than in the
Newtonian case. The impact remains noticeable towards lower stellar
masses, although to a lesser extent, as can be noted for the case of
1.35$M_{\sun}$ white dwarf model shown in the bottom panel of
Fig. \ref{mass-density} (dotted lines). In view of this, the run of
the gravitational field versus radial coordinate for the general relativity
case differs markedly from that resulting from the Newtonian
case. This is shown in Fig. \ref{gravity} for 1.369$M_{\sun}$,
1.35$M_{\sun}$, and 1.29$M_{\sun}$ white dwarf models. In particular,
the gravitational field in the general relativistic case is given by
\begin{equation}
g^{\rm GR}= \frac{G m}{r^2} \mathscr{G} \mathscr{V}^2 \ .
\label{grav_sup}
\end{equation}
Clearly, the gravitational field in the most massive of our models is
strongly affected by general relativity. In the stellar interior,
large differences arise in the gravitational field due to the
inclusion of general relativity effects. We note that such differences
do not arise from the relativistic correction factors $\mathscr{G}
\mathscr{V}^2 $ (see Fig. \ref{factors}) to the Newtonian
gravitational field $g^{\rm Newt}= G m /r^2$ that appear explicitly in
Eq. (\ref{grav_sup}), but rather from the solution of the relativistic
equilibrium equations, which yields a different run of $m(r)$ compared
to the Newtonian case.
Additionally, the surface gravity and stellar radius are
affected by the effects of general relativity. These quantities are
shown in Fig. \ref{grav-surface} in terms of the effective temperature
for all of our sequences for the general relativity and Newtonian
cases, using solid and dashed lines, respectively. In the most massive
sequences, general relativity effects markedly alter the surface
gravity and stellar radius. In this sense, we infer that
general relativity effects lead to stellar masses about
0.015$M_{\sun}$ smaller for cool white dwarfs with measured surface
gravities of $\log g \approx 10$. The photometric measurements of
\cite{2021MNRAS.503.5397K} for the radii of the ultra-massive white
dwarfs in the solar neighborhood are also plotted in this figure. For
the most massive of these white dwarfs, the stellar radius is
2.8-4\% smaller when general relativity effects are taken into
account.
We note that most of our sequences display a sudden increase in their
surface gravity at high effective temperatures. As noted in
\cite{2019A&A...625A..87C}, this is related to the onset of core
crystallization (marked with blue filled circles in each sequence
depicted in Fig. \ref{grav-surface}), which modifies the distribution
of $^{16}$O and $^{20}$Ne. Specifically, the abundance of $^{20}$Ne
increases in the core of the white dwarf as crystallization proceeds,
leading to larger Coulomb interactions and hence to denser cores, and,
therefore, to higher surface gravities. This behavior can also be
regarded as a sudden radius decrease (bottom panel of
Fig. \ref{grav-surface}). In this context, we note that the density
increase due to the increase in the core abundance of $^{20}$Ne during
crystallization eventually causes O/Ne white dwarf models with stellar
masses larger than $\gtrsim 1.36 M_\sun $ to become gravitationally
unstable against general relativity effects. In order to explore the
mass range of stable white dwarfs in the absence of this process,
the 1.369$M_{\sun}$ relativistic sequence was computed disregarding
the effect of phase separation (but not latent heat) during
crystallization.
\subsection{General relativity effects on the white dwarf cooling times}
The cooling properties of the ultra-massive white dwarfs are also
markedly altered by general relativity effects, in particular for
the most massive ones. This is illustrated in Fig. \ref{age},
which compares the cooling times of our models for the general
relativity and Newtonian cases, solid and dashed lines
respectively. The cooling times are set to zero at the beginning of
cooling tracks at very high effective temperatures. Gravothermal
energy is the main energy source of the white dwarfs, except at very
high effective temperatures where energy released during the
crystallization process contributes to the energy budget of the
star. As noted in \cite{2021A&A...649L...7C}, ultra-massive O/Ne-core
white dwarfs evolve to faint magnitudes remarkably
fast. General relativity effects cause
ultra-massive white dwarfs to evolve faster than in the Newtonian
case at advanced stages of evolution. In particular, the $1.369
M_\sun$ relativistic sequence reaches $\log(L/L_\sun)=-4.5$ in only
$\sim 0.5$ Gyr, in contrast with the $\sim 0.9$ Gyr needed
in the Newtonian case. The larger internal
densities induced by general relativity make the Debye cooling phase
more relevant than in the Newtonian case at a given stellar mass, thus
resulting in faster cooling for the sequences that include general
relativity effects.
The fast cooling of these
objects, together with their low luminosity and low formation rates,
makes them hard to observe. The trend in
the cooling behavior is reversed at
earlier stages of evolution, where white dwarfs computed in the
general relativity case evolve slower than their Newtonian
counterparts. This is because white dwarfs computed in the general
relativity case crystallize at higher luminosities (because of their
larger central densities), with the consequent increase in the cooling
times at those stages. In the 1.369$M_{\sun}$ relativistic sequence,
the overall impact of crystallization on the cooling times is smaller,
because we neglect the process of phase separation during
crystallization in that sequence.
We mention that we neglect the neutrino emission resulting from the
Urca process, which is relevant in O/Ne white dwarfs at densities in
excess of $10^{9}$ g cm$^{-3}$ \citep{2021ApJ...916..119S}. In our
modeling, such densities are attained in models with stellar masses
$\gtrsim 1.33 M_\sun$ (see Table \ref{table1}). Hence, the depicted
cooling times for the sequences with stellar masses above this value
may be overestimated at high and intermediate luminosities. A first
attempt to include the Urca cooling process from the
$^{23}$Na-$^{23}$Ne Urca pair in our stellar code
leads to the formation of a mixing region below the Urca shell, as
reported by \cite{2021ApJ...916..119S}. Because of the
temperature inversion caused by the Urca process, our most massive
white dwarf models develop off-centered crystallization.
We encountered numerical difficulties in modeling the interaction
between crystallization and the Urca-process-induced mixing, which
prevent a consistent computation of white
dwarf cooling during these stages. As recently shown by
\cite{2021ApJ...916..119S}, the cooling of such massive white dwarfs
is dominated by neutrino cooling via the Urca process during the first
100 Myr after formation. Our focus in
this work is on the effects of general relativity on
ultra-massive white dwarfs, so we leave the problematic treatment of
Urca-process impacts on the structure of relativistic white dwarfs for an
upcoming work.
\subsection{Observational constraints on ultra-massive white dwarf models}
The ESA {\it Gaia} mission has provided an unprecedented wealth of
information about stars \citep[see][and references therein]{GaiaEDR32021}. In
particular, nearly 359,000 white dwarf candidates have been
detected \citep{Fusillo2021}, and it is estimated that the sample
within 100 pc of the Sun can be considered practically complete
\citep{Jimenez2018}. The extreme precision of the astrometric and
photometric measurements allows us to derive accurate color-magnitude
diagrams with which to test our models. Some unexpected peculiar
features have already been observed in the {\it Gaia} white dwarf
color-magnitude diagram \citep{GaiaDR22018}.
In particular, the Q branch, due to crystallization and sedimentation
delays, has been extensively analyzed
\citep{Cheng2019,2019Natur.565..202T,2021A&A...649L...7C}. However, a
new branch, called the faint blue branch, has been reported by
\cite{2022RNAAS...6...36S}. This faint blue branch is formed by nearly
$\sim$60 ultracool and ultra-massive objects, which have been
astrometrically and photometrically verified and cross-validated with
the {\it Gaia} catalogue of nearby stars \citep{GaiaNSC2021} and the
white dwarf catalogue of \cite{Fusillo2021}. It is also important to
mention that some of the objects that form this peculiar feature in
the color-magnitude diagram have already been reported \citep[][and
references therein]{Kilic2020}. Most of these white dwarfs exhibit a
near-infrared flux deficit that has been attributed to the effects of
molecular collision-induced absorption in mixed hydrogen-helium
atmospheres \citep{Bergeron2022}. Some issues remain to be
clarified under this assumption, and not all the objects in
\cite{2022RNAAS...6...36S} are present in the analysis of
\cite{Bergeron2022}. Consequently, for our purpose here, which is not
in contradiction with the analysis of \cite{Bergeron2022}, we
adopted pure-hydrogen atmosphere models for the analysis of the whole
\cite{2022RNAAS...6...36S} sample, with particular objects
treated individually.
In the left panel of Fig. \ref{gaia} we show a color-magnitude diagram
for the 100 pc white dwarf {\it Gaia} EDR3 population (gray dots)
together with the faint blue branch objects from
\cite{2022RNAAS...6...36S} (solid red circles). The selected
color-magnitude diagram is absolute $G$ magnitude versus $G_{\rm BP}-G$,
instead of $G_{\rm BP}-G_{\rm RP}$, thereby minimizing the
larger errors induced by the $G_{\rm RP}$ filter for faint objects.
We also provide the magnitudes for our relativistic and Newtonian
models (black and cyan lines, respectively) in {\it Gaia} EDR3 passbands (DR2,
Sloan Digital Sky Survey, Pan-STARRS and other passbands are also
available upon request) by using the non-gray model
atmospheres of
\cite{2010MmSAI..81..921K,2019A&A...628A.102K}. Isochrones of 0.25,
0.5, 1, and 2 Gyr for our relativistic model are also shown (dashed
black lines) in Fig. \ref{gaia}. An initial inspection of the {\it
Gaia} color-magnitude diagram reveals that our new white dwarf
sequences are consistent with most of the ultra-massive white dwarfs
within 100 pc from the Sun.
In addition, the relativistic white dwarf sequences are fainter than
Newtonian sequences with the same mass. Therefore, general relativity
effects must be carefully taken into account when determining the mass
and stellar properties of the most massive white dwarfs through {\it
Gaia} photometry. Not considering such effects would lead to an
overestimation of their mass and an incorrect estimation of their
cooling times. Finally, we check that faint-blue branch objects do not
follow any particular isochrone, thus ruling out a common temporal
origin of these stars.
A closer look at the faint blue branch is depicted in the right panel of Fig. \ref{gaia}. The vast majority of faint blue branch white dwarfs appear to have masses larger than $\sim 1.29\, M_\odot$. Thus, this sample is ideal for testing our models, in particular those objects which present the largest masses or, equivalently, the smallest radii. Hence, for the analysis presented here and for reasons of completeness, we estimated the error bars for those objects which lie to the left of the Newtonian 1.369 $M_{\sun}$ track. Errors are propagated from the astrometric and photometric errors provided by {\it Gaia} EDR3. Although correlations in {\it Gaia} photometry are very low, we have assumed that some correlation may exist between parameters. In this way errors are added linearly and not in quadrature, thus obtaining an upper-limit estimate of the error bars. The parameters corresponding to the 20 selected ultra-massive white dwarf candidates of the faint blue branch are presented in Table \ref{t:FBcandidates}. In the first column we list the {\it Gaia} EDR3 source ID with a label for easy identification in Fig. \ref{gaia}. The second to fifth columns list the parallax, the apparent and absolute $G$ magnitudes, and the color $G_{\rm BP}-G$, with their corresponding errors. The sixth and seventh columns give the observational distance within the color-magnitude diagram, measured in $\sigma$ deviations from the limiting 1.369$M_{\sun}$ cooling track, when the general relativity model or the Newtonian model, respectively, is used. Finally, the last column is a five-digit flag. The first digit indicates whether the relative flux error in the G$_\mathrm{BP}$ band is larger than or equal to 10\% (1) or smaller (0).
The second digit indicates whether the relative flux error in the G$_\mathrm{RP}$ band is larger than or equal to 10\% (1) or smaller (0).
The third digit indicates whether the $\beta$ parameter defined by \citet{Riello2021} is $\geq 0.1$ (1) or $<0.1$ (0); if 1, the object is affected by blending. The fourth digit is set to (0) if the renormalized unit weight error, ruwe \citep{Lindegren2018}, is $<1.4$ (indicating that the astrometric solution corresponds to a single object) or to (1) if it is $\geq 1.4$ (bad solution or binary system). The fifth digit indicates whether the object passes (0) or not (1) a 5$\sigma$ cut on the corrected G$_\mathrm{BP}$ and G$_\mathrm{RP}$ flux excess ($C^{*}$; \citealt{Riello2021}).
An ideal case will show a 00000 flag.
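The linear (upper-limit) error budget described above can be checked against the tabulated values. A minimal sketch using object $a$ of Table \ref{t:FBcandidates}, with the absolute magnitude obtained from the parallax $\varpi$ (in mas) via $M_G = G + 5\log_{10}\varpi - 10$:

```python
import math

# Linear vs. quadrature propagation of sigma_MG for object (a):
# M_G = G + 5 log10(varpi) - 10, varpi in mas.
G_mag, sigma_G = 20.275, 0.005        # apparent G magnitude and its error
plx, sigma_plx = 11.717, 0.592        # parallax [mas] and its error

M_G = G_mag + 5.0 * math.log10(plx) - 10.0
dM_dplx = 5.0 / (math.log(10.0) * plx)            # |dM_G/dvarpi|
sigma_lin = sigma_G + dM_dplx * sigma_plx         # linear addition (as in the text)
sigma_quad = math.sqrt(sigma_G**2 + (dM_dplx * sigma_plx)**2)
```

Adding the two contributions linearly reproduces the tabulated $15.619 \pm 0.115$, whereas adding them in quadrature would give a smaller $\sigma \simeq 0.110$, confirming that the quoted error bars are upper limits.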
The detailed analysis of the color-magnitude distance to the limiting 1.369$M_{\sun}$ relativistic and Newtonian tracks, shown in the sixth and seventh columns, respectively, indicates that, on average, the selected faint blue branch objects are more compatible with the general relativistic model than with the Newtonian model. Six of them, $\{a,b,c,f,g,m\}$, lie below the limiting 1.369$M_{\sun}$ relativistic track while being compatible at the $1\sigma$ level with the Newtonian model. Moreover, up to four objects, $\{h,j,n,s\}$, are compatible with the relativistic model at the $1\sigma$ level, but only marginally, at the $2\sigma$ level, with the Newtonian model. In particular, objects $\{j,s\}$ are ideal candidates to confirm relativistic models, given that they present a 00000 flag, indicative of reliable photometry and astrometry. The rest of the objects, $\{d,i,k,o,p,r,t\}$, lie at a distance of $2\sigma$ or $3\sigma$ (the last two) from the relativistic model, but at larger distances from the Newtonian model (up to $4\sigma$). According to our study, objects with such small radii or, equivalently, such large masses should be unstable against gravitational collapse. However, any conclusion on this should be taken with caution. On the one hand,
although some of these objects belong to the sample analyzed by \cite{Bergeron2022} ($d$, J1612$+$5128; $j$, J1251+4403, also named WD1248+443 \citep{Harris2008}; $o$, J1136$-$1057; and $s$, J0416$-$1826) and some near-infrared flux deficit has been reported for them, a more detailed spectroscopic analysis of all of our candidates is warranted for a precise mass and radius estimation.
On the other hand, the presence of strong internal magnetic fields or rapid rotation, not considered in this paper, could allow these objects to support their enormous gravity. It has been shown, in the general relativity framework, that including strong magnetic fields and/or rapid rotation could lead to a smaller radius and/or a larger limiting mass for the most massive white dwarfs \citep[e.g.][]{2013ApJ...762..117B,2016MNRAS.456.3375B,2015MNRAS.454..752S}. Indeed, the existence of super-Chandrasekhar white dwarfs, with masses $2.1-2.8\,M_\odot$, has been proposed as a possible scenario to explain the over-luminous Type Ia supernovae SN 2003fg, SN 2006gz, SN 2007if, and SN 2009dc (e.g. Howell et al. 2006; Hicken
et al. 2007; Yamanaka et al. 2009; Scalzo et al. 2010; Silverman et al.
2011; Taubenberger et al. 2011). A detailed follow-up of these objects is, in any case, warranted and, at the same time, general relativistic models such as the ones presented in this work, but for white dwarfs with carbon-oxygen cores, are expected to play a key role in understanding the true nature of these objects.
\begin{table*}[t]
\centering
\begin{tabular}{cccccccl}
\hline
\hline \\[-4pt]
{\it Gaia} EDR3 & $\varpi\pm\sigma_{\varpi}$ & $G\pm\sigma_{\rm G}$ & $M_{\rm G}\pm\sigma_{M_{\rm G}}$ & ($G_{\rm BP}-G)\pm\sigma{_{\rm (G_{BP}-G)}}$ & Rel. & New. & flags \\
source ID & (mas) & (mag) & (mag) & (mag) & model & model & \\
\hline \\[-4pt]
$6565940122868224640^a$ & $ 11.717 \pm 0.592 $ & $ 20.275 \pm 0.005 $ & $ 15.619 \pm 0.115 $ & $ -0.008 \pm 0.100 $ & $<1$ & 1 & 00100 \\
$1983698716601024512^b$ & $ 10.761 \pm 0.934 $ & $ 20.549 \pm 0.009 $ & $ 15.708 \pm 0.198 $ & $ -0.001 \pm 0.084 $ & $<1$ & 1 & 01000 \\
$6211904903507006336^c$ & $ 15.411 \pm 0.501 $ & $ 19.893 \pm 0.006 $ & $ 15.832 \pm 0.076 $ & $ -0.014 \pm 0.079 $ & $<1$ & 1 & $00000^1$ \\
$1424656526287583744^d$ & $ 11.523 \pm 0.685 $ & $ 20.668 \pm 0.009 $ & $ 15.976 \pm 0.138 $ & $ -0.236 \pm 0.150 $ & 2 & 2 & $11000^1$ \\
$3585053427252374272^e$ & $ 16.874 \pm 0.464 $ & $ 20.054 \pm 0.005 $ & $ 16.190 \pm 0.065 $ & $ -0.022 \pm 0.070 $ & 1 & 1 & $01000^1$ \\
$4377579209528621184^f$ & $ 14.828 \pm 0.860 $ & $ 20.379 \pm 0.007 $ & $ 16.235 \pm 0.133 $ & $ 0.043 \pm 0.078 $ & $<1$ & 1 & 01000 \\
$1505825635741455872^g$ & $ 29.084 \pm 0.190 $ & $ 19.022 \pm 0.004 $ & $ 16.340 \pm 0.018 $ & $ 0.041 \pm 0.029 $ & $<1$ & 1 & $00100^{1,2}$ \\
$3480787358063803520^h$ & $ 13.189 \pm 1.365 $ & $ 20.769 \pm 0.010 $ & $ 16.370 \pm 0.235 $ & $ -0.116 \pm 0.132 $ & 1 & 2 & 11000 \\
$4461423190259561728^i$ & $ 12.908 \pm 2.082 $ & $ 20.829 \pm 0.011 $ & $ 16.383 \pm 0.361 $ & $ -0.135 \pm 0.096 $ & 2 & 2 & 01000 \\
$ \bf{5064259336725948672^j}$ & $\bf 30.638 \pm 0.219 $ & $ \bf 19.005 \pm 0.004 $ & $ \bf 16.436 \pm 0.019 $ & $ \bf 0.027 \pm 0.026 $ & \bf{1} & \bf{2} & $\bf{00000}^1$ \\
$534407181320476288^k$ & $ 15.218 \pm 0.640 $ & $ 20.533 \pm 0.008 $ & $ 16.445 \pm 0.099 $ & $ -0.127 \pm 0.080 $ & 2 & 3 & $01000$ \\
$5763109404082525696^l$ & $ 16.279 \pm 0.949 $ & $ 20.424 \pm 0.007 $ & $ 16.482 \pm 0.134 $ & $ -0.021 \pm 0.136 $ & 1 & 1 & $11000^1$ \\
$2858553485723741312^m$ & $ 16.357 \pm 0.715 $ & $ 20.452 \pm 0.009 $ & $ 16.521 \pm 0.104 $ & $ 0.026 \pm 0.109 $ & 1 & 1 & $01000^1$ \\
$6178573689547383168^n$ & $ 17.098 \pm 0.946 $ & $ 20.362 \pm 0.009 $ & $ 16.527 \pm 0.129 $ & $ -0.057 \pm 0.116 $ & 1 & 2 & $01000^1$ \\
$3586879608689430400^o$ & $ 17.572 \pm 1.299 $ & $ 20.369 \pm 0.007 $ & $ 16.593 \pm 0.168 $ & $ -0.193 \pm 0.114 $ & 2 & 3 & $01000^1$ \\
$1738863551836243840^p$ & $ 19.444 \pm 0.933 $ & $ 20.296 \pm 0.007 $ & $ 16.740 \pm 0.112 $ & $ -0.117 \pm 0.093 $ & 2 & 3 & 01000 \\
$6385055135655898496^q$ & $ 16.607 \pm 0.924 $ & $ 20.670 \pm 0.009 $ & $ 16.771 \pm 0.129 $ & $ 0.013 \pm 0.130 $ & 1 & 1 & 11000 \\
$283928743068277376^r$ & $ 27.731 \pm 0.332 $ & $ 19.636 \pm 0.004 $ & $ 16.850 \pm 0.030 $ & $ -0.133 \pm 0.052 $ & 3 & 4 & 00100 \\
$\bf{1528861748669458432^s}$ & $ \bf 20.585 \pm 0.614 $ & $\bf 20.325 \pm 0.006 $ & $ \bf 16.892 \pm 0.070 $ & $ \bf -0.043 \pm 0.082 $ & \bf 1 & \bf 2 & $\bf{00000}^{1,3}$ \\
$1674805012263764352^t$ & $ 19.661 \pm 1.347 $ & $ 20.792 \pm 0.015 $ & $ 17.260 \pm 0.164 $ & $ -0.131 \pm 0.076 $ & 3 & 4 & 01000 \\
\hline
\end{tabular}
\caption{Ultra-massive white dwarf candidates selected from the sample of faint blue white dwarfs of \cite{2022RNAAS...6...36S}. The sixth and seventh columns indicate the distance within the color-magnitude diagram of Fig. \ref{gaia}, measured in $1\sigma$ deviations from the selected objects to the limiting 1.369 $M_\odot$ cooling tracks for the relativistic and Newtonian models, respectively. Objects $j$ and $s$, marked in bold, are ideal candidates with no flags to confirm relativistic models. See text for the rest of the columns and details. }
\label{t:FBcandidates}
\begin{minipage}{\textwidth}
$^{1}$\cite{Bergeron2022}, $^2$\cite{Gates2004} $^3$\cite{Harris2008}
\end{minipage}
\end{table*}
\section{Summary and conclusions}
\label{conclusions}
In this paper, we present the first set of constant rest-mass ultra-massive O/Ne white dwarf cooling tracks with masses
$M_{\star} > 1.29 M_\sun$, which fully take into account the effects of general relativity on their structural and evolutionary properties. Ultra-massive white dwarfs are relevant in different astrophysical contexts, such as type Ia supernovae explosions, stellar merger events, and
the existence of high magnetic field white dwarfs. In addition, they provide insights into the physical processes in the Super Asymptotic Giant Branch phase preceding their formation. In
the last few years, the existence of such ultra-massive white dwarfs in the solar neighborhood has been reported in several studies, including the recent discovery of a branch of faint blue white dwarfs in the color-magnitude diagram \citep{Kilic2020,2022RNAAS...6...36S}. Although some of these objects present an infrared flux deficit, this branch is also thought to be composed of ultra-massive white dwarfs with masses larger than $1.29\, M_\odot$.
It is very likely that, in the near future, $g$-mode pulsating ultra-massive white dwarfs with masses $M_{\star} \gtrsim 1.29 M_\sun$ will be discovered thanks to space missions such as the {\sl TESS} and {\sl Plato} space telescopes, and it will then be possible to study them through asteroseismology.
We have computed the complete evolution of 1.29, 1.31, 1.33, 1.35, and 1.369 $M_{\sun}$ hydrogen-rich white dwarf models, assuming an O/Ne composition for
the core. Calculations
have been performed using the La Plata stellar evolution code, {\tt LPCODE}, for which the
standard equations of stellar structure and evolution have been modified to include the effects of general relativity. To this end, we have followed the formalism of \cite{1977ApJ...212..825T}. Specifically, the fully general relativistic partial differential equations governing the evolution of a spherically symmetric star are solved in a way that resembles the standard Newtonian equations of stellar structure. For comparison purposes, the same sequences have also been computed for the Newtonian case. Our new white dwarf models include the energy released during the crystallization process, both due to latent heat and to the induced chemical redistribution. We provide cooling times and time-dependent mass-radius relations for relativistic ultra-massive white dwarfs. We also provide magnitudes in the Gaia, Sloan Digital Sky Survey, and Pan-STARRS passbands, using the model atmospheres of \cite{2010MmSAI..81..921K,2019A&A...628A.102K}.
This set of cooling sequences, together with those calculated in \cite{2019A&A...625A..87C} and \cite{2022MNRAS.511.5198C} for stellar masses lower than those computed here, provides an appropriate theoretical framework to study the most massive white dwarfs in our Galaxy, superseding all existing calculations of such objects.
As expected, we find that the importance of general relativistic effects increases as the
stellar mass is increased. According to our calculations, O/Ne white dwarfs more
massive than 1.369 $M_{\sun}$ become gravitationally unstable with respect to general relativity effects. When the core chemical redistribution due to phase separation upon crystallization is considered, such instability occurs at somewhat
lower stellar masses, $M_{\star} \gtrsim 1.360 M_\sun$.
For our most massive sequence, the stellar radius becomes 25\% smaller than that predicted
by the Newtonian treatment. The evolutionary properties of our ultra-massive white dwarfs are also modified by general relativity effects. In particular, at advanced stages of evolution, the cooling times for our most massive white dwarf sequence are about a factor of two shorter than in the Newtonian case. In addition, not considering general relativity effects when estimating the properties of such objects through photometric and spectroscopic techniques would lead to an overestimation of their mass by ${\sim}0.015\,M_\sun$ near the critical mass.
We have compared our theoretical sequences in the color-magnitude diagram with the white dwarfs composing the faint blue white dwarf branch \citep{2022RNAAS...6...36S}. We conclude that, regardless of the infrared flux deficit that some particular objects may exhibit, several white dwarfs of this branch can have masses larger than $\sim 1.29 M_\sun$, and that the branch does not coincide with any isochrone or evolutionary track. We find that seven of the white dwarfs in this branch should have a smaller radius than our most massive cooling sequence and should therefore be gravitationally unstable against collapse. However, apart from the need for a more detailed spectroscopic study to accurately characterize the possible effects of the infrared flux deficit in some of these objects, the presence of strong magnetic fields and rapid rotation, not considered in this study, could favor the stability of such objects. This would support the existence of super-Chandrasekhar white dwarfs which, in the case of CO-core white dwarfs, are likely the progenitors of the over-luminous Type Ia supernovae SN 2003fg, SN 2006gz, SN 2007if, and SN 2009dc. Consequently, a detailed follow-up of these seven objects is required within the framework of the general relativity models presented here.
\\
As discussed throughout this work, our new ultra-massive white dwarf
models for O/Ne core-chemical composition constitute an improvement
over those computed in the framework of the standard Newtonian theory
of stellar interiors. Therefore, in agreement with previous studies, the
effects of general relativity must be taken into account to ascertain
the true nature of the most massive white dwarfs, in particular when
assessing their structural and evolutionary properties.
\begin{acknowledgements}
We thank Detlev Koester for extending his atmosphere models to the high surface gravities that characterize our relativistic ultra-massive white dwarf models.
We also thank an anonymous referee, whose comments improved the original
version of this paper.
Part of this work was supported by PICT-2017-0884 from ANPCyT, PIP
112-200801-00940 grant from CONICET, grant G149 from University of La Plata, NASA grants 80NSSC17K0008 and 80NSSC20K0193. ST and ARM acknowledge support from MINECO under the PID2020-117252GB-I00 grant. ARM acknowledges support from Grant RYC-2016-20254 funded by MCIN/AEI/10.13039/501100011033 and by ESF Investing in your future. This research has made use of the NASA Astrophysics Data System. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
\end{acknowledgements}
\bibliographystyle{aa}
\bibliography{ultramassiveCO}
|
Title:
Comparing Reflection and Absorption Models for the Soft X-ray Variability in the NLS1 AGN UGC 11763 |
Abstract: We present a spectral analysis of two XMM-Newton observations of the
narrow-line Seyfert 1 galaxy UGC 11763. UGC 11763 shows very different soft
X-ray spectral shapes in the two observations separated by 12 years. Three
spectral models are considered to explain the multi-epoch X-ray variability of
UGC 11763, one based on the relativistic disc reflection model, one based on
multiple partially-covering absorbers combined with the warm corona model, and
a hybrid model. In the first model, the X-ray variability of UGC 11763 is
caused by the emission from a compact coronal region with a variable size. The
resulting disc reflection component changes accordingly. A warm absorption
model with a modest column density is required in this model too. In the
partially-covering absorption scenario, the X-ray variability of UGC 11763 is
caused by the variable covering factors of two absorbers located within a
region of $r \lesssim 100\,r_{\rm g}$. Moreover, the temperature and strength of
the warm corona have to change significantly too to explain the variable
underlying soft X-ray emission. Lastly, we investigate the possibility of
variable intrinsic power-law emission from the hot corona combined with
variable absorption in UGC 11763 without changing the geometry of the corona in
the third model. This hybrid model provides a slightly better fit than the
partially-covering absorption model with improvements in fitting the iron
emission band. Current CCD-resolution data cannot distinguish these spectral
models for UGC 11763. Future high-resolution X-ray missions, e.g. Athena and
XRISM, will test them by resolving different spectral components.
| https://export.arxiv.org/pdf/2208.12177 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
accretion, accretion discs\,-\,black hole physics, X-ray: galaxies, galaxies: Seyfert
\end{keywords}
\section{Introduction} \label{intro}
Narrow-line Seyfert 1 galaxies (NLS1s) are a class of peculiar Seyfert 1 galaxies (Sy1s) with strong Fe~\textsc{ii} emission, narrow H$\beta$ emission and weak [O~\textsc{iii}] emission in the optical band in comparison with other Sy1s \citep{goodrich89}. It is believed that NLS1s host low-mass supermassive black holes (BHs) that are accreting near or around the Eddington limit \citep[e.g.][]{boroson02, grupe04a}, although some studies point out that the BH mass measurements of NLS1s might be biased towards low values when radiation pressure is ignored in the broad line region model for the H$\beta$ emission of NLS1s \citep{marconi08}.
In the X-ray band, the nuclei of NLS1s often show rapid, large-amplitude X-ray variability \citep[e.g.][]{mchardy95, boller03, smith07, jin17, alston19} and steep X-ray continuum emission \citep{puchnarewicz92,boller96,grupe98}. \citet{gallo06} finds that NLS1s show complex spectral features in the X-ray band, especially during their low flux state. In particular, NLS1s often show strong excess emission below 2--3\,keV in addition to the hard X-ray continuum extrapolated to the soft X-ray band \citep{boller96, piconcelli05}.
Different models have been proposed to explain the broad-band X-ray spectra of NLS1s \citep[see a review of this topic in][]{gallo18}. One of them is the disc reflection model \citep[e.g.][]{fabian04,larsson08, miniutti09, brenneman11,reis12, tan12, risaliti13, gallo13, walton13, parker14,marinucci14, jiang19,jiang20b}. The surface of the optically thick disc reprocesses the illuminating coronal emission, producing reprocessed emission in the soft X-ray band and the Compton-scattering hump in the hard X-ray band. Together, these features are referred to as the disc `reflection' component \citep{fabian89}. The most prominent feature of a disc reflection component is the Fe~K$\alpha$ emission line around 6.4\,keV, which is broadened by relativistic effects in the vicinity of the BH \citep{tanaka95}. The disc reflection model of Seyfert active galactic nuclei (AGNs) is also supported by the discoveries of X-ray reverberation lags in the soft X-ray band \citep[e.g.][]{fabian09,demarco13}, the Fe~K band \citep[e.g.][]{kara16} and the hard X-ray band \citep[e.g.][]{zoghbi14,kara15}.
Another model used to explain the X-ray spectra of Sy1s is the double partially-covering absorption model \citep{tanaka04}. In this model, multiple high-column-density absorbers cross our line of sight towards the X-ray emission region. The absorbers require a fine-tuned geometry to partially cover the compact X-ray emission region suggested by X-ray data \citep{reynolds09}. In this model, the absorbers produce a strong Fe~K edge in the spectrum, which is used to explain the steep spectra of some NLS1s \citep[e.g.][]{gallo15}.
An additional component is still required to fit the soft excess emission when one uses the absorption model to fit the data in the Fe~K band. For instance, a soft power-law model was used to explain the soft excess emission of the NLS1 1H~0707$-$495 in combination with the absorption model in \citet{tanaka04}. Such a soft power-law component is proposed to originate in a warm coronal region with a high optical depth of $\tau_{\rm T}=10-20$ and a low temperature of $kT_{\rm e}<1$\,keV \citep{magdziarz98, petrucci01, czerny03, jin17, ursini20}. At such a low temperature, atomic opacity dominates over the Thomson opacity. Strong emission and absorption features would then be imprinted on the spectrum of the warm corona, in contradiction with the data of Sy1s \citep{garcia18}. Simulations by \citet{petrucci20}, however, suggest that if the warm corona is heated by the hard X-ray continuum from the hot corona and by the disc emission from below, the ions in the upper layer of the warm corona can be completely ionised, showing no or weak emission and absorption lines. Photoionisation models also find that a higher accretion rate in the accretion disc leads to a warm corona producing stronger soft excess emission, similar to what is suggested by the X-ray data of Sy1s \citep{ballantyne20}.
In this work, we investigate how the models introduced above may explain the \xmm\ observations of the \red{active galaxy} \src\ at $z=0.063$ \citep[23 32 27.8, +10 08 19, ][]{clements81, huchra99}. \red{\src\ was classified as a NLS1 galaxy by \citet{boroson92,constantin03}. The H$\beta$ emission of this NLS1 has a width of 2250--2800 km\,s$^{-1}$ \citep{boroson92,grupe04,mullaney08}, which is higher than the values of typical NLS1s.} The mass of the central BH in \src\ is estimated to be around $4.57\times10^{8}M_{\odot}$ \citep{peterson04} \red{using broad emission-line reverberation-mapping data. \citet{ho08} further lowered this mass measurement by a factor of 1.8 to $2.5\times10^{8}M_{\odot}$ for consistency with the virial mass zero point\footnote{\red{See Footnote 4 in \citet{greene05}.}} adopted by \citet{greene05}.} These measurements are near the upper limit of the BH mass distribution of NLS1s \citep{grupe04}. \citet{peterson04} estimated the luminosity of \src\ at 5100\AA\ to be $\log(\lambda L_{\lambda})=44.46\pm0.04$. Assuming $L_{\rm Bol}=9\lambda L_{\lambda}$ \citep{kaspi00} and a BH mass of $4.6\times10^{8}$\,$M_{\odot}$ \citep{peterson04}, the Eddington ratio of \src\ is around 5\%.
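The quoted Eddington ratio follows directly from these numbers; as a quick check (using the standard Eddington luminosity for ionised hydrogen),

```latex
L_{\rm Bol} \simeq 9\,\lambda L_{\lambda} \approx 9\times10^{44.46}
\approx 2.6\times10^{45}\,{\rm erg\,s^{-1}},\qquad
L_{\rm Edd} \simeq 1.26\times10^{38}\,\frac{M_{\rm BH}}{M_{\odot}}\,{\rm erg\,s^{-1}}
\approx 5.8\times10^{46}\,{\rm erg\,s^{-1}},
```

giving $L_{\rm Bol}/L_{\rm Edd}\approx0.045$, i.e. the ${\sim}5\%$ quoted above.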
In the X-ray band, \citet{cardaci09} reported steep X-ray emission from \src\ and strong soft excess emission, consistent with the earlier \textit{EXOSAT} data of the same source \citep{singh91}. \citet{cardaci09} also found Fe `Unresolved Transition Array' (UTA) absorption in the soft X-ray band of \src, which indicates the existence of photoionised absorbers, e.g. warm absorption commonly seen in AGNs \citep[e.g.][]{reynolds97,george98}. Large-amplitude X-ray variability of \src\ has been known for decades \citep{singh91,grupe01}. For instance, the X-ray flux of this source measured by \textit{ROSAT} varied by 50\% in the 0.1--2\,keV band in only one year.
In this paper, we present spectral analysis of the \xmm\ data of \src\ using two archival observations. We consider both the reflection-based and the absorption-based model. In Section \ref{data}, we introduce our data reduction processes; in Section \ref{analysis}, we present three models for the spectra of \src; in Section \ref{discuss}, we discuss our results; in Section \ref{conclude}, we conclude our work.
\section{Data Reduction} \label{data}
\begin{table*}
\centering
\caption{The list of \xmm\ observations analysed in this work. $F_{\rm 0.3-3keV}$ and $F_{\rm 3-10keV}$ are the observed flux of \src\ measured by pn in corresponding energy bands.}
\label{tab_obs}
\begin{tabular}{cccccc}
\hline\hline
Name & Obs ID & Date & Time & $\log(F_{\rm 0.3-3keV})$ & $\log(F_{\rm 3-10keV})$\\
& & & ks & \ergs & \ergs \\
\hline
obs1 & 0150470701 & 2003-05-16 & 38 & $-11.467\pm0.002$ & $-11.535^{+0.009}_{-0.007}$ \\
obs2 & 0744370201 & 2015-05-01 & 33 & $-11.141\pm0.005$ & $-11.483\pm0.019$\\
\hline
\end{tabular}
\end{table*}
We use the European Photon Imaging Camera (EPIC) observations for X-ray continuum modelling. A full list of observations used in our work is in Table\,\ref{tab_obs}.
The EPIC data are reduced using \red{V20.0} of the \xmm\ Science Analysis System (SAS) software package. The version of the calibration files is v.\red{20220407}. We first generate a clean event file by running EMPROC (for EPIC-MOS data) and EPPROC (for EPIC-pn data). Then, we select good time intervals by filtering out the intervals that are dominated by flaring particle background. These high-background intervals are where the single event (PATTERN=0) count rate in the >10~keV band is larger than 0.35~counts~s$^{-1}$ (0.4 counts~s$^{-1}$) for MOS (pn) data. By running the EVSELECT task, we select single and double events for EPIC-MOS (PATTERN<=12) and EPIC-pn (PATTERN<=4, FLAG==0) source event lists from a circular source region with a radius of 35 arcsec. Background spectra are extracted from a nearby circular region with a radius of 60 arcsec. No obvious evidence of pile-up effects has been found in obs1 in 2003. The pn and MOS instruments were operated in the full frame mode in 2015. Some evidence of pile-up effects was found. An annulus region with an inner radius of 10 arcsec and an outer radius of 35 arcsec is used to extract source spectra. The inner radius is chosen according to the EPATPLOT tool in SAS\footnote{\red{We use EPATPLOT to estimate the full-band observed-to-model ratios for single and double events based on expected pattern distribution functions from the latest calibration data. This ratio is $0.96\pm0.01$ for singles and $1.12\pm0.02$ for doubles when a circular region is used to extract source products from the pn observation, suggesting evidence of pile-up. Singles and doubles ratios are respectively $0.99\pm0.02$ and $1.03\pm0.04$ when an annulus region with an inner radius of 10 arcsec is used. Similar conclusions are found for MOS observations.}}. Lastly, we create redistribution matrix files and ancillary response files by running RMFGEN and ARFGEN. In this work, we consider the full 0.3--10\,keV band of EPIC data.
The spectra are grouped to have a minimum of 20 counts per bin and oversample by a factor of 3.
\section{Soft X-ray Variability of \src}
Fig.\,\ref{pic_swift} shows the long-term X-ray lightcurves of \src\ in the 0.5--2\,keV and 2--10\,keV bands. In particular, the \xmm\ observations analysed in this work are marked with red squares in the figure. The X-ray flux of \src\ shows variability on timescales of months and years. For instance, the 0.5--2\,keV flux of this source increased from the reported minimum flux of $1.6\times10^{-12}$\,\ergs\ in April 2011 to the maximum flux of $8.3\times10^{-12}$\,\ergs\ in April 2014. X-ray variability with a similar amplitude was seen in previous observations \citep{singh91,grupe01}.
In this work, we focus on the \xmm\ observations of this source. The first observation was taken during a low flux state and the second during a higher flux state in the soft X-ray band. In comparison with the soft X-ray band, the 2--10\,keV flux of \src\ does not show large-amplitude variability. This raises the question of what causes the X-ray variability of \src\ to be confined below 2\,keV.
We show EPIC-pn lightcurves extracted from the two \xmm\ observations in Fig.\,\ref{pic_xmm_lc}. Unlike other extreme NLS1s \citep[e.g.][]{mchardy95, boller03, smith07, jin17, alston19}, \src\ does not show rapid and large-amplitude variability on timescales of kiloseconds in the X-ray band.
The lack of rapid variability on kilosecond timescales and the evidence of large-amplitude variability on longer timescales might be related to the relatively higher BH mass of \src\ \citep[a few times $10^{8}M_{\odot}$,][]{peterson04, ho08} in comparison with other NLS1s.
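This mass argument can be made quantitative with a back-of-the-envelope estimate (our illustration, not a calculation from the paper): the shortest characteristic variability timescale is expected to scale with the light-crossing time of one gravitational radius, $t_{\rm g}=GM/c^{3}$.

```python
G = 6.674e-8       # gravitational constant, cgs
C = 2.998e10       # speed of light, cm/s
M_SUN = 1.989e33   # solar mass, g

def t_grav(mass_msun):
    """Light-crossing time of one gravitational radius, t_g = GM/c^3, in seconds."""
    return G * mass_msun * M_SUN / C**3

t_ugc = t_grav(4.6e8)   # UGC 11763-like BH: kilosecond timescales
t_low = t_grav(1e6)     # a typical low-mass NLS1: a few seconds
```

For a $4.6\times10^{8}\,M_{\odot}$ black hole, $t_{\rm g}$ is already of order a kilosecond, so the absence of sub-kilosecond variability in Fig.\,\ref{pic_xmm_lc} is unsurprising, whereas a $10^{6}\,M_{\odot}$ NLS1 can vary on second-to-minute timescales.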
\section{Spectral Analysis} \label{analysis}
We use the XSPEC software (v.12.12.1) for spectral analysis \citep{arnaud96}. We start our analysis by fitting the spectra above 2\,keV with an absorbed power law. The \texttt{tbnew} model is used to account for Galactic absorption \citep{wilms00}, which is estimated to be $N_{\rm H}=4.6\times10^{20}$\,cm$^{-2}$ \citep{willingale13}. Corresponding data/model ratio plots are shown in Fig.\,\ref{pic_pl}. A zoom-in of the 3--10\,keV band is shown in the lower panel. Both epochs show evidence of Fe~K$\alpha$ emission in the Fe~K band. Meanwhile, the two epochs show different spectral shapes in the soft X-ray band.
In the rest of the section, we focus on modelling the spectra of \src\ using the absorption, disc reflection and hybrid models.
\subsection{Relativistic Disc Reflection Model (\red{Model 1})}
\subsubsection{Model set-up}
In the disc reflection scenario, the soft excess is interpreted as part of the reprocessed emission from the inner accretion disc \citep[e.g.][]{crummy07,jiang20b}. To model the disc reflection component, we use the \texttt{relxilld} model \citep[\red{Model 1},][]{garcia16}. A broken power-law emissivity profile parameterised by q1, q2 and $R_{\rm b}$ is considered. Other free parameters include the spin of the BH, the inclination angle, the iron abundance and the electron number density of the disc. The reflection fraction parameter of \texttt{relxilld} is set to be a free, positive number, so the model returns both the hot coronal emission and the corresponding disc reflection spectrum.
By applying \texttt{relxilld} to obs1, we find evidence of absorption features at 0.8\,keV. See Fig.\,\ref{pic_warm} for corresponding data/model ratio plots. The goodness of this fit is $\chi^{2}/\nu=$\red{551.62/424}. The absorption features correspond to Fe UTA, suggesting the existence of photoionised absorbers in a low ionisation state, e.g. warm absorbers commonly seen in AGNs \citep[e.g.][]{lee01,ebrero16}. \citet{cardaci09} pointed out similar residuals in the spectra of \src\ extracted from obs1 when applying a simple model including Galactic absorption, a power law, a black body and Fe~K$\alpha$ emission line.
To fit the absorption features, we use a tabulated version of the \texttt{xabs} photoionised absorption model \citep{steenbrugge03} from SPEX \citep{kaastra96}, implemented as an XSPEC table model by \citet{parker19} and available from www.michaelparker.space/xspec\_models. The version used here assumes the photoionisation is driven by a $\Gamma=2$ power-law input spectrum, and covers ionisations from $\log(\xi)=-4$ to 5, with parameters for the column density, velocity broadening, and covering fraction.
When fitting the warm absorption features in the data, we assume low velocity broadening of 100\,km\,s$^{-1}$ and a full-covering geometry. The assumption for this geometry is based on the long distance between warm absorbers and the X-ray emission region: typical warm absorbers are estimated to be near the broad line region \citep[e.g.][]{reynolds95}.
By including the \texttt{xabs} model, the fit is significantly improved, with $\Delta\chi^{2}=45$ for two more free parameters. We therefore conclude that our best-fit model is \texttt{tbnew * xabs * relxilld} (\red{Model 1}). We only obtain an upper limit on the Galactic column density ($N_{\rm H}<6\times10^{20}$\,cm$^{-2}$). This parameter is thus fixed at the nominal value calculated by \citet{willingale13} ($4.6\times10^{20}$\,cm$^{-2}$). Similar conclusions are found for obs2.
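As a rough indication of how significant such an improvement is (our sketch; it treats $\Delta\chi^{2}$ as $\chi^{2}$-distributed with two degrees of freedom, which is only an approximation for nested-model comparisons):

```python
import math

def delta_chi2_pvalue_2dof(delta_chi2):
    """Chance probability of a chi-square improvement for two extra free
    parameters: for k = 2 the chi-square survival function is exactly exp(-x/2)."""
    return math.exp(-delta_chi2 / 2.0)

# Delta chi^2 = 45 for two extra free parameters (the xabs column density
# and ionisation) corresponds to a vanishingly small chance probability.
p = delta_chi2_pvalue_2dof(45.0)
```

The resulting $p\sim10^{-10}$ is why the warm-absorption component is considered strongly required by the data.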
\subsubsection{Multi-epoch analysis} \label{ref_multi}
\red{We fit the two observations of \src\ simultaneously with Model 1 to better understand the spectral variability in this object.} Best-fit parameters are shown in the first two columns of Table\,\ref{tab_ref}.
Some of the parameters for the two observations are consistent within their uncertainty ranges. For instance, we only obtain an upper limit on the density of the reflection surface of the disc for each observation. Both values are consistent with a low density of $10^{15}$\,cm$^{-3}$, which was commonly assumed in the disc reflection modelling of AGN data \citep[e.g.][]{ross05}. In addition, the spin of the central BH, the inclination angle and iron abundances of the accretion disc are not expected to change on observable timescales. We obtain consistent measurements of these parameters for the two observations, which increases our confidence in the choice of our reflection model. Lastly, the ionisation state of the warm absorber also remains consistent in these two observations.
We conduct multi-epoch spectral analysis with all the parameters mentioned above linked between two observations. By doing so, we obtain a good fit for both observations with $\chi^{2}/\nu=$\red{867.37/721}. The best-fit parameters are shown in Table\,\ref{tab_ref} and the best-fit models are shown in the upper panel of Fig.\,\ref{pic_ref}. Corresponding data/model ratio plots are shown in the lower panels of Fig.\,\ref{pic_ref}.
\subsubsection{Results} \label{ref_measure}
By comparing the best-fit parameters of \red{Model 1} for the two \xmm\ observations of \src, we find that the coronal emission shows a softer-when-brighter pattern: the photon index of the coronal emission increases from 2.26 to 2.49. This is commonly seen in other AGNs too \citep[e.g.][]{jiang18,wu20}.
The fit of the disc reflection spectrum in obs2 requires a flatter disc emissivity profile, i.e. a lower q1 and a higher $R_{\rm b}$, suggesting a change in the coronal geometry, e.g. a more extended corona in \src\ during obs2 than during obs1 \citep{dauser13}. The anti-correlation between the X-ray flux and the reflection fraction parameter suggests a similar conclusion. When the coronal region is more compact, more coronal photons are lost to the event horizon. On the other hand, disc-reflected photons from a more extended emission region are less affected by light-bending effects, resulting in a higher reflection fraction in the spectrum \citep[e.g.][]{miniutti03}.
In summary, the spectral difference between obs1 and obs2 is explained by the variable coronal emission in \red{Model 1}. The disc reflection component changes accordingly. During the low flux state (obs1), the coronal region is more compact as suggested by the steeper emissivity profile and the higher reflection fraction parameter. Meanwhile, the variable column density of the warm absorber also contributes to the soft X-ray variability while the ionisation state of the warm absorber remains consistent.
The spin of the BH, the inclination angle of the inner disc and the iron abundance of the disc are not expected to change on observable timescales. By linking these parameters between the two observations, we obtain $i=32\pm2^{\circ}$, $a_{*}>0.97$ and $Z_{\rm Fe}=4.8\pm1.2\,Z_{\odot}$.
\begin{table*}
\centering
\begin{tabular}{ccccccc}
\hline\hline
Model & Parameter & Unit & \multicolumn{2}{|c|}{obs1 \& 2} \\
\hline
\texttt{xabs} & $N_{\rm H}$ & $10^{20}$\,cm$^{-2}$ & $25\pm4$ & $7\pm3$ \\
& $\log(\xi)$ & erg cm s$^{-1}$ &\multicolumn{2}{|c|}{$1.7\pm0.2$} \\
\hline
\texttt{relxilld} & q1 & - & $8.0\pm0.3$ & $4.0\pm0.4$ \\
& q2 & - & $2.9\pm0.2$ & $3.1^{+0.7}_{-0.6}$ \\
& $R_{\rm b}$ & $r_{\rm g}$ & $3.4^{+0.2}_{-0.4}$ & $5\pm2$ \\
& $a_{*}$ & - & \multicolumn{2}{|c|}{>0.97}\\
& $i$ & deg & \multicolumn{2}{|c|}{$32\pm2$} \\
& $Z_{\rm Fe}$ & $Z_{\odot}$ & \multicolumn{2}{|c|}{$4.8\pm1.2$} \\
& $\log(n_{\rm e})$ & cm$^{-3}$ & \multicolumn{2}{|c|}{$<15.7$} \\
& $\log(\xi)$ & erg cm s$^{-1}$ & $1.20\pm0.15$ & $1.3^{+0.3}_{-0.2}$\\
& $\Gamma$ & - & $2.26\pm0.02$ & $2.49\pm0.06$ \\
& $f_{\rm refl}$ & - & $10\pm4$ & $3.0\pm1.5$ \\
& $\log(F_{\rm X})$ & \ergs & $-11.105\pm0.008$ & $-10.865\pm0.015$\\
\hline
& $\chi^{2}/\nu$ & - & \multicolumn{2}{|c|}{867.37/721} \\
\hline\hline
\end{tabular}
\caption{Best-fit parameters obtained by using Model 1. $F_{\rm X}$ is the unabsorbed flux of the model in the 0.3--10\,keV band. }
\label{tab_ref}
\end{table*}
The best-fit \red{Model 1} of \src\ suggests that the column density of the warm absorber decreases from $2.5\pm0.4\times10^{21}$\,cm$^{-2}$ in obs1 to $1.0\pm0.3\times10^{21}$\,cm$^{-2}$ in obs2 while its ionisation state remains consistent. We investigate whether it is possible to explain the X-ray spectral variability of \src\ by varying only the intrinsic continuum emission. By linking the column density parameters of the two observations, the fit is significantly worse below 1~keV with $\chi^{2}/\nu=$\red{910.63/723}. Therefore, although the X-ray variability is dominated by the variable intrinsic continuum emission in \red{Model 1}, the contribution of the variable line-of-sight column density of the warm absorber in \src\ cannot be ignored.
\subsection{Partially-Covering Absorption Model (\red{Model 2})}
\subsubsection{Model set-up}
In this section, we \red{consider a model based on multiple partially-covering absorbers} \citep[e.g.][]{tanaka04}. In this model, the residuals between 4--8\,keV as shown in Fig.\,\ref{pic_pl} are explained by the Fe~K absorption edge of two high-$N_{\rm H}$ absorbers \citep[e.g.][]{waddell19}.
We first fit the soft excess emission of \src\ with a soft Comptonisation model, similar to the model for another NLS1, Mrk~335, in \citet{gallo15}. The \texttt{comptt} model \citep{titarchuk94,marshall03} is used for this purpose. The combination of the warm and hot corona models provides a fit to the obs1 data with $\chi^{2}/\nu=$\red{630.42/430}. See the first panel of Fig.\,\ref{pic_abs_step} for the corresponding data/model ratio plot. Residuals are seen at 6.4\,keV and <1\,keV, suggesting the existence of Fe~K$\alpha$ emission and low-ionisation absorption.
Similar to the absorption model in \citet{tanaka04,gallo15}, we consider a low-ionisation partially-covering model to fit the negative residuals at 7\,keV and the absorption features below 1\,keV. The same \texttt{xabs} model as in \red{Model 1} is used to model the photoionisation absorption in the data.
One additional \texttt{xabs} component improves the fit by $\Delta\chi^{2}=$\red{30} with three more free parameters ($N_{\rm H}$, $\log(\xi)$ and $f_{\rm cov}$). See the second panel of Fig.\,\ref{pic_abs_step} for the corresponding data/model ratio plot. The fit of Fe UTA at 0.8 keV is improved by adding \texttt{xabs}. To further improve the fit below 1\,keV, we add a second \texttt{xabs} model, which decreases $\chi^{2}$ by \red{65} with three more free parameters.
We then add an additional \texttt{xillver} model \citep{garcia10} with the ionisation parameter fixed at $\log(\xi)=0$ to account for narrow Fe~K$\alpha$ emission from a distant reflector. The additional reflection component improves the fit by $\Delta\chi^{2}=$\red{20} with one more free parameter. The final best-fit model is \texttt{tbnew * xabs1 * xabs2 * (comptt + powerlaw + xillver)} (\red{Model 2}) in XSPEC notations. Similar conclusions are found for obs2. In summary, \red{Model 2} needs two layers of low-ionisation absorption in combination with the warm corona model.
Note that some positive residuals are still seen at 6\,keV of obs1 using \red{Model 2} (see the last panel of Fig.\,\ref{pic_abs_step}). The combination of narrow Fe~K$\alpha$ emission from the distant reflection model and the Fe~K absorption from partially-covering absorbers is unable to fit the spectra in the iron emission band perfectly. Relativistic correction for the reflection component is still required, although \red{Model 2} is able to provide an acceptable fit to the X-ray continuum emission of \src. In Section\,\ref{hybrid}, we will further improve the fit in the iron emission band by considering a hybrid model including both absorption and disc reflection.
Lastly, \red{Model 1} provides a better fit to obs1 than \red{Model 2}, with $\Delta\chi^{2}=$\red{9}. The differences between the two fits lie not only in the iron emission band, as described above, but also in the 8--10\,keV band in the observed frame. Positive residuals are seen when fitting the spectra of obs1 with \red{Model 2}. In comparison, \red{Model 1} fits the spectra better near the upper limit of the EPIC energy range. Future hard X-ray observations, e.g. from \nustar\ \citep{harrison13} or \textit{HEX-P} \citep{madsen19}, may help distinguish the two models in the >10\,keV band.
\subsubsection{Multi-epoch analysis} \label{abs_multi}
By applying \red{Model 2} to the two observations, we also obtain reasonably good fits for the continuum emission. \red{We find a good fit for both observations with $\chi^{2}/\nu=880.46/721$. Best-fit parameters are shown in Table\,\ref{tab_abs} and the best-fit models are shown in the upper panel of Fig.\,\ref{pic_abs}. Corresponding data/model ratio plots are shown in the lower panels of Fig.\,\ref{pic_abs}.}
In \red{Model 2}, the ionisation states of the two absorbers are consistent in two epochs: the first absorber has an ionisation state of $\log(\xi)\approx2.5$; the second absorber has an ionisation state of $\log(\xi)\approx0.7$. The covering factor of the first absorber decreases from 0.5 during obs1 to 0.3 during obs2; the same parameter of the second absorber decreases from 0.76 during obs1 to 0.6 during obs2. Furthermore, the optical depth of the warm corona remains consistent with 20 while its temperature increases from \red{0.15}\,keV to \red{0.26}\,keV.
\subsubsection{Results}
In \red{Model 2}, the soft X-ray variability of \src\ is explained by variable line-of-sight absorption and intrinsic continuum emission including the soft excess emission and the hot coronal emission.
Two partially-covering absorbers in a low-ionisation state, \texttt{xabs1} and \texttt{xabs2}, are needed in \red{Model 2}. The first absorber \texttt{xabs1} has a higher column density, a higher ionisation state and a lower covering factor than \texttt{xabs2}. For instance, \texttt{xabs1} has a column density of \red{2.8}$\times10^{23}$\,cm$^{-2}$, which is approximately \red{18} times the column density of \texttt{xabs2}. The ionisation parameter of \texttt{xabs1} is around \red{64} times the same parameter\footnote{Note that the ionisation parameter is reported in log in Table\,\ref{tab_abs}.} of \texttt{xabs2}.
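The qualitative behaviour of such a partial coverer can be sketched with a toy transmission model (our illustration, not the \texttt{xabs} calculation itself; the $\sigma\propto E^{-3}$ cross-section and its normalisation are assumptions that merely mimic photoelectric absorption and ignore ionisation):

```python
import math

SIGMA_1KEV = 2.0e-22  # cm^2 per H atom at 1 keV -- illustrative value only

def transmission(e_kev, n_h, f_cov):
    """Partial-covering transmission: the uncovered fraction (1 - f_cov)
    passes freely, while the covered fraction is attenuated by exp(-tau),
    with a toy photoelectric cross-section sigma ~ E^-3."""
    tau = SIGMA_1KEV * e_kev**-3 * n_h
    return (1.0 - f_cov) + f_cov * math.exp(-tau)

# xabs1-like numbers from obs1: N_H = 2.8e23 cm^-2, f_cov = 0.5
t_soft = transmission(1.0, 2.8e23, 0.5)  # soft band: floor set by covering factor
t_hard = transmission(8.0, 2.8e23, 0.5)  # Fe-K band: absorber nearly transparent
```

The covered sightline is completely opaque at 1\,keV, so the transmitted soft flux saturates at $1-f_{\rm cov}$; this is why, in a model of this kind, the covering factors rather than the column densities tend to dominate the soft-band variability.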
The column density and covering factor of the first absorber \texttt{xabs1} increase from the high flux state during obs2 to the low flux state during obs1. The best-fit value of the column density changes by a factor of \red{3}. The covering factor of \texttt{xabs1} increases by a factor of \red{1.6}. The column density of the second absorber \texttt{xabs2} decreases by a factor of 2 from obs2 to obs1 while the covering factor increases by a factor of \red{1.3}.
In addition to variable absorption, \red{Model 2} also requires the intrinsic emission to be variable to fit the data. The photon index of the hot coronal emission increases dramatically from \red{1.8} in obs1 to \red{2.5} in obs2. The temperature and the strength of the warm corona also increase from obs1 to obs2. In particular, the unabsorbed flux of the warm coronal emission increases by a factor of 13 in the \xmm\ energy band.
We investigate the possibility of fitting the spectra of \src\ using \red{Model 2} without changing the intrinsic emission. The following additional parameters are linked between the two epochs: $F_{\rm w}$, $kT$, $F_{\rm h}$, $\Gamma$ and $F_{\rm x}$. By doing so, we obtain a worse fit with $\chi^{2}/\nu=$\red{927.46/726} than the fit presented in Table\,\ref{tab_abs} and Fig.\,\ref{pic_abs} ($\chi^{2}/\nu=$\red{880.46/721}). This model requires intermediate values of the parameters for the linked components: for instance, the fluxes of the warm corona and the hot corona are $\log(F_{\rm w})=-11.40^{+0.12}_{-0.19}$ and $\log(F_{\rm h})=-10.87^{+0.06}_{-0.07}$; the temperature of the warm corona is $0.21^{+0.03}_{-0.02}$\,keV. Based on the high value of $\chi^{2}$ of this fit, we argue that \red{Model 2} requires both the intrinsic emission and the absorption to be variable to explain the data.
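The statistical penalty of linking these parameters can be quantified by treating $\Delta\chi^{2}=47$ for $\Delta\nu=5$ as a likelihood-ratio test of nested models. This is our own illustrative sketch (not the authors' procedure); the closed-form survival function below is valid for five degrees of freedom:

```python
import math

def chi2_sf_5dof(x):
    """Survival function of the chi-square distribution with 5 degrees of
    freedom, via the closed form valid for odd dof:
    Q(x; 5) = erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2) * (1 + x/3)."""
    return (math.erfc(math.sqrt(x / 2))
            + math.sqrt(2 * x / math.pi) * math.exp(-x / 2) * (1 + x / 3))

# Fully linked fit vs. fit with free intrinsic emission (Model 2)
delta_chi2 = 927.46 - 880.46   # 47.0
delta_nu = 726 - 721           # 5
p_value = chi2_sf_5dof(delta_chi2)
print(f"p = {p_value:.1e}")    # ~6e-9: freeing the parameters is strongly preferred
```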
\begin{table*}
\centering
\begin{tabular}{ccccc}
\hline\hline
Model & Parameter & Unit & \multicolumn{2}{c}{obs1 \& obs2}\\
\hline
\texttt{xabs1} & $N_{\rm H}$ & $10^{22}$\,cm$^{-2}$ & $28^{+9}_{-6}$ & $10\pm7$ \\
& $\log(\xi)$ & erg cm s$^{-1}$ & \multicolumn{2}{c}{$2.5\pm0.2$} \\
& $f_{\rm cov}$ & - & $0.50\pm0.04$ & $0.31^{+0.17}_{-0.11}$ \\
\texttt{xabs2} & $N_{\rm H}$ & $10^{22}$\,cm$^{-2}$ & $1.6\pm0.3$ & $3.4^{+0.7}_{-0.2}$ \\
& $\log(\xi)$ & erg cm s$^{-1}$ & \multicolumn{2}{c}{$0.7\pm0.2$} \\
& $f_{\rm cov}$ & - & $0.76\pm0.04$ & $0.60\pm0.05$ \\
\texttt{comptt} & $\tau$ & - & \multicolumn{2}{c}{$20\pm3$} \\
& kT & keV & $0.145^{+0.015}_{-0.012}$ & $0.26\pm0.02$ \\
& $\log(F_{\rm w})$ & \ergs & $-11.72\pm0.02$ & $-10.6^{+0.3}_{-0.4}$\\
\texttt{powerlaw} & $\Gamma$ & - & $1.81\pm0.07$ & $2.50^{+0.13}_{-0.15}$ \\
& $\log(F_{\rm h})$ & \ergs & $-11.16^{+0.03}_{-0.02}$ & $-10.72^{+0.26}_{-0.12}$\\
\texttt{xillver} & $\log(F_{\rm x})$ & \ergs & $-12.49^{+0.15}_{-0.12}$ & $-11.8\pm0.3$ \\
\hline
& $\chi^{2}/\nu$ & - & \multicolumn{2}{c}{880.46/721} \\
\hline\hline
\end{tabular}
\caption{Best-fit parameters obtained by using Model 2.}
\label{tab_abs}
\end{table*}
\subsection{Hybrid Model} \label{hybrid}
In the previous two sections, we introduced two spectral models to fit the \xmm\ spectra of \src: one based on the relativistic disc reflection model (\red{Model 1}) and the other based on a double partially-covering absorption model (\red{Model 2}). The evidence of Fe~K$\alpha$ emission and soft excess in the spectra of \src\ motivates the choice of \red{Model 1}. An additional warm absorption model with a modest column density of 1--$2.5\times10^{21}$\,cm$^{-2}$ is required to fit the Fe UTA at 0.8\,keV. Alternatively, one may fit the soft excess of \src\ with the warm corona model, which is included in \red{Model 2}.
In this section, we first compare \red{Model 1} and \red{Model 2} and summarise the interpretations of the soft X-ray variability of \src\ based on two models. We then introduce a hybrid model where the intrinsic emission is described by the relativistic disc reflection model and additional absorption models are used to explain the variability. Such a model can improve the fit in the iron emission and 8--10\,keV bands in comparison to \red{Model 2}.
\subsubsection{\red{Variable reflection and/or absorption?}}
\src\ shows significant long-term variability in the $<2$\,keV band on timescales of months and years (see Fig.\,\ref{pic_swift}). We conduct a multi-epoch spectral analysis of the two \xmm\ observations of this object based on \red{Model 1} and \red{Model 2} to study the origin of such variability.
In \red{Model 1}, the soft X-ray variability is dominated by the intrinsic emission from the hot corona and the reflected emission from the inner accretion disc. The best-fit unabsorbed \red{Model 1} for the two \xmm\ observations is shown in the top panel of Fig.\,\ref{pic_intrin}. The X-ray continuum emission is softer during obs2 when the flux is high. The variable disc emissivity profile and reflection fraction of the disc reflection model agree with the light-bending model \citep{miniutti03}, where the size of the coronal region plays an important role. Variability in the column density of the warm absorption, which shows a consistent ionisation state, contributes to the soft X-ray variability too.
In \red{Model 2}, both absorption and intrinsic emission contribute to the soft X-ray variability of \src. Two low-ionisation partially-covering absorbers are required to fit the broad band spectra. They show a higher covering factor when the observed soft X-ray flux of \src\ is low during obs1. Meanwhile, their ionisation states remain consistent. The unabsorbed intrinsic emission of \red{Model 2} is shown in the lower panel of Fig.\,\ref{pic_intrin}. Emission from both the hot and warm corona changes: the hot coronal emission becomes softer, and the temperature and strength of the warm corona increase. In comparison with \red{Model 1}, \red{Model 2} requires the photon index of the hot coronal emission to increase by a larger amplitude: $\Gamma$ in \red{Model 1} increases from 2.26 in obs1 to \red{2.49} in obs2; $\Gamma$ in \red{Model 2} increases from \red{1.81} to \red{2.50}.
Furthermore, \red{Model 1} provides a slightly better fit to the spectra of two observations than \red{Model 2} by $\Delta\chi^{2}=$\red{13} with the same number of parameters. The difference in the goodness of their fits is in the 6--10\,keV band, where the Fe~K emission line is not well fit by \red{Model 2} and some positive residuals are still seen above 8\,keV (see Fig.\,\ref{pic_abs_step} and \ref{pic_warm}).
In addition to \red{Model 2} and \red{Model 1}, we propose a hybrid model where the disc reflection model is used to fit the intrinsic emission and absorption models are still needed to explain the variability. Such a model can improve the fit in the 6--10\,keV band by modelling the broad Fe~K$\alpha$ emission with a relativistic disc model. We also investigate whether the spectral variability can be explained by variable absorption and power-law emission without the need for changes in the size of the corona in this hybrid model.
\subsubsection{Model set-up}
In this section, we present a multi-epoch analysis of the \xmm\ spectra of \src\ based on a hybrid model. The relativistic disc reflection model \texttt{relxilld}, the same model as used in Section\,\ref{ref_multi}, describes the intrinsic emission. Two \texttt{xabs} models as in Section\,\ref{abs_multi} are included to account for absorption.
In \red{Model 1}, one \texttt{xabs} model is included to fit the Fe UTA of the full-covering warm absorber in \src. Although the variability is dominated by the varying intrinsic emission, the variable column density of the warm absorber cannot be ignored (see Section\,\ref{ref_multi}). In particular, \red{Model 1} also suggests that the size of the coronal region has to change to explain the variable disc emissivity profiles and reflection fractions. In this hybrid model, we study whether it is possible to keep the geometry of the coronal region consistent and interpret the soft X-ray variability with absorption together with changes in the illuminating coronal emission.
To achieve the goals above, we link most of the parameters in \texttt{relxilld} between the two observations, including the emissivity profile, density, ionisation and reflection fraction of the disc. In addition, the spin of the central BH, the inclination angle and the iron abundance of the disc are not expected to change on observable timescales, so they are linked too. We try to fit the data with a linked photon index, but the fit is significantly worse. We therefore allow the photon index and normalisation of \texttt{relxilld} to differ between the two observations. These two parameters describe the illuminating spectrum of the disc.
The first absorption model \texttt{xabs1} is required to fit the spectrum of obs1 but is not necessary for obs2 when the flux is high. We therefore link the column density and ionisation parameter of \texttt{xabs1} between the two observations. Only an upper limit of the covering factor is found for obs2 ($f_{\rm cov}<0.09$). The second absorption model \texttt{xabs2} maintains a consistent ionisation state; its ionisation parameter is thus linked too. We obtain only a lower limit for the covering factor of \texttt{xabs2} at 0.97, suggesting a full-covering geometry. The best-fit ionisation parameter is around 0.9, which is lower than the value in \red{Model 1} but still consistent with typical values in most AGN \citep{reynolds95,laha14}. Similar to \red{Model 1}, the column density of \texttt{xabs2} increases from $6\times10^{20}$\,cm$^{-2}$ during the high flux state (obs2) to $1.1\times10^{21}$\,cm$^{-2}$ during the low flux state (obs1).
\subsubsection{Results}
\begin{table*}
\centering
\begin{tabular}{ccccc}
\hline\hline
Model & Parameter & Unit & \multicolumn{2}{c}{obs1 \& obs2} \\
\hline
\texttt{xabs1} & $N_{\rm H}$ & $10^{22}$\,cm$^{-2}$ & \multicolumn{2}{c}{$1.7^{+0.7}_{-0.6}$} \\
& $\log(\xi)$ & erg cm s$^{-1}$ & \multicolumn{2}{c}{$1.52^{+0.20}_{-0.15}$}\\
& $f_{\rm cov}$ & - & $0.47\pm0.07$ & $<0.09$ \\
\hline
\texttt{xabs2} & $N_{\rm H}$ & $10^{21}$\,cm$^{-2}$ & $1.1^{+0.2}_{-0.3}$ & $0.6^{+0.2}_{-0.3}$ \\
& $\log(\xi)$ & erg cm s$^{-1}$ &\multicolumn{2}{c}{$0.9\pm0.2$}\\
& $f_{\rm cov}$ & - & \multicolumn{2}{c}{$>0.97$} \\
\hline
\texttt{relxilld} & q1 & - & \multicolumn{2}{c}{$5.6^{+0.4}_{-0.5}$} \\
& q2 & - & \multicolumn{2}{c}{$2.6\pm0.6$} \\
& $R_{\rm b}$ & $r_{\rm g}$ & \multicolumn{2}{c}{$5.5^{+2.0}_{-1.4}$} \\
& $a_{*}$ & - & \multicolumn{2}{c}{$>0.95$}\\
& $i$ & deg & \multicolumn{2}{c}{$32\pm3$} \\
& $Z_{\rm Fe}$ & $Z_{\odot}$ & \multicolumn{2}{c}{$4\pm2$} \\
& $\log(n_{\rm e})$ & cm$^{-3}$ & \multicolumn{2}{c}{$16.6^{+0.8}_{-1.0}$} \\
& $\log(\xi)$ & erg cm s$^{-1}$ & \multicolumn{2}{c}{$1.06\pm0.02$}\\
& $\Gamma$ & - & $2.29\pm0.02$ & $2.51\pm0.03$ \\
& $f_{\rm refl}$ & - & \multicolumn{2}{c}{$3.2\pm0.3$} \\
& $\log(F_{\rm X})$ & \ergs & $-11.01\pm0.02$ & $-10.82^{+0.03}_{-0.02}$ \\
\hline
& $\chi^{2}/\nu$ & - & \multicolumn{2}{c}{872.41/721} \\
\hline\hline
\end{tabular}
\caption{Best-fit parameters obtained by using a hybrid model. $F_{\rm X}$ is the unabsorbed flux of the model in the 0.3--10\,keV band. }
\label{tab_hybrid}
\end{table*}
The hybrid model introduced above provides a good fit to both observations with $\chi^{2}/\nu=$\red{872.41/721}. The fit is slightly worse than \red{Model 1} by $\Delta\chi^{2}=5$ with the same number of free parameters, and better than \red{Model 2} by $\Delta\chi^{2}=$\red{8}, also with the same number of parameters. Best-fit models are shown in Fig.\,\ref{pic_hybrid} and best-fit parameters in Table\,\ref{tab_hybrid}. The fit in the iron emission band is improved compared to \red{Model 2}. Some residuals are still seen above 8\,keV in obs1, but they are reduced compared to the fit of \red{Model 2}.
Assuming the geometry of the coronal region remains consistent, we obtain an intermediate value of the reflection fraction for the disc (\red{3.2}$\pm0.3$), which lies between the values for the two observations inferred by \red{Model 1}. A similar conclusion is found for the emissivity profile of the disc, which is flatter than the one for obs1 in \red{Model 1} but steeper than the one for obs2. Consistent measurements of the spin of the BH, the inclination and the iron abundance of the disc are achieved. Tentative evidence shows that a higher disc density is required when fitting the two spectra with the same reflection model. The 90\% confidence range of the density parameter is $\log(n_{\rm e}/{\rm cm}^{-3})=16.6^{+0.8}_{-1.0}$. Considering a 3-$\sigma$ uncertainty range\footnote{The hard lower limit of $n_{\rm e}$ in \texttt{relxilld} is $10^{15}$\,cm$^{-3}$.}, we obtain only an upper limit at $10^{18}$\,cm$^{-3}$.
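The switch from a 90\% confidence range to a 3-$\sigma$ range corresponds to raising the $\Delta\chi^{2}$ threshold for one parameter of interest from $\approx2.71$ to $\approx9$. A small illustration (our own sketch), inverting the one-degree-of-freedom $\chi^{2}$ CDF by bisection:

```python
import math

def chi2_cdf_1dof(x):
    # CDF of a chi-square distribution with one degree of freedom
    return math.erf(math.sqrt(x / 2))

def chi2_quantile_1dof(p, lo=0.0, hi=50.0):
    # Invert the CDF by bisection (the CDF is monotonic on [lo, hi])
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if chi2_cdf_1dof(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Delta-chi2 threshold for a 90% confidence interval on one parameter
q90 = chi2_quantile_1dof(0.90)
# Delta-chi2 threshold for a 3-sigma interval (two-sided p = erf(3/sqrt(2)) ~ 0.9973)
q3sig = chi2_quantile_1dof(math.erf(3 / math.sqrt(2)))
print(f"90%: {q90:.2f}, 3 sigma: {q3sig:.2f}")   # 2.71 and 9.00
```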
In this hybrid model, the strength and shape of the coronal emission are allowed to differ between the two epochs. The resulting reflection spectrum changes accordingly with a consistent flux fraction. We show the best-fit unabsorbed models in Fig.\,\ref{pic_intrin_hybrid}. The intrinsic emission is softer during obs2 than obs1. The photon index of the power-law emission increases from \red{$2.29$} to \red{$2.51$}. The unabsorbed flux of \texttt{relxilld}, however, changes only modestly ($\log(F)=\red{-11.01}$ in obs1 and $\log(F)=\red{-10.82}$ in obs2).
Additional variable absorption is needed in the hybrid model. The first absorber \texttt{xabs1} is in a higher ionisation state than the second absorber \texttt{xabs2}. \texttt{xabs1} has a covering factor of 0.47 during obs1 when the soft X-ray flux of \src\ is low. During the high flux state, obs2 requires no \texttt{xabs1}. We estimate the upper limit of its covering factor to be at 0.09. The second absorber is similar to typical warm absorbers seen in other AGN. The covering factor is consistent with 1 (>0.97). The column density of \texttt{xabs2} decreases by a factor of 2 from obs1 to obs2.
In summary, the hybrid model is also able to explain the multi-epoch variability of \src. This model provides a slightly worse fit than \red{Model 1} but a better fit than \red{Model 2}, with improvements in fitting the data in the 6--10\,keV band. In this hybrid model, the soft X-ray variability of \src\ is explained by variable power-law emission from the corona. The reflected emission from the accretion disc changes accordingly without changing the geometry of the corona. During the low flux state, a partially-covering absorber of $\log(\xi)=$\red{1.5} crosses our line of sight to the source with a covering factor of around 47\%. During the high soft X-ray flux state, this absorber moves out of our line of sight. An additional full-covering warm absorber, similar to those in other AGN, is needed to fit the data. The warm absorber shows a slightly higher column density in obs1 than in obs2.
\red{The hybrid model improves the fit in the iron emission band compared to Model 2 by including the relativistic disc reflection model. No additional component is required to fit the soft excess emission. We investigate whether an additional soft Comptonisation component as in Model 2 is able to further improve the fit. The \texttt{comptt} model is used for this purpose. We find that the fit is not significantly improved as the disc reflection component accounts for both the iron emission and the soft excess of \src. We fix the parameters of the \texttt{comptt} model at the best-fit values in Model 2 (see Table\,\ref{tab_abs}) and then obtain an upper limit for the contribution of the soft Comptonisation component in the 0.3--10\,keV band, $F_{\rm w}<1.6\times10^{-13}$~\ergs\ and $<4\times10^{-13}$~\ergs\ respectively for obs1 and obs2.}
\section{Discussion} \label{discuss}
We present different spectral models for the \xmm\ observations of \src. \red{The first model is based on the relativistic disc reflection model and requires a variable size of the coronal region to explain the data. The second model is based on multiple partially-covering absorbers in combination with the warm corona model; the observed soft X-ray variability is attributed to variable line-of-sight absorption and soft Comptonisation emission. We also propose a third, hybrid model, where both multiple absorption and disc reflection models are included. In this scenario, the size of the corona remains consistent while intrinsic variability in its emission is expected. The soft X-ray variability is dominated by one partially-covering absorber in the low X-ray flux state, which leaves our line of sight during the high X-ray flux state.}
\subsection{Variable Disc Reflection (\red{Model 1})}
In \red{Model 1}, the spectral variability is dominated by the Comptonisation emission from the hot corona. The photon index of the coronal continuum emission is higher when the X-ray luminosity is higher, as is often seen in other Sy1s \citep[e.g.][]{jiang18,wu20}.
The resulting disc reflection component changes according to the variable illuminating coronal emission. In particular, the lower flux state observation (obs1) has a higher reflection fraction than the higher flux state observation (obs2). This can be explained by light-bending effects combined with a variable size of the corona in \src\ \citep[e.g.][]{miniutti03, reis15, jiang18}.
The reflection fraction parameter in \texttt{relxilld} is defined as the ratio between the intensity of the coronal component that reaches the disc and the one seen by the observer, and the value of this parameter can easily exceed unity when the corona is compact \citep{dauser16}. The best-fit reflection fraction parameters for obs1 and obs2 are respectively \red{10} and \red{3}. Such a large-amplitude change of the reflection fraction parameter was also seen in the multi-epoch variability of other AGN \citep{jiang18d}. The high values of the reflection fraction parameter suggest that the corona lies within a region of $<3\,r_{\rm g}$ assuming a simple `lamp-post' geometry\footnote{The `lamp-post' model assumes a point-like, isotropic geometry of the corona located on the rotating axis of the BH.} \citep{dauser16}. The flatter disc emissivity profile also suggests the existence of a more extended coronal region in \src\ in obs2 than in obs1, because the outer region of the accretion disc is more illuminated when the coronal region is larger \citep[e.g.][]{gonzalez17}. \red{The corona is known to be very compact, within a few gravitational radii, in many AGN as well as in \src\ according to advanced timing analyses in the X-ray band \citep[e.g.][]{fabian09,reis13}. Similar conclusions were found in microlensing events of AGN \citep{morgan08,chartas17}. The compact corona also agrees with the predictions of some coronal models, such as magnetic reconnection, where the magnetic field increases in strength towards smaller radii \citep{merloni01}, the base of the jet \citep{ghisellini04} and pair production in the magnetosphere around the central BH \citep{hirotani98,chen20}. }
We estimate the distance of the warm absorber in \red{Model 1} from the central BH using the best-fit ionisation parameter $\xi=$\red{50}\,erg\,cm\,s$^{-1}$. The bolometric luminosity is estimated to be $9\times L_{\rm 5100}\approx3.6\times10^{45}$\,erg\,s$^{-1}$ \citep{peterson04}. Assuming a density of $n=10^{9}$\,cm$^{-3}$ \citep{reynolds95} and an isotropic illuminating source, we estimate the location of the warm absorber in \src\ to be around $d=\sqrt{\frac{L_{\rm Bol}}{4\pi n \xi}}\approx\red{6}\times10^{17}$\,cm.
In our calculation, the assumption of $n=10^{9}$\,cm$^{-3}$ for the warm absorber is based on the agreement between the recombination timescale and the variability timescale of the illuminating emission in MCG-6-30-15 \citep{reynolds95}. If the recombination timescale were larger than the variability timescale, photoionisation equilibrium would not apply. The primary emission of MCG-6-30-15 is much more variable than that of \src\ on timescales of kiloseconds, so the density of the warm absorber in \src\ is allowed to be lower than $n=10^{9}$\,cm$^{-3}$, as the recombination timescale is approximately proportional to $n^{-1}$ \citep{reynolds95}. Furthermore, we assume an isotropic illuminating source and apply a factor of $4\pi$ in the calculation, which would be smaller for anisotropic illumination. The estimated distance of $\red{6}\times10^{17}$\,cm is therefore only a lower limit.
Lastly, \red{Model 1} also provides an estimate of the BH spin of \src. By fitting the two epochs simultaneously and linking their spin parameters, we achieve a high BH spin of $a_{*}>0.97$ in Section\,\ref{ref_multi}. Previously, \citet{reynolds14} noted tentative evidence, compiled from a number of measurements in previous work, that the most massive black holes ($M_{\rm BH}>10^8M_{\odot}$) and the least massive black holes ($M_{\rm BH}<10^6M_{\odot}$) may have more modest spins. The properties of the host galaxy may play an important role in the evolution of the BH spin \citep{senana14}. \src\ may host one of the few massive BHs \citep{peterson04,ho08} with a high spin. Future reflection studies of AGN with similarly high BH masses will enable us to better understand the spin distribution over a wider range of BH masses.
\subsection{Variable Absorption and Intrinsic Soft X-ray Emission (\red{Model 2})}
\red{Model 2} provides a slightly worse fit to the multi-epoch spectra of \src\ than \red{Model 1} by $\Delta\chi^{2}=\red{13}$ with the same number of parameters. In \red{Model 2}, the ionisation states of the two absorbers are consistent within their 90\% confidence ranges between the two observations. The soft X-ray variability is mainly caused by the variable covering factors of the absorbers: the covering factor of the first absorber increases from 0.31 in obs2 to 0.50 in obs1. We estimate the location of the absorbers from the X-ray variability, assuming the absorbers orbit the central BH: the observed soft X-ray flux of \src\ increases from $\log(F)=-11.47$ in the 0.3--3\,keV band in 2003 to $\log(F)=-11.14$ in 2015 (see Table \ref{tab_obs}), i.e. by a factor of more than 2 in 12 years. Meanwhile, the X-ray flux of \src\ does not show large-amplitude variability on kilosecond timescales, unlike many other NLS1s.
We extract long-term lightcurves of \src\ from the \swift\ observations in the archive to investigate how rapidly the soft X-ray emission of \src\ varies. The first two panels of Fig.\,\ref{pic_swift} show the X-ray lightcurves. The X-ray flux of \src\ varies by a factor of up to 5 during this long-term period of \swift\ observations. The fifth and sixth \swift\ observations are separated by 6 months in 2010, during which the soft X-ray flux changes by a factor of 2.7. Similar large-amplitude variability on timescales of months is also seen in other intervals. Assuming a BH mass of $4\times10^{8}M_{\odot}$, the orbital period at $r=6.8\times10^{13}$\,m$\approx100r_{\rm g}$ is approximately 6 months. So, if the soft X-ray variability observed by \swift\ results from variable absorption only, as inferred by \red{Model 2}, the absorbers need to be located less than $\approx100r_{\rm g}$ from the central BH of \src\ to explain the observed large-amplitude X-ray variability on timescales of months.
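The orbital-period estimate above can be reproduced with a simple Keplerian calculation (a sketch using the BH mass quoted in the text; SI units):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m s^-1
M_sun = 1.989e30     # solar mass, kg
M_bh = 4e8 * M_sun   # BH mass assumed in the text

# Gravitational radius r_g = G M / c^2
r_g = G * M_bh / c**2
print(f"r_g = {r_g:.2e} m")                 # ~5.9e11 m

# Keplerian orbital period at r = 6.8e13 m (~100 r_g)
r = 6.8e13
T = 2 * math.pi * math.sqrt(r**3 / (G * M_bh))
print(f"T = {T / 86400 / 30:.1f} months")   # ~5.9 months, i.e. about half a year
```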
In addition to the variability in absorption, dramatic changes in the intrinsic emission are also required by \red{Model 2} to explain the spectral variability. The unabsorbed flux of the warm coronal emission increases by a factor of 13 in the \xmm\ energy range, and the temperature of the warm corona increases from \red{0.15}\,keV to \red{0.26}\,keV.
\red{In the warm corona model, the soft Comptonisation emission from the warm corona is often found to dominate the extreme-UV band \citep[e.g.][]{jin17}. We extend our best-fit models to the extreme-UV band and calculate their predicted flux in the 0.01--0.1~keV band where the soft Comptonisation emission peaks. The warm corona model suggests that the unabsorbed extreme-UV flux of the non-thermal emission in \src\ increases by a factor of 9 from $4\times10^{-12}$ \ergs\ to $3.6\times10^{-11}$ \ergs. A significant change in the bolometric luminosity is thus expected in Model 2. It is interesting to note that the Eddington ratio of \src\ is estimated to be around 5\% based on the observed 5100\AA\ luminosity (see Section\,\ref{intro}). The soft excess emission plays an important role in the estimation of bolometric luminosities for AGN at a few per cent of Eddington when the warm corona model is used \citep{noda18}.}
\red{In comparison, the relativistic disc reflection model in Model 1 suggests that the unabsorbed flux of the non-thermal emission in \src\ increases by a factor of only 2.9 from $2\times10^{-12}$ \ergs\ to $5.7\times10^{-12}$ \ergs\ in the 0.01--0.1\,keV band. A much smaller increase in the extreme-UV luminosity is required in the disc reflection model. Unfortunately, due to Galactic absorption, it is not possible to measure the luminosity of \src\ in this energy band and test each model.}
\red{The Optical Monitor on \xmm\ provides complementary optical and UV views of \src\ at longer wavelengths, although we are unable to measure the extreme-UV flux of this object. The observed UVW1 flux of \src\ increases by a factor of 2 (the magnitude changing from $13.597\pm0.008$ during obs1 to $13.307\pm0.007$ during obs2) and the UVM2 flux by a factor of 2.3 (from $13.692\pm0.008$ to $13.337\pm0.007$). Photometric observations with only two filters cannot constrain the thermal emission from the disc and thus the spectral energy distribution of this object. But we find that the observed UV flux at longer wavelengths varies by a factor similar to that of the extreme-UV flux predicted by the disc reflection model rather than the warm corona model.}
\subsection{Hybrid Model}
Based on the fits of \red{Model 1} and \red{Model 2}, we propose a hybrid model, where the intrinsic emission is modelled by disc reflection and absorption is still required to explain the spectral variability in \src. This hybrid model also provides a good fit to the data and improves the fit in the iron emission band in comparison with the absorption model, \red{Model 2}.
In the hybrid model, the variability of the intrinsic X-ray emission from \src\ is caused by the variable intrinsic power-law emission from the corona. The flux of the coronal emission increases by a small amount from $9.5\times10^{-12}$\,\ergs\ to $1.4\times10^{-11}$\,\ergs\ while its photon index increases from $2.29$ to $2.51$. We assume that the coronal region retains the same geometry during the two observations in this model. The resulting disc reflection spectrum changes according to the variable power-law emission with a consistent reflection fraction.
Furthermore, two layers of absorption are needed to explain the spectral variability of \src: one full-covering warm absorber with a modest column density of approximately $10^{21}$\,cm$^{-2}$ and one partially-covering absorber in a higher ionisation state of $\log(\xi)\approx1.5$. The partially-covering absorber has a covering factor of 47\% in obs1 when the flux is low and disappears from the line of sight during obs2. Assuming this partially-covering absorber contributes to the variability of \src\ on timescales of months observed by \swift, it needs to be located within $\approx100$\,$r_{\rm g}$. Variable absorption at a large distance along the line of sight has been seen in other AGN too \citep[e.g.][]{grupe04c,parker14b,kaastra18,miller21}. Objects like NGC~6814 require both disc reflection and absorption models to fit the X-ray data \citep{gallo21}. Variable absorption at a large distance from the innermost X-ray emission region plays an important role in the observed variability of these sources.
\section{Conclusion} \label{conclude}
In this work, we investigate the nature of the soft X-ray variability of the NLS1 AGN \src\ based on two \xmm\ observations in the archive. The soft excess emission of \src\ shows a very different spectral shape in these two observations. We apply two models to the EPIC data of the source, one based on relativistic disc reflection (\red{Model 1}) and the other based on partially-covering absorption in combination with the warm corona model (\red{Model 2}).
In the reflection scenario, the X-ray variability of \src\ is dominated by the variable emission from the hot corona. The disc reflection component changes accordingly. The anti-correlation between the reflection fraction parameter and the X-ray flux suggests a variable coronal geometry in \src. The flatter disc emissivity profile also supports the conclusion that the coronal region of \src\ is more extended during the high X-ray flux state than during the low X-ray flux state. The variable, modest column density of the line-of-sight warm absorption also contributes to the soft X-ray variability.
In the absorption scenario, the two high-$N_{\rm H}$ absorbers produce strong Fe~K edges in the Fe~K band. An additional low-ionisation reflection component is required to fit the Fe~K$\alpha$ emission of \src. The variability of \src\ is caused by the variable covering factors of the line-of-sight absorbers in this model, while they remain in a consistent low-ionisation state. \swift\ observations suggest that the absorbers have to be located within a region of $r\lesssim100\,r_{\rm g}$ to explain the large-amplitude X-ray variability of \src\ on timescales of months. In addition, the intrinsic emission from the AGN also needs to be variable: the temperature and strength of the warm corona increase.
To further improve the fit in the iron band based on \red{Model 2}, we investigate the possibility of a hybrid model, where 1) we assume that the geometry of the coronal region remains the same during the two observations; 2) the intrinsic power-law emission from the corona is allowed to vary; 3) variable absorption is used to explain the soft X-ray variability. Such a hybrid model offers a slightly better fit than \red{Model 2}. The variable absorption in the hybrid model has two components: one full-covering warm absorber, like those commonly seen in typical AGN \citep{reynolds95}, and one partially-covering absorber. The partially-covering absorber has a covering factor of 47\% during obs1, when the observed soft X-ray flux of \src\ is low, and completely moves out of our line of sight during obs2, when the observed flux is high.
\red{Models 1 and 2} and the hybrid model all provide similarly good fits to the CCD-resolution data of \src. However, they require different numbers of absorbers in different ionisation states.
Unfortunately, the archival EPIC data of \xmm\ do not have high enough spectral resolution to distinguish these models. The RGS observation of obs2 is off-axis, so no high-resolution soft X-ray spectrum during obs2 is available for comparison. Future high-resolution X-ray observations, e.g. from \athena\ \citep{barcons17} and \textit{XRISM} \citep{xrism20}, might provide us with a unique opportunity to constrain the models for \src\ by resolving multiple spectral components, \red{e.g., from disc reflection \citep{parker22b} and warm absorption \citep{parker22a}}.
\section*{Acknowledgements}
This paper was written during the worldwide COVID-19 pandemic in 2022. We acknowledge the hard work of all the health care workers around the world. We would not be able to finish this paper without their protection. J.J. acknowledges support from the Leverhulme Trust, the Isaac Newton Trust and St Edmund's College, University of Cambridge. \red{This work is based on observations obtained with \xmm, an ESA science mission
with instruments and contributions directly funded by ESA Member States and NASA. This project has made use of
the Science Analysis Software (SAS), an extensive suite to process the data collected by the XMM-Newton observatory.}
\section*{Data Availability}
All the data can be downloaded from the HEASARC website at https://heasarc.gsfc.nasa.gov. \red{The \texttt{relxill} package can be downloaded at https://www.sternwarte.uni-erlangen.de/dauser/research/relxill/index.html.}
\bibliographystyle{mnras}
\bibliography{ugc} %
\bsp %
\label{lastpage}
\begin{center}{\Large \textbf{
Neutrinos from near and far: Results from the IceCube Neutrino Observatory\\
}}\end{center}
\begin{center}
Tianlu Yuan\textsuperscript{1$\star$} for the IceCube Collaboration\footnote[2]{\protect\url{https://icecube.wisc.edu}}
\end{center}
\begin{center}
{\bf 1} Dept. of Physics and Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin–Madison, Madison, WI 53706, USA
\\
* [email protected]
\end{center}
\begin{center}
\today
\end{center}
\definecolor{palegray}{gray}{0.95}
\begin{center}
\colorbox{palegray}{
\begin{tabular}{rr}
\begin{minipage}{0.1\textwidth}
\includegraphics[width=30mm]{TIFR.png}
\end{minipage}
&
\begin{minipage}{0.85\textwidth}
\begin{center}
{\it 21st International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2022)}\\
{\it Online, 23-27 May 2022} \\
\doi{10.21468/SciPostPhysProc.?}\\
\end{center}
\end{minipage}
\end{tabular}
}
\end{center}
\section*{Abstract}
{\bf
Instrumenting a gigaton of ice at the geographic South Pole, the IceCube Neutrino Observatory has been at the forefront of groundbreaking scientific discoveries over the past decade. These include the observation of a flux of TeV-PeV astrophysical neutrinos, detection of the first astrophysical neutrino on the Glashow resonance and evidence of the blazar TXS 0506+056 as the first known astronomical source of high-energy neutrinos. Several questions, however, remain, pertaining to the precise origins of astrophysical neutrinos, their production mechanisms at the source and in Earth’s atmosphere and in the context of physics beyond the Standard Model. This proceeding highlights some of our latest results, from new constraints on neutrino interactions and oscillations to the latest measurements of the astrophysical neutrino flux and searches for their origins to future prospects with IceCube-Gen2.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
\label{sec:intro}
The IceCube Neutrino Observatory detects neutrinos interacting with nucleons and electrons in the South Pole ice via Cherenkov radiation produced by charged secondaries. It is instrumented with 5160 Digital Optical Modules (DOMs), each with a single downward-facing photomultiplier tube (PMT), arrayed across a cubic kilometer~\cite{Aartsen:2016nxy}. The DOMs are attached to 86 strings --- cables deployed in holes drilled into the ice --- that provide mechanical and electrical support. DOMs are spaced \SI{17}{\m} apart on standard IceCube strings and \SI{7}{\m} apart on DeepCore strings, a denser infill region of the detector. Standard IceCube strings are spaced approximately \SI{125}{\m} apart. \Cref{fig:detector} illustrates the scale and hexagonal configuration of the in-ice detector (left panel) as well as the absorption versus depth along the detector (right panel).
Depending on the interaction channel, most neutrino-induced events in IceCube can be classified into three categories~\cite{IceCube:2020wum}: cascades (particle showers), produced either by a high-energy electron in a charged current (CC) $\nu_e$ interaction or by a hadronic shower from the breakup of the nucleon in both CC and neutral current (NC) interactions; muon tracks, induced by CC $\nu_\mu$ interactions, which travel linearly through a significant portion of the detector; and double cascades, whereby a CC $\nu_\tau$ interaction produces a $\tau$ that decays after a distinctly separable distance ($\sim \SI{50}{\m \per \peta \eV}$). Other unique detector signatures are possible, for example at the Glashow resonance~\cite{IceCube:2021rpz} and in the individual PMT waveforms~\cite{IceCube:2015vkp}. This proceeding highlights the broad physics reach of the detector, including neutrino source searches (\Cref{sec:sources}), diffuse astrophysical flux measurements (\Cref{sec:diffuse}), and particle physics (\Cref{sec:pp}). In addition, prospects for a next generation detector ten times the size of IceCube will be discussed in \Cref{sec:gen2}.
\section{IceCube results}
\label{sec:results}
Depending on the physics of interest, IceCube takes distinct approaches in the analysis of its data. Angular resolution is important in neutrino source searches, which typically look for clustering, correlations, or time dependence between neutrinos or other astroparticles; an unbinned likelihood ratio method is used to maximize sensitivity~\cite{Braun:2008bg}. In measurements of the diffuse flux, expected event rates are computed from large-scale Monte Carlo (MC) simulations, which can later be scaled to match model predictions. Model parameters are then fitted under a binned likelihood assumption in the observable space. A similar approach can be taken for measurements of particle physics parameters and searches for exotic particles.
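The unbinned likelihood-ratio construction can be sketched in a few lines. In the sketch below, the per-event signal and background PDF values are hypothetical placeholders; in the real analyses they depend on the angular separation from the source candidate, the event energy, and the detector response.

```python
import numpy as np

# Hedged sketch of the unbinned likelihood-ratio test statistic used in
# point-source searches (Braun et al. 2008 style). S and B hold per-event
# signal and background PDF values; the toy numbers are placeholders only.
rng = np.random.default_rng(42)
N = 1000
S = rng.exponential(1.0, N)  # hypothetical signal PDF values per event
B = np.ones(N)               # hypothetical (flat) background PDF values

def log_L(ns):
    """Log-likelihood for ns signal events among N total events."""
    return np.sum(np.log(ns / N * S + (1.0 - ns / N) * B))

# Maximize over the signal strength ns with a simple grid scan
ns_grid = np.linspace(0.0, 200.0, 2001)
logLs = np.array([log_L(ns) for ns in ns_grid])
ns_hat = ns_grid[np.argmax(logLs)]
TS = 2.0 * (logLs.max() - log_L(0.0))  # test statistic vs. background-only
```

The test statistic $TS$ is then compared against background-only pseudo-experiments to obtain a (pre-trial) significance.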
\subsection{Source searches}
\label{sec:sources}
The relatively accurate directional pointing (\ang{1} or less median angular resolution) of muon tracks makes them the predominant signal in the search for astrophysical neutrino sources. Coupled to the realtime program~\cite{IceCube:2016cqr}, which alerts our multimessenger partners for follow-up observations, IceCube was able to pinpoint TXS 0506+056 as the first candidate source of high-energy astrophysical neutrinos~\cite{IceCube:2018dnn}. An event view of IC170922, a \SI{290}{\tera \eV} track that occurred on September 22, 2017 and triggered the realtime alert, is shown in \Cref{fig:txs}. Shortly after, \textit{Fermi} and MAGIC observed that the blazar TXS 0506+056 was in a flaring state and consistent with the direction of the IceCube track. The chance coincidence probability, including trials correction for previous alerts, was calculated to be disfavored at the $3\sigma$ level. An analysis of archival IceCube data led to the discovery of an excess of neutrino events in 2014-2015 at the location of the blazar~\cite{IceCube:2018cha}. The excess of events consists of lower energy neutrinos, clustered in a 110-day window at a significance of $3.5\sigma$. No excess was observed around the time of the 2017 alert. These two independent analyses -- the coincidence of a high-energy track with a flaring blazar and the archival neutrino ``flare'' -- are compelling evidence that TXS 0506+056 is a neutrino source.
IceCube has also performed all-sky, time-integrated neutrino source searches. For throughgoing tracks, those that traverse the detector and typically carry the best directional information, IceCube has the highest sensitivity near the horizon and in the northern sky; the large background of atmospheric muons limits sensitivity in the southern sky. In the most recent publication, with ten years of data, three analyses were defined a priori: a full-sky scan, a catalog search, and a stacked search~\cite{IceCube:2019cia}. The most significant result was obtained with the catalog search, performed across a list of 110 sources (97 in the northern sky, 13 in the southern sky): an excess over background was detected coincident with the Seyfert II galaxy NGC 1068, which also corresponds to the hottest spot in the northern sky as shown in \Cref{fig:ngc}, at a post-trial significance of $2.9\sigma$. The all-sky scan is subject to a large trials factor and yielded a post-trial $p=0.099$, while the stacked search yielded $p>0.01$.
Astrophysical neutrinos are likely coupled to ultra-high-energy cosmic rays (UHECR) by their common production mechanisms. In collaboration with ANTARES, Auger, and Telescope Array, IceCube performed a search for correlations between neutrinos and cosmic rays~\cite{ANTARES:2022pdr}. Three analyses were performed: a search for neutrinos along the arrival directions of UHECRs, a search for UHECRs along the directions of neutrinos, and a two-point correlation analysis. While no significant correlation was detected, upper limits were placed on the flux of neutrinos along the directions of UHECRs.
\subsection{Diffuse flux of astrophysical neutrinos}
\label{sec:diffuse}
IceCube has measured the diffuse flux of astrophysical neutrinos using samples consisting primarily of tracks~\cite{IceCube:2015qii,IceCube:2016umi,IceCube:2021uhz}, cascades~\cite{IceCube:2020acn}, and a mixture of both~\cite{IceCube:2014stg,IceCube:2020wum,IceCube:2018pgc}. For upgoing tracks, the only way to distinguish diffuse astrophysical neutrinos from atmospheric neutrinos is to employ model constraints based on their distinct energy spectra. For downgoing events that start within a fiducial region of IceCube, a detector veto is typically required to reject the atmospheric muon background. Such a veto also allows rejection of atmospheric neutrinos, based on the expectation that downgoing atmospheric neutrinos can be tagged if accompanied by muons produced in the same air shower~\cite{Schonert:2008is,Gaisser:2014bja,Arguelles:2018awr}. Both approaches have yielded discoveries of an astrophysical flux at TeV-to-PeV energies across the different samples. \Cref{fig:spl} has been adapted from~\cite{IceCube:2021uhz} and shows the current global picture of the astrophysical spectrum, assuming a single-power-law (SPL) flux. All results are consistent at the $2\sigma$ level.
While the SPL model is well motivated and simple, additional model comparisons have been performed under more sophisticated spectral assumptions. These include tests for a spectral cutoff, a broken power law, and log-parabola fluxes. The latest result using 9.5 years of upgoing tracks~\cite{IceCube:2021uhz} (\Cref{fig:spl}, green) found its data consistent with a SPL hypothesis, but saw hints of softening above \SI{1}{\peta \eV} at the $2\sigma$ level. In line with the upgoing-track results, \Cref{fig:cascades} shows model comparisons using six years (2010-2015) of cascade data~\cite{IceCube:2020acn}, with no significant rejection of the SPL model found. Compared to tracks, cascades typically have a much better energy resolution at the expense of a worse angular resolution. Measurements of the diffuse flux can therefore benefit substantially from improved energy reconstruction, while still employing the atmospheric neutrino self-veto in the downgoing region. The analysis relies on a boosted decision tree (BDT) to select primarily $\nu_e$ and $\nu_\tau$ CC interactions, with a subset of events expected from NC contributions. The specific model tests are unable to reject the SPL hypothesis, though they indicate a spectral softening at high energies at the $2\sigma$ level. In addition to fits of particular functional forms of the flux, a piecewise differential measurement was performed, whereby the spectral shape can be probed in a more model-independent manner.
The highest-energy contained cascade was detected at an energy of \SI{2}{\peta \eV}, below the energy of the Glashow resonance. This resonance is an enhancement of the $s$-channel neutrino charged-lepton cross section; due to the preponderance of electrons in matter, it occurs on Earth only for electron antineutrinos~\cite{Glashow:1960zz}. In the electron rest frame $E_R=\SI{6.3}{\peta \eV}$. Due to the increased cross section, the resonant flux is strongly suppressed by Earth absorption for neutrinos from the northern sky, as illustrated in the left panel of \Cref{fig:xs}, which shows the expected arrival-to-surface flux ratio~\cite{Vincent:2017svp}. To increase the possibility of detecting events on resonance, IceCube performed a search for partially contained events, expanding outward from the fiducial volume of the previous analyses~\cite{IceCube:2021rpz}. Without the outer layer of the detector as a veto, stringent BDT-based cuts were applied to reject the large downgoing muon background. Sixty years after its initial proposal, IceCube detected an astrophysical neutrino interacting at the resonance energy for the first time. Its visible energy was reconstructed as $\SI{6.05\pm0.72}{\peta \eV}$, consistent with on-resonance production at $2.3\sigma$ significance. The detection opens an additional identification channel for both the neutrino flavor and charge. As a result, sources of high-energy astrophysical neutrinos can be expected to produce both neutrinos and antineutrinos. Even with only one event detected thus far, the diffuse neutrino flux is now expected to extend to the resonance energy.
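The quoted resonance energy follows from requiring the $\bar{\nu}_e e^-$ center-of-mass energy to reach the $W$ mass on-shell; a quick numerical check using standard PDG mass values:

```python
# The s-channel W is produced on-shell when s = 2 m_e E_nu = m_W^2,
# giving E_R = m_W^2 / (2 m_e) in the electron rest frame (PDG masses).
m_W = 80.379        # W boson mass [GeV]
m_e = 0.510999e-3   # electron mass [GeV]
E_R = m_W**2 / (2.0 * m_e)   # resonance energy [GeV]
print(f"E_R = {E_R / 1e6:.2f} PeV")  # -> about 6.32 PeV
```

consistent with the \SI{6.3}{\peta \eV} figure quoted above.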
\subsection{Particle physics}
\label{sec:pp}
As the discussion of the Glashow resonance already alluded to, IceCube is capable of producing exciting results that span multiple subfields of physics. Neutrinos serve not only as astrophysical messengers sent to us from the largest scales in the universe, but also as unique probes of fundamental physics at the smallest scales. These include measurements of the neutrino cross section~\cite{Aartsen:2017kpd,IceCube:2020rnc}, searches for sterile neutrinos~\cite{IceCube:2020phf,IceCube:2020tka} and searches for relativistic magnetic monopoles~\cite{IceCube:2021eye}.
The attenuation of neutrinos in the Earth, which sets in at roughly \SI{10}{\tera \eV}, allows IceCube to measure the neutrino-nucleon deep-inelastic-scattering (DIS) cross section. A modification of the cross section affects the arrival flux at IceCube, so the data can in turn be used to constrain the cross section. \Cref{fig:xs} illustrates this effect under Standard Model predictions of the electron antineutrino cross section~\cite{Cooper-Sarkar:2011jtt}. The color map indicates the ratio of the arrival flux, $\Phi$, to surface flux, $\Phi_0$, over a range of zenith angles and $E_\nu$. A zenith angle of \ang{90} (\ang{180}) corresponds to a path horizontally (diametrically) through the Earth to IceCube. Using a sample of high-energy starting events~\cite{IceCube:2020wum}, an all-flavor measurement of the neutrino-nucleon DIS cross section was performed. In the analysis, the full zenith range from \ang{0} to \ang{180} is used. The result is shown in the right panel of \Cref{fig:xs} and is consistent with the Standard Model predictions. An earlier IceCube measurement based on upgoing tracks is shown as the shaded gray region~\cite{Aartsen:2017kpd}.
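As a rough illustration of the attenuation effect (not the analysis method itself, which uses PREM density profiles and full Standard Model DIS cross sections), one can estimate the survival probability along a chord through a uniform-density Earth with a simple power-law cross-section scaling. All numbers below are order-of-magnitude placeholders:

```python
import math

# Hedged sketch: neutrino survival probability through the Earth, assuming
# (for illustration only) a uniform-density Earth and a power-law CC
# cross-section fit sigma ~ 5.5e-36 (E/GeV)^0.363 cm^2.
N_A = 6.022e23          # nucleons per gram
R_EARTH = 6.371e8       # Earth radius [cm]
RHO_BAR = 5.5           # mean Earth density [g/cm^3]

def sigma_cc(E_GeV):
    """Illustrative power-law fit to the nu-N CC cross section [cm^2]."""
    return 5.5e-36 * E_GeV**0.363

def survival_prob(E_GeV, cos_zenith):
    """P(survive) = exp(-N_A * X * sigma) along a chord at given zenith."""
    if cos_zenith >= 0.0:       # downgoing: negligible column depth here
        return 1.0
    chord = -2.0 * R_EARTH * cos_zenith   # path length [cm]
    X = RHO_BAR * chord                   # column depth [g/cm^2]
    return math.exp(-N_A * X * sigma_cc(E_GeV))

# Absorption is much stronger for diametral paths than near the horizon
p_horiz = survival_prob(1e5, -0.01)   # ~horizontal at 100 TeV
p_up = survival_prob(1e5, -1.0)       # through the Earth's center
```

In this toy model a 100 TeV neutrino crossing the full Earth diameter already loses an order-one fraction of its flux, matching the qualitative behavior shown in \Cref{fig:xs}.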
IceCube has performed dedicated analyses to search for physics beyond the Standard Model (BSM). In particular, IceCube has searched for sterile neutrinos outside the three-flavor neutrino oscillation paradigm~\cite{IceCube:2020phf} and placed constraints on the $3+1$ parameters $\Delta m^2_{41}$-$\sin ^2(2\theta_{24})$ as shown in the left panel of \Cref{fig:bsm}. The best-fit (star) lies at $\sin ^2(2\theta_{24})=0.10$ and $\Delta m^2_{41} =\SI{4.5}{\eV^2}$ and is consistent with three flavor oscillations at $p=0.08$. The analysis uses eight years of upgoing muon track data, spanning a reconstructed muon energy range from \SIrange{500}{9976}{\giga \eV}.
In the search for more exotic particles, IceCube recently placed the most stringent upper limit on the flux of relativistic magnetic monopoles at $\beta > 0.8$~\cite{IceCube:2021eye}. Such particles are expected to produce a slowly propagating track with uniform light deposition along its length. Zero events were detected that passed all selection cuts, allowing IceCube to place upper bounds on the monopole flux as shown in the right panel of~\Cref{fig:bsm}.
\section{IceCube-Gen2}
\label{sec:gen2}
Building on the success of IceCube, the next generation in-ice neutrino telescope is under development~\cite{IceCube-Gen2:2020qha}. It will comprise an in-ice optical array with 120 additional strings covering ten times the volume of IceCube. The new optical sensors will increase photocathode coverage using multiple PMTs, thus capturing additional directional information. Two prototype designs are shown in \Cref{fig:gen2} (right). In addition to the optical array, IceCube-Gen2 will include a surface array~\cite{IceCube-Gen2:2021aek} for cosmic-ray physics and a sparse radio array covering \SI{500}{\km^2} for EeV neutrino detection~\cite{IceCube-Gen2:2021rkf}. Much as IceCube transformed the upper bounds measured by its predecessor AMANDA into measured fluxes, the full IceCube-Gen2 configuration will extend sensitivity above \SI{10}{\peta \eV}, and its improved angular resolution for tracks will usher in a new era of precision neutrino astronomy.
\section{Conclusion}
\label{sec:conclusion}
The IceCube Neutrino Observatory continues to produce scientific discoveries in astrophysics and particle physics. Recent results improve on previous measurements while yielding new discoveries. Its excellent uptime and broad energy range ensure that it remains a one-of-a-kind instrument.
\section*{Acknowledgements}
\paragraph{Funding information}
TY is supported by NSF grant PHY-1913607.
\bibliography{main.bib}
\nolinenumbers
|
Title:
Addition of tabulated equation of state and neutrino leakage support to IllinoisGRMHD |
Abstract: We have added support for realistic, microphysical, finite-temperature
equations of state (EOS) and neutrino physics via a leakage scheme to
IllinoisGRMHD, an open-source GRMHD code for dynamical spacetimes in the
Einstein Toolkit. These new features are provided by two new, NRPy+-based
codes: NRPyEOS, which performs highly efficient EOS table lookups and
interpolations, and NRPyLeakage, which implements a new, AMR-capable neutrino
leakage scheme in the Einstein Toolkit. We have performed a series of strenuous
validation tests that demonstrate the robustness of these new codes,
particularly on the Cartesian AMR grids provided by Carpet. Furthermore, we
show results from fully dynamical GRMHD simulations of single unmagnetized
neutron stars, and magnetized binary neutron star mergers. This new version of
IllinoisGRMHD, as well as NRPyEOS and NRPyLeakage, is pedagogically documented
in Jupyter notebooks and fully open source. The codes will be proposed for
inclusion in an upcoming version of the Einstein Toolkit.
| https://export.arxiv.org/pdf/2208.14487 |
\title{Addition of tabulated equation of state and neutrino leakage support to \igm}
\input{author_list}
\section{Introduction}
\label{sec:introduction}
Magnetized fluid flows in dynamical spacetimes are a key driver of multimessenger phenomena, a prominent example of which was the gravitational-wave signal GW170817~\cite{TheLIGOScientific:2017qsa} and the coincident short gamma-ray burst GRB170817A~\cite{GBM:2017lvd}, originating from a binary system of two merging neutron stars (NSs). Self-consistent simulations of these systems require software capable of modeling the diverse physics of the problem, from general relativistic magnetohydrodynamic (GRMHD) fluid flows, to the hot degenerate matter described by a microphysical, finite-temperature equation of state (EOS), to the changes in matter composition and energy due to the emission and absorption of neutrinos and photons, to the rapidly changing spacetime dynamics involved in the merger and black hole (BH) formation.
Given the high demand for accurate simulations of these systems, it is unsurprising that multiple groups have developed their own codes, some of which specialize in GRMHD for stationary spacetimes, which can be used e.g., for studying merger remnants~\cite{Gammie:2003rj,Noble:2005gf,Noble:2008tm,Murguia-Berthier:2021tnt,Anderson:2006ay}; while others are intended to model more generic GRMHD flows in dynamical spacetimes, which can be used for inspiral, merger, and post-merger dynamics~\cite{Bruegmann:2006ulg,OConnor:2009iuz,Thierfelder:2011yi,Giacomazzo:2007ti,Cerda-Duran:2008qfl,Kiuchi:2012qv,Dionysopoulou:2012zv,Radice:2012cu,Radice:2013hxh,Radice:2013xpa,Moesta:2013dna,White:2015omx,Etienne:2015cea,2020ascl.soft04003E,Kidder:2016hev,10.1145/3330345.3330346,2018JCoPh.375.1365F,Most:2019kfe,Cipolletta:2019geh,Cipolletta:2020kgq,2020ApJS..249....4S,Mewes:2020vic,tichy2020numerical}. Having many codes means that a variety of algorithmic choices are made during their development, some of which impact performance, some the code's suitability to model certain physical systems, and others the physical realism of the simulations.
Examples of differences that affect a code's performance and/or its ability to accurately model certain physical systems include the numerical resolution and adopted coordinate system (e.g., Cartesian, spherical, etc.); how spatial derivatives are represented numerically (e.g., finite difference, finite volume, discontinuous Galerkin, spectral, or even hybrid methods); the choice of EOS; and how neutrino effects are modeled; just to name a few.
Regarding the EOS, a common choice made by many codes is a simple ideal gas EOS, accounting for temperature effects through a thermal contribution~\cite{1993A&A...268..360J}. Others improve the description of the cold (as opposed to thermal) portion of the EOS by adopting the so-called piecewise polytropic model (see e.g.,~\cite{Read:2008iy} and references therein). Finally, some codes adopt microphysical, finite-temperature EOS tables constructed using data from astrophysical observations and nuclear physics experiments, like the ones available in the \compose~\cite{Typel:2013rza,Oertel:2016bki,Typel:2022lcx} and Stellar Collapse~\cite{stellarcollapse_website} databases. These tables provide the best description of nuclear matter available to date.
Regarding neutrino effects, one may consider the general relativistic radiation magnetohydrodynamics (GRRMHD) equations and model neutrino transport via Monte Carlo methods---by far the most computationally expensive option. These methods approximate solutions of the seven-dimensional Boltzmann equation by grouping particles into \emph{packets} that sample the neutrino distribution function at random points. We mention in particular the works of Foucart \etal~\cite{Foucart:2020qjb,Foucart:2021mcb,Foucart:2021ikp} and Miller \etal~\cite{Miller:2019gig,Miller:2019dpt}, which have used this technique in the context of binary neutron star (BNS) mergers. It is important to note that this method becomes prohibitively expensive in the optically thick regime (high densities and temperatures), requiring e.g., enforcing a hard ceiling on the value of the absorption opacity of the fluid~\cite{Foucart:2021mcb}.
Another approximate method for modeling neutrino physics is moment-based radiation transport. In this technique, the Boltzmann equation is recast as a $3+1$ system~\cite{1981MNRAS.194..439T,Shibata:2011kx}, which is then solved using similar numerical techniques to those for the GRMHD equations. Unlike the GRMHD equations, however, the system cannot be closed with an EOS, making its accuracy dependent on the choice of closure (see e.g.,~\cite{Richers:2020ntq}). This method has also been used in many studies, including core-collapse~\cite{Obergaulinger:2014nna,OConnor:2014sgn,Kuroda:2015bta,OConnor:2015rwy,Roberts:2016lzn,Skinner:2018iti,Glas:2018oyz,Rahman:2019yxy,Laiu:2021pha} and BNS~\cite{Foucart:2015vpa,Foucart:2015gaa,Foucart:2016rxm,Radice:2021jtw,Sun:2022vri}.
Leakage schemes are perhaps the most popular approach for modeling neutrino physics~\cite{vanRiper:1981mko,Ruffert:1995fs,Rosswog:2002rt,Rosswog:2003rv,Sekiguchi:2010ep,Sekiguchi:2011zd}. In this approach, experimental data are used to parameterize analytic formulas for the neutrino emission and cooling rates in terms of the optical depths, resulting in a computationally inexpensive algorithm that has been adopted rather broadly~\cite{OConnor:2009iuz,Ott:2012kr,Neilsen:2014hha,Radice:2016dwd,Siegel:2017jug,Endrizzi:2019trv,Murguia-Berthier:2021tnt}.
While these studies differ in how neutrinos are modeled, one aspect most share in common is that they were performed using software that is not freely available to everyone. Exceptions include the GRMHD codes \groned~\cite{OConnor:2009iuz,OConnor:2014sgn}, \whiskythc~\cite{Radice:2012cu,Radice:2013hxh,Radice:2013xpa}, \grhydro~\cite{Moesta:2013dna}, \igm~\cite{Etienne:2015cea,2020ascl.soft04003E}, \spectre~\cite{Kidder:2016hev,spectrecode}, and \spritz~\cite{Cipolletta:2019geh,Cipolletta:2020kgq,giacomazzo_bruno_2020_4350072}; the neutrino leakage codes \zelmanileak~\cite{OConnor:2009iuz,Ott:2012kr} and \thcleak~\cite{Radice:2016dwd}; the moment-based radiation transport codes \zelmanimone~\cite{Roberts:2016lzn} and \monegrey~\cite{Kidder:2016hev,spectrecode}; and the GRRMHD Monte Carlo code \nubhlight~\cite{Miller:2019gig}.
Here we introduce a major update to \igm---a concise open-source rewrite of the Illinois numerical relativity group's GRMHD code~\cite{Duez:2005sf} (henceforth \ogm), which exists within the \etk. This new version, which is also open-source~\cite{igm_github}, supports both finite-temperature, microphysical EOSs---via a new \nrpy~\cite{Ruchlin:2017com} module called \nrpyeos~\cite{igm_github}---and neutrino physics via a leakage scheme---using the recently developed \nrpy-based code \nrpyleakage~\cite{igm_github}. This updated version will be proposed for inclusion in a future \etk release.
For well over a decade, both \ogm and \igm have been used to model a plethora of astrophysical scenarios, including magnetized binary neutron stars (BNS)~\cite{Liu:2008xy,Paschalidis:2012ff,Ruiz:2017inq,Tsokaros:2019anx,Raithel:2021hye,Sun:2022vri,Armengol:2021mbt}, binary BH-NS~\cite{Etienne:2007jg,Etienne:2008re,Etienne:2011ea,Etienne:2012te,Paschalidis:2014qra,Ruiz:2018wah,Ruiz:2020elr}, BH accretion disks~\cite{Farris:2011vx,Farris:2012ux,Gold:2013zma,Gold:2014dta,Khan:2018ejm,EventHorizonTelescope:2019pcy,Wessel:2020hvu}, binary white dwarf-NS~\cite{Paschalidis:2010dh,Paschalidis:2011ez}, rotating NSs~\cite{Etienne:2006am,Espino:2019xcl}, gravitational collapse of supermassive stars~\cite{Sun:2017voo}, magnetized Bondi accretion~\cite{Etienne:2010ui}, and magnetized hypermassive neutron stars (HMNS)~\cite{Duez:2005cj,Duez:2006qe,Liu:2007cf,Shibata:2005mz,Shibata:2006hr,Stephens:2006cn,Ruiz:2020via}, to name a few. We also highlight \texttt{Frankfurt}/\igm~\cite{Most:2019kfe}, whose feature set exceeds that of the original \igm, but like \ogm is currently a closed-source code.
In contrast to these codes, \igm is open source and part of the \etk~\cite{Loffler:2011ay}, and in this work we introduce \etk \emph{thorns} (or modules) for \nrpyeos and \nrpyleakage, named \nrpyeoset and \nrpyleakageet, respectively.%
\footnote{We will refer to these as simply \nrpyeos and \nrpyleakage unless discussing thorn-exclusive features.}
This new version of \igm, along with \nrpyeos and \nrpyleakage, will be proposed for inclusion in a future release of the \etk, but in the meantime all codes are freely available for download at~\cite{igm_github}.
\nrpyeos is a pedagogically documented and infrastructure-agnostic \nrpy module that generates table interpolation routines based on the \etk's core EOS driver thorn \eosomni, which is itself based on the original code by O'Connor \& Ott~\cite{eosdrivercxx_repo}.
\nrpyeos provides a clean and clear user interface, generating specialized routines that compute only needed hydrodynamic quantities, avoiding unnecessary interpolations and thus greatly increasing the overall performance of GRMHD simulations that make use of tabulated EOSs.
When computing the neutrino opacities and emission and cooling rates, for example, \nrpyleakage requires five table quantities: $\left(\mue,\mun,\mup,X_{\rm n},X_{\rm p}\right)$, which are the chemical potentials of the electron, neutron, and proton, and the neutron and proton mass fractions, respectively. The general-purpose routine \texttt{EOS\char`_Omni\char`_full} is the only one available in \eosomni to compute such quantities. This routine, however, interpolates a total of 17 table quantities, 11 of which are unused for our purposes, resulting in it being twice as slow as the specialized routine generated by \nrpyeos.
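The performance argument above can be illustrated with a toy trilinear table lookup that interpolates only the requested quantities. The grid axes, table contents, and function names below are made-up placeholders, not the actual \nrpyeos routines or the Stellar Collapse table format:

```python
import numpy as np

# Toy sketch of a specialized EOS table lookup: trilinearly interpolate
# ONLY the requested quantities on a (log rho, log T, Y_e) grid. The grid
# and table values are random placeholders, not a real EOS table.
rng = np.random.default_rng(0)
log_rho = np.linspace(3.0, 15.0, 8)    # hypothetical density axis
log_T = np.linspace(-2.0, 2.0, 8)      # hypothetical temperature axis
ye = np.linspace(0.05, 0.55, 8)        # electron fraction axis
names_all = ("mu_e", "mu_n", "mu_p", "X_n", "X_p", "P", "eps")
tables = {name: rng.random((8, 8, 8)) for name in names_all}

def lookup(names, lr, lt, y):
    """Interpolate only the quantities in `names` at the point (lr, lt, y)."""
    axes, pt = (log_rho, log_T, ye), (lr, lt, y)
    idx = [int(np.clip(np.searchsorted(ax, v) - 1, 0, len(ax) - 2))
           for ax, v in zip(axes, pt)]
    w = [(v - ax[i]) / (ax[i + 1] - ax[i])
         for ax, v, i in zip(axes, pt, idx)]
    out = {}
    for name in names:
        t, val = tables[name], 0.0
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    wgt = ((w[0] if di else 1 - w[0]) *
                           (w[1] if dj else 1 - w[1]) *
                           (w[2] if dk else 1 - w[2]))
                    val += wgt * t[idx[0] + di, idx[1] + dj, idx[2] + dk]
        out[name] = val
    return out

# A leakage call needs only five quantities, so only five are interpolated:
vals = lookup(("mu_e", "mu_n", "mu_p", "X_n", "X_p"), 10.0, 0.5, 0.3)
```

Interpolating five rather than all seventeen stored quantities does proportionally less work per call, which is the essence of the speedup quoted above.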
\nrpyleakage, like \nrpyeos, is also pedagogically documented and infrastructure-agnostic. It implements the neutrino leakage scheme of~\cite{Ruffert:1995fs,Galeazzi:2013mia,Siegel:2017jug}, while also considering neutrino production from nucleon-nucleon Bremsstrahlung as in~\cite{Burrows:2004vq,OConnor:2011pxx}. In contrast to \zelmanileak, which computes the optical depths using a spherical ray-by-ray integration algorithm, optical depths are computed using the more generic local algorithm of~\cite{Neilsen:2014hha,Siegel:2017jug,Murguia-Berthier:2021tnt}. Further, its \etk version has been carefully designed to work seamlessly with the Cartesian AMR grids provided by \carpet~\cite{Schnetter:2003rb}. The user is given the option of disabling any of the neutrino production channels in \nrpyleakage, thus allowing the code to be fully compatible with the neutrino leakage scheme of \harmnuc~\cite{Murguia-Berthier:2021tnt}, which does not include nucleon-nucleon Bremsstrahlung.
Compatibility with \harmnuc remains a high priority for the authors: in the post-merger phase, after the spacetime has become sufficiently stationary, the \handoff code~\cite{Armengol:2021mbt} is used to transfer simulation data from \igm to \harmnuc. We thus replace the Cartesian grid used by \igm---which is not ideal for modeling accretion disks, as the plasma flows obliquely to coordinate lines and has its angular momentum spuriously sapped by numerical errors---with the spherical-like coordinate system used by \harmnuc. Because this coordinate system has been optimized to model BH accretion disks, it enables us to accurately and reliably evolve the remnant spacetime over the relatively long time scales associated with multimessenger astronomy.
This paper is organized as follows. \secref{sec:basic_equations} provides an overview of the mathematical formulation of the GRMHD equations and of the neutrino leakage scheme. \secref{sec:numerical_methods} describes the numerical methods and technical aspects of our codes. \secref{sec:results} contains a series of challenging validation tests of the code, as well as results from simulations of single NSs, and magnetized, equal-mass BNS systems, with eventual BH formation. \secref{sec:conclusions} contains closing remarks and plans for future work.
\section{Basic equations}
\label{sec:basic_equations}
Throughout the paper Greek letters $\mu,\nu,\rho,\ldots$ are used to denote spacetime indices (range 0--3) and lowercase Roman letters $i,j,k,\ldots$ to denote spatial indices (range 1--3), assuming Einstein summation convention. Unless stated otherwise, geometrized units \mbox{$G = c = 1$} are adopted, additionally assuming that \mbox{$\Msun=1$}. Temperatures are measured in ${\rm MeV}$.
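The conversion factors implied by these unit choices follow directly from the CGS values of $G$, $c$, and $\Msun$; a short numerical check:

```python
# Geometrized units with G = c = Msun = 1: the code's length and time units
# are G*Msun/c^2 and G*Msun/c^3, respectively (standard CGS constants).
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
c = 2.998e10        # speed of light [cm/s]
Msun = 1.989e33     # solar mass [g]
L_unit = G * Msun / c**2        # length unit, ~1.48e5 cm (about 1.48 km)
T_unit = G * Msun / c**3        # time unit, ~4.93e-6 s
rho_unit = Msun / L_unit**3     # density unit, ~6.2e17 g/cm^3
print(f"L = {L_unit:.3e} cm, T = {T_unit:.3e} s, rho = {rho_unit:.3e} g/cm^3")
```

These factors are what one multiplies code-unit lengths, times, and densities by to recover CGS values.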
The spacetime evolution is governed by Einstein's equations,
\begin{equation}
G^{\mu\nu} = 8\pi T^{\mu\nu}\;,\label{eq:Einstein}
\end{equation}
where $G^{\mu\nu}$ is the Einstein tensor and $T^{\mu\nu}$ is the total stress-energy tensor. As written, these equations are not in a form immediately suitable for numerical integration. One such form is the initial value formulation built upon first splitting $g_{\mu\nu}$ into the $3+1$ Arnowitt--Deser--Misner (ADM) form~\cite{Arnowitt:1962hi}
\begin{equation}
ds^{2} = -\alpha^{2}dt^{2}+\gamma_{ij}\bigl(dx^{i}+\beta^{i}dt\bigr)\bigl(dx^{j}+\beta^{j}dt\bigr)\;,\label{eq:three_plus_one_metric}
\end{equation}
where $\alpha$, $\beta^{i}$, and $\gamma_{ij}$ are the lapse function, the shift vector, and the spatial metric, all defined on spatial hypersurfaces of constant coordinate time $t$. With this decomposition, Einstein's equations can be split into sets of hyperbolic (time-evolution) and elliptic (constraint) PDEs, similar to Maxwell's equations in differential form (see e.g.,~\cite{Baumgarte:2010ndz} for a pedagogical review). The resulting ADM evolution and constraint equations are, however, not numerically stable, and must be reformulated further. The Baumgarte--Shapiro--Shibata--Nakamura (BSSN) formulation~\cite{Nakamura:1987zz,Shibata:1995we,Baumgarte:1998te} is one such reformulation, in which additional auxiliary and conformal variables are introduced to make the resulting set of equations strongly hyperbolic, enabling stable, long-term time integration of Einstein's equations on the computer.
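For reference, expanding the line element above gives the components of the four-metric in terms of $(\alpha,\beta^{i},\gamma_{ij})$,
\begin{equation}
g_{00} = -\alpha^{2} + \gamma_{ij}\beta^{i}\beta^{j}\;,\qquad
g_{0i} = \gamma_{ij}\beta^{j} \equiv \beta_{i}\;,\qquad
g_{ij} = \gamma_{ij}\;,
\end{equation}
with inverse components $g^{00}=-1/\alpha^{2}$, $g^{0i}=\beta^{i}/\alpha^{2}$, and $g^{ij}=\gamma^{ij}-\beta^{i}\beta^{j}/\alpha^{2}$, identities used repeatedly when converting between ADM and four-dimensional quantities.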
The remainder of this section is dedicated to the GRMHD formulation used by \igm, as well as an overview of the neutrino leakage scheme adopted by \nrpyleakage.
\subsection{General relativistic magnetohydrodynamics}
\label{sec:grmhd}
Assuming infinite conductivity (ideal MHD, where \mbox{$u_{\mu}F^{\mu\nu}=0$}), the GRMHD equations can be written as the system:
\begin{align}
\nabla_{\mu}\left(\nb u^{\mu}\right) &= 0\;,\label{eq:baryon_number_conservation}\\
\nabla_{\mu}\left(\ne u^{\mu}\right) &= \RR\;,\label{eq:lepton_number_conservation}\\
\nabla_{\mu}T^{\mu\nu} &= \QQ u^{\nu}\;,\label{eq:enmom_conservation}\\
\nabla_{\mu}\Fdual^{\mu\nu} &= 0\;,\label{eq:maxwell}
\end{align}
which are the conservation of baryon number, conservation of lepton number, conservation of energy-momentum, and homogeneous Maxwell's equations, respectively. In the above, $\nb$ ($\ne$) is the baryon (electron) number density, $u^{\mu}$ is the fluid four-velocity, $\mb$ is the baryon mass, \mbox{$\Fdual^{\mu\nu}=(1/2)\tilde{\epsilon}^{\mu\nu\rho\sigma}F_{\rho\sigma}$} is the dual of the Faraday tensor $F^{\mu\nu}$, and $\tilde{\epsilon}^{\mu\nu\rho\sigma}$ is the Levi-Civita tensor. The source terms $\RR$ and $\QQ$ account for changes in lepton number and energy, respectively, due to the emission and absorption of neutrinos. The precise form of $\RR$ and $\QQ$ is detailed in the next section on neutrino leakage.
The energy-momentum tensor is assumed to be that of a perfect fluid, plus an electromagnetic contribution, given by
\begin{equation}
T^{\mu\nu} = \left(\rhob h + b^{2}\right)u^{\mu}u^{\nu} + \left(P + \frac{b^{2}}{2}\right)g^{\mu\nu} - b^{\mu}b^{\nu}\;,
\end{equation}
where \mbox{$\rhob = \mb\nb$} is the baryon density, \mbox{$h = 1 + \epsilon + P/\rhob$} is the specific enthalpy, $\epsilon$ is the specific internal energy, $P$ is the fluid pressure, $b^{\mu}=(4\pi)^{-1/2}B^{\mu}_{(u)}$ is the rescaled 4-magnetic field in the fluid frame, where
\begin{align}
B^{0}_{(u)} &= u_{i}B^{i}/\alpha\;,\\
B^{i}_{(u)} &= \bigl(B^{i}/\alpha + B^{0}_{(u)}u^{i}\bigr)/u^{0}\;,
\end{align}
and $B^{i}$ is the magnetic field in the frame normal to the hypersurface.
To source Einstein's equations $T^{\mu\nu}$ must be updated from one time step to the next, which requires evolving the matter fields in time. To accomplish this, \mbox{Eqs.~(\ref{eq:baryon_number_conservation})--(\ref{eq:maxwell})} are rewritten in conservative form:
\begin{equation}
\partial_{t}\convec + \partial_{i}\fluxvec^{i} = \sourcevec\;,\label{eq:grmhd_conservative_form}
\end{equation}
where $\fluxvec$ and $\sourcevec$ are the flux and source vectors, respectively. Furthermore, $\convec=\convec(\primvec)$ is the vector of \emph{conservative} variables, with $\primvec$ the vector of \emph{primitive} variables. \igm adopts the Valencia formalism \cite{Anton:2005gi,Banyuls:1997zz}, whereby
\begin{equation}
\primvec = \left[
\begin{array}{c}
\rhob\\
\ye\\
T\\
P\\
v^{i}\\
B^{i}
\end{array}
\right].\label{eq:prims}
\end{equation}
Here, $\ye\equiv\ne/\nb$ is the electron fraction, $T$ is the temperature, and \mbox{$v^{i}=u^{i}/u^{0}$} is the fluid three-velocity. Notice that this choice of primitive three-velocity differs from the Valencia three-velocity $v_{(n)}^{i}$ used in other codes (see e.g.,~\cite{Baiotti:2010zf,Giacomazzo:2007ti,Moesta:2013dna,Cipolletta:2019geh,Cipolletta:2020kgq}); \igm adopts the three-velocity that appears in the induction equation \eqref{eq:induction}. These two velocities are related via
\begin{equation}
v_{(n)}^{i} = \alpha^{-1}\bigl(v^{i}+\beta^{i}\bigr)\;.
\end{equation}
The conservative variables can be written in terms of the primitive variables as
\begin{equation}
\renewcommand{\arraystretch}{1.1}
\convec \!=\!
\left[
\begin{array}{c}
\rhostar\\
\yestar\\
\tautilde\\
\tilde{S}_{i}\\
\tilde{B}^{i}
\end{array}
\right]
\!\equiv\!
\sqrtgamma
\left[
\begin{array}{c}
D\\
D\ye\\
\tau\\
S_{i}\\
B^{i}
\end{array}
\right]
\!\equiv\!
\sqrtgamma
\left[
\begin{array}{c}
W\rhob\\
D\ye\\
\alpha^{2}T^{00} - D\\
\alpha T^{0}_{\ i}\\
B^{i}
\end{array}
\right],
\label{eq:conservs}
\renewcommand{\arraystretch}{1}
\end{equation}
where $\gamma = \det(\gamma_{ij})$ and $W=\alpha u^{0}$ is the Lorentz factor.
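As a concrete illustration, the primitive-to-conservative map of \eqref{eq:conservs} can be sketched for the simplest case of an unmagnetized perfect fluid in flat space. This is a minimal Python sketch, not \igm's implementation; function and variable names are illustrative.

```python
import numpy as np

def prim_to_con_flat(rhob, ye, press, eps, v):
    """Primitives -> conservatives for an unmagnetized perfect fluid in
    flat space (alpha = 1, beta^i = 0, gamma_ij = delta_ij), following
    D = W rho_b, DYe = D Ye, tau = alpha^2 T^{00} - D, S_i = alpha T^0_i."""
    v = np.asarray(v, dtype=float)
    W = 1.0 / np.sqrt(1.0 - np.dot(v, v))  # Lorentz factor
    h = 1.0 + eps + press / rhob           # specific enthalpy
    D = W * rhob
    DYe = D * ye
    wrho = rhob * h * W**2                 # rho h W^2
    tau = wrho - press - D                 # since T^{00} = rho h W^2 - P here
    S = wrho * v                           # S_i = rho h W^2 v_i in flat space
    return D, DYe, tau, S
```

A useful sanity check: in the $v^{i}=0$ limit this reduces to $D=\rhob$ and $\tau=\rhob\epsilon$.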
There are many numerical advantages to writing the GRMHD equations in conservative form. First, ignoring neutrino effects, the source terms $\sourcevec$ vanish in flat space with Cartesian coordinates, and this form of the equations, when combined with an appropriate finite-volume scheme, guarantees conservation of total rest mass, lepton number, energy, and momentum to roundoff error. Second, when the sources are nonzero, conservation is still guaranteed up to errors in $\sourcevec$. Third, \igm adopts finite-volume methods, which handle ultrarelativistic flows far better than easier-to-implement and more efficient artificial viscosity schemes~\cite{Marti:1991wi,Anninos:2002gz}. Fourth, this conservative formulation makes it straightforward to use a high-resolution shock-capturing (HRSC) scheme, designed to minimize Gibbs oscillations near shocks.
An important consideration when implementing a GRMHD code is how to handle the magnetic induction equation, obtained from the spatial components of \eqref{eq:maxwell}. In conservative form, this equation may be written as
\begin{equation}
\partial_{t}\tilde{B}^{i} + \partial_{j}\left(v^{j}\tilde{B}^{i} - v^{i}B^{j}\right) = 0\;.\label{eq:induction}
\end{equation}
If \eqref{eq:induction} is propagated forward in time and its spatial partial derivatives are evaluated without special techniques, violations of the ``no magnetic monopoles'' condition (which follows from the time component of \eqrefalt{eq:maxwell})
\begin{equation}
\partial_{i}\tilde{B}^{i} = 0\;,\label{eq:no_monopoles}
\end{equation}
will grow with each iteration. Ensuring this constraint remains satisfied, particularly on AMR grids, is a nontrivial endeavor. We adopt the same strategy as~\cite{Etienne:2010ui,Etienne:2011re}: evolve the electromagnetic (EM) four-potential $\AA_{\mu}$ instead of the magnetic fields directly. Numerical errors associated with evolving $\AA_{\mu}$ then do not produce violations of \eqref{eq:no_monopoles}, as the magnetic field is computed as the ``curl'' (i.e., a Newtonian curl with its definition appropriately generalized for GR) of the vector potential, and the divergence of the curl vanishes identically (here to roundoff error, once a numerical approximation for the partial derivative is chosen).
The $3+1$ decomposition of the vector potential gives (see e.g.,~\cite{Baumgarte:2021skc} for a pedagogical review)
\begin{equation}
\AA_{\mu} = n_{\mu}\Phi + A_{\mu}\quad \text{and}\quad \tilde{B}^{i} = \epsilon^{ijk}\partial_{j}A_{k}\;,\label{eq:em_four_potential}
\end{equation}
where $n_{\mu}$ is the unit vector normal to the spatial hypersurface, $A_{\mu}$ is the purely spatial (i.e., $n^{\mu}A_{\mu}=0$) magnetic potential, $\Phi$ is the electric potential, and $\epsilon^{ijk}$ is the totally antisymmetric Levi-Civita symbol, with $\epsilon^{123}=1$. Thus \eqref{eq:induction} becomes
\begin{equation}
\partial_{t}A_{i} = \epsilon_{ijk}v^{j}\tilde{B}^{k} - \partial_{i}\left(\alpha\Phi - \beta^{j}A_{j}\right)\;.\label{eq:induction_magnetic_potential}
\end{equation}
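The claim that the divergence of a numerical curl vanishes to roundoff can be verified in a few lines: centered-difference operators commute exactly, so the mixed differences cancel. The sketch below uses a collocated grid with periodic boundaries purely for illustration, whereas \igm staggers the potentials and fields.

```python
import numpy as np

N, dx = 32, 1.0 / 32

def d(f, axis):
    """Second-order centered derivative with periodic boundaries."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2.0 * dx)

x = np.arange(N) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# A smooth, periodic vector potential A_k (any choice works)
Ax = np.sin(2 * np.pi * Y) * np.cos(2 * np.pi * Z)
Ay = np.sin(2 * np.pi * Z) * np.cos(2 * np.pi * X)
Az = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)

# B^i = eps^{ijk} partial_j A_k, evaluated with the same operator d
Bx = d(Az, 1) - d(Ay, 2)
By = d(Ax, 2) - d(Az, 0)
Bz = d(Ay, 0) - d(Ax, 1)

# Because the mixed centered differences commute exactly, the divergence
# of this numerical curl vanishes to roundoff, not just truncation error.
divB = d(Bx, 0) + d(By, 1) + d(Bz, 2)
print("max |div B| =", np.max(np.abs(divB)))  # roundoff level
```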
We fix the EM gauge by adopting a covariant version of the ``generalized Lorenz gauge condition'',
\begin{equation}
\nabla_{\mu}\AA^{\mu} = \xi n_{\mu}\AA^{\mu}\;,
\end{equation}
which was first introduced by the Illinois relativity group in~\cite{Etienne:2011re,Farris:2012ux}. Here $\xi$ is a parameter with units of inverse length, chosen so that the Courant--Friedrichs--Lewy (CFL) condition is always satisfied. Typically $\xi$ is set to $1.5/\Delta t_{\rm max}$, where $\Delta t_{\rm max}$ is the time step of the coarsest refinement level (see \cite{Mewes:2020vic} for further details). This gauge condition results in the additional evolution equation
\begin{equation}
\partial_{t}\tilde{\Phi} + \partial_{j}\left(\alpha\sqrt{\gamma}A^{j} - \beta^{j}\tilde{\Phi}\right) = -\xi\alpha\tilde{\Phi}\;,\label{eq:lorenz_gauge_evolution}
\end{equation}
where $\tilde{\Phi} \equiv \sqrt{\gamma}\Phi$.
Except for Eqs.\,(\ref{eq:induction_magnetic_potential}) and (\ref{eq:lorenz_gauge_evolution}), the remaining GRMHD equations are evolved using \eqref{eq:grmhd_conservative_form} and an HRSC scheme, as described in~\cite{Etienne:2015cea}. For completeness, the remaining components of the flux vector are given by
\begin{equation}
\bm{F}^{i}
=
\left[
\begin{array}{c}
\rhostar v^{i}\\
\yestar v^{i}\\
\alpha^{2}\sqrt{\gamma}T^{0i}-\rhostar v^{i}\\
\alpha\sqrt{\gamma}T^{i}_{\ j}
\end{array}
\right]\;,
\label{eq:grmhd_fluxes}
\end{equation}
and those of the source vector are given by
\begin{equation}
\bm{S}
=
\left[
\begin{array}{c}
0\\
\alpha\sqrt{\gamma}\RR\\
s + \alpha\sqrt{\gamma}\QQ u^{0}\\
\alpha\sqrt{\gamma}\left(\tfrac{1}{2}T^{\mu\nu}g_{\mu\nu,i} + \QQ u_{i}\right)
\end{array}
\right]\;,
\label{eq:grmhd_sources}
\end{equation}
where
\begin{equation}
\begin{split}
s = \alpha\sqrt{\gamma}\Bigl[\bigl(T^{00}\beta^{i}\beta^{j}+2T^{0i}&\beta^{j}+T^{ij}\bigr)K_{ij}\\
&-\left(T^{00}\beta^{i}+T^{0i}\right)\partial_{i}\alpha\Bigr]\;,
\end{split}
\end{equation}
and $K_{ij}$ is the extrinsic curvature.
Specifying the matter EOS closes this system of equations. To this end, \igm supports both analytic, hybrid EOSs~\cite{1993A&A...268..360J} and microphysical, finite-temperature, fully tabulated EOSs. For tabulated EOSs, hydrodynamic quantities are given as functions of the density $\rhob$, the electron fraction $\ye$, and the temperature $T$. As needed, the temperature can be recovered from the pressure, specific internal energy, or entropy using either a Newton--Raphson method or bisection, with the latter yielding superior results in highly degenerate regions of parameter space.
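A bisection-based temperature recovery of the kind described above can be sketched as follows. This is illustrative only: `eps_of_T` stands in for a fixed-$(\rhob,\ye)$ slice of the table interpolator, assumed monotonically increasing with the target in range.

```python
def recover_temperature(eps_target, eps_of_T, T_min=0.01, T_max=90.0,
                        tol=1e-12, max_iter=200):
    """Bisection for T given a monotonically increasing eps(T) at fixed
    (rho_b, Y_e); eps_of_T stands in for a 1D slice of an EOS table
    interpolator, and eps_target is assumed to lie within its range."""
    a, b = T_min, T_max
    fa = eps_of_T(a) - eps_target  # negative when the target is in range
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        if fa * (eps_of_T(m) - eps_target) <= 0.0:
            b = m                  # root lies in [a, m]
        else:
            a = m                  # root lies in [m, b]; f(a) keeps its sign
        if b - a < tol:
            break
    return 0.5 * (a + b)
```

Unlike Newton--Raphson, bisection never leaves the bracketing interval, which is why it behaves better where $\epsilon(T)$ is nearly flat (highly degenerate matter).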
For hybrid EOSs, hydrodynamic quantities are analytic functions of the density and the specific internal energy. The pressure is given by
\begin{equation}
P(\rhob,\epsilon) = P_{\rm cold}(\rhob) + P_{\rm thermal}(\rhob,\epsilon)\;,
\end{equation}
where
\begin{equation}
P_{\rm thermal}(\rhob,\epsilon) = \left(\Gamma_{\rm th}-1\right)\rhob\left[\epsilon-\epsilon_{\rm cold}(\rhob)\right]\;,
\end{equation}
accounts for thermal effects, with $\Gamma_{\rm th}$ a constant parameter that determines the conversion efficiency of kinetic to thermal energy at shocks, and $P_{\rm cold}(\rhob)$ and $\epsilon_{\rm cold}(\rhob)$ are computed assuming either a gamma-law or piecewise polytropic EOS (see e.g.,~\cite{Read:2008iy}).
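For concreteness, here is a minimal sketch of the hybrid EOS with a single polytrope for the cold part; the parameter values are illustrative placeholders, not \igm defaults.

```python
def hybrid_pressure(rhob, eps, K=100.0, Gamma=2.0, Gamma_th=1.8):
    """Hybrid EOS with a single polytrope for the cold part:
    P_cold = K rho^Gamma, eps_cold = K rho^(Gamma-1)/(Gamma-1)
    (from the first law for a polytrope), and a Gamma-law thermal part
    P_th = (Gamma_th - 1) rho (eps - eps_cold)."""
    P_cold = K * rhob**Gamma
    eps_cold = K * rhob**(Gamma - 1.0) / (Gamma - 1.0)
    return P_cold + (Gamma_th - 1.0) * rhob * (eps - eps_cold)
```

When $\epsilon=\epsilon_{\rm cold}(\rhob)$ the thermal part vanishes and the cold polytropic pressure is recovered.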
\subsection{Neutrino leakage}
\label{sec:neutrino_leakage}
\nrpyleakage---a new neutrino leakage code generated by \nrpy and fully documented in pedagogical \jupyter notebooks---enables us to incorporate basic neutrino physics in our simulations. Our implementation follows the prescription of~\cite{Ruffert:1995fs,Galeazzi:2013mia,Siegel:2017jug} to compute the neutrino number and energy emission rates, as well as the neutrino opacities. We also include nucleon-nucleon Bremsstrahlung~\cite{Burrows:2004vq}, following the \zelmanileak code~\cite{OConnor:2009iuz,Ott:2012kr,OConnor:2011pxx}. Unlike~\cite{Ruffert:1995fs} and \zelmanileak, however, we adopt a local, iterative algorithm to compute the neutrino optical depths, following~\cite{Neilsen:2014hha,Siegel:2017jug} (see also~\cite{Murguia-Berthier:2021tnt}). This algorithm computes optical depths far more efficiently than the one implemented in \zelmanileak when the modeled system is far from spherical symmetry.
Neutrinos are accounted for via the following reactions: \mbox{$\beta$-processes}, i.e., electrons ($\ee$) being captured by protons ($p$) and positrons ($\ae$) being captured by neutrons ($n$),
\begin{align}
\ee + p &\to n + \nue\;,\\
\ae + n &\to p + \anue\;;
\end{align}
electron-positron pair annihilation,
\begin{equation}
\ee + \ae \to \nui + \anui\;;
\end{equation}
transverse plasmon ($\tilde\gamma$) decay,
\begin{equation}
\tilde\gamma \to \nui + \anui\;;
\end{equation}
and nucleon-nucleon Bremsstrahlung,
\begin{equation}
N + N \to N + N + \nui + \anui\;.
\end{equation}
In the reactions above, $\nui=\left\{\nue,\nu_{\mu},\nu_{\tau}\right\}$ denotes the electron, muon, and tau neutrinos, $\anui$ their antineutrinos, and $N$ a nucleon. Our implementation assumes that the contributions from heavy-lepton neutrinos and antineutrinos are all the same, and we use the notation $\nux$ to refer to any one such species throughout.
The production of neutrinos via these processes leads to changes in the electron fraction and the energy of the system, which are accounted for by the source terms $\RR$ and $\QQ$ of Eqs.\,(\ref{eq:lepton_number_conservation}) and (\ref{eq:enmom_conservation}). More specifically,
\begin{align}
\RR &= - \rate{\RR}{\nue}{\rm eff}
+ \rate{\RR}{\anue}{\rm eff}\;,
\label{eq:R_source}\\
\QQ &= - \rate{\QQ}{\nue}{\rm eff}
- \rate{\QQ}{\anue}{\rm eff}
- 4\rate{\QQ}{\nux}{\rm eff}\;,
\label{eq:Q_source}
\end{align}
where the effective rates are given by
\begin{align}
\rate{\RR}{\nui}{\rm eff} &= \rate{\RR}{\nui}{\rm free}\left[1 + t^{\nui,\RR}_{\rm diff}/t^{\nui,\RR}_{\rm free}\right]^{-1}\;,\label{eq:effective_emission_rates}\\
\rate{\QQ}{\nui}{\rm eff} &= \rate{\QQ}{\nui}{\rm free}\left[1 + t^{\nui,\QQ}_{\rm diff}/t^{\nui,\QQ}_{\rm free}\right]^{-1}\;,\label{eq:effective_cooling_rates}
\end{align}
and the diffusion time scales are computed using
\begin{equation}
t^{\nui,j}_{{\rm diff}} = D_{\rm diff}\left(\kappa^{\nui}_{{\rm t},j}\right)^{-1}\left(\tau^{\nui}_{j}\right)^{2}\;,\label{eq:diffusion_time_scale}
\end{equation}
where $\tau^{\nui}_{j}$ are the neutrino optical depths, $\kappa^{\nui}_{{\rm t},j}$ the total neutrino transport opacity, \mbox{$D_{\rm diff}=6$}~\cite{Rosswog:2003rv,OConnor:2009iuz,Siegel:2017jug,Murguia-Berthier:2021tnt}, and $j=\RR,\QQ$. The free emission time scales are given by
\begin{equation}
t^{\nui,\RR}_{\rm free} = \frac{n_{\nui}}{\rate{\RR}{\nui}{\rm free}}\;,\quad t^{\nui,\QQ}_{\rm free} = \frac{\varepsilon_{\nui}}{\rate{\QQ}{\nui}{\rm free}}\;,
\end{equation}
where $n_{\nui}$ and $\varepsilon_{\nui}$ are the neutrino number and energy density, respectively, and the total free emission and cooling rates are given by
\begin{align}
\rate{\RR}{\nui}{\rm free} &= \rate{\RR}{\nui}{\beta}
+ \rate{\RR}{\nui,\anui}{\rm Pair}
+ \rate{\RR}{\nui,\anui}{\rm Plasmon}
+ \rate{\RR}{\nui,\anui}{\rm Bremss}\;,
\label{eq:total_nui_emission_rate}\\
\rate{\QQ}{\nui}{\rm free} &= \rate{\QQ}{\nui}{\beta}
+ \rate{\QQ}{\nui,\anui}{\rm Pair}
+ \rate{\QQ}{\nui,\anui}{\rm Plasmon}
+ \rate{\QQ}{\nui,\anui}{\rm Bremss}\;.
\label{eq:total_nui_cooling_rate}
\end{align}
Note that $\beta$-processes only contribute when $\nui=\left\{\nue,\anue\right\}$.
For small densities and temperatures---the optically thin regime---the optical depths vanish and the medium is essentially transparent to neutrinos. The diffusion time scales are then much shorter than the free-emission ones, and the effective rates reduce to the free ones. For large densities and temperatures---the optically thick regime---the optical depths are large and neutrinos interact strongly with matter: diffusion happens on long time scales while the free-emission time scales become short due to the increased emission and cooling rates, implying \mbox{$\rate{\RR}{\nui}{\rm eff}\to n_{\nui}/t^{\nui,\RR}_{{\rm diff}}$} and \mbox{$\rate{\QQ}{\nui}{\rm eff}\to\varepsilon_{\nui}/t^{\nui,\QQ}_{{\rm diff}}$}. We postpone the details of how the optical depths are computed in \nrpyleakage until \secref{sec:computation_of_optical_depths}.
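The interpolation between these two regimes in Eqs.\,(\ref{eq:effective_emission_rates}) and (\ref{eq:effective_cooling_rates}) is a one-liner; the sketch below makes both limits explicit.

```python
def effective_rate(rate_free, t_diff, t_free):
    """R_eff = R_free / (1 + t_diff / t_free): reduces to the free rate
    when t_diff << t_free (optically thin), and to n / t_diff when
    t_diff >> t_free (optically thick), since t_free = n / R_free."""
    return rate_free / (1.0 + t_diff / t_free)
```

The optically thick limit $\rate{\RR}{\nui}{\rm eff}\to n_{\nui}/t^{\nui,\RR}_{\rm diff}$ follows directly from $t^{\nui,\RR}_{\rm free}=n_{\nui}/\rate{\RR}{\nui}{\rm free}$.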
\section{Numerical methods}
\label{sec:numerical_methods}
Most of the core numerical algorithms in \igm remain the same as in the original version announced in 2015; these algorithms were reimplemented from \ogm as they were found most robust when modeling a large variety of astrophysical scenarios. Core algorithms, as described in~\cite{Etienne:2015cea}, include the HRSC scheme, the Harten--Lax--van Leer approximate Riemann solver~\cite{doi:10.1137/1025002}, the piecewise parabolic method~\cite{Colella:1982ee} used to reconstruct the primitive variables at the cell interfaces, the staggering of the electric and magnetic potentials and of the magnetic field, the algorithm to compute the magnetic fields from the magnetic potential, the Runge--Kutta time integration, and outflow boundary conditions.
Key algorithmic changes introduced here include an updated conservative-to-primitive infrastructure, which has been expanded to include new, effective 1D routines well-suited for tabulated EOSs, and the interfaces with the newly developed codes \nrpyeos and \nrpyleakage. We devote the remainder of this section to discussing these updates in detail.
\subsection{Conservative-to-primitive recovery}
\label{sec:conservative_to_primitive}
The energy-momentum tensor in the GR field equations and the fluxes in the GRMHD equations are written as functions of the primitive variables. Thus, after updating the conservative variables at each time iteration, the primitive variables (``primitives'') must be computed from the conservative variables (``conservatives''). This is a nontrivial step: while the conservatives are algebraic functions of the primitives, the inverse map has no closed form, requiring a root-finding algorithm to solve a set of coupled nonlinear equations.
As numerical errors---such as truncation error from spatial and temporal finite differencing, as well as interpolation and prolongation operations---can cause the conservative variables to stray from their physically valid range, this inversion sometimes becomes impossible. We therefore perform a series of checks on the conservative variables to ensure they are valid before attempting to recover the primitives. We refer the reader to Appendix A of~\cite{Etienne:2011ea} for details on how these bounds are checked and enforced.
For gamma-law and hybrid EOSs, the primary primitive recovery routine used in \igm is the 2D scheme of Noble~\etal~\cite{Noble:2005gf} (henceforth ``Noble 2D").%
\footnote{The dimensionality of the scheme is associated with how many equations are used to recover the primitive variables: 1D schemes use one equation and one unknown, 2D schemes use two equations and two unknowns, etc.}
We have also implemented the 1D scheme of the same reference, as well as 1D routines that replace the energy by the entropy, which is passively advected alongside the other variables assuming a conservation equation, as in~\cite{Noble:2008tm}. The user is then given the option to use one or more of these last three routines as backups to the Noble 2D one.
The entropy routines perform well in regions of high magnetization and low densities, where the other two can be less robust, but we stress that they should only be used as backups, as the entropy is not conserved at shocks and therefore cannot always be reliably used to recover the primitives. If the Noble 2D and backup routines are unable to recover the primitives, a final backup routine due to Font~\etal~\cite{Font:1998hf} is used, for which the pressure is reset to its cold value and thus an inversion is guaranteed (see Appendix A of~\cite{Etienne:2011ea} for more details).
For tabulated EOSs, Newton--Raphson-based routines become very sensitive to the initial guesses provided for the primitive values. \igm does not keep track of the values of the primitives at the previous time step, making it difficult to use routines such as the Noble 2D and the closely related routines by Ant\'on~\etal~\cite{Anton:2005gi}, Giacomazzo \& Rezzolla~\cite{Giacomazzo:2007ti}, and Cerd\'a-Dur\'an~\etal~\cite{Cerda-Duran:2008qfl} (see also~\cite{Murguia-Berthier:2021tnt}).
As reviewed in~\cite{Siegel:2017sav}, some routines require better initial guesses than others. In particular, we find that the 1D routines of Neilsen~\etal~\cite{Neilsen:2014hha} and Palenzuela~\etal~\cite{Palenzuela:2015dqa}, as well as the one by Newman \& Hamlin~\cite{NewmanHamlin}, which only require an initial guess for the temperature, are very robust at recovering primitive variables even for relatively poor initial guesses. Our implementation of these routines is based on the open-source infrastructure by Siegel~\cite{grmhd_con2prim_repo}, and we extend the implementation by adding to the routines the option of using the entropy instead of the energy during primitive recovery.
Performing an EOS table inversion with the entropy yields far smaller temperature errors than when using the energy---particularly in regions of high densities and low temperatures---and therefore, unsurprisingly, the modified routines recover the primitive variables with far smaller errors than the original. Unfortunately, because the entropy evolution assumes that entropy is conserved (an approximation that completely fails near shocks), these new routines are also only suitable as backup routines. As the entropy backup is rarely applied, and generally applied only far from shocks, we find it quite useful.
A primitive recovery step begins with guesses $W_{\rm guess}=1$ and $T_{\rm guess}=T_{\rm atm}$ or $T_{\rm guess}=T_{\rm max}$, the atmospheric and maximum temperatures allowed in the simulation, both of which lie within the EOS table bounds. In this paper we adopt \mbox{$T_{\rm atm}=0.01~\mathrm{MeV}$} and \mbox{$T_{\rm max}=90~\mathrm{MeV}$}. A density guess is obtained from \mbox{$\rho_{\rm guess} = \rhostar/\left(\sqrt\gamma W_{\rm guess}\right)$}, and the electron fraction is recovered analytically from \mbox{$\ye = \yestar/\rhostar$}. All other primitives are computed from these using the EOS.
Both the Palenzuela~\etal and Newman \& Hamlin routines are iterative and result in updates to the Lorentz factor and specific internal energy (or entropy), requiring EOS table inversions to compute the temperature at every iteration. Starting with the energy version of the routines and \mbox{$T_{\rm guess}=T_{\rm atm}$}, a primitive recovery is attempted using the Palenzuela~\etal routine. A failure leads to a new attempt using the Newman \& Hamlin routine. In case this fails, the temperature guess is reset to \mbox{$T_{\rm guess}=T_{\rm max}$} and the previous steps are repeated. If all previous steps fail, the process is repeated with the entropy version of the routines. If at the end of this step primitive recovery is still unsuccessful, the point is flagged and the recovery continues for the remaining grid points.
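The fallback cascade just described has a simple loop structure. The sketch below assumes each routine returns a success flag together with the recovered primitives; the routine implementations themselves (Palenzuela~\etal, Newman \& Hamlin) are not shown.

```python
def recover_primitives(cons, routines, T_atm=0.01, T_max=90.0):
    """Fallback cascade: energy-based recovery first, then the entropy
    variant; within each, try T_guess = T_atm then T_max; within each
    guess, try every routine in order (Palenzuela et al. first, then
    Newman & Hamlin in the text). Each routine returns (success, prims)."""
    for use_entropy in (False, True):
        for T_guess in (T_atm, T_max):
            for routine in routines:
                success, prims = routine(cons, T_guess, use_entropy)
                if success:
                    return prims
    return None  # point gets flagged; neighbor averaging is tried next
```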
After sweeping the numerical grid once, we loop over flagged points, look at their neighbors and check for how many of them the primitive recovery has succeeded. If not enough neighbors are found the run is terminated, as clusters of failures indicate serious problems in the evolution. When the number of neighbors is sufficient, the conservative variables at the flagged points are set to
\begin{equation}
\bm{C}_{\rm new} = (1-w)\bm{C}_{\rm flagged} + w\bar{\bm{C}}_{\rm neighbors}\;,
\end{equation}
where $\bar{\bm{C}}_{\rm neighbors}$ denotes the average of the conservative variables over the neighboring points at which primitive recovery succeeded. This is repeated up to four times, successively increasing the weight $w$ to $\sfrac{1}{4}$, $\sfrac{1}{2}$, $\sfrac{3}{4}$, and $1$ with each new attempt. If primitive recovery is still unsuccessful at that point, the primitives are reset to their atmosphere values ($\rhob=\rho_{\rm atm}$, $\ye=\ye^{\rm atm}$, and $v^{i}=0$).
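The neighbor-averaging fallback can be sketched as follows (illustrative; in \igm this acts on the full conservative vector at flagged grid points, and `try_recovery` stands in for the whole recovery cascade).

```python
def fix_flagged_point(C_flagged, C_neighbors, try_recovery):
    """Blend the conservatives at a failed point with the average over
    neighbors where recovery succeeded, using weights w = 1/4, 1/2,
    3/4, 1, and retry the recovery after each blend."""
    C_bar = sum(C_neighbors) / len(C_neighbors)
    for w in (0.25, 0.5, 0.75, 1.0):
        success, prims = try_recovery((1.0 - w) * C_flagged + w * C_bar)
        if success:
            return prims
    return None  # last resort in the text: reset to atmosphere values
```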
We note that recovery failures are quite rare---particularly those in which all of the backup techniques fail---and typically occur in dynamically irrelevant regions: in the low-density atmosphere or deep inside BH horizons. However, because production-quality simulations involve ${\sim}10^{12}$ primitive recovery attempts, occasional failures are a near certainty. When failures do occur, resetting the fluid 3-velocity components to zero is rather undesirable, as it could greatly and discontinuously influence the magnetic field dynamics in magnetically dominated regions. With this in mind, we have dedicated considerable effort to making our backup strategies robust, avoiding velocity resets as much as possible.
Finally, we note that we have not extended the Font~\etal routine to work with tabulated EOS, but this will be done in a future work. We also plan on using the tabulated EOS version of \texttt{RePrimAnd}~\cite{Kastaun:2020uxr} once it becomes available.
\subsection{Computation of optical depth}
\label{sec:computation_of_optical_depths}
In \nrpyleakage, the following reactions contribute to the total transport opacities $\kappa^{\nui}_{{\rm t},j}$:
\begin{alignat}{7}
& n\medspace &&+\medspace && \nue && \to\thickspace && \ae && +\medspace && p\;,\\
& p\medspace &&+\medspace && \anue && \to\thickspace && \ae && +\medspace && n\;,\\
& n\medspace &&+\medspace && \nui && \to\thickspace && n && +\medspace && \nui\;,\\
& p\medspace &&+\medspace && \nui && \to\thickspace && p && +\medspace && \nui\;.
\end{alignat}
Explicit formulas for the neutrino emission and cooling rates appearing in Eqs.\,(\ref{eq:total_nui_emission_rate}) and (\ref{eq:total_nui_cooling_rate}), as well as for the neutrino opacities, can be found in Appendix B of~\cite{Ruffert:1995fs} and in~\cite{Burrows:2004vq}.
Computing the local neutrino optical depths $\tau^{\nui}_{j}$ generally involves a global integration of $\kappa^{\nui}_{{\rm t},j}$ along some path $\mathcal{P}$, i.e.,
\begin{equation}
\tau^{\nui}_{j} = \int_{\mathcal{P}}ds\,\kappa^{\nui}_{{\rm t},j}\;.\label{eq:optical_depth_integral}
\end{equation}
One common option, implemented in the open-source code \zelmanileak, is to integrate along radial rays (see e.g.,~\cite{OConnor:2009iuz,Ott:2012kr,OConnor:2011pxx}), which for simulations that use Cartesian coordinates requires an auxiliary spherical grid. One interpolates data to the spherical grid, computes the opacities, and then performs the integration. This approach is particularly well-suited for core collapse and other nearly spherically symmetric systems. However, for systems far from spherical symmetry, such as BNS and BH accretion disks, computing the optical depths this way can be very inefficient, as the resolution of the auxiliary spherical grid would need to be increased tremendously to produce accurate optical depths.
In order to have an algorithm that is both efficient and more generally applicable, we instead compute the optical depths with the local approach proposed in~\cite{Neilsen:2014hha} (see also~\cite{Siegel:2017jug,Murguia-Berthier:2021tnt}). At each grid point, the optical depth is integrated to all nearest neighbors and updated with the path that yields the smallest value, i.e.,
\begin{equation}
\tau^{\nui}_{j} = \min_{\rm neighbors}\left(\tau^{\nui}_{j} + \sqrt{\gamma_{mn}\dx^{m}\dx^{n}}\kappa^{\nui}_{{\rm t},j}\right)\;,\label{eq:path_of_least_resitance}
\end{equation}
where $\dx^{i}$ is the grid spacing along the $i$\textsuperscript{th} direction. For simplicity, we do not integrate diagonally. In this way neutrinos are allowed to explore many possible paths out of regions of relatively high optical thickness, following the path of least resistance. In our implementation, we assume that the outer boundary of the computational domain is transparent to neutrinos and thus has zero optical depth.
Because the opacities themselves depend on the optical depths, to compute the {\it initial optical depths} at all gridpoints in the simulation domain, we implement the following iterative approach. First the optical depths are initialized to zero, leading to an initial estimate of the opacities. This initial estimate is used to update the optical depths according to \eqref{eq:path_of_least_resitance}, which in turn allows us to recompute the opacities, and so on. One might also interpret this algorithm as considering only nearest neighbors in the first iteration, next-to-nearest neighbors in the second, next-to-next-to-nearest neighbors in the third, etc. In this way our algorithm enables neutrinos to map out paths of least resistance through arbitrary media.
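A one-dimensional sketch of this iteration follows, with fixed opacities for clarity; in \nrpyleakage the opacities are recomputed from the optical depths at every iteration, and the update runs over all spatial directions.

```python
import numpy as np

def compute_tau(kappa, dx, max_iter=100):
    """Path-of-least-resistance optical depths on a 1D grid with
    transparent (tau = 0) outer boundaries. Opacities are held fixed
    here; the full scheme recomputes them from tau each iteration."""
    N = len(kappa)
    tau = np.zeros(N)
    for _ in range(max_iter):
        new = np.empty(N)
        for i in range(N):
            # tau_i = min over nearest neighbors of (tau_nb + dx * kappa_i)
            nb = [tau[j] if 0 <= j < N else 0.0 for j in (i - 1, i + 1)]
            new[i] = min(nb) + dx * kappa[i]
        if np.array_equal(new, tau):  # converged to the fixed point
            return new
        tau = new
    return tau
```

With uniform opacity the fixed point is simply the optical distance to the nearest transparent boundary, as expected.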
When the grid structure contains multiple refinement levels, we adopt a multi-grid ``V" cycle at each iteration: the optical depths are computed on a given refinement level and, unless we are at the finest one, the solution is prolongated to the next finer one; we move to the next finer refinement level and repeat the previous step; once the finest refinement level is reached the solution is restricted to the coarser levels, completing one iteration.
The algorithm is stopped once an equilibrium is reached, measured by the overall relative change in the optical depths between consecutive iterations $(n,n+1)$,
\begin{equation}
E = \left[\sum_{j}\sum_{\nui}\sum_{\rm interior}\left(\frac{\tau^{\nui}_{j,n+1}-\tau^{\nui}_{j,n}}{\tau^{\nui}_{j,n+1}}\right)^{2}\right]^{1/2}\;.
\end{equation}
Once $E$ falls below a user-specified threshold (typically $10^{-8}$) or the prescribed maximum allowed number of iterations (typically $2048$) is exceeded, the algorithm stops.
\section{Results}
\label{sec:results}
This section presents stress tests of these new algorithms implemented in \igm, with the aim of demonstrating both the correctness of our implementation as well as its robustness when modeling challenging astrophysical scenarios.
To this end, we first present results from challenging unit tests of the conservative-to-primitive infrastructure and the optical depth framework. Further, to validate \nrpyleakage, we evolve a simple optically thin gas and compare results between \igm and a trusted code with a physically identical but independently developed neutrino leakage scheme, \harmnuc.
Next, to demonstrate the reliability of our new tabulated EOS implementation, \nrpyeos, we evolve isolated, unmagnetized NSs without neutrino leakage at different grid resolutions and with different EOS tables. We consider the test passed if numerically driven oscillations converge to zero with increased resolution at an order consistent with the reconstruction scheme (between second and third order for PPM).
The remaining tests focus on full-scale simulations of physical scenarios that lie at the heart of multimessenger astrophysics. We simulate magnetized, equal-mass BNS systems that lead to a remnant BH with an accretion disk, which is evolved for several dynamical timescales. We perform simulations with and without our new neutrino leakage scheme and compare the qualitative differences between them. Stably modeling such systems is extremely difficult if the new algorithms are not implemented correctly, making these the most strenuous tests conducted in our study.
In tests involving tabulated EOSs, we use three different microphysical, fully tabulated EOSs: the Lattimer--Swesty EOS with incompressibility modulus $K=220~\mathrm{MeV}$~\cite{Lattimer:1991nc} (henceforth LS220), the Steiner--Hempel--Fischer EOS~\cite{Steiner:2012rk} (henceforth SFHo), and the SLy4 EOS of~\cite{Chabanat:1997un}. For the first two, we use the tables by O'Connor--Ott~\cite{OConnor:2009iuz}, while for SLy4 we use the table by Schneider--Roberts--Ott~\cite{Schneider:2017tfi}; all are freely available at~\cite{stellarcollapse_website}. These choices of EOS were made largely to facilitate direct comparisons with \harmnuc in the case of BH accretion disks.
Finally, we note that the EOS tables were cleaned with a simple script to change the reference mass used in the tables in such a way that the specific internal energy is never negative. We also clean up some of the table entries to avoid superluminal sound speeds. These are standard procedures when using EOS tables from~\cite{stellarcollapse_website}.
\subsection{Primitive variables recovery}
\label{sec:primitive_variables_recovery}
In order to validate our implementation of the primitive recovery routines described in \secref{sec:conservative_to_primitive}, we perform tests similar to those described in~\cite{Siegel:2017sav,Murguia-Berthier:2021tnt}. First, a set of primitive variables is specified and the corresponding conservative variables are computed. The conservatives are then injected into the recovery routine under test, and the recovered primitives are compared with the input primitives, yielding an estimate of how well the routine recovers the correct primitive variables. We refer to this as a ``$\bm{P}$ to $\bm{C}$ to $\bm{P}$'' test.
We measure the error of each recovery as the sum of relative errors over all primitive variables in the primitives vector $\bm{P}$,
\begin{equation}
E_{\bm{P}\to\bm{C}\to\bm{P}} = \sum_{i}\left|1-\frac{p^{\rm recovered}_{i}}{p^{\rm original}_{i}}\right|\;,\label{eq:EPtoCtoP}
\end{equation}
where $p^{\rm original}_{i}$ and $p^{\rm recovered}_{i}$ represent the original and recovered primitive variables, respectively, and the sum includes all variables in \eqref{eq:prims}.
For a given EOS, we use $N=2^{12}$ points to evenly discretize $\log_{10}\rhob\in\left[-12,-3\right]$ and $\log_{10}T\in\left[-2,2\right]$. We arbitrarily fix the electron fraction to $\ye=0.1$ and compute the pressure, specific internal energy, and entropy using the EOS table. The spatial components of the velocities and magnetic fields are set randomly assuming $W=2$ and $\log_{10}(P_{\rm mag}/P)=-5$, respectively. Finally, we compute the conservative variables using \eqref{eq:conservs} assuming flat space.
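For concreteness, the test grid and error metric can be sketched in a few lines (NumPy assumed; all variable and function names below are ours, not \igm's):

```python
import numpy as np

# Grid of test points: N = 2**12 values evenly spaced in log10(rho_b) and
# log10(T), mirroring the ranges quoted in the text (table units assumed).
N = 2**12
log10_rho = np.linspace(-12.0, -3.0, N)   # evenly discretized log10(rho_b)
log10_T   = np.linspace(-2.0, 2.0, N)     # evenly discretized log10(T)
ye        = 0.1                           # electron fraction, fixed

def recovery_error(p_original, p_recovered):
    """Sum of relative errors over all primitive variables (E_PtoCtoP)."""
    p_original  = np.asarray(p_original,  dtype=float)
    p_recovered = np.asarray(p_recovered, dtype=float)
    return float(np.sum(np.abs(1.0 - p_recovered / p_original)))

# A perfect recovery has zero error; a 10% miss on one variable contributes 0.1.
assert recovery_error([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
```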
We use this setup to test the Palenzuela \etal and the Newman \& Hamlin routines as implemented in \igm. As described in \secref{sec:conservative_to_primitive}, each routine is given two chances to recover the primitives, with initial guesses $T_{\rm guess}=T_{\rm atm}$ and, in case of failure, $T_{\rm guess}=T_{\rm max}$. In \figref{fig:con2prim} we present the test results.
As our implementation allows one to use either the specific internal energy or the entropy to recover the temperature, we perform tests using both. Our results indicate that using the entropy generally leads to errors that are at least comparable to, and often smaller than, those obtained using the energy. However, because \igm currently adopts an approximate entropy evolution equation (assuming entropy is conserved), we cannot reliably use the entropy as the default variable for primitive recovery. During actual numerical evolutions, we therefore use the entropy only as a backup when recovery using the specific internal energy fails.
\subsection{Optically thin gas}
\label{sec:optically_thin_gas}
To validate our neutrino leakage implementation \nrpyleakage, we model an isotropic gas of constant density at rest in flat space assuming the SLy4 EOS. In this scenario, the GRMHD equations simplify to
\begin{equation}
\partial_{t}\ye = \RR/\rhob\quad\text{and}\quad\partial_{t}\epsilon = \QQ/\rhob\;.
\label{eq:optically_thin_simple}
\end{equation}
These equations are solved straightforwardly with a standalone code, which supports both \nrpyleakage and the leakage scheme of \harmnuc. We then compare the results from these equations against those generated when \igm evolves the full set of GRMHD equations. Agreement between \igm and the standalone code provides an external validation of \nrpyleakage in the optically thin regime, as well as its integration within \igm.
The solution behaves as follows. When the electron fraction is large (small), electron (positron) capture by protons (neutrons) is favored, so electron neutrinos (antineutrinos) are predominantly emitted and the electron fraction decreases (increases) with time. Note that, by construction, $\QQ\leq0$, and thus the specific internal energy and temperature are always expected to decrease.
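The two charged-current capture reactions driving this behavior are the standard ones:

```latex
\begin{align}
  e^{-} + p &\rightarrow n + \nu_{e}\,,\\
  e^{+} + n &\rightarrow p + \bar{\nu}_{e}\,.
\end{align}
```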
We perform two tests to verify the expected behavior of the system as described above. In one test we set the initial electron fraction to \mbox{$\ye(0)=0.5$}, while in the other we set it to \mbox{$\ye(0)=0.005$}. In both cases the density and initial temperature of the gas are set to \mbox{$\rhob=10^{-12}$} and \mbox{$T(0)=1~\mathrm{MeV}$}, respectively. Other hydrodynamic quantities, like the initial specific internal energy, are computed as needed using the SLy4 EOS of~\cite{Schneider:2017tfi}.
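Under these assumptions, the reduced system above can be integrated with a few lines of code. The sketch below uses toy source terms in place of the actual \nrpyleakage rates (the coefficients, the equilibrium value $y_{\rm eq}$, and all names are ours), but it reproduces the qualitative behavior described in the text: $\ye$ relaxes toward equilibrium from either side, while $\epsilon$ only decreases.

```python
# Forward-Euler integration of d(Ye)/dt = R/rho and d(eps)/dt = Q/rho for a
# homogeneous gas. R and Q are hypothetical stand-ins for the leakage rates
# (the real ones come from NRPyLeakage and the EOS table); Q <= 0 by design.
rho  = 1.0e-12                      # constant baryonic density (code units)
y_eq = 0.25                         # hypothetical equilibrium electron fraction

def R(ye):
    return -1.0e-14 * (ye - y_eq)   # net lepton-number source

def Q(eps):
    return -1.0e-15 * eps           # energy sink, always <= 0

def evolve(ye, eps, dt=50.0, n_steps=10):
    for _ in range(n_steps):
        ye, eps = ye + dt * R(ye) / rho, eps + dt * Q(eps) / rho
    return ye, eps

# Ye(0) = 0.5 decreases toward y_eq; Ye(0) = 0.005 increases toward it;
# the specific internal energy decreases in both cases.
ye_hi, eps_hi = evolve(0.5, 1.0)
ye_lo, eps_lo = evolve(0.005, 1.0)
assert y_eq < ye_hi < 0.5 and 0.005 < ye_lo < y_eq and eps_hi < 1.0
```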
In \figref{fig:isotropic_gas_results} we show the excellent agreement between the results obtained by the different codes.%
\footnote{The leakage scheme in \harmnuc assumes that the optical depths are always large when computing the neutrino degeneracy parameters. This is a reasonable assumption, given that most physical systems of interest are not transparent to neutrinos. Nevertheless, this causes a discrepancy between the leakage scheme in \harmnuc and \nrpyleakage. For the sake of the comparison made here, we slightly modified the way \harmnuc computes the neutrino degeneracy parameters to match what is done in \nrpyleakage; i.e., using Eqs.\,(A3) and (A4) in~\cite{Ruffert:1995fs}.}
\subsection{Optically thick sphere}
\label{sec:optically_thick_sphere}
We next demonstrate the robustness of our implementation of the optical depth initialization algorithm on Cartesian AMR grids. To this end, we consider a simple case: a sphere of constant density, electron fraction, and temperature, so that the opacities inside the sphere are also constant and the optical depths can be determined analytically with \eqref{eq:optical_depth_integral}.
Specifically, the sphere is assumed to have radius $r_{\rm Sph}=2.5$, and constant density \mbox{$\rhob^{\rm Sph}=9.8\times10^{13}~\mathrm{g/cm^{3}}$}, electron fraction \mbox{$\ye^{\rm Sph}=0.1$}, and temperature \mbox{$T^{\rm Sph}=8.0~\mathrm{MeV}$}, embedded in an optically thin medium with \mbox{$\rhob^{\rm Ext}=6\times10^{7}~\mathrm{g/cm^{3}}$}, \mbox{$\ye^{\rm Ext}=0.5$}, and \mbox{$T^{\rm Ext}=0.01~\mathrm{MeV}$}. We adopt the SLy4 EOS of~\cite{Schneider:2017tfi}.
Our grid is a Cartesian box of side length $10r_{\rm Sph}$ with four refinement levels, as illustrated in the upper panel of \figref{fig:const_dens_sphere_results}. We add two refinement centers located at ${\pm}2.5$, each with three levels of refinement. Of course, this grid structure would never be used to simulate a spherical object; however, because the surface of the sphere crosses multiple refinement boundaries, it provides a particularly challenging test for our optical depth initialization algorithm, which is detailed in \secref{sec:computation_of_optical_depths}. We find excellent agreement with the exact results, as shown in the bottom panel of the figure.
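The exact solution for this configuration is elementary. Assuming the path integral in \eqref{eq:optical_depth_integral} reduces to the outward radial ray, and taking the integration to stop at half the box width ($r_{\rm max}=5r_{\rm Sph}$, our assumption), the reference values can be sketched as:

```python
# Analytic optical depth for a constant-opacity sphere embedded in a thin
# medium: integrate kappa along the outward radial ray. kappa_in, kappa_out,
# and r_max are illustrative inputs, not values from the paper.
def optical_depth(r, kappa_in, kappa_out, r_sph=2.5, r_max=12.5):
    if r <= r_sph:  # inside: dense-sphere contribution plus thin exterior
        return kappa_in * (r_sph - r) + kappa_out * (r_max - r_sph)
    return kappa_out * (r_max - r)  # outside: thin medium only

# tau decreases monotonically outward and is continuous at the surface.
assert optical_depth(0.0, 2.0, 1e-6) > optical_depth(2.0, 2.0, 1e-6)
assert abs(optical_depth(2.5, 2.0, 1e-6) - optical_depth(2.5001, 2.0, 1e-6)) < 1e-3
```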
\subsection{Tolman--Oppenheimer--Volkoff star}
\label{sec:tov_star}
As our next validation test, we evolve unmagnetized, stable Tolman--Oppenheimer--Volkoff (TOV) NSs. In this test, we disable neutrino leakage so that the NS maintains its equilibrium solution in the continuum limit; that is, in the limit of infinite numerical resolution, we expect zero oscillations in our simulated NSs.
When the stars are placed on our finite-resolution numerical grids, however, numerical errors induce stellar oscillations. Because these oscillations are largely caused by the truncation error of \igm's reconstruction scheme, they should converge to zero as the grid resolution increases. To confirm this, we perform simulations at three different resolutions---hereafter low (LR), medium (MR), and high (HR)---and demonstrate that the oscillations converge away at the expected rate.
To obtain initial data, a tabulated EOS is chosen and the TOV equations are solved using \nrpy's TOV solver. Three different EOS tables are used: LS220, SFHo, and SLy4. The initial temperature is fixed to \mbox{$T=0.01~\text{MeV}$}, while the initial electron fraction is determined by imposing the neutrino-free beta-equilibrium condition \mbox{$\mu_{\nu}(\rhob,\ye,T)=0$}, where $\mu_{\nu}$ is the neutrino chemical potential. The initial data are then evolved forward in time with the \etk, using \baikal~\cite{nrpytutorial} and \igm to perform the spacetime and GRMHD evolutions, respectively.
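The beta-equilibrium condition amounts to a one-dimensional root-find in $\ye$ at fixed $(\rhob,T)$. A minimal bisection sketch, with a hypothetical monotonic stand-in for the table-interpolated $\mu_{\nu}$ (all names are ours):

```python
# Solve mu_nu(rho, Ye, T) = 0 for Ye at fixed (rho, T) by bisection. The real
# mu_nu is interpolated from the EOS table; the toy function below is ours.
def beta_eq_ye(mu_nu, ye_lo=0.0, ye_hi=0.5, tol=1.0e-12):
    # assumes mu_nu increases with Ye and brackets a root in [ye_lo, ye_hi]
    while ye_hi - ye_lo > tol:
        ye_mid = 0.5 * (ye_lo + ye_hi)
        if mu_nu(ye_mid) > 0.0:
            ye_hi = ye_mid
        else:
            ye_lo = ye_mid
    return 0.5 * (ye_lo + ye_hi)

mu_nu_toy = lambda ye: ye - 0.05  # hypothetical: equilibrium at Ye = 0.05
assert abs(beta_eq_ye(mu_nu_toy) - 0.05) < 1.0e-9
```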
The numerical grid structure contains five factor-of-two levels of Cartesian AMR, with resolutions on the finest level of \mbox{$\dx_{\rm LR}=1.5\dx_{\rm MR}=2\dx_{\rm HR}\approx277~\mathrm{m}$}. The stars are evolved for approximately $60$ dynamical timescales
\begin{equation}
t_{\rm dyn,NS} = \frac{1}{\sqrt{\rho_{0,{\rm max}}}}\;,\label{eq:t_dyn}
\end{equation}
where $\rho_{0,{\rm max}}$ is the maximum initial density (i.e., the density at the center of the NS). We monitor the maximum density on the grid as a function of time, with results from the HR runs displayed on the bottom panel of \figref{fig:tov_noleak_density}.
The top panel of \figref{fig:tov_noleak_density} displays the relative errors, $E$, of the oscillations from the MR and LR runs against the oscillations from the HR run. Assuming $E\propto\dx^{p}$, we find
\begin{equation}
E_{\rm MR} = \left(\frac{\dx_{\rm MR}}{\dx_{\rm LR}}\right)^{p}E_{\rm LR}\;.
\end{equation}
Thus, for a numerical scheme that is $p$th-order accurate, we expect that multiplying the relative errors of the LR run by $(\dx_{\rm MR}/\dx_{\rm LR})^{p}$ will yield errors similar to those of the MR run. We generally expect the numerical errors to be dominated by our reconstruction method (PPM), which is between second and third order. This is indeed the observed behavior---the convergence order of our numerical scheme is found to be $p\in[2.5,3]$.
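Inverting the relation above gives the observed convergence order directly from the errors at two resolutions; a one-line sketch (function name ours):

```python
import math

# Observed convergence order assuming E ∝ dx^p:
# p = log(E_LR / E_MR) / log(dx_LR / dx_MR).
def convergence_order(E_LR, E_MR, dx_LR, dx_MR):
    return math.log(E_LR / E_MR) / math.log(dx_LR / dx_MR)

# With dx_LR = 1.5 dx_MR (as in our TOV runs), an error ratio of 1.5**3
# corresponds to third-order convergence.
assert abs(convergence_order(1.5**3, 1.0, 1.5, 1.0) - 3.0) < 1e-12
```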
\subsection{Magnetized binary neutron stars}
\label{sec:bns}
With the core new features validated, we now turn our attention to fully dynamical GRMHD simulations of a magnetized, equal-mass BNS system, modeling the inspiral, merger, and resulting remnant BH. We adopt the LS220 microphysical, finite-temperature EOS and simulate the system both with and without neutrino leakage enabled. This self-validation test is quite challenging and is bound to expose any issues in our implementation through unphysical behavior or, in the worst case, code crashes.
Initial data are obtained using \lorene~\cite{Gourgoulhon:2000nn,Feo:2016cbs,2016ascl.soft08018G,lorene_website}. The initial separation of the system is $45~\mathrm{km}$ and the gravitational (baryonic) mass of each NS is \mbox{$1.39 M_{\odot}$} (\mbox{$1.59 M_{\odot}$}), while the total ADM mass of the system is \mbox{$M_{\rm ADM}= 2.86 M_{\odot}$}. The interior of each NS is seeded with a poloidal magnetic field with maximum initial value ${\sim}10^{15}~\mathrm{G}$ (see Appendix C of~\cite{Etienne:2015cea}). The initial temperature is set to $0.01~\text{MeV}$ and the electron fraction is determined by imposing the neutrino-free beta-equilibrium condition.
\baikal, which solves Einstein's equations in the BSSN formalism, is used to evolve the spacetime. Again the \carpet AMR infrastructure is adopted to set up a grid with eight refinement levels by factors of two, with the resolution at the finest level \mbox{$\dx_{8}\approx185~\mathrm{m}$}. Upon BH formation, two additional refinement levels are added to better resolve the puncture, and thus the highest resolution on the grid becomes \mbox{$\dx_{10}\approx46~\mathrm{m}$}.
Performing the simulation using the \etk gives us access to outstanding diagnostic thorns, of which prominent examples include \ahfd~\cite{Thornburg:2003sf}---used to locate and compute the shape of apparent horizons---and \qlm~\cite{Dreyer:2002mx,Schnetter:2006yt}---used to compute useful quasi-local quantities like the BH mass and spin. Additionally, \igm carefully monitors the number of times the primitive recovery infrastructure resorts to backup strategies, as well as the strategies used, aborting the simulation if any major error is detected. For the two simulations performed in this paper, we have not observed atmosphere resets or conservative averages (see \secref{sec:conservative_to_primitive}), which reflects the robustness of our conservative-to-primitive implementation.
Satisfaction of the Einstein constraints reflects the health of BNS simulations in a holistic sense. As evidence that our updated implementation of \igm is working correctly, we carefully monitor the Hamiltonian constraint violation throughout the numerical evolution. In Fig.~\ref{fig:bns_hamiltonian_constraint_comparison}, the magnitude of these violations with the LS220 tabulated EOS is compared against an SLy piecewise polytropic EOS (PPEOS) BNS evolution performed with a trusted version of \igm with hybrid equation of state support (which adopts a PPEOS for the cold pressure). Both are equal-mass binaries (run without any symmetries imposed), with each neutron star having a baryonic mass of $1.49\Msun$ in the tabulated case and $1.59\Msun$ in the PPEOS case. The initial separations differ, so as to reuse existing data; this has no bearing on our assessment.
The left panel of \figref{fig:bns_hamiltonian_constraint_comparison} displays this diagnostic at $t=0$, while the right panel displays it after a full orbit at $t=\tau_{\rm orb}$. Initial violations are smaller in the PPEOS case, as its \lorene initial data possessed higher spectral resolution. This small initial difference is quickly dominated by numerical-evolution errors, such that after one full orbit the constraint violations reach a steady state a few orders of magnitude higher than in the initial data. We would generally expect errors from EOS table interpolations to result in slightly higher constraint violations in the tabulated EOS case, but we find that both runs exhibit quite comparable results.
As in the TOV tests, we also track the maximum density on the grid over time---i.e., the density at the centers of the NSs during inspiral. \figref{fig:bns_density_comparison} shows the evolution of this quantity over 70 dynamical timescales, ending shortly before merger in the tabulated EOS (LS220) case. As can be seen in the figure, the maximum density remains constant to within 0.75\% of its initial value throughout, indicating that the hydrostatic equilibrium of the NSs, imposed by the initial data, is maintained through merger, to a degree comparable to or better than the trusted PPEOS version. Further, as expected, the data in this figure demonstrate that neutrino leakage has no impact on the central densities of the neutron stars.
Next we focus on the merger of the tabulated EOS (LS220) BNS simulations, comparing results both with and without neutrino leakage enabled. BH formation occurs in coincidence with the collapse of the lapse function toward zero. When \mbox{$\alpha_{\rm min}<0.1$}, we trigger \carpet to add additional refinement levels to our grid (as previously described) so that the moderately spinning black hole is sufficiently resolved.
Further, post-merger oscillations of \mbox{$\alpha_{\rm min}(t)$} are monitored as an indication of how close the merger remnant (a very short-lived HMNS in this case) is to BH formation.
As can be seen in \figrefalt{fig:bns_lapse_rest_mass_gw}{a}, our simulations lead to an HMNS that undergoes a single oscillation prior to collapsing to a BH, with an apparent horizon detected \mbox{$t_{\rm BH}\approx2.1~\mathrm{ms}$} after merger. Comparing with two other equal-mass results in the literature that adopt this EOS with similar initial NS masses, we find that both also produce a short-lived HMNS remnant, collapsing to a BH after \mbox{$t_{\rm BH}\approx8.5~\mathrm{ms}$} (\cite{Kastaun:2014fna}, with each NS having isolated ADM mass \mbox{$M_{\rm NS}=1.41\Msun$}) and \mbox{$t_{\rm BH}\approx48.5~\mathrm{ms}$} (\cite{Bernuzzi:2015opx}, with each NS having isolated ADM mass \mbox{$M_{\rm NS}=1.35\Msun$}). However, a clean, apples-to-apples comparison cannot be made, as the simulations in these references did not include magnetic fields, chose different initial separations, and adopted different numerical resolutions and grids. Indeed, further work is sorely needed to validate current HMNS lifetime estimates across different codes, and we plan to perform such comparisons in future work.
As further validation that our conservative GRMHD scheme is working correctly, conservation of rest mass, i.e.,
\begin{equation}
M_{0}=\int W\rho_{\rm b}\sqrt{\gamma}\,dV = {\rm constant}\;,\label{eq:rest_mass}
\end{equation}
is carefully monitored. Provided GRMHD flows do not cross AMR refinement boundaries and no black hole forms, this conservation should hold to roundoff error. Indeed, \figrefalt{fig:bns_lapse_rest_mass_gw}{b} demonstrates that the initial value is conserved to within 0.001\% throughout the inspiral, for over 75 dynamical timescales. During and after BH formation, a significant amount of rest mass falls into the horizon, where ceilings on the maximum density are imposed for numerical stability, resulting in a loss of rest mass.
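On a grid, this diagnostic is just a weighted sum over cells. A minimal sketch for a single uniform patch (the uniform-cell assumption and all names are ours; the actual monitor sums over the AMR hierarchy):

```python
import numpy as np

# Discrete monitor of the rest-mass integral M0 = ∫ W rho_b sqrt(gamma) dV.
def rest_mass(W, rho_b, sqrt_gamma, dV):
    return float(np.sum(W * rho_b * sqrt_gamma) * dV)

# Flat-space sanity check: with W = sqrt(gamma) = 1 and constant rho_b,
# M0 reduces to density times volume.
n, dx = 8, 0.1
W = np.ones(n**3); sg = np.ones(n**3); rho_b = np.full(n**3, 2.0)
assert np.isclose(rest_mass(W, rho_b, sg, dV=dx**3), 2.0 * (n * dx)**3)
```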
Finally, the gravitational wave signal---from the inspiral of the two NSs, through the oscillations of the HMNS, to the ringing of the remnant BH---is one of the key theoretical predictions derived from these simulations. In \figrefalt{fig:bns_lapse_rest_mass_gw}{c} we display the real component of the dominant $(2,2)$ mode of $\psi_{4}$, extracted at \mbox{$r_{\rm ext}\approx738.3\ {\rm km}$}. We note that in this panel we also subtract the time it takes the wave to propagate from the center of the grid to $r_{\rm ext}$.
Minor differences in the signal are observed between the simulations with and without neutrino leakage, which we attribute to the sharp increase in neutrino production after the massive shock---and consequent heating---that occurs when the NSs merge. As the neutrinos leak, they carry away energy and deleptonize the system, which helps explain the small differences in the gravitational wave signals.
The gravitational wave signal already hints that the inspiral phases of the two simulations are virtually identical, and we confirm this behavior by examining other quantities. In \figref{fig:bns_evolution_premerger}, snapshots of the density, temperature, electron fraction, and $b^{2}=b^{\mu}b_{\mu}$ are plotted on the orbital plane during the inspiral for the simulations with and without neutrino leakage. Indeed, we observe no significant differences throughout.
However, the post-merger phase of these simulations exhibits noticeable differences, as shown in \figref{fig:bns_evolution_postmerger}. The BH accretion disk is slightly less neutron rich in the simulation with neutrino leakage enabled, with a sizable region of higher electron fraction visible in the outer layers of the disk. Notably, the same behavior is observed with \harmnuc when studying magnetized BH accretion disks (see Fig.~11 of~\cite{Murguia-Berthier:2021tnt}).
In this study we present results only up to ${\approx}5~\mathrm{ms}$ after black hole formation, by which point the spacetime already appears to be sufficiently static. We plan to use the recently developed \handoff code to transfer simulation data from \igm to \harmnuc and continue the post-merger phase for $\mathcal{O}\left({\sim}\mathrm{seconds}\right)$. Results of these continuation runs will be reported in a future publication.
\section{Conclusions \& Future Work}
\label{sec:conclusions}
Modeling magnetized compact binary systems in particular, and magnetized fluid flows in general, is of paramount importance for multimessenger astronomy. Simulating these systems accurately and reliably may not only lead to insights on phenomena we do not yet fully understand, but also provide crucial reference points for detections of gravitational waves and their electromagnetic and/or neutrino counterparts.
Given the importance of these simulations, many groups have developed GRMHD codes capable of performing them. Among these codes, the original GRMHD code developed by the Illinois numerical relativity group is particularly notable for its reliability and robustness when simulating a very broad range of astrophysical phenomena. Since \igm acts as an open-source, drop-in replacement of the original code, it inherits all of the original code's qualities while being faster and more concise.
The new version of \igm presented in this work aims at improving not only technical aspects of the code, but also the physical realism of the simulations that it can perform. To this end, two key new features were added: support for microphysical, finite-temperature, tabulated EOSs via a new \nrpy-based code---\nrpyeos; and neutrino physics via a leakage scheme using another \nrpy-based code---\nrpyleakage.
\igm has been developed to facilitate widespread community adoption. To this end, it was designed to be user-friendly, modular/extensible, robust, and performant/scalable. The development of this new version of the code shares all of these core principles, and it is with them in mind that we are making the code open-source and freely available for download~\cite{igm_github}.
In terms of user-friendliness, the code is well-documented, properly commented, and requires only basic programming skills to understand and run, traits also shared by \nrpyeos and \nrpyleakage. In the near future, we will release a series of \jupyter notebooks that meticulously document all of these codes.
Designed as thorns for the \etk and with clear separation of key algorithms in mind, all of these codes are both modular and extensible. To preserve the robustness of \igm, every new addition has been rigorously tested to ensure maximum reliability and optimal performance. A systematic study of the scalability of the new version of \igm, however, is not a focus of this work, in part because the core AMR infrastructure in the \etk is undergoing a major upgrade that will greatly improve scalability. \igm will be made compatible with this updated infrastructure in the coming months.
It is widely known that moving from a simple, analytic EOS to a tabulated EOS and adding a neutrino leakage scheme negatively affects the code's overall performance. In the case of \igm, the new version of the code is about $1.8\times$ slower than the hybrid equation of state version. This performance impact is comparable to those observed by the authors of other codes, such as \spritz. A code comparison study that showcases the performance and impact of different algorithmic choices made by different GRMHD codes will be the subject of future work.
The BNS results presented in this paper are part of a larger project. Using our recently developed \handoff code~\cite{Armengol:2021mbt}, the simulation data will be transferred from \igm to \harmnuc~\cite{Murguia-Berthier:2021tnt}---a code specially designed to accurately and reliably model BH accretion disks---and the simulations continued for $\mathcal{O}\left({\sim}\mathrm{seconds}\right)$. Results of these simulations will be presented in a future paper.
\section*{Acknowledgments}
The authors would like to thank E.~O'Connor for his comments on how nucleon-nucleon Bremsstrahlung is implemented in \groned. We would also like to thank M.~Campanelli, Y.~Zlochower, and T.~Piran for useful discussions and suggestions. The plots in this paper have been generated using \mpl~\cite{Hunter:2007}; all plotting scripts can be made available upon request. This work was primarily funded through NASA award TCAN-80NSSC18K1488. L.R.W. and Z.B.E. gratefully acknowledge support from NSF awards PHY-1806596, PHY-2110352, OAC-2004311, as well as NASA award ISFM-80NSSC18K0538. The material is based upon work supported by NASA under award number 80GSFC21M0002. A.M-B is supported by NASA through the NASA Hubble Fellowship grant HST-HF2-51487.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. This research made use of Idaho National Laboratory computing resources, which are supported by the Office of Nuclear Energy of the U.S. Department of Energy and the Nuclear Science User Facilities under Contract No. DE-AC07-05ID14517, as well as TACC's Frontera NSF projects PHY20010 and AST20021. Additional resources were provided by RIT's Green Prairies Cluster, acquired with NSF MRI grant PHY1726215.
\bibliographystyle{apsrev4-1}
\bibliography{references}
Title:
Star formation inefficiency and Kennicutt-Schmidt laws in early-type galaxies |
Abstract: Star formation in disk galaxies is observed to follow the empirical
Kennicutt-Schmidt law, a power-law relationship between the surface density of
gas ($\Sigma_{gas}$) [$\textrm{M}_{\odot}\; \textrm{kpc}^{-2}$] and the star
formation rate ($\Sigma_{SFR}$) [$\textrm{M}_{\odot}\; \textrm{kpc}^{-2} \;
\textrm{Gyr}^{-1}$]. In contrast to disk galaxies, early-type galaxies (ETGs)
are typically associated with little to no star formation and therefore no
Kennicutt-Schmidt law; recent observations, however, have noted the presence of
massive gaseous cold disks in ETGs, raising the question as to why the
conversion of gas into stars is so inefficient. With our latest simulations,
performed with our high-resolution hydrodynamic numerical code MACER, we
reevaluate the traditional classification of ETGs as quiescent, dead galaxies.
We predict the inevitable formation of stellar disks following cooling episodes
of the ISM of the host galaxy in the presence of galactic rotation via a simple
but robust star formation model combining local Toomre instabilities and local
gas cooling timescales. We find that resolved Kennicutt-Schmidt star formation
laws for our simulated ETGs, in both surface density and volumetric forms,
reproduce the observed threshold, slope, and normalization observed in disk
galaxies. At the same time, through analysis of global Kennicutt-Schmidt laws,
we suggest that increased star formation and high gaseous outflows offers a
partial remedy to the observed star formation inefficiency problem.
Observational checks of our star formation predictions are thus essential for
confirming the form of local star formation laws and reassessing star formation
inefficiency in ETGs.
PDF: https://export.arxiv.org/pdf/2208.03735
\newcommand{\vdag}{(v)^\dagger}
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}
\graphicspath{{./}{figures/}}
\begin{document}
\title{Star formation inefficiency and Kennicutt-Schmidt laws in early-type galaxies}
\author{Brian Jiang}
\affiliation{Department of Astronomy, Columbia University, 550 West 120th St, New York, NY 10027, USA}
\author{Luca Ciotti}
\affiliation{Department of Physics and Astronomy, University of Bologna, Bologna, Italy
}
\author{Zhaoming Gan}
\affiliation{New Mexico Consortium, Los Alamos, NM 87544, USA}
\affiliation{Department of Astronomy, Columbia University, 550 West 120th St, New York, NY 10027, USA}
\author{Jeremiah P. Ostriker}
\affiliation{Department of Astronomy, Columbia University, 550 West 120th St, New York, NY 10027, USA}
\affiliation{Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA.}
\keywords{galaxies: elliptical and lenticular; galaxies: star formation}
\section{Introduction}
Understanding how the physical properties of interstellar gas affect star formation is important for developing models of galactic evolution and for explaining the differences in star formation rate (SFR) across galaxy types. A strong empirical correlation between the SFR and gas density in disk galaxies has been observed since \cite{Schmidt_1959} theorized, 60 years ago, a star formation law for the Milky Way of the power-law form $\rho_{SFR} \propto \rho_{gas}^n$, with initially $2<n<3$. More recently, power laws relating the surface densities of gas and star formation have been proposed in both {\it global} \citep{Kennicutt_Jr__1998} and {\it resolved} \citep{Kennicutt_1989} forms: the former averages densities globally within some radius, while the latter averages over radial annuli at variable distances from the center of spatially resolved galaxies. Starting with Kennicutt's compilation of H$\alpha$ measurements to trace star formation and of HI and CO data to trace atomic and molecular gas, it has been found that both the global and resolved laws relating SFR and gas surface densities read $\Sigma_{SFR} \propto \Sigma_{gas}^n$; observations typically find $1<n<3$, with $n \approx 1.4$ the most accepted value. This empirical power law, now universally known as the Kennicutt-Schmidt (hereafter KS) law, forms the critical basis for current theoretical and numerical work on disk galaxies.
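As an illustration, the KS scaling can be encoded in a one-line function; the slope $n\approx1.4$ is the value quoted above, while the normalization is left generic, since the text does not fix one:

```python
# Kennicutt-Schmidt scaling sketch: Sigma_SFR = A * Sigma_gas**n. The slope
# n ~ 1.4 follows the text; the default normalization A is illustrative only
# (units as in the abstract: Msun/kpc^2 for gas, Msun/kpc^2/Gyr for SFR).
def sigma_sfr(sigma_gas, A=1.0, n=1.4):
    return A * sigma_gas**n

# Doubling the gas surface density raises the SFR surface density by 2**1.4.
ratio = sigma_sfr(200.0) / sigma_sfr(100.0)
assert abs(ratio - 2.0**1.4) < 1e-9
```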
Theory and simulations have suggested that in disk galaxies, gravitational instability criteria \citep{Boissier_2003, Kennicutt_Jr__1998} accurately replicate both the KS power law and the cutoff threshold, while disk thickness \citep{Bacchini_2019}, turbulence \citep{Shetty_2008}, and shear \citep{Davis_2014} contribute to the observed scatter in $n$. Additionally, efforts have gone into examining modified KS relations with terms including orbital velocities and velocity dispersions, with the aim of finding a universal relationship between gas and SFR densities in all types of star-forming galaxies. It should be emphasized, however, that such semi-analytical relationships between gas density and star formation identify only general rather than local features of a star formation law \citep{Bacchini_2019, Kennicutt_1989}.
In contrast to disk galaxies, ETGs are typically classified as ``red and dead'' stellar systems, with star formation quenched following the post-starburst depletion of the molecular gas reservoir \citep{Baron_2022}. Recent observations, however, have shown that ETGs possess larger reservoirs of cold gas than originally thought, with approximately 50 percent of massive ETGs (stellar mass $M_* \gtrsim 10^{10} \; \textrm{M}_\odot$) containing $10^6-10^9 \; \textrm{M}_\odot$ of cold gas in the form of atomic and molecular hydrogen \citep{Negri_2014}. While these reservoirs are significantly smaller than the $\approx 10^{10} - 10^{11}\; \textrm{M}_\odot$ \citep{Polletta_2022} gaseous disks observed in disk galaxies, validating the quiescent picture of ETGs, non-negligible star formation is still inevitable given basic disk instability considerations. Moreover, it has been noted that ETGs lie on a significantly lower regime than starburst and disk galaxies in the global KS relations \citep{Davis_2014}, implying that the conversion of gas into stars in ETGs is more inefficient. This quenching of star formation despite apparently abundant reservoirs of cold gas has been identified as one of the most persistent problems in the field of star formation \citep{Peng_2015}.
Recently, we have substantially explored the parameter space of the input physics behind the evolution of ETGs \citep{Ciotti_2022} with our high-resolution hydrodynamical simulation code \texttt{MACER}, which includes numerical algorithms for the radiative cooling of the ISM, the formation of cold gaseous disks in the presence of ordered rotation, and star formation following simple yet robust input physics. We are therefore in a good position to study the star formation inefficiency problem in ETGs and the form of local star formation laws. Applying \texttt{MACER} to the formation of dense star-forming disks close to the galactic nucleus, we conclude that the observed inefficiency in SFR can be partially explained by an underestimation of total star formation and by the rapid ejection of cold gas. While the degree of star formation is certainly less than in disk galaxies, we note that the formation of cold, centrifugally supported equatorial disks, due to the conservation of angular momentum in the ISM of rotating galaxies, creates an environment conducive to star formation following cooling flows \citep{Ciotti_2022}. Dense stellar disks embedded in these cold gaseous disks, if they exist, would lie close to the galactic center, below the resolution limit of most resolved observations to date, and would be vulnerable to disintegration via galactic mergers. Given this non-negligible SFR, we conduct the first computational study of spatially resolved star formation laws in ETGs, and find that the star formation rate surface density obeys the KS power law with respect to the gas surface density. We also study the properties of a volumetric KS law, finding a strong correlation with the usual surface density KS law, indicating its value for semi-analytical star formation recipes.
This paper is organized as follows: in Section 2, we provide a brief summary of the relevant numerical physics of \texttt{MACER}; in Section 3.1, we discuss the position of ETGs on the global KS law with respect to the cooling flow problem; and in Sections 3.2-3.3, we discuss resolved KS laws to examine the dynamics of local SF, followed by a summary of results.
\section{The numerical code and input physics}
With our high-resolution \texttt{MACER} code (Massive AGN Controlled Ellipticals Resolved, built upon \texttt{Athena++}, version 1.0.0; \cite{Stone_2020}), we solve the Eulerian hydrodynamical equations of the ISM in the context of an ETG with a supermassive black hole sitting at its center. The ISM hydrodynamics is solved in two-dimensional spherical $(r,\theta)$ coordinates assuming axisymmetry while allowing for rotation in the $\phi$ direction. The outer boundary is set to 250 kpc from the galactic center on a logarithmic grid, while the inner boundary is set to either 2.5 pc or 25 pc, the latter of which is used for rapid parameter-space exploration; both are significantly smaller than the $\approx 150$ pc inner boundary of typical cosmological simulations. The galaxy models used here are the JJe dynamical models consisting of a Jaffe ellipsoidal stellar distribution embedded in a dark matter halo, resulting in a spherical Jaffe density profile \citep{Ciotti_2022}. As mentioned in \S1, the inevitable formation of a dusty gaseous disk given some level of ordered rotation and the resulting star formation are numerically simulated. The gravity of the new stars is also included self-consistently; we describe the stellar disk as a razor-thin Kuzmin disk whose total mass and half-mass radius are computed by the simulations at every timestep, allowing for the evaluation of its gravitational potential at negligible computational cost. For this paper, a high mass galaxy model was chosen with an initial central black hole mass of $7.8 \times 10^{8} \; \textrm{M}_{\odot}$, stellar mass of $7.8 \times 10^{11} \; \textrm{M}_{\odot}$, and total galaxy mass of $ 1.4 \times 10^{13} \; \textrm{M}_{\odot}$ with spatially dependent rotation. We simulate the cosmological evolution of the galaxy for $\Delta t = 12$ Gyr from $t_{\rm start} = 2$ Gyr (for a summary of simulation properties, see Table 1, HM in \cite{Ciotti_2022}).
As we focus on the properties of star formation in this paper, the following subsection emphasizes the implementation of our star formation model in the code. Our \texttt{MACER} model also includes numerical algorithms for CGM infall, dust grain evolution, stellar feedback, AGN activity, and the metallic evolution of the ISM while providing an accurate treatment of stellar dynamics, capturing a significant range of evolution in early-type galaxies. For a general discussion of all simulation physics and their recent updates, see \cite{Ciotti_2022}.
\subsection{Star formation model}
Star formation is only allowed in \texttt{MACER} when the gas density is higher than $10^5 \; \textrm{atoms} \; \textrm{cm}^{-3}$ and the gas temperature is lower than $4\times 10^4 \; \textrm{K}$. Under these conditions, the total star formation is computed via two independent channels. The first of these channels is related to the Toomre instability (e.g., see \cite{Toomre_1964}, \cite{Binney_2008}). As is well known, the local stability of a rotating, self-gravitating disk is governed by the relative importance of temperature/velocity dispersion and surface density. Specifically, a self-gravitating gaseous disk is locally stable when
\begin{equation}
Q(R) = \frac{c_s(R) \kappa(R)}{\pi G \Sigma_g(R)} > 1
\end{equation}
where $R$ is the distance from the galaxy center in the disk plane, $\Sigma_g$ is the gas surface density of the disk, $c_s$ is the sound speed, and $\kappa$ is the radial epicyclic frequency. In \cite{Gan_2019} the star formation rate $\dot{\rho}_{*,Q}$ associated with Toomre instability is defined as
\begin{equation}
\dot{\rho}_{*,Q} = \eta_{SF,Q}\Delta Q\rho_g\Omega \quad \quad \Delta Q = \max(1-Q,\, 0) \quad \quad \eta_{SF,Q} = 0.02
\end{equation}
where $\Omega(R)$ is the disk angular velocity profile. As a complementary effect of Toomre instabilities, spiral waves developed in the cold disk are capable of transferring angular momentum outward due to a non-axisymmetric gravitational torque, and thus mass is transferred inwards and is eventually accreted on the BH \citep{Bertin_1999}. A semi-analytical algorithm mimicking this process is also included in $\texttt{MACER}$, and is essential for the enhanced ISM infall in the presence of strong ordered rotation. Unstable over-densities in the infalling disk rings trigger bursty phases of star formation, which then decrease the surface density and re-stabilize the ISM.
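As an illustrative sketch (not part of the \texttt{MACER} code itself), the two expressions above can be evaluated directly; all parameter values used below are assumptions chosen only to demonstrate the stable and unstable regimes, not simulation outputs:

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / M_sun

def toomre_Q(c_s, kappa, sigma_g):
    """Toomre stability parameter Q = c_s * kappa / (pi G Sigma_g) (Eq. 1).
    Units: c_s [km/s], kappa [km/s/pc], sigma_g [M_sun/pc^2]."""
    return c_s * kappa / (np.pi * G * sigma_g)

def sfr_toomre(c_s, kappa, sigma_g, rho_g, omega, eta_sf=0.02):
    """Toomre-channel SFR density (Eq. 2): eta * max(1 - Q, 0) * rho_g * Omega.
    rho_g [M_sun/pc^3], omega [km/s/pc] (1 km/s/pc ~ 1.02 / Gyr)."""
    dQ = max(1.0 - toomre_Q(c_s, kappa, sigma_g), 0.0)
    return eta_sf * dQ * rho_g * omega

# A low-surface-density annulus (Q > 1) contributes nothing through this
# channel; raising Sigma_g drives Q below unity and activates it.
```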
The second channel of star formation compares the gas cooling timescale $\tau_{cool}$ and the dynamical timescale $\tau_{dyn}$ of the ISM, with the resulting star formation rate defined as
\begin{equation}
\dot{\rho}_{*,C} = \frac{\eta_{SF, C}\rho_g}{\tau_{SF,C}} \quad \quad \tau_{SF, C} = \max(\tau_{cool}, \tau_{dyn}) \quad \quad \eta_{SF,C} = 0.01
\end{equation}
The cooling timescale is computed as the ratio of total thermal energy to the rate at which the gas cools, the latter of which is determined by bremsstrahlung radiation, Compton ionization, and recombination \citep{Gan_2019}. The dynamical timescale is the minimum of the Jeans collapse timescale $\tau_{jeans} = \sqrt{3\pi/(32G\rho)}$ and the rotational timescale $\tau_{rot} = 2\pi R/v_{rot}$. If the dynamical time
dominates, $\tau_{SF,C} \propto \rho^{-1/2}$, whereas if the cooling timescale dominates, $\tau_{SF,C} \propto \rho^{-1}$, with the resulting star formation rate proportional to either $\rho^{3/2}$ or $\rho^2$, respectively. When both the Toomre and cooling channels are active, the total star formation is simply computed as their sum: $\dot{\rho}_*(R) = \dot{\rho}_{*,C}(R) + \dot{\rho}_{*,Q}(R)$.
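The density scalings quoted above follow directly from Eq. 3; a minimal cgs sketch (all input values are illustrative, not simulation parameters):

```python
import numpy as np

G_CGS = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def sfr_cooling(rho_g, tau_cool, R, v_rot, eta_sf=0.01):
    """Cooling-channel SFR density (Eq. 3), cgs units.
    tau_SF is the longer of the cooling time and the dynamical time,
    where the dynamical time is the shorter of the Jeans and rotation times."""
    tau_jeans = np.sqrt(3.0 * np.pi / (32.0 * G_CGS * rho_g))
    tau_rot = 2.0 * np.pi * R / v_rot
    tau_dyn = min(tau_jeans, tau_rot)
    tau_sf = max(tau_cool, tau_dyn)
    return eta_sf * rho_g / tau_sf
```

When the Jeans time sets $\tau_{SF,C}$, the rate scales as $\rho^{3/2}$: quadrupling the density multiplies the rate by $4^{3/2}=8$.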
\section{Results}
\subsection{Global Kennicutt-Schmidt Laws and Star Formation Inefficiency}
ETGs are conventionally described as ``red and dead" systems with quenched star formation associated with bulge growth and gas depletion after a post-starburst phase \citep{Cappellari_2011}. This notion has been challenged by recent observations showing that the molecular gas reservoirs of ETGs are not entirely depleted by the prior starburst; rather, cold gas clouds with masses in the range of $10^7$ to $10^9 \; \textrm{M}_{\odot}$ \citep{Negri_2014, Li_2020, Cappellari_2011} have been observed. This is not surprising, as it is well known that stellar evolution injects a substantial amount of gas into galaxies over a Hubble time \citep{Pellegrini_2012}. As numerical simulations have demonstrated (\cite{Negri_2014}, \cite{Ciotti_2022}), strong ordered rotation enhances ISM instabilities and radiative cooling; because the cooling rate in \S2 grows with density, the increasing ISM density triggers a cooling flow. As the catastrophically cooling ISM loses its thermal pressure, it accumulates onto a circumnuclear disk due to the angular momentum barrier. Through this process, our high mass models with spatially dependent rotation have produced gaseous disks on the order of $10^9 \; \textrm{M}_{\odot}$, consistent with observations.
Though the gaseous disk masses of ETGs are typically one to two orders of magnitude less than what has been observed for spiral galaxies, disk instabilities and density-dependent cooling rates dictate that nonzero star formation is inevitable. ETGs, however, are noted to be significantly less efficient at turning this cold gas into stars than spiral galaxies, typically lying over a factor of 10 below the KS relation for disk galaxies \citep{Davis_2014, Baron_2022}. The reason why star formation is so inefficient in ETGs, however, is not well understood. Observational confirmation of theoretical predictions of star formation in ETGs is hampered by the lack of resolution of inner star-forming disks; recent far-infrared observations by \cite{Baron_2022} suggest that much of the star formation may simply be obscured. The resolution of \texttt{MACER} greatly exceeds that of most observations and typical cosmological simulations, enabling the simulation and analysis of star-forming disks with half-mass radii close to the galactic nucleus. We argue that gas-to-star conversion in our models, while still less efficient than in disk galaxies, is not as inefficient as previously thought, with rapid gaseous outflow also limiting star formation.
For our high mass galaxy model, \texttt{MACER} simulations have produced a sporadically star-forming disk with time-averaged SFR of $1.04\times 10^8 \; \textrm{M}_{\odot} \; \textrm{Gyr}^{-1}$ embedded in a cold gaseous disk with a time-averaged mass of $2.75\times 10^9 \; \textrm{M}_{\odot}$, resulting in a conversion efficiency of $3.79 \%$. The stellar and gaseous disks have average half-mass radii of 0.10 and 0.47 kpc, respectively. The standard quiescent picture of quenched star formation is correct to the extent that the total star formation of $1.25 \times 10^9 \; \textrm{M}_{\odot}$ after the first 2 Gyr is quenched compared to the $7.8\times 10^{11} \; \textrm{M}_{\odot}$ formed during the initial 2 Gyr starburst, and the size of the stellar disk is extremely small. However, the efficiency with which cold gas is converted to stars is within the lower range predicted by \cite{Kennicutt_Jr__1998} for spiral galaxies. An average global gas to SFR surface density computed via half-mass radii of $(\Sigma_g,\Sigma_{SFR}) = (2.88 \times 10^3 \; \textrm{M}_{\odot} \; \textrm{pc}^{-2} , 3.98 \; \textrm{M}_{\odot} \; \textrm{kpc}^{-2} \; \textrm{yr}^{-1})$ results in a KS amplitude of $\Sigma_{SFR}/\Sigma_g^{1.4} = 10^{-4.24}$, lying a factor of 3 rather than 10 below the Kroupa IMF-corrected KS amplitude measured for spirals/starbursts \citep{Davis_2014}. Thus, while the conversion rate from gas to stars in ETGs is still lower than that of disk galaxies, the possibility that much of the star formation has not been observed offers a partial resolution to this problem.
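The quoted amplitude follows directly from the surface densities above; a quick arithmetic check, in the mixed units used in the text ($\Sigma_g$ in $\textrm{M}_{\odot}\,\textrm{pc}^{-2}$, $\Sigma_{SFR}$ in $\textrm{M}_{\odot}\,\textrm{kpc}^{-2}\,\textrm{yr}^{-1}$):

```python
import math

# Time-averaged surface densities from the high mass model (Sec. 3.1)
sigma_gas = 2.88e3   # M_sun / pc^2
sigma_sfr = 3.98     # M_sun / kpc^2 / yr

# KS amplitude: Sigma_SFR / Sigma_gas^1.4, evaluated in log10
log_A = math.log10(sigma_sfr) - 1.4 * math.log10(sigma_gas)
print(f"log10(Sigma_SFR / Sigma_gas^1.4) = {log_A:.2f}")  # -4.24
```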
\texttt{MACER} also suggests that extensive gas ejection is another reason for low observed star formation rates in comparison to gas densities. We note that $2.1 \times 10^{11} \; \textrm{M}_{\odot}$ of cold gas is ejected over the lifetime of our simulation from stellar winds and supernovae, while $9.0 \times 10^9 \; \textrm{M}_{\odot}$ is accreted by the central black hole. Therefore much of the cold gas is unavailable for star formation due to ejection/accretion, contributing to low efficiency rates. At the same time, the gaseous disk must be constantly replenished by cooled infall via angular momentum transport through viscous effects, stellar mass recycling, ISM cooling, and AGN feedback \citep{Ciotti_2022}. Observational checks of our predictions are imperative for evaluating these conclusions and for determining the evolution of cool gas with respect to the star formation inefficiency problem.
\subsection{Resolved Kennicutt-Schmidt Power Laws}
Even though in principle star formation can happen anywhere in the galaxy model, the simulations revealed that all star formation occurs in rotationally supported, thin, cold disks in the galaxy equatorial plane. To investigate the radial dependence of star formation over this disk, we constructed the surface density resolved KS relation by integrating the mass of cold ISM and new star formation over radial annuli and dividing by the radial surface element. The new star formation was evaluated over 1 Gyr timesteps for computation of the star formation rate; the gas density was the starting gas density over each time interval. In the right panel of Fig. 1, we show the resulting relation between $\Sigma_{gas}$ and $\Sigma_{SFR}$ from repeating this procedure over the entire $\Delta t = 12$ Gyr of simulation time, with individual points associated with a unique $(R, t)$ coordinate. The result is in remarkable agreement with the $\Sigma_{SFR} = a\Sigma_{gas}^{1.4}$ resolved KS relation, indicating that the same empirical SF laws for disk galaxies also hold for ETGs. \cite{Boquien_2011} has observed an amplitude $10^{-3.83} \lesssim a \lesssim 10^{-3.02}$ for spiral galaxies when surface densities/rates are computed in $\textrm{M}_{\odot}$, pc, and years; in the same units, we find $a \approx 10^{-3.3}$ for our simulations, again on the same order of magnitude, suggesting that gas-to-star conversion efficiencies in ETGs are at least comparable to those of disk galaxies. We now consider deviations from the power-law relationship at the boundaries of the star-forming disk, arguing that they result from 2D projection effects and gravitational instability considerations.
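The slope and amplitude quoted here can be extracted from the annuli samples by a least-squares fit in log-log space. The sketch below demonstrates the procedure on synthetic data drawn around the fitted relation; the scatter level and sample size are assumptions for illustration, not simulation outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (Sigma_gas, Sigma_SFR) samples obeying Sigma_SFR = a * Sigma_gas^n
# in (M_sun, pc, yr) units, with 0.1 dex of lognormal scatter (assumed)
n_true, log_a_true = 1.4, -3.3
log_sg = rng.uniform(0.0, 4.0, 500)
log_ss = log_a_true + n_true * log_sg + rng.normal(0.0, 0.1, 500)

# Least-squares fit of the power law in log-log space
n_fit, log_a_fit = np.polyfit(log_sg, log_ss, 1)
```

Applying the same fit to the simulation annuli is what yields $n \approx 1.4$ and $a \approx 10^{-3.3}$.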
We note that at small radii, where $\Sigma_{SFR} \gtrsim 10^{9.5} \; \textrm{M}_{\odot} \; \textrm{kpc}^{-2} \; \textrm{Gyr}^{-1}$, the star formation rate abruptly flattens with gas density. The left panel of Fig. 1 separates the total star formation rate into the contributions of the Toomre and cooling channels, which upon comparison with the right panel allows us to identify this high-SFR regime with the cooling channel. The reason for this, which is further elaborated in the following subsection, is the dependence of the surface density profile on the disk scale height, which evolves rapidly across the disk. We identify the regime between $2.5\times 10^{-1}$ and $2.5$ kpc as dominated by the Toomre channel, which produces a SFR closely following the KS power law. \cite{Kennicutt_1989} and, more recently, \cite{Boissier_2003} have both remarked that the Toomre instability criterion remarkably reproduces SFR power laws in spiral galaxies; despite the different gas properties and disk structure of ETGs, the form of local star formation is the same.
At larger radii ($\approx$ 2.5 kpc) we also see an abrupt threshold cutoff at gas densities less than $10^{-3} \; \textrm{g} \; \textrm{cm}^{-2}$. \cite{Kennicutt_1989} remarks that similar behavior is a consistently observed phenomenon in disk galaxies, with the cutoff threshold typically found within the range of $10^{-3} - 10^{-4} \; \textrm{g} \; \textrm{cm}^{-2}$, incidentally very similar to what we observe for our ETG model in Figure 1. That this feature is also present in our simulations is encouraging regarding the veracity of our star formation recipe and the similarity of star-formation laws across different galaxy types. Following \cite{Kennicutt_1989}, we argue that the universal cause of this phenomenon arises from stability considerations captured by the simple Toomre star formation channel: at lower densities, the gas is stable against large-scale perturbations, suppressing star formation. High resolution observations of ETGs are therefore essential for confirming the striking similarity of the dependence of star formation on gas density between disk and early-type galaxies.
\subsection{Volumetric Power Laws}
As mentioned in \S1, volumetric Schmidt power laws have also been proposed and investigated as alternatives to the standard surface density KS law. Owing to the thinness of the stellar disks and difficulties of measuring volume densities in galaxies, surface density star formation laws are more frequently used. However, surface densities are frequently affected by projection effects and the flaring of the disk thickness; thus, volumetric density relations may offer more accurate insight into star formation laws while also being more applicable for simulations. Due to these observational difficulties, however, it is thought that conversion between surface densities and volumetric densities is non-trivial and their correspondence is uncertain \citep{Bacchini_2019}. Our numerical simulations allow us to study volumetric power laws while avoiding observational 2D projection effects.
We study the radial behavior of volumetric star formation laws by averaging SFR and gas densities over radial annuli in 1 Gyr timesteps throughout the 12 Gyr of evolution. The right panel of Fig. 2 contains a plot of such a star formation law fit to a resolved Schmidt power law. We observe that the overall volumetric densities follow a remarkable correspondence with the surface density relationships in Fig. 1, and that the fit is improved in the low-radii cooling regime, with less horizontal spread. Using Eq. 3, a dynamical timescale on the order of $\rho^{-1/2}$ explains the good $n \approx 1.5$ power-law fit; thus, it is the longer dynamical time rather than the cooling time that dominates star formation in this spatial region, favoring $n \approx 1.5$ rather than the occasionally observed $n \approx 2$ power laws. While the volumetric KS relation corresponds with our physical prescription at low radii, the surface density KS relation matches less well, as noted in \S 3.2. This can be attributed to a rapidly flaring scale height, as seen in the left panel of Fig. 2, resulting in projection effects; we note, however, that the Toomre instability channel matches well in both cases. The threshold cutoff observed at $\approx$ 2.5 kpc is once again attributed to Toomre instability. Ultimately, we propose that surface-density SF laws, which are easier to measure, can be translated into volumetric-density SF laws, which describe the local form of SF, given the close correspondence between the two. More importantly, we once again assert that the same empirical SF laws found in disk galaxies can be applied to ETGs. High-resolution observations of ETGs to confirm these power-law fits will thus be beneficial for applications towards semi-analytical star formation recipes in simulations and for understanding the local analytic form of star formation laws.
\section{Summary and Conclusions}
In this paper we present a detailed study of star formation laws in ETGs by analysing the results of high resolution numerical simulations performed with the latest version of our \texttt{MACER} code \citep{Ciotti_2022}. Similar to what happens in rotating disk galaxy models, our simulations lead to the formation of cold, rotating gaseous disks on kpc scales \citep{Ciotti_2022} in the presence of galaxy rotation, gas cooling, and angular momentum conservation. We implement star formation over this cold gas by considering two different channels: one based on Toomre instability, and another determined by the longer of the local cooling and dynamical (Jeans) times of the cold ISM. When the local values of density and temperature of the ISM are respectively larger and smaller than two prescribed threshold values, both channels are active, and the star formation rate is their sum. Remarkably, our star formation recipes result in a SFR-gas scaling law identical to the observed KS relation.
For any axisymmetric ETG with some level of ordered rotation in its stellar population, both numerical simulations and simple physical arguments have established that the ISM cools and develops large-scale instabilities, inevitably leading to the formation of cold, rotating gaseous disks in the galaxy equatorial plane. This is confirmed by recent observations demonstrating the existence of massive gaseous disks in 50\% of ETGs \citep{Young_2011}. Theoretically, such disks should be prone to star formation, raising the question as to why their observed star formation is so inefficient in comparison to disk galaxies \citep{Negri_2014}. We examine this inefficiency by exploring the form of local star-formation laws with respect to gas density.
The Toomre instability star formation channel reproduces an $n \approx 1.4$ resolved KS star formation power law with a lower cutoff threshold, which is analogous to what has been observed in disk galaxies and can be attributed to gravitational stability considerations. The cooling channel also obeys a power-law density relationship, but deviates from the KS star formation law at higher densities closer to the center of the galaxy, which can be attributed to a rapidly flaring scale height. Volumetric star formation power laws are also explored in effort to remove artificial effects of disk flaring on surface density measurements, and we find a close correspondence between the volumetric and surface density forms of the KS power law. Globally, we find that our simulated ETGs lie close to the same KS relation as disk galaxies, and argue that an increased star formation rate coupled with infall/outflow in part explains their observed inefficiency of star formation.
The similarity of the observed star formation power-laws in disk galaxies with our simple numerical implementation in ETGs is remarkable, implying that these KS laws provide simple recipes that can be used in semi-analytical models of ETGs as well. The prediction of the existence of small star-forming disks in ETGs encourages high-resolution observations of galactic centers as well as further work in understanding their dynamical properties and exploring the form of local - rather than empirical - star formation laws to see if our predictions correspond with reality.
\acknowledgments
ZG is grateful for the financial support from Princeton University via the subcontract to New Mexico Consortium.
\bibliography{sample63}{}
\bibliographystyle{aasjournal}
|
Title:
Constraining Accreted Neutron Star Crust Shallow Heating with the Inferred Depth of Carbon Ignition in X-ray Superbursts |
Abstract: Evidence has accumulated for an as-yet unaccounted for source of heat located
at shallow depths within the accreted neutron star crust. However, the nature
of this heat source is unknown. I demonstrate that the inferred depth of carbon
ignition in X-ray superbursts can be used as an additional constraint for the
magnitude and depth of shallow heating. The inferred shallow heating properties
are relatively insensitive to the assumed crust composition and carbon fusion
reaction rate. For low accretion rates, the results are weakly dependent on the
duration of the accretion outburst, so long as accretion has ensued for enough
time to replace the ocean down to the superburst ignition depth. For accretion
rates at the Eddington rate, results show a stronger dependence on the outburst
duration. Consistent with earlier work, it is shown that urca cooling does not
impact the calculated superburst ignition depth unless there is some proximity
in depth between the heating and cooling sources.
| https://export.arxiv.org/pdf/2208.03347 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
keyword1 -- keyword2 -- keyword3
\end{keywords}
\section{Introduction}
Accreting neutron stars are unique probes of matter at high density and relatively low temperature, as well as extreme neutron-proton asymmetries~\citep{Fuku11,Meis18}. A number of observables provide unique insight into the nature of these ultradense objects, including X-ray bursts, X-ray superbursts, and crust cooling after accretion outbursts~\citep{Wijn17,intZ17,Gall21}. Model-observation comparisons are beginning to provide a handle on bulk properties of the underlying neutron star, such as the mass and radius, as well as evidence for exotic processes and phases of matter deep in the neutron star crust and even into the core~\citep[e.g][]{Brow09,Page13,Deib17,Cumm17,Brow18,Meis19,Good19}. While there have been several successes in modeling the aforementioned observed phenomena for various accreting neutron star sources, many require the addition of a heat source in the neutron star outer layers. Shallow heating has been used to explain the characteristic break in the crust cooling light curve~\citep{Dege14,Turl15,Merr16,Pari17,Page22}, the existence of short-waiting time bursts in X-ray bursting systems~\citep{Keek17}, and is likely necessary to resolve the discrepancy between modeled and inferred superburst ignition depths~\citep{Coop09}.
The physical mechanism for the shallow heat source is not known, which is one of the major outstanding problems in accreting neutron star physics~\citep{Scha22}. Nuclear reactions are known to be an important crustal heat source~\citep{Gupt07,Gupt08,Stei12}; however, both the depth and magnitude of this heat appear to be inconsistent with observational constraints for shallow heating~\citep{Brow09,Deib15,Fant18,Cham20}. Other suggested heat sources are related to compositionally driven convection in the accreted neutron star ocean and transfer of energy from the accretion flow to deeper depths via gravity waves~\citep{Inog10,Medi15,Deib15}. Determining if any of these explanations, or some combination thereof, ultimately suffice will require concerted model-observation comparison efforts.
Crust cooling model-observation comparisons likely provide the most stringent constraints on shallow heating, as the quiescent cooling light curve provides a tomographic picture of the accreted crust~\citep{Page13}. However, these model calculations have a large number of poorly-constrained parameters and therefore many degenerate solutions for the strength and depth of shallow heating. As such, complementary constraints on the properties of shallow heating in accreting neutron star crusts are desirable. The inferred depth of carbon ignition for X-ray superbursts provides such an opportunity.
X-ray superbursts are thought to be energetic explosions ignited by carbon fusion in the accreted neutron star ocean and primarily powered by the photodisintegration of heavy nuclei remaining from earlier surface burning~\citep{Taam78,Cumm01,Stro02,Corn03,Scha03}. The ignition column depth can be roughly inferred based on the typical recurrence time $\Delta t_{\rm rec}\sim1$~yr and accretion rate $\dot{M}\sim5\times10^{-9}$~M$_{\odot}/$yr for superbursting systems~\citep{intZ03,Gall20} as $y_{\rm ign}=\dot{M}\Delta t_{\rm rec}/\left(4\pi R_{\rm NS}^{2}\right)\sim5\times10^{11}$~g\,cm$^{-2}$~\citep{Meis18}, assuming a neutron star radius $R_{\rm NS}\sim12$~km~\citep{Rile21}. A more rigorous analysis based on fitting the observed superburst light curve with cooling models results in the ignition column depth inferred from observations, $y_{\rm ign,obs}=0.5-3\times10^{12}$~g\,cm$^{-2}$~\citep{Cumm06}. This range for $y_{\rm ign,obs}$ is somewhat sensitive to the neutron star envelope temperature profile that is assumed prior to the superburst~\citep{Keek15}, but this correction is not considered here.
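The rough ignition-depth estimate above is straightforward to reproduce; the sketch below uses the fiducial values quoted in the text ($\dot{M}=5\times10^{-9}$~M$_{\odot}$/yr, $\Delta t_{\rm rec}=1$~yr, $R_{\rm NS}=12$~km):

```python
import math

M_SUN = 1.989e33          # solar mass [g]
mdot = 5e-9 * M_SUN       # accreted mass per year [g/yr]
dt_rec = 1.0              # superburst recurrence time [yr]
R_ns = 12.0e5             # neutron star radius [cm]

# Ignition column depth y_ign = Mdot * dt_rec / (4 pi R_NS^2)
y_ign = mdot * dt_rec / (4.0 * math.pi * R_ns**2)
print(f"y_ign ~ {y_ign:.1e} g cm^-2")  # ~5e11, as quoted in the text
```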
These constraints on the superburst $y_{\rm ign,obs}$ can be confronted with results from model calculations of carbon ignition in the accreted neutron star ocean. As is described in more detail in the following sections, ignition curves based on adopted heating and cooling rates can be paired with models of the accreted neutron star thermal structure in order to calculate $y_{\rm ign}$. Comparisons of the calculated and inferred $y_{\rm ign}$ can place constraints on the accreted neutron star thermal structure and therefore on the magnitude and depth of shallow heating.
In this work, I perform comparisons of calculated and inferred $y_{\rm ign}$ in order to place constraints on the properties of the shallow heat source thought to be present in accreted neutron star crusts. In Section~\ref{sec:calculations}, I describe the calculations of the carbon ignition curves and crust thermal profiles, as well as the superburst ignition depth. Section~\ref{sec:results} contains the calculation results. In Section~\ref{sec:discussion}, the constraints on the shallow heat source depth and magnitude are discussed, followed by a discussion of the nuclear physics uncertainties potentially impacting the results, as well as a discussion of incorporating the technique presented here into future multi-observable model-observation comparisons. Section~\ref{sec:conclusions} contains a summary.
\section{Calculations}
\label{sec:calculations}
This work follows the $y_{\rm ign}$ calculation approach of \citet{Deib16}, who closely followed the method presented by~\citet{Pote12}. For a chosen $^{12}$C+$^{12}$C fusion rate and ocean thermal conductivity, the changes in the nuclear energy generation rate and in the cooling rate with a change in temperature are calculated and, at each column depth, it is determined what temperature is required for the heating derivative to exceed the cooling derivative. For an adopted set of astrophysical conditions and crust microphysics, the temperature as a function of depth is determined by numerically solving the general relativistic heat diffusion equation, in this case using the code {\tt dStar}~\citep{Brow15}. The point at which an ignition curve intersects a thermal profile is the superburst ignition depth for that set of astrophysical conditions and microphysics. Each of these steps is described in more detail in the following subsections.
\subsection{Carbon Ignition Curves}
\label{ssec:carbonignition}
Nuclear energy generation at the ignition of a superburst is set by the $^{12}$C+$^{12}$C fusion rate. At temperatures relevant for the accreted neutron star envelope, this nuclear reaction rate is based on nuclear theory calculations and is uncertain by several orders of magnitude~\citep{Beck20,Tang22,Alio22}. Modern theoretical $^{12}$C+$^{12}$C rates include results from barrier penetration calculations using the Sao Paulo potential~\citep{Yako10}, coupled-channel calculations performed using the M3Y+repulsion double-folding potential~\citep{Esbe11}, empirical extrapolations based on the hindrance model~\citep{Jian18}, experimentally derived results based on the Trojan horse method (THM)~\citep{Tumi18}, THM results adopting a Coulomb renormalization~\citep{Mukh19}, and a microscopic model with molecular resonances~\citep{Tani21}. Each of these is used in the present work.
Theoretical results for $^{12}$C+$^{12}$C fusion are typically presented as a modified astrophysical $S$-factor, $S^{*}$, where the $S$-factor is $S(E)=S^{*}(E)\exp(-0.46E)$, with $E$ as the center-of-mass energy of the reaction. For nuclear reactions involving the fusion of two charged particles, the $S$-factor is related to the directly measured (or calculated) cross section $\sigma(E)$ by $\sigma(E)=S(E)\exp(-2\pi\eta)/E$, where $\eta$ is defined below. Following~\citet{Pote12}, the thermonuclear fusion rate of nuclear species 1 and 2 per unit volume at temperature $T$ in an electron-degenerate environment characterized by electron chemical potential $\mu_{e}$ is
\begin{equation}
\begin{split}
\mathcal{R}_{12}(T,\mu_{e})=\frac{w_{12}c\sqrt{8}}{\sqrt{\pi \mu_{\rm red}m_{u}\left(k_{\rm B}T\right)^{3}}}n_{1}(\mu_{e})n_{2}(\mu_{e}){\rm INT},\\
{\rm INT}=\int_{0}^{\infty}S(E_{\rm s})\exp\left(-2\pi\eta(E_{\rm s})-E/(k_{\rm B}T)\right)dE.
\label{eqn:reactionrate}
\end{split}
\end{equation}
\noindent Here, $c$ is the speed of light in vacuum, $k_{\rm B}$ is the Boltzmann constant, $m_{u}$ is the nucleon mass, $w_{12}=0.5$ for identical nuclear species and 1 otherwise, and $\mu_{\rm red}=(A_{1}A_{2})/(A_{1}+A_{2})$ is the reduced mass of species with nuclear mass numbers $A_{i}$. The species' number densities $n_{i}$ are related to $\mu_{e}$ via the mass-density $\rho$ by $n_{i}=(X_{i}/A_{i})\rho$, where, for an environment in which the pressure is dominated by degenerate electrons~\citep{Meis18},
\begin{equation}
\rho(\mu_{e})\approx7.2\times10^{6}\frac{N_{\rm A}}{Y_{e}}\left(\frac{\mu_{e}}{1\,{\rm MeV}}\right)^{3}\,{\rm g}\,{\rm cm}^{-3}.
\end{equation}
$N_{\rm A}$ is the Avogadro constant and the electron fraction $Y_{e}=\left(\sum_{i}Z_{i}X_{i}/A_{i}\right)/\left(\sum_{i}A_{i}X_{i}/A_{i}\right)$ is summed over all species at $\mu_{e}$, where each species has nuclear charge $Z_{i}$ and mass-fraction $X_{i}$. The Sommerfeld parameter is
$\eta(E)=\sqrt{(Z_{1}^{2}Z_{2}^{2}\alpha_{\rm fs}^{2}\mu_{\rm red}m_{u})/(2E)}$, where $\alpha_{\rm fs}$ is the fine-structure constant. In order to account for the enhancement of the fusion rate due to plasma screening, both $S(E)$ and $\eta(E)$ are evaluated at a shifted energy, $E_{\rm s}=E+H_{12}(0)$~\citep{Cler19}. The temperature-dependent energy shift $H_{12}(0)$ for a high-density environment can be approximated as $H_{12}(0)=k_{\rm B}Th_{12}^{0}$, where $h_{12}^{0}=f_{0}(\Gamma_{i})+f_{0}(\Gamma_{j})-f_{0}(\Gamma_{ij}^{\rm comp})$, assuming the linear mixing rule~\citep{Chug09}. The terms $f_{0}(\Gamma)$ are the Coulomb free energy per ion in a one component plasma using the analytic approximation~\citep{Pote00}
\begin{equation}
\begin{split}
f_{0}(\Gamma)=&a_{1}\left(\sqrt{\Gamma(a_{2}+\Gamma)}-a_{2}\ln\left(\sqrt{\Gamma/a_{2}}+\sqrt{1+\Gamma/a_{2}}\right)\right)\\
&+2a_{3}\left(\sqrt{\Gamma}-\arctan(\sqrt{\Gamma})\right)+b_{1}\left(\Gamma-b_{2}\ln(1+\Gamma/b_{2})\right)\\
&+(b_{3}/2)\ln(1+\Gamma^{2}/b_{4}),
\end{split}
\end{equation}
\noindent where $a_{1}=-0.907$, $a_{2}=0.62954$, $a_{3}=0.2771$, $b_{1}=0.00456$, $b_{2}=211.6$, $b_{3}=-10^{-4}$, and $b_{4}=0.00462$. The ion Coulomb coupling parameters are $\Gamma_{i}=\alpha_{\rm fs}\hbar cZ_{i}^{5/3}/(a_{e}k_{\rm B}T)$, where $\hbar$ is the reduced Planck constant and the electron sphere radius is $a_{e}=\left(3/(4\pi n_{e}(\mu_{e}))\right)^{1/3}$. The electron number density at $\mu_{e}$ is $n_{e}(\mu_{e})=(Y_{e}/m_{u})\rho(\mu_{e})$. For the compound nucleus resulting from the fusion of species $1+2$, $Z_{i}=Z_{1}+Z_{2}$.
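To make the structure of Equation~\ref{eqn:reactionrate} concrete, the following pure-Python sketch locates the peak of the unscreened integrand $\exp(-2\pi\eta(E)-E/k_{\rm B}T)$ for $^{12}$C+$^{12}$C and compares it with the analytic Gamow-peak energy. It is an illustration only: the $S$-factor is taken as constant, the screening shift is omitted, and the temperature and grid are assumed values rather than inputs of this work.

```python
import math

# Illustrative constants (assumed values, cf. the definitions in the text)
ALPHA = 1.0 / 137.035999    # fine-structure constant alpha_fs
K_B_MEV = 8.617333e-11      # Boltzmann constant [MeV/K]
M_U_MEV = 931.494           # atomic mass unit rest energy m_u*c^2 [MeV]

def two_pi_eta(E_mev, Z1=6, Z2=6, A1=12, A2=12):
    """2*pi*eta(E) of the Sommerfeld parameter, E in MeV (12C+12C defaults)."""
    mu_red = A1 * A2 / (A1 + A2)   # reduced mass number
    return 2.0 * math.pi * Z1 * Z2 * ALPHA * math.sqrt(
        mu_red * M_U_MEV / (2.0 * E_mev))

def integrand(E_mev, T):
    """Unscreened Gamow-window integrand with a toy constant S-factor."""
    return math.exp(-two_pi_eta(E_mev) - E_mev / (K_B_MEV * T))

T = 5e8  # K; illustrative ocean temperature scale
grid = [0.2 + 0.001 * i for i in range(4000)]          # E from 0.2 to 4.2 MeV
E_peak = max(grid, key=lambda E: integrand(E, T))      # numerical peak

# Analytic Gamow peak: E0 = (b*kT/2)^(2/3), b = pi*Z1*Z2*alpha*sqrt(2*mu*c^2)
b = math.pi * 6 * 6 * ALPHA * math.sqrt(2.0 * 6.0 * M_U_MEV)
E0 = (b * K_B_MEV * T / 2.0) ** (2.0 / 3.0)
print(f"numerical peak: {E_peak:.3f} MeV, analytic Gamow peak: {E0:.3f} MeV")
```

The peak near $1.5$~MeV at this temperature illustrates why the low-energy $S^{*}$ behavior discussed later dominates the astrophysical rate.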
The local nuclear energy generation rate per unit mass is
\begin{equation}
\epsilon_{\rm nuc}=\mathcal{R}_{12}Q_{12}/\rho,
\label{eqn:enuc}
\end{equation}
where $Q_{12}$ is the energy release of a fusion event between species 1 and 2.
The present work deals with $^{12}$C+$^{12}$C fusion, so $w_{12}=0.5$, $Z_{1}=Z_{2}=6$, $A_{1}=A_{2}=12$, $Q_{12}$ is the absolute value of the atomic mass excess of the compound nucleus $^{24}{\rm Mg}$ ($|{\rm ME}(^{24}{\rm Mg})|=13.9336$~MeV~\citep{Wang21}), and the adopted $S^{*}$ come from the aforementioned nuclear theory calculations. The envelope is assumed to be composed of only $^{12}{\rm C}$ and $^{56}{\rm Fe}$, with $X_{\rm C}=0.2$ and $X_{\rm Fe}=0.8$.
The local cooling rate per unit mass from thermal diffusion is \citep{Fuji81,Pote12}
\begin{equation}
\epsilon_{\rm cool}=\kappa_{\rm eff}\rho T/y^{2},
\label{eqn:ecool}
\end{equation}
where $\kappa_{\rm eff}=0.17\kappa$ is the effective thermal conductivity. The thermal conductivity $\kappa=\pi^{2}c^{2}k_{\rm B}^{2}Tn_{e}/(3\mu_{e}\nu_{\rm coll})$~\citep{Meis18}, where the collision frequency $\nu_{\rm coll}$ in a liquid ocean is determined by electron-ion scattering using the linear mixing rule approximation~\citep{Brow04}: $\nu_{\rm coll}=4\alpha_{\rm fs}^{2}\mu_{e}\langle Z^{2}\Lambda_{\rm ei}\rangle/(3\pi\hbar\langle Z\rangle)$. Here, $\langle\rangle$ are mass-fraction-weighted averages of the composition at $\mu_{e}$ and $\Lambda_{\rm ei}=1$ is the Coulomb logarithm, where it is noted that a more accurate estimate of $\Lambda_{\rm ei}$ would consider the accreted neutron star envelope structure~\citep{Horo09,Roge16}. For an environment in which the pressure is dominated by degenerate electrons, the column depth $y$ is related to $\mu_{e}$ by~\citep{Meis18}
\begin{equation}
y\approx7.2\times10^{9}\left(\frac{\mu_{e}}{1\,{\rm MeV}}\right)^{4}\frac{2.44\times10^{14}\,{\rm cm}\,{\rm s}^{-2}}{g}\,{\rm g}\,{\rm cm}^{-2}.
\label{eqn:coldepth}
\end{equation}
The local gravitational acceleration is $g=(GM_{\rm NS}/R_{\rm NS}^{2})(1+z)$, where $G$ is the gravitational constant, $M_{\rm NS}$ is the neutron star mass, and the surface gravitational redshift is $(1+z)=1/\sqrt{1-2GM_{\rm NS}/(R_{\rm NS}c^{2})}$.
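As a numerical illustration of these relations, the short sketch below evaluates $g$, $(1+z)$, and Equation~\ref{eqn:coldepth} for a fiducial $1.4\,M_{\odot}$, $10$~km neutron star. The mass and radius are assumed fiducial values for illustration, not parameters fit in this work.

```python
import math

G    = 6.674e-8    # gravitational constant [cgs]
C    = 2.998e10    # speed of light [cm/s]
MSUN = 1.989e33    # solar mass [g]

def surface_gravity(M_ns=1.4 * MSUN, R_ns=10e5):
    """g = (G*M/R^2)*(1+z) with (1+z) = 1/sqrt(1 - 2GM/(R c^2)), in cm/s^2."""
    one_plus_z = 1.0 / math.sqrt(1.0 - 2.0 * G * M_ns / (R_ns * C ** 2))
    return G * M_ns / R_ns ** 2 * one_plus_z, one_plus_z

def column_depth(mu_e_mev, g):
    """y(mu_e) of Eq. (coldepth) for a degenerate-electron-dominated layer."""
    return 7.2e9 * mu_e_mev ** 4 * (2.44e14 / g)

g, opz = surface_gravity()
print(f"g = {g:.3g} cm/s^2, 1+z = {opz:.3f}")   # ~2.4e14 cm/s^2, ~1.31
print(f"y(mu_e=5 MeV) = {column_depth(5.0, g):.3g} g/cm^2")
```

For these fiducial values $g$ matches the reference gravity $2.44\times10^{14}$~cm\,s$^{-2}$ appearing in Equation~\ref{eqn:coldepth} to within a percent.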
Thermal instability for superburst ignition is achieved when the change in the nuclear heating rate outpaces the change in the cooling rate with a change in temperature~\citep{Fush87}:
\begin{equation}
\frac{\partial\epsilon_{\rm nuc}}{\partial T}>\frac{\partial\epsilon_{\rm cool}}{\partial T}.
\label{eqn:ignition}
\end{equation}
The carbon ignition curves shown in Figure~\ref{fig:ExampleProfiles} were calculated by identifying the minimum $T$ needed to satisfy this inequality at each $y$ located within part of the neutron star ocean and outer crust.
\subsection{Crust Thermal Profiles}
\label{ssec:thermalprofiles}
Thermal profiles (i.e. temperature as a function of depth) for the accreted neutron star crust were calculated for a large number of somewhat arbitrarily chosen but astrophysically relevant conditions using the open-source code {\tt dStar}~\citep{Brow15}. In {\tt dStar}, the thermal evolution of a neutron star undergoing (or having undergone) accretion is calculated by solving the general relativistic heat diffusion equation using the {\tt MESA}~\citep{Paxt11,Paxt13,Paxt15} numerical libraries with the microphysics described by \citet{Brow09}. The input file for a {\tt dStar} calculation is known as an {\it inlist}, as shown in Appendix~\ref{sec:appendixA}, which can be used to specify a number of astrophysical parameters, microphysics models, and numerical controls. Key input quantities are described in this subsection. In addition, Tables~\ref{tab:inputs} and~\ref{tab:urca} list the input parameters that were varied between calculations.
Accretion drives the neutron star outer layers out of thermal equilibrium with the core, where heat is deposited into the crust via nuclear reactions that are driven by accretion, which can also lead to neutrino cooling~\citep{Lau18,Scha22b}. The thermal profile, and thereby the temperature at each radial coordinate $r$ (assuming spherical symmetry), over time $t$ is determined via the heat diffusion equation~\citep{Page13}:
\begin{equation}
C_{V}\frac{\partial T}{\partial t}=\kappa\frac{\partial^{2}T}{\partial r^{2}}+\frac{1}{r^{2}}\frac{\partial(r^{2}\kappa)}{\partial r}\frac{\partial T}{\partial r}
\label{eqn:diffusion} + Q_{\rm heat} - Q_{\rm cool},
\end{equation}
where the specific heat $C_{V}$ (described in detail by~\citet{Brow09}), $\kappa$, nuclear heating rate $Q_{\rm heat}$, and neutrino cooling rate $Q_{\rm cool}$ are each depth dependent. The neutron star core can be approximated as an infinite heat sink, though this is note quite the case~\citep{Cumm17,Brow18}, fixing the temperature at the crust-core boundary to the core temperature $T_{\rm core}$. In this work, $T_{\rm core}=10^{8}$~K based on typical constraints for several crust-cooling systems~\citep{Page13,Dege14,Deib15,Lali19}.
In the crust, $\kappa$ is related to the variance in the nuclear charge of the composition, as described in Section~\ref{ssec:carbonignition}; however, the composition is generally not the same as in the ocean. It is customary to describe the charge variance in the accreted neutron star crust by the impurity parameter~\citep{Itoh93,Brow09}:
\begin{equation}
Q_{\rm imp}=\frac{\sum_{i}n_{i}\left(Z_{i}-\langle Z\rangle\right)^{2}}{\sum_{i}n_{i}}.
\label{eqn:qimp}
\end{equation}
In reality, $Q_{\rm imp}$ evolves over depth due to nuclear reactions and due to changing surface burning over time, e.g. because of changing $\dot{M}$. However, both the nuclear physics of crust reactions and the surface burning history of a given accreting neutron star system have significant uncertainties. It is therefore more common employ a single approximate $Q_{\rm imp}$ in model-observation comparisons. In this work, $Q_{\rm imp}=4$ and 40 are adopted, where the former has successfuly explained crust-cooling observations of the superbursting system KS 1731-26~\citep{Lali19} and the latter is the largest $Q_{\rm imp}$ yet used to reproduce any observed crust-cooling light curve~\citep{Dege14}.
Here, deep crustal heating is approximated by depositing 1.5~MeV per accreted nucleon (MeV\,$u^{-1}$) across $y=5\times10^{15}-2\times10^{17}$~g\,cm$^{-2}$ and $e^{-}$-capture heating is approximated by depositing 0.3~MeV\,$u^{-1}$ across $y=5\times10^{12}-2\times10^{15}$~g\,cm$^{-2}$, consistent with recent estimates~\citep{Gupt08,Haen08}. Absent a physical model, shallow heating of strength $Q_{\rm sh}$ is deposited uniformly about a column depth $y_{\rm sh}$ within the range $y_{\rm sh}/3$ to $3y_{\rm sh}$, following~\citet{Deib15}. Each of the shallow heating magnitudes and depths listed in Table~\ref{tab:inputs} was employed in combination with each of the other input parameter options. The range of $Q_{\rm sh}$ adopted was 0.1-10~MeV\,$u^{-1}$ in steps of 0.1, where the lower-bound is equivalent to using a higher-end estimate for $e^{-}$-capture heating and the upper estimate is the maximum $Q_{\rm sh}$ thus far inferred from accreting neutron star model-observation comparisons~\citep{Deib15}. Rather than selecting the shallow heating depth in terms of $y_{\rm sh}$, this depth was selected on a grid of pressure $P_{\rm sh}$ over the range $\log(P_{\rm sh})=24-29$, in cm-g-s units, in steps of 0.05. This is intended to encompass the range of $y_{\rm sh}$ found to be plausible by crust cooling model-observation comparisons~\citep{Deib15,Merr16,Pari18,Oote19,Page22}. For this depth range, where the pressure is primarily due to electron degeneracy, the pressure $P=\mu_{e}^{4}/(12\pi^{2}\hbar^{3}c^{3})$~\citep{Meis18}.
The total amount of heat deposited into the accreted neutron star outer layers during an accretion outburst depends on the duration $\Delta t$ and average $\dot{M}$ of the accretion outburst. The two values for $\dot{M}$ used in this work are approximately 10\% and 100\%, respectively, of the Eddington accretion rate for a standard neutron star accreting hydrogen-rich fuel~\citep{Scha99}. The smaller accretion rate is in the range typically inferred for superbursting systems~\citep{intZ17} and the larger is roughly the accretion rate at which stable burning begins~\citep{Gall21}. Here, {\tt dStar} thermal profiles were recorded for each calculation at $\Delta t$ of 1643.6~d and 4565~d. The former is the $\Delta t$ required to replace the envelope down to $y=10^{12}$~g\,cm$^{-2}$ at 10\% Eddington accretion rate, while the latter is the duration of the accretion outburst observed for KS 1731-26 prior to going into quiescence in 2001~\citep{Merr16}. Neither of these $\Delta t$ are sufficient to reach a steady-state temperature profile, but the longer of the two is close to achieving that state~\citep{Page13}.
Neutrinos can be emitted from spherical shells in the crust via $e^{-}$-capture/$\beta^{-}$-decay cycling, known as urca cooling~\citep{Scha14}. The neutrino luminosity $L_{\nu}$ associated with $e^{-}$-capture parent species ($Z_{i}$,$A_{i}$) with mass fraction $X_{i}$ is~\citep{Tsur70,Deib16}:
\begin{equation}
L_{\nu,i} \approx L_{34}\times10^{34}{\rm{erg\,s}}{}^{-1}X_{i}T_{9}^{5}\left(\frac{g_{14}}{2}\right)^{-1}R_{10}^{2} \ ,
\label{eqn:Lnu}
\end{equation}
where $T_{9}$ is the temperature of the urca shell in units of $10^{9} \, \mathrm{K}$, $R_{10}\equiv R_{i}/(10~\rm{km})$, $R_{i}\approx R_{\rm NS}$ is the radius of the urca shell from the neutron star center, and $g_{14}\equiv g/(10^{14}~\rm{cm}\,\rm{s}^{-2})$. $L_{34}(Z_{i},A_{i})$ is the intrinsic cooling strength:
\begin{equation}
L_{34}=0.87\left(\frac{10^{6}~{\rm{s}}}{ft}\right)\left(\frac{56}{A_{i}}\right)\left(\frac{|Q_{\rm{EC}}|}{4~{\rm{MeV}}}\right)^{5}\left(\frac{\langle F\rangle^{*}}{0.5}\right).
\label{eqn:L34}
\end{equation}
The energy-cost for $e^{-}$-capture is the $e^{-}$-capture $Q$-value $Q_{\rm EC}={\rm ME}(Z_{i},A_{i})-{\rm ME}(Z_{i}-1,A_{i})$, where the atomic mass excesses ME are corrected by a Coulomb lattice energy $+C_{\ell}Z^{5/3}Q_{\rm EC,0}$, $Q_{\rm EC,0}$ is the $Q$-value without the lattice correction, and $C_{\ell}\approx3.407\times10^{-3}$~\citep{Roca08}. The factor $\langle F\rangle^{*}\equiv\langle F\rangle^{+}\langle
F\rangle^{-}/(\langle F\rangle^{+}+\langle F\rangle^{-})$, where the Coulomb factor $\langle F\rangle^{\pm}\approx2\pi\alpha_{\rm fs}
Z_{i}/|1-\exp(\mp2\pi\alpha_{\rm fs} Z_{i})|$. The comparative half-life of the weak transition $ft$ is the average for the $\beta^{-}$-decay and $e^{-}$-capture reactions in the urca cycle, $ft=(ft_{\beta}+ft_{\rm EC})/2$, where the two are related by the spin $J$ degeneracy of the initial states $ft_{\beta}/(2J_{\beta}+1)=ft_{\rm EC}/(2J_{\rm EC}+1)$~\cite{Paxt16}. In principle, Equation~\ref{eqn:L34} can be modified by the thermal population of nuclear excited states~\citep{Misc21}, but they are ignored in the present work.
For simplicity, the present work is limited to investigating the impact of urca cooling from $^{55}{\rm Sc}-^{55}$Ca, as this pair is thought to have more than an order of magnitude larger $L_{\nu}$ than the next most significant urca pair~\citep{Deib16,Meis17} when adopting superburst ashes, where $X(A=55)=0.018$~\citep{Scha14,Keek12}. Four sets of urca cooling conditions were investigated, summarized in Table~\ref{tab:urca}. For the no-cooling scenario, corresponding to an absence of $^{55}{\rm Sc}$ at the approprate depth ($\mu_{e}\approx Q_{\rm EC})$, $XL_{34}=0$. The nominal cooling scenario corresponds to using the current best estimates for $Q_{\rm EC,0}$ and $ft$, while the maximum and minimum cooling scenarios correspond to upper and lower limits calculated based on the uncertainties for these parameters. For consistency, the pressure at which the urca shell is located $P_{\rm urca}$ is modified corresponding to $Q_{\rm EC,0}$. In this work, $|Q_{\rm EC,0}|=12.192$~MeV is calculated using ME from \citet{Mich18,Leis21} and the associated uncertainty $\delta Q_{\rm EC,0}=0.172$~MeV is calculated using their one-standard-deviation uncertainties. I use $ft=5.9$ and associated uncertainty $\delta ft=2$ based on the systematics of \citet{Sing98}, as experimental constraints do not yet exist for this weak transition.
To summarize the presentation of the thermal profile calculations, $161\,600$ {\tt dStar} calculations were performed, using all combinations of the input parameters and urca cooling conditions detailed in Tables~\ref{tab:inputs} and \ref{tab:urca}, respectively. Example thermal profiles are shown in Figure~\ref{fig:ExampleProfiles}.
\begin{table}
\centering
\caption{{\tt dStar} input parameters which were varied for the large grid of calculations performed in this work, where ``cgs" indicates in cm-g-s units. Note that $\dot{M}$ and $Q_{\rm imp}$ each only had two settings. Urca cooling settings are described in Table~\ref{tab:urca}.}
\label{tab:inputs}
\begin{tabular}{lccc} %
\hline
Parameter & Lower Bound & Upper Bound & Step Size\\
\hline
$\dot{M}$ [$M_{\odot}$\,yr$^{-1}$] & 1.75$\times10^{-9}$ & 1.75$\times10^{-8}$ & -\\
$Q_{\rm sh}$~[MeV\,$u^{-1}$] & 0.1 & 10 & 0.1\\
$\log (P_{\rm sh}[\rm cgs])$ & 24 & 29 & 0.05\\
$Q_{\rm imp}$ & 4 & 40 & -\\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Urca cooling conditions used in {\tt dStar} calculations for this work, where ``cgs" indicates in cm-g-s units. All other varied inputs are described in Table~\ref{tab:inputs}.}
\label{tab:urca}
\begin{tabular}{lcc} %
\hline
Mode & $X(A)L_{34}$ & $\log (P_{\rm urca}[\rm cgs])$ \\
\hline
None & 0 & -\\
Minimum & 4.47$\times10^{-2}$ & 29.135\\
Nominal & 4.80 & 29.159 \\
Maximum & 5.15$\times10^{2}$ & 29.184 \\
\hline
\end{tabular}
\end{table}
\subsection{Superburst Ignition Depth}
\label{ssec:ignitiondepth}
For a single carbon ignition curve and thermal profile, $y_{\rm ign}$ is determined by numerically finding the intersection of the two. In a physical system, carbon will accumulate until a sufficient depth is reached for ignition, and as such the shallowest-$y$ intersection is the one of interest. The $y_{\rm ign}$ was determined for each combination of the six carbon ignition curves described in Section~\ref{ssec:carbonignition} and the $161\,600$ thermal profiles described in Section~\ref{ssec:thermalprofiles}.
\section{Results}
\label{sec:results}
Figure~\ref{fig:yignmap} shows $y_{\rm ign}$ resulting from Section~\ref{ssec:ignitiondepth} for calculations performed with the no urca cooling scenario. The columns of the subfigures show the impact of adopting different $^{12}{\rm C}+^{12}$C reaction rates, while the rows of the subfigures show the impact of $\Delta t$. The solid-black regions in these figures correspond to cases where the conditions for $y_{\rm ign}$ were not met, i.e. the carbon ignition curve and thermal profile did not intersect within $y=10^{10}-10^{15}$~g\,cm$^{-2}$. For all cases, $y_{\rm ign}$ is relatively shallow for large $Q_{\rm sh}$ and shallow $y_{\rm sh}$, while $y_{\rm ign}$ is relatively deep for the opposite scenario. Contours of approximately equal $y_{\rm ign}$ follow a trajectory of increasing $Q_{\rm sh}$ and deepening $y_{\rm sh}$. However, the contours abruptly end for low $Q_{\rm sh}$ and shallow $y_{\rm sh}$. This is because insufficient heat is deposited and retained within the neutron star outer layers in order for the thermal profile to intersect the ignition curve. For low $\dot{M}$, this region spans a considerable portion of the phase-space, even failing to achieve carbon ignition for $Q_{\rm sh}\approx9$~MeV\,$u^{-1}$ if $y_{\rm sh}\approx10^{10}$~g\,cm$^{-2}$.
The gradient of $y_{\rm ign}$ within the $Q_{\rm sh}$-$y_{\rm sh}$ phase space becomes steeper moving from large $Q_{\rm sh}$ and shallow $y_{\rm sh}$ to small $Q_{\rm sh}$ and deep $y_{\rm sh}$. This can be understood by considering Figure~\ref{fig:ExampleProfiles}. The thermal profile for small $Q_{\rm sh}$ has a relatively shallow slope $\partial T/\partial y$, while $\partial T/\partial y$ rapidly increases in magnitude for increasing $Q_{\rm sh}$, approaching a converged slope. For deep $y_{\rm sh}$, the thermal profile intersects the carbon ignition curve at deep $y$, where the carbon ignition curve slope is especially shallow. As such, for small $Q_{\rm sh}$ and deep $y_{\rm sh}$, $y_{\rm ign}$ depends on the the intersection of two shallow-sloped curves, which will be particularly sensitive to small changes in the slope of the thermal profile, leading to a more rapid change in $y_{\rm ign}$ in this region of the $Q_{\rm sh}$-$y_{\rm sh}$ phase-space.
The results shown in Figure~\ref{fig:yignmap} demonstrate a weak sensitivity to $Q_{\rm imp}$ and $^{12}{\rm C}+^{12}$C rate, as well as a modest sensitivity to $\Delta t$. Of all parameters investigated in this work, $y_{\rm ign}$ is primarily sensitive to $\dot{M}$. While both $\dot{M}$ and $\Delta t$ impact $Q_{\rm heat}$ deposited within the crust, $\Delta t$ controls how close the thermal profile is to steady-state, while $\dot{M}$ decides what the steady-state thermal profile is. Here, both $\Delta t$ were sufficiently close to achieving steady-state that the difference in thermal profiles is relatively modest. This would not necessarily be the case for an abnormally short $\Delta t$, as observed for 4U 1608-522~\citep{Keek08}. Similarly, $Q_{\rm imp}$ impacts $\kappa$ and therefore the thermal diffusion time, but does not have a particularly strong impact on the steady-state thermal profile.
Results were very nearly identical when including urca cooling. The only scenario in which urca cooling was observed to have some impact was $Q_{\rm imp}=40$ $\dot{M}=1.75\times10^{-8}$~$M_{\odot}$\,yr$^{-1}$. This (extremely modest) impact is shown in Figure~\ref{fig:yignmapurca}, where implementing the maximum urca cooling scenario slightly changes $y_{\rm ign}$ at low $Q_{\rm sh}$ and deep $y_{\rm sh}$. Upon close inspection, one sees urca cooling drives $y_{\rm ign}$ slightly deeper around $Q_{\rm sh}\approx2$~MeV\,$u^{-1}$, $\log(y_{\rm sh}[{\rm g\,cm}^{-2}])\approx14.5$ for $\Delta t=4565$~d. Consulting Figure~\ref{fig:ExampleProfiles}, it is apparent that the impact of shallow heating on the thermal profile is mostly concentrated near $y_{\rm sh}$. Given the strong $T$-dependence of $L_{\nu}$ (see Equation~\ref{eqn:Lnu}), urca cooling will result in an insignificant $Q_{\rm cool}$ unless $y_{\rm sh}$ approaches $y_{\rm urca}$, consistent with the findings of~\citet{Deib16}. Note that the present work only considers the impact of the $^{55}{\rm Sc}-^{55}$Ca urca pair, which is located in the crust. The impact of urca cooling may be more significant when considering an urca pair located in the accreted neutron star ocean, though there the strong $Q_{\rm EC}$-dependence will reduce $L_{\nu}$ (see Equation~\ref{eqn:L34}) and therefore $Q_{\rm cool}$.
Figure~\ref{fig:yignconstraints} shows the regions from Figure~\ref{fig:yignmap} where $y_{\rm ign}$ calculated in this work is within the ignition depth inferred from model-observation comparisons of superburst light-curves, $y_{\rm ign,obs}=0.5-3\times10^{12}$~g\,cm$^{-2}$~\citep{Cumm06}. The left and right columns of this figure show the influence of the $^{12}{\rm C}+^{12}$C reaction rate and $\Delta t$, respectively. For comparison, Figure~\ref{fig:yignconstraints} includes constraints on shallow heating obtained from fits to crust cooling light curves performed in earlier works. These include the results of fits to EXO 0748-676~\citep{Dege14}, MXB 1659-29~\citep{Pari18}, KS 1731-26~\citep{Merr16}, and MAXI J0556-332~\citep{Deib15,Page22}, of which it is noted that only KS 1731-26 has been observed to feature superbursts. The two constraints for MAXI J0556-332 are quite disparate, as \citet{Page22} assumed that the first crust cooling event observed for this source was shortly preceded by a hyperburst within the crust, while \citet{Deib15} fit only this cooling event while assuming shallow heating was responsible for the large energy deposition in the neutron star outer layers. For the low $\dot{M}$ modeled in this work, which is closer to the $\dot{M}$ observed prior to superbursts~\citep{intZ17}, the shallow heating constraints obtained in the present work are on the edge of consistency with crust cooling constraints. Here, for the lower of the two $\dot{M}$, the present work favors generally larger $Q_{\rm sh}$ and deeper $y_{\rm sh}$, with the exception of the unique findings of \citet{Deib15}. For $\dot{M}$ near the Eddington limit, the present work results in shallow heating that is in agreement with most constraints from crust cooling model-observation comparisons, but skewing to deeper $y_{\rm sh}$. For large $\dot{M}$, there is stronger sensitivity of $y_{\rm ign}$ to $\Delta t$ at deep $y_{\rm sh}$.
\section{Discussion}
\label{sec:discussion}
\subsection{Shallow Heating Constraints}
\label{ssec:shallowheatconstraints}
Figure~\ref{fig:yignconstraints} demonstrates that the inferred depth of carbon ignition for superbursts can be used as a constraint for the magnitude and depth of shallow heating in accreting neutron stars. This constraint is primarily sensitive to $\dot{M}$, modestly sensitive to $\Delta t$ and the $^{12}{\rm C}+^{12}$C reaction rate, weakly dependent on $Q_{\rm imp}$, and negligibly dependent on urca cooling in the accreted crust. The primary sensitivity, $\dot{M}$, can usually be constrained for a superbursting system based on the persistent luminosity (though there are complications related to X-ray reflection from the accretion disk)~\citep{He16}. Therefore, this shallow heating constraint is relatively robust to changes in modeling assumptions.
Relative to the complementary constraints on shallow heating from crust cooling model-observation comparisons, the superburst ignition depth constraint is weaker in that it allows for a larger region of the $Q_{\rm sh}$-$y_{\rm sh}$ phase-space. This is to be expected, as $y_{\rm ign}$ essentially only depends on the thermal structure at a single $y$, while crust cooling light curves depend on the thermal structure of the entire neutron star. However, the superburst ignition depth constraint offers the distinct advantage of being insensitive to $Q_{\rm imp}$ and urca cooling, contrary to crust cooling light curves~\citep{Brow09,Meis17}. Furthermore, most superbursting sources do not feature crust cooling episodes~\citep{Gall20} and therefore the method presented here can expand the set of accreting neutron stars that can be used to constrain shallow heating.
Of the crust-cooling sources featured in Figure~\ref{fig:yignconstraints}, only KS 1731-26 has also exhibited a superburst. The $\dot{M}$ inferred from the persistent X-ray luminosity prior the the superburst event is $\approx10$\% of Eddington~\citep{Kuul02}, corresponding to the lower $\dot{M}$ modeled in the present work. This $\dot{M}$ is in agreement with the $\dot{M}$ that one would infer based on the recurrence time between KS1731-26 X-ray bursts prior to the superburst~\citep{Lamp16,Meis19}. Fits to the KS 1731-26 crust-cooling light curve infer that $Q_{\rm imp}\approx4$~\citep{Merr16,Lali19}. The $\Delta t$ prior to the quiescent episode of KS 1731-26 in 2001 is assumed to be 4565~d based on observational data~\citep{Merr16}, implying that $\Delta t$ prior to the 1998 superburst event was $\Delta t\approx$2900~d. Therefore, the results from the present work that are most relevant for KS 1731-26 are from the calculations with $Q_{\rm imp}=4$ and $\dot{M}=1.75\times10^{-9}$~$M_{\odot}$\,yr$^{-1}$. These results do not appear to be consistent with the shallow-heating constraints obtained by crust-cooling light curve fits in \citet{Merr16} (blue solid box in Figure~\ref{fig:yignconstraints}). However, given the approximate nature of the $\dot{M}$ constraints prior to the suprbursting episode, it is plausible that consistency could be achieved. More detailed analysis, e.g. of the persistent X-ray luminosity prior to the superburst or model-observation comparisons for the light curve shape and recurrence time of standard bursts~\citep{Meis18b,John20}, is likely necessary. Additionally, while the crust-cooling models of \citet{Merr16} fit for $Q_{\rm sh}$, they held $y_{\rm sh}$ fixed. It is therefore possible that the inconsistency between their $Q_{\rm sh}$ constraints and this work are due to fixing $y_{\rm sh}$ in that work. Further crust-cooling model-observation comparisons are needed to draw stronger conclusions.
\subsection{Influence of Nuclear Physics Uncertainties}
\label{ssec:nucuncertainties}
The shallow heating constraints obtained in this work are relatively insensitive to assumptions regarding input nuclear physics. This includes not only the $^{12}{\rm C}+^{12}$C nuclear reaction rate, but also past surface burning and nuclear reactions occurring within the accreted crust.
As shown in Figures~\ref{fig:ExampleProfiles} and \ref{fig:yignmap}, adopting different $^{12}{\rm C}+^{12}$C reaction rates leads to minor changes in the calculated $y_{\rm ign}$. When comparing the two most discrepant theoretical predictions for this rate, which differ in $S^{*}$ by nearly six orders of magnitude~\citep{Tang22}, the change in $y_{\rm ign}$ is roughly a factor of four. This relative insensitivity is due to the extreme temperature dependence of the $^{12}{\rm C}+^{12}$C rate, owing to the considerable Coulomb barrier. Nonetheless, the impact is on the same order of the uncertainty in the inferred $y_{\rm ign,obs}$ and so some uncertainty reduction in the $^{12}{\rm C}+^{12}$C rate would be beneficial. It would be particularly beneficial to push direct measurements of this nuclear reaction cross section down to slightly lower energies in order to confirm or exclude the THM~\citep{Tumi18} and Hindrance~\citep{Jian18} rates, which are responsible for the bulk of the impact found in the present work. Based on the relative $S^{*}$ of theoretical models, it appears that direct measurements of $^{12}{\rm C}+^{12}$C down to $E=2.25$~MeV may suffice. Though~\citet{Tan20} reach $E=2.20$~MeV for their lowest-energy measurement, that data-point is a relatively large upper-limit on $S^{*}$ and has a large $E$-separation from the neighboring points measured in that work, making it difficult to confront with theoretical predictions.
For all accreting neutron star systems, the crust composition is uncertain due to uncertainties in which surface-burning modes (i.e. stable burning, superbursts, or X-ray bursts) were prevalent in the past, the nuclear physics of the surface-burning processes, and nuclear physics of the accreted crust. For instance, superburst ashes and X-ray burst ashes imply a substantially different $Q_{\rm imp}$~\citep{Meis18}. Meanwhile, individual nuclear reaction rates, and even individual nuclear masses, can have important impacts on $Q_{\rm imp}$ and $X_{i}$ of urca nuclides~\citep{Cybu16,Scha17,Ong18,Meis19,Hoff20,Meis22}. Nuclear reactions in the crust further modify the crust composition, but these modifications depend sensitively on input nuclear physics, such as nuclear masses and the presence of exotic reaction processes~\citep{Shch19,Scha22b}. Furthermore, even if $X_{i}$ were known, the $L_{\nu}$ themselves are sensitive to the adopted $Q_{\rm EC}$ and $ft$, which are often not known~\citep{Meis15b,Ong20}. The insensitivity of results in the present work regarding the adopted $Q_{\rm imp}$ and urca cooling imply that these considerable uncertainties in the crust composition and $L_{\nu}$ are mostly inconsequential. The caveat to this statement is that the present work only considered an urca pair in the crust, which may not apply to an urca pair located closer to $y_{\rm sh}$ in the ocean. Furthermore, it is important to highlight that the surface-burning production of $^{12}$C is extremely important, as it sets $X_{\rm C}$ in the ignition curve calculations. The mechanism to produce sufficiently high $X_{\rm C}$ is uncertain, but appears to require the system to spend some time in a special region of $\dot{M}$~\citep{Stev14,Keek16}.
The remaining nuclear physics uncertainties that were not investigated in this work include $e^{-}$-capture heating and deep crustal heating, where the latter is primarily dependent on the crust-core transition pressure and hence the dense-matter equation-of-state~\citep{Shch21,Shch22}. Neither of these heat sources would remove the need for shallow heating, but they do impact the accreting neutron star thermal profile and therefore likely have some influence on the shallow heating constraints inferred from $y_{\rm ign}$~\citep{Coop09}.
Other important microphysics uncertainties that were not investigated in the present work include the Coulomb logarithm and plasma-screening for carbon ignition. Neither are accessible in terrestrial laboratory experiments in the foreseeable future (with the exception of plasma-screening effects for nuclear reactions of light nuclides at modest densities~\citep{Kemp19}) and therefore dedicated theoretical efforts will be required. Once the physical origin of shallow heating is eventually determined, it is possible that $y_{\rm ign}$ model-observation comparisons like the ones presented here can provide some constraints.
\subsection{Incorporation into Multi-observable Modeling}
\label{ssec:multiobs}
The method for obtaining shallow heating constraints presented in this work could be applied to any accreting neutron star system featuring superbursts. However, the value of these constraints would be increased by combining them with constraints derived from other observables. The source KS 1731-26 is particularly promising in this regard, as it features X-ray bursts that resemble the successfully modeled~\citep{Meis18,John20} bursts of GS 1826-24~\citep{Muno00}, photospheric radius expansion bursts that can be used to get $M_{\rm NS}$ and $R_{\rm NS}$ constraints~\citep{Ozel12}, superbursts~\citep{Kuul02}, and an episode of crust cooling~\citep{Rutl02}.
Ideally, multi-observable modeling of a source such as KS 1731-26 would use consistent assumptions for the system properties across the models. However, as shown in this work, the exact crust composition need not be used for shallow heating constraints from calculations of $y_{\rm ign}$. Ignoring this complication would reduce the computational cost of incorporating the $y_{\rm ign}$ shallow heating constraints into the multi-observable modeling.
\section{Conclusions}
\label{sec:conclusions}
The present work demonstrates that the inferred depth of carbon ignition for X-ray superbursts can be used to constrain the depth and magnitude of shallow heating in the accreted neutron star crust. This constraint is shown to be weakly sensitive to input nuclear physics, including assumptions regarding the $^{12}$C+$^{12}$C nuclear reaction rate and the crust composition. The main model sensitivity is the accretion rate $\dot{M}$ prior to the superburst, along with the accretion outburst duration $\Delta t$, if $\Delta t$ is comparable to thermal time at the depth of the shallow heat source. This method provides a new way to constrain shallow heating, expanding the number of accreting neutron star sources that can be used for this purpose. For sources featuring other observables such as crust cooling, the inclusion of this method into multi-observable modeling may improve the stringency of constraints on shallow heating.
\section*{Acknowledgements}
I thank Hendrik Schatz, Wei Jia Ong, and Duncan Galloway for useful discussions, Xiaodong Tang for providing tables of the $^{12}$C+$^{12}$C $S^{*}$-factors, Ed Brown for creating and maintaining a public release of {\tt dStar}, and the {\tt MESA} developers for creating and maintaining a public release of that code and the associated software development kit.
This work was supported by the U.S. Department of Energy Office of Science under Grants No. DE-FG02-88ER40387 and DE-SC0019042 and the U.S. National Nuclear Security Administration through Grant No. DE-NA0003909. This work was inspired by conversations at a workshop that was supported by funds from the U.S. National Science Foundation under Grants No. PHY-1430152 (Joint Institute for Nuclear Astrophysics -- Center for the Evolution of the Elements) and OISE-1927130 (International Research Network for Nuclear Astrophysics).
\section*{Data Availability}
The Appendix contains a sample {\tt dStar} inlist that could be used to recreate the thermal profiles used in this work. The carbon ignition curves are available as Supplementary Material.
\bibliographystyle{mnras}
\bibliography{references} %
\appendix
\section{Sample {{\tt dStar}} Inlist}
\label{sec:appendixA}
The following is an example inlist for {\tt dStar}~\citep{Brow15} calculations of the accreted crust thermal profile performed for the present work.
\lstinputlisting[breaklines=true,basicstyle=\tiny]{SampleInlist}
\bsp %
\label{lastpage} |
\title{The impact of spurious collisional heating on the morphological evolution of simulated galactic discs}
% Source: https://export.arxiv.org/pdf/2208.07623
\begin{abstract}
We use a suite of idealised $N$-body simulations to study the impact of spurious heating of star particles by dark matter particles on the kinematics
and morphology of simulated galactic discs. We find that spurious collisional heating leads to a systematic increase of the azimuthal velocity dispersion
($\sigma_\phi$) of stellar particles and a corresponding decrease in their mean azimuthal velocities ($\overline{v}_\phi$). The rate of heating is dictated
primarily by the number of dark matter halo particles (or equivalently, by the dark matter particle mass at fixed halo mass) and by radial gradients in the
local dark matter density along the disc; it is largely insensitive to the stellar particle mass. Galaxies within haloes resolved with fewer than
$\approx 10^6$ dark matter particles are particularly susceptible to spurious morphological evolution, irrespective of the total halo mass (with even more
particles required to prevent heating of the galactic centre). Collisional heating transforms galactic discs from flattened structures into rounder
spheroidal systems, causing them to lose rotational support in the process. It also affects the locations of galaxies in standard scaling relations that
link their various properties: at fixed stellar mass, it increases the sizes of galaxies, and reduces their mean stellar rotation velocities and specific
angular momenta. Our results urge caution when extrapolating simulated galaxy scaling relations to low masses, where spurious collisional effects can bias
their normalisation, slope and scatter.
\end{abstract}
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
Galaxy: kinematics and dynamics -- Galaxy: evolution -- Galaxy: disc -- Galaxy: structure -- methods: numerical
\end{keywords}
\section{Introduction}
Galaxies are complex dynamical systems, and much of our understanding of their origin and evolution has been inferred from simulations.
In particular, cosmological simulations -- which are now capable of sampling the diverse environments in which galaxies form, while
simultaneously resolving their internal properties -- yield galaxy populations that appear realistic when confronted
with a wide array of observational data \citep[e.g.][]{Schaye2015, Furlong2017, Ludlow2017, Lagos2017, Nelson2018, vandeSande2019, Pillepich2019}.
Recent advances in algorithms and computing architecture have also led to increases in both simulation volume and particle number, and
large-volume cosmological simulations (i.e. those with volumes of order $\approx 100$ Mpc cubed or greater) routinely achieve mass
and force resolutions of order $10^6\,{\rm M_\odot}$ and $1~{\rm kpc}$, respectively \citep{Vogelsberger2014, Schaye2015, Pillepich2018b, Dave2019}.
Despite these improvements, even the most massive galaxies and dark matter haloes formed in such simulations are still resolved with far fewer
particles than there are stars and dark matter (DM) particles in real galaxies, and are therefore subject to incoherent fluctuations in
the gravitational potential of point-like particles.
These fluctuations deflect the trajectories of stellar and DM particles (through a scattering process commonly referred to as ``collisions''),
causing them to deviate from the smooth paths dictated by the mean-field potential of the system.
Galaxies are dynamically cold stellar systems
embedded within comparatively hot DM haloes, and collisions between their constituent particles may therefore alter the structure and kinematics
of galaxies in undesirable ways. For example, collisions can lead to a net exchange of energy between the two components as they attempt to reach
energy equipartition (the galaxy heats up in the process, while the halo cools down). This effect becomes more pronounced as the mass ratio of DM
to stellar particles, $\mu\equiv m_{\rm DM}/m_\star$, is increased, but is also present when $\mu=1$ due to the strong phase-space segregation of
stars and DM typical of most galaxies. But for disc galaxies, which harbour most of their kinetic energy in ordered rotation, collisions between DM
and stellar particles also transfer kinetic energy from azimuthal stellar motions to kinetic energy in radial and vertical motions, even when the
energy exchanged between the two components is small. As a result, disc galaxies are particularly susceptible to spurious evolution due to collisional
effects \citep{Sellwood2013,Ludlow2021}.
\citet[][]{Lacey1985} calculated analytically the collisional heating rate of thin galactic discs embedded within
DM haloes composed of point-mass particles (assumed to be black holes, which were considered plausible DM candidates
at the time). Their calculations are based on epicyclic theory and assume that the disc remains thin and cold relative to the halo at
all times, i.e. $\sigma_\star\ll\sigma_{\rm DM}$. By comparing their analytic results to the observed velocity dispersion of
stars in the solar neighbourhood, they concluded that black holes with masses exceeding $\gtrsim 10^6\, {\rm M_\odot}$ cannot dominate
the DM in the Milky Way's halo or else collisional heating would have rendered its disc hotter and thicker than it is observed to be.
This critical black hole mass is of order (or smaller than) the mass of DM particles used in many large-volume cosmological
simulations, suggesting many simulated galaxies are affected by spurious collisional heating. Indeed,
\citet[][see also \citealt{Revaz2018}]{Ludlow2019} showed that ${\rm M_\star}\lesssim 10^{10}{\rm M_\odot}$ galaxies in the \eagle~
simulation \citep[][which has a DM particle mass of $m_{\rm DM}\approx 10^7\,{\rm M_\odot}$]{Schaye2015} undergo spurious size growth
due to collisional heating.
Later, \citet{Ludlow2021} used idealised\footnote{This refers to non-cosmological simulations, typically of isolated, equilibrium
systems. Our idealised runs consist of a two-component model composed of an initially thin disc and a
spherically-symmetric DM halo (see Section~\ref{sec:sims} for details).} simulations of
secularly-evolving, stable galactic discs to confirm the theory developed
by \citet[][]{Lacey1985}, and proposed empirical corrections to their analytic results able to accommodate instances of extreme disc heating
(i.e. when $\sigma_\star\approx \sigma_{\rm DM}$ and the vertical scale height of the disc is of order its scale length, which can occur
in poorly-resolved haloes). Using their empirical
model, they verified that the stellar discs of Milky Way-mass galaxies, if simulated using DM particles with masses
$m_{\rm DM}\gtrsim 10^6\,{\rm M_\odot}$, are subject to non-negligible levels of spurious heating. For example, their vertical
velocity dispersion will artificially increase by $\gtrsim 20 \,{\rm km/s}$ and their scale heights by $\gtrsim 300\, {\rm pc}$
in roughly a Hubble time. These values are of the same order of magnitude as the observed vertical velocity dispersion and scale
height of Milky Way disc stars.
Spurious collisional heating affects the velocity dispersions and rotation velocities of disc stars, as well as the radial and vertical sizes of discs. Because cosmological simulations typically sample the DM density field using equal-mass particles,
low-mass galaxies are resolved with fewer particles than massive ones and are therefore more vulnerable to the effect. This
may lead to mass-dependent biases in the standard
scaling laws that link the structural and kinematic properties of simulated galaxies, such as their masses, sizes and
characteristic velocities, as well as how these relations depend on galaxy morphology.
From an observational perspective, the slopes, intercepts, and scatter of many galaxy scaling relations are well-constrained,
with errors of order $10$ per cent or less \citep[e.g.][]{Di-Teodoro2022}. Reproducing these scaling laws is a primary goal of galaxy
formation models, and it is therefore necessary to draw comparisons between observed and simulated galaxies only when the
latter are free from numerical artefacts.
This paper is a follow-up to \citet{Ludlow2021} in which we address these issues. It is organised as follows.
In Section~\ref{sec:sims} we describe our simulations (Section~\ref{ssec:sims}) and
analysis techniques (Section~\ref{ssec:analysis}). Section~\ref{sec:heating} contains our main results:
We begin with an overview of the morphological evolution of galactic discs due to
spurious collisional heating (Section~\ref{ssec:visual}), followed by quantitative analyses of its effect on their azimuthal
velocity profiles (Section~\ref{ssec:azimuthal_vel}), galaxy scaling relations (Section~\ref{ssec:scaling-relations}), and morphologies
(Section~\ref{ssec:measurements}). In Section~\ref{sec:model} we present a model that describes the evolution of the velocity dispersions
and azimuthal velocities of stellar disc particles (Section~\ref{ssec:model}) and discuss the implications of our results for
cosmological simulations (Section~\ref{ssec:cosmosims}). We provide our conclusions in Section~\ref{sec:summary}.
\section{Simulations and Analysis}
\label{sec:sims}
\subsection{Simulations}
\label{ssec:sims}
Our analysis is based on the suite of idealised disc galaxy simulations first presented in \citet{Ludlow2021}, which we
review below. The initial conditions\footnote{The initial conditions for all of our simulations were created using \texttt{GalIC}
\citep[see][for details]{Yurin2014}.} of our simulations comprise an initially thin, axisymmetric stellar disc of mass ${\rm M}_\star$ embedded
within a spherically-symmetric and isotropic dark matter (DM) halo. We adopt a coordinate system coincident with the galaxy centre
and align the $z$-axis with the disc's angular momentum vector, $\mathbfit{J}_\star$. In this coordinate system, the three-dimensional structure of
the disc is described by
\begin{equation}
\rho_\star(R,z)=\frac{{\rm M_\star}}{4\pi\, z_d\, R_d^2}\exp\biggl(-\frac{R}{R_d}\biggr)\,\sech^2\biggl(\frac{z}{z_d}\biggr),
\label{eq:star-density}
\end{equation}
where $z$ denotes the vertical height above the disc plane and $R=\sqrt{x^2+y^2}$ is the distance from the
$z$-axis; $z_d$ and $R_d$ are the scale height and scale length of the disc, respectively.
The DM halo is modelled as a
\cite{Hernquist1990} sphere of total mass ${\rm M_{DM}}$, which has a circular velocity profile given by
\begin{equation}
V_{\rm DM}(r)=\sqrt{\frac{{\rm G\,M_{DM}}\, r}{(r+a)^2}},
\label{eq:Vchern}
\end{equation}
where $a$ is its characteristic scale radius, $G$ is the gravitational constant, and $r=\sqrt{R^2+z^2}$ is the three-dimensional
radial coordinate. Where necessary, we distinguish the circular velocity due to DM and stars using subscripts, and use
$V_c$ to denote the total circular profile due to the halo and disc: $V_c^2(r)=V_{\rm DM}^2(r) + V_\star^2(r)$.
Equation~(\ref{eq:Vchern}) predicts a circular velocity profile similar to that of a Navarro-Frenk-White profile
\citep[][hereafter, NFW]{Navarro1996,Navarro1997} at $r\lesssim a$ \citep[e.g.][]{Springel2005b}, which allows us to characterise the halo's
structure in terms of the more familiar virial velocity,\footnote{Throughout the paper, we define the virial radius of a DM
halo as that of a sphere enclosing a mean density of $\rho_{200}\equiv200\,\rho_{\rm crit}$, where $\rho_{\rm crit}=3\,H^2/8\,\pi\,G$ is
the critical density for closure and $H$ is the Hubble--Lema\^{\i}tre constant. This
implicitly defines the halo's virial mass, ${\rm M}_{200}$, and virial circular velocity, $V_{200}=\sqrt{G\,{\rm M}_{200}/r_{200}}$.}
$V_{200}$, and concentration, $c$, of the latter. We do so by matching their inner density profiles, which requires a Hernquist
scale radius of $a=(r_{200}/c)\sqrt{2\,f(c)}$, where $f(c)=\ln(1+c)-c/(1+c)$, and a total halo mass of
${\rm M_{DM}}=(1-f_\star)\,{\rm M_{200}}$, where $f_\star$ is the stellar mass fraction.
Other structural parameters of relevance are the halo's spin parameter, $\lambda_{\rm DM} = j_{\rm DM} / (\sqrt{2} \, r_{200} V_{200})$,
and the ratio of the specific angular momentum of the disc to that of the halo, i.e. $f_j=j_\star/j_{\rm DM}$ (which is sometimes referred
to as the angular momentum retention fraction).
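The NFW-to-Hernquist matching described above is straightforward to verify numerically. The following is a minimal Python sketch, not the simulation code; it assumes $H = 100\,{\rm km\,s^{-1}\,Mpc^{-1}}$, a value consistent with the virial masses quoted later in the paper (so that $V_{200} = 10\,H\,r_{200}$):

```python
import math

G = 4.301e-6  # gravitational constant [kpc (km/s)^2 / Msun]
H = 0.1       # assumed Hubble constant [km/s/kpc], i.e. 100 km/s/Mpc

def virial_quantities(V200):
    """Virial radius [kpc] and mass [Msun] from V200 [km/s].

    From M200 = (4/3) pi r200^3 * 200 rho_crit with
    rho_crit = 3 H^2 / (8 pi G), one finds V200 = 10 H r200.
    """
    r200 = V200 / (10.0 * H)
    M200 = V200**2 * r200 / G
    return r200, M200

def hernquist_scale_radius(r200, c):
    """Hernquist scale radius matched to an NFW halo of concentration c."""
    f = math.log(1.0 + c) - c / (1.0 + c)
    return (r200 / c) * math.sqrt(2.0 * f)

def v_circ_dm(r, M_dm, a):
    """Hernquist circular velocity profile, equation (2)."""
    return math.sqrt(G * M_dm * r / (r + a)**2)

# Fiducial model: V200 = 200 km/s, c = 10, f_star = 0.01.
r200, M200 = virial_quantities(200.0)
a = hernquist_scale_radius(r200, c=10.0)
M_dm = (1.0 - 0.01) * M200
```

For the fiducial parameters this recovers $r_{200}/a \approx 5.8$, matching the value quoted below.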
Much of our analysis is based on a suite of ``fiducial'' galaxy models for which we adopt
$V_{200} = 200\, {\rm km/s}$, $c = 10$ (or equivalently, $r_{200}/a\approx 5.8$),
$\lambda_{\rm DM} = 0.03$, and $f_j=1$ (i.e. we assume that the disc and halo initially have the same specific angular momentum,
which is approximately valid for both observed and simulated disc galaxies; e.g. \citealt{Di-Teodoro2022}; \citealt{Rodriguez-Gomez2022}).
The latter determines the initial size of the disc, which for our fiducial models is $R_{d} = 0.02 \, r_{200} \approx 4.1\,{\rm kpc}$.
The initial disc scale height, which is independent of $R$, is chosen to be $z_{d} = 0.05 \, R_d$ ($\approx 0.2 \,{\rm kpc}$
for our fiducial models). All of our simulations adopt a stellar mass fraction of $f_\star=0.01$.
We note that this is lower than the inferred stellar-to-halo mass ratio of discs occupying haloes with virial velocities
of $V_{200} \approx 200\, {\rm km/s}$, which is closer to
$0.05$ \citep[e.g.][]{Posti2019, Di-Teodoro2022}. However, it ensures that our isolated, equilibrium discs are: 1) free from Toomre instabilities that may jeopardise the
collisional heating effects we wish to quantify; and 2) not massive enough to gravitationally alter the structure of the surrounding DM
halo, which would complicate the interpretation of our results (see Appendix A of \citealt{Ludlow2021} for details).
Our suite of fiducial models differ only in their (stellar and DM) mass resolution. For the DM, we adopt a range
of particle masses corresponding to integer and half-integer values of $\log m_{\textrm{DM}}$, starting from a lowest-resolution
of $m_{\rm DM}=10^8 \, {\rm M}_\odot$, and extending to $m_{\rm DM}=10^6 \, {\rm M}_\odot$ (the corresponding number of DM halo
particles span $N_{\rm DM}= 1.8 \times 10^4$ to $1.8 \times 10^6$).
The stellar particle mass is determined by the DM-to-stellar particle mass ratio, for which we adopt $\mu\equiv m_{\rm DM}/m_\star = 5$.
This is approximately the inverse of the cosmic baryon fraction \citep[i.e. $\Omega_{\rm DM}/\Omega_{\rm bar}\approx 5.36$; e.g.][]{Planck2018},
and roughly equivalent to the (initial) $\mu$ value for cosmological smoothed particle hydrodynamics simulations that adopt equal
numbers of DM and baryonic particles.
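As a consistency check, the quoted particle numbers follow directly from the halo mass and the adopted mass fractions. A short sketch, taking the fiducial virial mass ${\rm M_{200}}=1.86\times 10^{12}\,{\rm M_\odot}$ quoted in Section~\ref{ssec:visual} (variable names are illustrative):

```python
M200 = 1.86e12            # fiducial halo virial mass [Msun]
f_star, mu = 0.01, 5.0    # stellar mass fraction, m_DM / m_star

M_dm = (1.0 - f_star) * M200   # total dark matter mass
M_star = f_star * M200         # total stellar mass

counts = {}
for log_mdm in (8.0, 7.5, 7.0, 6.5, 6.0):
    m_dm = 10.0**log_mdm       # DM particle mass [Msun]
    m_star = m_dm / mu         # stellar particle mass [Msun]
    counts[log_mdm] = (M_dm / m_dm, M_star / m_star)  # (N_DM, N_star)
```

This reproduces the quoted range $N_{\rm DM}\approx 1.8\times 10^4$ (for $m_{\rm DM}=10^8\,{\rm M_\odot}$) to $1.8\times 10^6$ (for $m_{\rm DM}=10^6\,{\rm M_\odot}$).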
In addition to our fiducial runs we also analyse models with $V_{200} = 50, 100$ and $400\,\,{\rm km/s}$.
These models also span a range of DM particle
masses that differ by increments of $\Delta\log m_{\rm DM}=0.5$. Specifically, $4\leq \log (m_{\rm DM}/{\rm M_\odot})\leq 6$ for
$V_{200}=50\, {\rm km/s}$, $5\leq \log (m_{\rm DM}/{\rm M_\odot})\leq 7$ for $V_{200}=100\, {\rm km/s}$, and
$7\leq \log (m_{\rm DM}/{\rm M_\odot})\leq 9$ for $V_{200}=400\, {\rm km/s}$. These choices ensure that the
DM haloes are resolved with between $N_{\rm DM}\approx 10^4$ and $\approx 10^6$ DM particles regardless of $V_{200}$. Each of
these additional models adopts the
same values of $\lambda_{\rm DM}$, $f_\star$, $f_j$, $\mu$, and $z_d/R_d$ as our fiducial runs, which ensures that
the structural properties of the discs and haloes of all models scale proportionally.
Note also that, for all $V_{200}$, the local DM density is the same at radii $R_f$ enclosing a fraction $f$ of the disc's
stellar particles.
Because our fiducial runs adopt a stellar-to-DM particle mass ratio that differs from unity, the cumulative effects of collisions
result in a transfer of energy from the high- to the low-mass particle species (i.e. from DM to stellar particles in our case), albeit
at a rate slower than heating due to shot noise in the DM particle distribution \cite[see Appendix A of][for details]{Ludlow2021}.
This effect is known as ``mass segregation'' and has been shown to result in spurious
size growth of simulated galaxies \citep[e.g.][]{Revaz2018, Ludlow2019}. To assess the impact of mass segregation on
disc morphology, we also consider a suite of models for which $\mu=1$ and 25, although we defer a discussion of these
to Appendix \ref{sec:fitting}.
Collisional heating is dominated mainly by local scattering events, and therefore depends on the local density and characteristic velocities
of DM particles, in addition to their masses (for centrally concentrated systems such as galactic haloes).
For that reason, we also consider a subset of our $V_{200}=200\,{\rm km/s}$ models for which we adopt different halo concentration
parameters, specifically $c=7$ and $c=15$. For these we adjust $f_j$ such that the stellar mass
profiles remain unchanged. These runs, which are discussed in Appendix \ref{sec:fitting}, were carried out for
$m_{\rm DM}=10^7 \, \rm M_\odot$ and $10^8 \, \rm M_\odot$, and all adopt $\mu=5$.
All of our simulations were performed using \texttt{GADGET-2} \citep{Springel2005} for a total integration time of 9.8 Gyr.
We used a Plummer-equivalent gravitational softening length of $\epsilon_{\rm soft} = z_d$, which marginally resolves the
vertical forces across the disc.\footnote{As discussed in \citet{Ludlow2021}, the impact of spurious collisional heating on
galaxy properties is independent of gravitational softening provided $\epsilon_{\rm soft}\lesssim z_d$, but is
suppressed for larger softening values. In this paper we only consider models with $\epsilon_{\rm soft}=z_d$ as these ensure that gravitational
forces are marginally resolved across the galaxy disc and that our results are not strongly influenced by
softening, but we note that these $\epsilon_{\rm soft}$ values are likely smaller than in
the majority of cosmological hydrodynamical simulations.} For most simulations, we output 100 snapshots equally-spaced
in time $t$; for those corresponding to
haloes with $N_{\rm DM} < 1.2\times 10^5$ particles we increase the output cadence by a factor of 4
(i.e. we output 400 snapshots in total).\footnote{As mentioned below, many of the graphical results presented in this paper
were smoothed over a time interval of $\Delta t=2\,{\rm Gyr}$ for aesthetic purposes. The increased output frequency adopted for
our low-resolution runs improves the robustness of the smoothing operation without affecting our results.}
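At the default cadence of 100 snapshots over 9.8 Gyr, the $\Delta t=2\,{\rm Gyr}$ smoothing window mentioned above corresponds to roughly 21 snapshots. A hypothetical sketch of such a smoothing step using SciPy's Savitzky-Golay filter (the paper specifies this filter in Section~\ref{sec:heating}; the synthetic time series below is illustrative, not simulation output):

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0.0, 9.8, 100)      # 100 snapshots over 9.8 Gyr
dt = t[1] - t[0]                    # ~0.1 Gyr between outputs
window = int(round(2.0 / dt)) | 1   # ~2 Gyr window, forced to be odd

# Mock noisy heating trend: linearly growing dispersion plus noise.
rng = np.random.default_rng(1)
sigma_phi = 20.0 + 3.0 * t + rng.normal(0.0, 2.0, t.size)
smooth = savgol_filter(sigma_phi, window_length=window, polyorder=3)
```

The smoothed curve tracks the underlying trend while suppressing snapshot-to-snapshot noise, without shifting features in time the way a one-sided filter would.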
We note that the initial structure of the disc and halo in our poorly-resolved models
is subject to Poisson noise due to the coarse sampling of their respective distribution functions. For that reason,
we carried out 5 (10) simulations based on independent realisations of the initial conditions for models with
$N_{\rm DM} < 1.2\times 10^5$ ($<4\times 10^4$) and present their results as averages over all realisations.
A summary of all models studied in the paper is provided in Table~\ref{table:simulation-list}.
\begin{table*}
\caption{Properties of the disc galaxies analysed in this paper. The first five columns list the properties of the
disc or halo that we vary: $V_{200}$ is the virial circular velocity of a
Navarro-Frenk-White profile with the same inner density structure as our adopted Hernquist dark matter halo;
$c=r_{200}/r_{-2}$ is the NFW halo's concentration; $f_j=j_\star/j_{\rm DM}$ is the disc's angular momentum
retention fraction (i.e. the ratio of the disc to halo specific angular momentum); $R_d$ is the scale length of
the disc, expressed here in units of NFW halo scale radius, $r_{-2}$; and $z_d$ is the disc scale height, expressed in units of
$R_d$. For each $V_{200}$ we simulate a range of models with different numbers of DM particles, $N_{\rm DM}$ (corresponding to
different DM particle masses, $m_{\rm DM}$), some of which also vary the DM-to-stellar particle mass ratio, $\mu=m_{\rm DM}/m_\star$;
The relevant values are listed under $N_{\rm DM}$, $m_{\rm DM}$ and $\mu$, respectively. These parameters, together with
the stellar mass fraction, which is $f_\star=0.01$ for all runs, specify the stellar particle masses.
For our low-resolution models, Poisson noise due to coarse sampling of the disc and halo distribution functions
leads to slight differences in their initial structure and kinematics. In those cases, we carried out multiple runs corresponding to different
realisations of the initial conditions and present results after averaging over all of them. The number of runs
carried out for each $N_{\rm DM}$ is listed in the right-most column.}
\centering
\begin{tabular}{r r r r r c c c c}
\hline \hline
$V_{200}$ [km/s] &
$c$ &
$f_j$ &
$R_d / r_{-2}$ &
$z_d / R_d$ &
$N_{\rm DM} / 10^5$ &
$\log (m_{\rm DM}/$M$_\odot)$ &
$\mu = m_{\rm DM} / m_\star$ &
Number of runs \\
\hline
200 & 10 & 1.0 & 0.20 & 0.05 & 0.18, 0.58, 1.84, 5.82, 18.4 & 8.0, 7.5, 7.0, 6.5, 6.0 & 5 & 10, 5, 1, 1, 1 \\
200 & 7 & 0.81 & 0.14 & 0.05 & 0.18, 1.84 & 8.0, 7.0 & 5 & 10, 1 \\
200 & 15 & 1.03 & 0.31 & 0.05 & 0.18, 1.84 & 8.0, 7.0 & 5 & 10, 1 \\
200 & 10 & 1.0 & 0.20 & 0.10 & 0.18, 1.84 & 8.0, 7.0 & 5 & 10, 1 \\
200 & 10 & 1.0 & 0.20 & 0.20 & 0.18, 1.84 & 8.0, 7.0 & 5 & 10, 1 \\
200 & 10 & 1.0 & 0.20 & 0.05 & 0.18, 0.58, 1.84, 5.82, 18.4 & 8.0, 7.5, 7.0, 6.5, 6.0 & 1, 25 & 10, 5, 1, 1, 1 \\
50 & 10 & 1.0 & 0.20 & 0.05 & 0.29, 0.91, 2.88, 9.09, 28.8 & 6.0, 5.5, 5.0, 4.5, 4.0 & 5 & 10, 5, 1, 1, 1 \\
100 & 10 & 1.0 & 0.20 & 0.05 & 0.23, 0.73, 2.3, 7.27, 23.0 & 7.0, 6.5, 6.0, 5.5, 5.0 & 5 & 10, 5, 1, 1, 1 \\
400 & 10 & 1.0 & 0.20 & 0.05 & 0.15, 0.47, 1.47, 4.66, 14.7 & 9.0, 8.5, 8.0, 7.5, 7.0 & 5 & 10, 5, 1, 1, 1 \\
\hline
\end{tabular}
\label{table:simulation-list}
\end{table*}
\subsection{Analysis}
\label{ssec:analysis}
We use a number of diagnostics to quantify the morphology of galactic discs, some based on the spatial distribution of stellar
particles and others based on their kinematics. We review them below.
We characterise disc size using the cylindrical half-mass radius, $R_{1/2}$, enclosing half of all stellar particles; the analogous
vertical half-mass height is denoted $z_{1/2}$. These are initially related to the scale length and height of the disc via
$R_{1/2}= 1.68 \, R_d$ and $z_{1/2}=0.55\, z_d$, respectively. Many of the results presented in \Cref{sec:heating} and \Cref{sec:model}
are based on measurements made within a cylindrical shell centred on $R_{1/2}$ of width $\Delta\log R=0.2$,
which is sufficiently wide to enclose $\gtrsim 100$ star particles for all of our models.
However, we have verified that similar results are obtained at other radii $R_f$ enclosing different fractions $f$ of the
disc's stellar mass.
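The conversion factors $R_{1/2}=1.68\,R_d$ and $z_{1/2}=0.55\,z_d$ follow from equation~(\ref{eq:star-density}): the radial relation solves $1-(1+x)e^{-x}=1/2$ for $x=R_{1/2}/R_d$ (the standard enclosed-mass fraction of an exponential disc), and the vertical one uses $\int_0^z\sech^2(z'/z_d)\,{\rm d}z' = z_d\tanh(z/z_d)$. A quick numerical check in Python:

```python
import math

# Radial half-mass radius of an exponential disc:
# enclosed fraction is 1 - (1 + x) exp(-x) with x = R / R_d.
# Solve for the half-mass value of x by bisection.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if 1.0 - (1.0 + mid) * math.exp(-mid) < 0.5:
        lo = mid   # enclosed fraction too small: root lies above
    else:
        hi = mid
x_half = 0.5 * (lo + hi)     # R_half / R_d, approximately 1.68

# Vertical half-mass height of a sech^2 profile:
# tanh(z / z_d) = 1/2, so z_half / z_d = artanh(1/2).
z_half = math.atanh(0.5)     # approximately 0.55
```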
Stellar particle velocities are also evaluated in cylindrical coordinates, and for our analysis we measure their
dispersions in the vertical ($\sigma_z$; i.e. perpendicular to the disc plane), the radial ($\sigma_R$;
i.e. perpendicular to the $z$-axis) and the azimuthal directions ($\sigma_\phi$), although our analysis principally focuses
on the latter. For each velocity component $i$,
the velocity variance is defined as $\sigma_i^2=\sum_j\, m_j(v_{i,j}-\overline{v}_i)^2 / \sum_j m_j$, where $\overline{v}_i$ is the
mass-weighted mean velocity in the $i$ direction (i.e. $\overline{v}_i=\sum_j m_j v_{i,j} / \sum_j m_j$), $m_j$ is the mass of
the $j^{\rm th}$ stellar particle, and the sum extends over all particles $j$ that occupy a given cylindrical shell. We
note that our disc galaxies have $\overline{v}_z\approx \overline{v}_R\approx 0$ at all times, and $\overline{v}_\phi\approx V_c$ initially.
We quantify galaxy morphology using a few diagnostics. The first, $\kappa_{\rm rot}$,
is the fraction of the disc's kinetic energy contained in rotational motion
\citep[e.g.][]{Sales2010}, defined by
\begin{equation}
\kappa_{\rm rot}=\frac{\sum_j m_j v_{\phi,j}^2}{\sum_j m_j v_j^2},
\label{eq:kappa-data}
\end{equation}
where $v_j$ is the magnitude of the $j^{\rm th}$ particle's velocity, and the sum extends over all particles
within the annulus. We also consider the stellar spin parameter, $\lambda_r$ \citep[e.g.][]{Emsellem2007, Naab2014}, which is defined as
\begin{equation}
\lambda_r=\frac{\overline{v}_\phi}{\sqrt{\overline{v}_\phi^2+\sigma^2_{1\rm D}}},
\label{eq:lambda-data}
\end{equation}
where $\sigma^2_{1\rm D}\equiv (\sigma_z^2+\sigma_R^2+\sigma_\phi^2) / 3$ is the one dimensional stellar velocity
dispersion (squared) at $R$.
Finally, we measure the ratio of rotation-to-dispersion velocities, i.e. $\overline{v}_\phi/\sigma_{1\textrm D}$,
and the circularity of stellar particle orbits, $\varepsilon_{\rm circ}\equiv j_z/j_c(E)$ \citep{Abadi2003}. In the latter, $j_z$ is the
$z$-component of the particle's angular momentum and $j_c(E)$ is the angular momentum of a circular orbit in the plane of the disc
with the same total energy. The circularity parameter can be used to calculate the disc-to-total ($D/T$) mass ratio, which is commonly
defined as the mass fraction of orbits with $\varepsilon_{\rm circ} \geq 0.7$ \citep{Sales2010, Aumer2013, Grand2017, Joshi2020}, i.e.
$D/T=M_\star^{-1}\sum_j^{\varepsilon_{\rm circ}\geq 0.7} m_j$ (we note, however, that this threshold will also include orbits with
grossly non-circular motions, as emphasised by \citealt{Peebles2020}). Similarly, the spheroid-to-total
ratio, $S/T$, is defined as twice the mass fraction of counter-rotating orbits, i.e. $S/T=(2/M_\star)\sum_j^{v_\phi<0} m_j$.
In general, these definitions imply that $D/T + S/T \neq 1$. We calculate $S/T$ and $D/T$ using only particles within a given
cylindrical shell.
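The kinematic diagnostics defined above can be sketched compactly from particle arrays. A minimal NumPy implementation (function and array names are illustrative; the circularity-based $D/T$ is omitted because it requires the gravitational potential), applied to a mock cold rotating shell and a mock isotropic, dispersion-supported shell:

```python
import numpy as np

def disc_diagnostics(m, v_R, v_phi, v_z):
    """Return kappa_rot, lambda_r, v/sigma and S/T for one shell."""
    w = m / m.sum()                    # mass weights
    v2 = v_R**2 + v_phi**2 + v_z**2
    kappa_rot = (w * v_phi**2).sum() / (w * v2).sum()   # eq. (3)

    vbar_phi = (w * v_phi).sum()
    # One-dimensional velocity dispersion (squared), averaged
    # over the three cylindrical components.
    sig2 = sum((w * (v - (w * v).sum())**2).sum()
               for v in (v_R, v_phi, v_z)) / 3.0
    lam = vbar_phi / np.sqrt(vbar_phi**2 + sig2)        # eq. (4)
    v_over_sigma = vbar_phi / np.sqrt(sig2)
    S_T = 2.0 * w[v_phi < 0.0].sum()   # spheroid-to-total ratio
    return kappa_rot, lam, v_over_sigma, S_T

rng = np.random.default_rng(0)
n = 100_000
m = np.ones(n)
# Cold disc: v_phi ~ 200 km/s with 5 km/s dispersion in each component.
cold = disc_diagnostics(m, rng.normal(0, 5, n),
                        200.0 + rng.normal(0, 5, n), rng.normal(0, 5, n))
# Hot spheroid: isotropic 100 km/s dispersion, no net rotation.
hot = disc_diagnostics(m, rng.normal(0, 100, n),
                       rng.normal(0, 100, n), rng.normal(0, 100, n))
```

As expected, the cold shell yields $\kappa_{\rm rot}\approx 1$ and $S/T\approx 0$, while the isotropic non-rotating shell yields $\kappa_{\rm rot}\approx 1/3$, $\lambda_r\approx 0$ and $S/T\approx 1$.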
In addition to these dynamical measures of galaxy morphology, we also consider several geometric quantities:
1) the disc aspect ratio $z_{1/2}/R_{1/2}$ (note that both quantities are calculated using {\em all} stellar particles); and
2) the ratio $c/a$ of the minor-to-major axis lengths of the galaxy's moment of inertia tensor. In practice, we follow \citet{Thob2019}
and use an iterative scheme to obtain the principal axis lengths of the reduced moment of inertia tensor, which is appropriate for
highly flattened geometries. Note that $c/a$ is calculated using all stellar particles.
\section{The effects of collisional heating on the properties of simulated galactic discs}
\label{sec:heating}
\subsection{Visual appearance}
\label{ssec:visual}
Fig.~\ref{fig:projections-low} shows edge-on projections of five
galaxy models (corresponding to $V_{200}=100\, {\rm km/s}$) that are
identical in all respects other than their mass resolution. Stellar and DM particles are represented using
white and blue points, respectively, and have been smoothed for visual presentation using Py-SPHViewer
\citep{Benitez-Llambay2015}. Each disc has an initial half-mass height of $z_{1/2}=57\,{\rm pc}$,
and an initial aspect ratio $z_{1/2}/R_{1/2}=0.016$.
From left-to-right, the DM particle mass increases from
$m_{\rm DM}=10^5\,{\rm M_\odot}$ to $m_{\rm DM}=10^7\,{\rm M_\odot}$ in equally-spaced steps of $\Delta \log m_{\rm DM}=0.5$
(corresponding to haloes resolved with between $N_{\rm DM}=2.3\times 10^6$ and $2.3\times 10^4$ DM particles). From top-to-bottom,
the different rows correspond to different output times, from $t=0$ (i.e. the initial conditions of the simulations)
to $t=1, \, 2, \, 4,$ and $8\,{\rm Gyr}$, respectively. Although these galaxies were constructed to be
in stable equilibrium, they clearly follow different evolutionary paths depending on $m_{\rm DM}$. For
example, for $m_{\rm DM}=10^5\,{\rm M_\odot}$ ($N_{\rm DM}=2.3\times 10^6$; i.e. the best-resolved system, which is plotted in
the left-most panels of Fig.~\ref{fig:projections-low}), the galactic disc remains thin at all times. Its vertical half-mass
height, for example, only increased by a factor of $\approx 2$ over $8\,{\rm Gyr}$ (at which point $z_{1/2}/R_{1/2}\approx 0.037$).
Contrast this with the {\it least}-resolved
galaxy, which is plotted in the right-most panel and has $m_{\rm DM}=10^7\,{\rm M_\odot}$ ($N_{\rm DM}=2.3\times 10^4$). This disc
galaxy is virtually unrecognisable after only $t\approx 2\,{\rm Gyr}$, by which time its
half-mass height has increased by a factor of $\approx 13$ relative to the initial value (and $z_{1/2}/R_{1/2}\approx 0.19$).
But its evolution does not end there:
after $t=8\,{\rm Gyr}$, $z_{1/2}\approx 2\,{\rm kpc}$, a factor of $\approx 33$ larger than at $t=0$. By comparing the
various models at fixed $t$ (i.e. along rows of Fig.~\ref{fig:projections-low}), it is clear that spurious
collisional heating, as observed at fixed times, increasingly affects galaxies in less-resolved haloes.
Figs.~\ref{fig:projections-med} and \ref{fig:projections-hi} show similar results, but for haloes of virial mass
${\rm M}_{200}={\rm 1.86\times 10^{12}\,M_\odot}$ ($V_{200}=200\,{\rm km/s}$; which is of order the mass of the Milky Way) and
${\rm M}_{200}=1.49\times 10^{13}\,{\rm M_\odot}$ $(V_{200}=400\,{\rm km/s}$), respectively. As in Fig.~\ref{fig:projections-low},
the DM particle masses increase from left-to-right, but span a different range from the lowest- to the
highest-resolution cases depending on ${\rm M_{200}}$. As mentioned in \Cref{ssec:sims}, this ensures that the various models plotted
in corresponding columns of Figs.~\ref{fig:projections-low} to \ref{fig:projections-hi} are resolved with similar
{\it numbers} of DM particles, rather than with similar DM particle masses. The visual effects of collisional heating
are nonetheless analogous in all three figures, and clearly show that it is the {\em number} of particles per halo (rather than the particle
masses) that dictates whether or not collisional heating will be an important driver of
morphological evolution. Consider, for example, runs carried out with $m_{\rm DM}=10^7\,{\rm M_\odot}$ (shown in the right-most,
middle and left-most columns of Figs.~\ref{fig:projections-low}, \ref{fig:projections-med}, and \ref{fig:projections-hi},
respectively). For $V_{200}=100\, {\rm km/s}$ (Fig.~\ref{fig:projections-low}) this particle mass corresponds to the lowest-resolution
simulation (i.e. $N_{\rm DM}=2.3\times 10^4$); but for $V_{200}=400\, {\rm km/s}$ (Fig.~\ref{fig:projections-hi}) it
is the highest-resolution run ($N_{\rm DM}=1.5\times 10^6$). These galaxies evolve very differently.
Both are initially thin, and have half-mass heights of $z_{1/2}=0.016 \, R_{1/2}$ (corresponding to $57\,{\rm pc}$ and
$0.226 \,{\rm kpc}$ for $V_{200}=100\, {\rm km/s}$ and $V_{200}=400\, {\rm km/s}$, respectively). But after $8\,{\rm Gyr}$, $z_{1/2}$
has increased to $z_{1/2} = 0.40 \, R_{1/2}$ ($1.96\,{\rm kpc}$) and $0.047 \, R_{1/2}$ ($0.65\,{\rm kpc}$)
for $V_{200}=100\,{\rm km/s}$ and $400\,{\rm km/s}$ respectively. This suggests that galaxies of similar ages in
cosmological simulations that inhabit haloes spanning a wide range of masses will be subject to different levels of
spurious heating, a result that is expected but worth re-emphasising.
\subsection{Azimuthal velocity profiles}
\label{ssec:azimuthal_vel}
\citet{Ludlow2021} showed that spurious collisional heating increases the vertical and radial velocity dispersion of stellar disc
particles, as well as the thickness of discs. In Fig.~\ref{fig:kinematic-profiles} we show that the same is true for the
azimuthal velocities of star particles, where in the left-hand panels we plot radial profiles of $\sigma_\phi$ (upper-left) and
$\overline{v}_\phi$ (lower-left) after $t=5\,{\rm Gyr}$, and in the right-hand panels we plot the evolution of $\sigma_\phi$ and
$\overline{v}_\phi$ measured at the {\em initial} value of $R_{1/2}$ (circles in the panels on the left mark the instantaneous values of $R_{1/2}$ at $t=5\,{\rm Gyr}$). Velocities and radii have been normalised by $V_{200}$ and $r_{200}$, respectively,
and results are shown for our fiducial models (i.e. $V_{200}=200\,{\rm km/s}$, $\mu=5$,
and $c=10$) using different coloured lines for different mass resolutions. For comparison, we also plot the
one-dimensional velocity dispersion profile of the DM halo (solid black line, upper-left panel) and the {\em total} circular velocity profile
(solid black line, lower-left panel) of the halo and disc.
The thick, faint lines show the results from our simulations; the thin, dark lines of corresponding colour have been smoothed
for aesthetic purposes.\footnote{In Figs.~\ref{fig:kinematic-profiles}, \ref{fig:size-angular-momentum}, \ref{fig:shape-global},
\ref{fig:predicted-kinematics} and \ref{fig:unpredicted-kinematics}, we use a Savitzky-Golay filter with a width of $2\,{\rm Gyr}$
to smooth measurements obtained directly from the simulation outputs, and plot these as dark lines.}
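The smoothing described in the footnote above can be reproduced with SciPy's Savitzky--Golay filter. The sketch below is illustrative only: the output cadence ($0.1\,{\rm Gyr}$) and polynomial order are assumed values, not ones quoted in the text; a $2\,{\rm Gyr}$ window then corresponds to 21 samples.

```python
import numpy as np
from scipy.signal import savgol_filter

# Illustrative sketch: smooth a noisy time series with a Savitzky-Golay
# filter, as done for the dark curves in the figures.  The cadence
# (0.1 Gyr) and polyorder (3) are assumptions; a 2 Gyr window is then
# 21 samples (window_length must be odd).
t = np.arange(0.0, 10.0, 0.1)                # time in Gyr
rng = np.random.default_rng(0)
signal = 1.0 - np.exp(-t / 3.0)              # smooth underlying trend
noisy = signal + 0.02 * rng.standard_normal(t.size)

smoothed = savgol_filter(noisy, window_length=21, polyorder=3)

# The filter suppresses output-to-output noise while preserving the trend.
residual_rms = np.sqrt(np.mean((smoothed - signal) ** 2))
```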
These results are reminiscent of those obtained by \citet{Ludlow2021}, and show that the azimuthal velocity dispersion of stellar
particles increases with time as a result of spurious collisional heating. The severity of the effect increases with increasing
DM particle mass (at fixed halo mass) and with decreasing radius, the latter due to the increased density of DM ``perturbers''
closer to the halo centre. Note too that the increase in velocity dispersion is accompanied by a decrease in the mean azimuthal
velocity of stellar particles. This is the well-known phenomenon of increasing ``asymmetric drift'' with increasing velocity dispersion,
as observed for stars in the solar neighbourhood. In the Milky Way, disc stars are gravitationally scattered by past encounters with
satellite galaxies, molecular clouds, spiral arms and other non-axisymmetric features, whereas in our simulations, they are scattered
primarily by DM halo particles. In both cases, the scattering randomly perturbs the angular momentum vectors of the disc stars, thus
effectively converting coherent rotation into random motion.
Clearly these results have implications for the spurious evolution of a number of other dynamical properties of galactic
discs, such as their sizes, rotation velocities, angular momenta, and shapes. We discuss each of these in the sections that follow.
First however, we note that the dynamical effects of collisional heating can be described reasonably well by a simple empirical model,
shown as dashed coloured lines in each panel of Fig.~\ref{fig:kinematic-profiles}. This model is a slightly modified and extended
version of the one first presented by \citet{Ludlow2021}; it is described below in \Cref{ssec:model} and in more detail in \Cref{sec:fitting}.
\subsection{Galaxy scaling relations}
\label{ssec:scaling-relations}
The results above imply that collisional heating drives spurious evolution in the structure and dynamics of
disc galaxies. We investigate this in Fig.~\ref{fig:scaling-relations}, where we plot a subset of our model galaxies on several standard
scaling laws. Different columns separate runs carried out using different DM particle masses:
from left to right, $m_{\rm DM}=10^6\,{\rm M_\odot}$, $m_{\rm DM}=10^7\,{\rm M_\odot}$, $m_{\rm DM}=10^8\,{\rm M_\odot}$,
respectively. The different sets of points in each panel distinguish the different $V_{200}$ values
(at fixed $m_{\rm DM}$, these correspond to different $N_{\rm DM}$; we indicate using vertical
dotted lines the values of ${\rm M_\star}$ corresponding to a few characteristic values of $N_{\rm DM}$, assuming
$\mu=5$ and $f_\star=0.01$).
We use points of different colour for different output times (as labelled in the lower-left panel).
From top to bottom, different rows show the size-mass relation, the relation between
rotational velocity and stellar mass (i.e., the Tully-Fisher \citeyear{Tully1977} relation),
and the specific angular momentum-stellar mass relation
(i.e., the \citealt{Fall1983} relation), respectively.
The various grey dashed lines in each panel show the corresponding relations
obtained from the initial conditions of our simulations: from top to bottom, $R_{1/2}\varpropto {\rm M}_\star^{1/3}$,
$v_{\rm rot}\varpropto {\rm M}_\star^{1/3}$, and $j_\star\varpropto {\rm M}_\star^{2/3}$, respectively. Each panel is accompanied
beneath by a smaller ``residual'' panel, which shows the measured departure of our simulations from these initial relations.
The upper panels of Fig.~\ref{fig:scaling-relations} demonstrate that poorly-resolved discs experience a significant
increase in size due to spurious heating, a result previously discussed by \citet[][see also \citealt{Revaz2018, Ludlow2020}]{Ludlow2019}.
The effect is most severe for models with $N_{\rm DM}\lesssim {\rm a\, few}\times 10^5$: in our lowest-resolution
runs for example, corresponding to $N_{\rm DM}\approx 2\times 10^4$, $R_{1/2}$ increased by about 60 per cent after $9.8 \,{\rm Gyr}$;
for $N_{\rm DM}\approx 5 \times 10^4$ (not shown in Fig.~\ref{fig:scaling-relations}), $R_{1/2}$ increased by only about 10 per cent over the
same time interval. For $N_{\rm DM} \gtrsim 10^6$, however, the half-mass size evolution is negligible,
exhibiting less than 2 per cent growth over the course of the simulation. Similar results are obtained for $R_{1/4}$ and $R_{3/4}$, enclosing
one quarter and three quarters of the galaxy's stellar mass, respectively.
The middle panels of Fig.~\ref{fig:scaling-relations} plot the relation between rotational velocity, $v_{\rm rot}$,
and stellar mass. The two sets of points in each panel correspond to two complementary measurements of
$v_{\rm rot}$: in one case (circles) $v_{\rm rot}=V_c(R_{1/2})$ is taken to be the total circular velocity at $R_{1/2}$; in
the other case (squares), $v_{\rm rot}=\overline{v}_\phi(R_{1/2, 0})$ is the mean azimuthal velocity of stellar particles
measured within a cylindrical aperture centred on the {\em initial} value of $R_{1/2}$. We note that a number of observational
studies define galaxy rotation velocities by averaging the outer-most points of their rotation curves, typically between one and
${\rm a \, few}\times R_{1/2}$, which minimises the scatter in the Tully-Fisher relation compared to other velocity measurements
\citep[e.g.][]{Verheijen2001, Lelli2019}. Although this differs somewhat from our estimates of $v_{\rm rot}$, we note that good
agreement between the Tully-Fisher relations of observed and simulated galaxies can be obtained using $v_{\rm rot}=V_c(R_{1/2})$ for the latter
\citep[e.g.][]{Ferrero2017}.
Clearly, $v_{\rm rot}$ is affected by spurious collisional heating, but the magnitude of the effect depends strongly on
how it is calculated: it is much less severe when estimated from the total circular velocity profile than from the mean
azimuthal velocity of stellar particles. This is because
our galaxy discs have relatively low mass fractions ($f_\star = 0.01$) and therefore do not contribute much to $V_{\rm c}$, but
also due to the fact that $R_{1/2}$ probes a radial range over which $V_{\rm DM}$ rises relatively slowly; the small increase
in $v_{\rm rot}= V_{\rm c}(R_{1/2})$ is therefore primarily due to the spurious growth of $R_{1/2}$ probing slightly higher DM
circular velocities.
If $v_{\rm rot}$ is instead estimated using the mean azimuthal velocities of star particles, then its evolution can be significant.
Models with $N_{\rm DM}={\rm a\, few}\,\times 10^4$, for example, see $v_{\rm rot}$ reduced by about a factor of 4 relative to
its initial value over the course of the simulations. Even models with $N_{\rm DM}={\rm a\, few\,\times 10^5}$ are not immune:
for those, $v_{\rm rot}$ drops by about 26 per cent over ${\rm 9.8\,Gyr}$. This is in fact a natural consequence of the collisional
heating of disc galaxies, which harbour a large fraction of their initial kinetic energy
in ordered rotational motion. In addition to {\em heating} stellar particle orbits -- which occurs primarily as a result of
energy equipartition and mass segregation -- collisions between stellar and DM particles
perturb their motions, thereby converting ordered rotation into velocity dispersion.
It is therefore not surprising that simulated stellar discs tend to lose angular momentum over time as well.
The effect, shown in the bottom panels of Fig.~\ref{fig:scaling-relations}, is however most problematic for runs
with $N_{\rm DM}\lesssim {\rm a\, few}\times 10^5$, for which up to 30 per cent of the disc's initial angular momentum can be
lost over $9.8\,{\rm Gyr}$. We note, however, that
in none of our simulations is the loss of angular momentum sufficient to migrate the points from the relation occupied by ``discs''
to the one occupied by ``bulges'' in both observations and cosmological simulations (the diagonal dotted lines in the lower
panels show the relations for observed discs and bulges obtained by \citealt{Fall2018}).
Fig.~\ref{fig:scaling-relations} highlights one consequence of collisional heating, already apparent in
Figs.~\ref{fig:projections-low}, ~\ref{fig:projections-med}, and \ref{fig:projections-hi}: it
drives morphological changes in disc galaxies by converting thin, rotationally supported discs into dispersion-dominated
spheroids. This is emphasised in the various panels of Fig.~\ref{fig:scaling-relations}, where we plot disc-dominated systems using filled symbols
and dispersion-dominated ones using open symbols (the latter are defined as galaxies whose spheroid-to-total ratio -- i.e. twice the
counter-rotating mass fraction -- satisfies $S/T \geq 0.5$; otherwise they are disc-dominated).
In Fig.~\ref{fig:size-angular-momentum} we plot the time dependence of each of these quantities obtained for our fiducial
galaxy models (i.e. $V_{200}=200\,{\rm km/s}$ and $\mu=5$). The upper panel plots the evolution of the stellar
half-mass radius (normalised by $r_{200}$); the middle panel plots the total circular velocity at $R_{1/2}$ (normalised
by $V_{200}$; we considered the evolution of $\overline{v}_\phi$ separately in Fig.~\ref{fig:kinematic-profiles});
and the bottom panel plots the evolution of the specific angular momentum of all stellar particles (normalised
by $\sqrt{2}\,V_{200}\,r_{200}$).
Although the impact of collisional heating on galaxy scaling relations is often small (at least for galaxies
occupying well-resolved DM haloes), it is systematic, and becomes increasingly problematic
with time. It also disproportionately affects poorly-resolved systems, for which the effect
clearly cannot be ignored. For cosmological simulations that adopt a uniform mass resolution for DM particles, there
will always be a halo mass scale below which these effects are potentially important. We will address this in a companion
paper, but for the remainder of this paper we focus our attention on how spurious collisional heating alters the kinematics and
morphologies of isolated, idealised disc galaxies.
\subsection{The spurious evolution of disc galaxy morphology}
\label{ssec:measurements}
As alluded to above (Fig.~\ref{fig:scaling-relations}), spurious collisional heating can alter the morphologies of simulated disc galaxies,
transforming them into spheroids.
In this section, we quantify these effects using kinematic and structural estimates of galaxy morphology.
For simplicity, we only present results for our fiducial runs (i.e. $V_{200}=200\, {\rm km/s}$, $\mu=5$),
but have verified that our conclusions apply to all of our simulated discs.
\subsubsection{The evolving shapes of galactic discs}
\label{ssec:shapes}
Fig.~\ref{fig:shape-global} plots the time evolution of two complementary structural estimates of galaxy shape. In the
upper panel, we plot the aspect ratio of the half-mass height to the half-mass length, i.e. $z_{1/2}/R_{1/2}$.
As with previous plots, runs carried out with different $m_{\rm DM}$ are plotted using different coloured lines.
Initially, all models have $z_{1/2}/R_{1/2}\approx 0.016$, as expected for thin discs.
However, all discs experience an increase in $z_{1/2}$ relative to $R_{1/2}$, with the rate of increase proportional to $m_{\rm DM}$.
For our lowest-resolution runs the aspect ratio initially increases very rapidly, but later slows as the vertical velocity dispersion
of stellar particles approaches the maximum value dictated by the local one-dimensional velocity dispersion of the DM halo (the latter imposes
a maximum vertical scale height through the hydrostatic equilibrium equation; see \citealt{Ludlow2021} for details).
This suggests that the vertical structure of discs is much more vulnerable to collisional heating than their radial structure, or surface density profile.
To quote a few numbers, our lowest-resolution model (blue lines) has $z_{1/2}/R_{1/2} = 0.16, 0.24, 0.32, 0.43$ after $t=1, 2, 4$ and $9.8\,{\rm Gyr}$
respectively; i.e. the final value of $z_{1/2}/R_{1/2}$ is roughly a factor of 27 larger than its initial value.
Increasing the DM mass resolution reduces the effect, but does not eliminate it. For example, for $m_{\rm DM}=10^7\,{\rm M_\odot}$ ($N_{\rm DM} = 1.8 \times 10^5$),
we find $z_{1/2}/R_{1/2}\approx 0.2$ by the end of the simulation, about an order of magnitude larger than at $t=0$. Even for our highest-resolution galaxy
($N_{\rm DM} = 1.8 \times 10^6$) the disc aspect ratio grows noticeably with time, increasing by a factor of $\approx 3$ over 9.8 Gyr, although its disc-like
properties are preserved overall.
The lower panels plot the minor-to-major axis ratios, $c/a$, for the same set of models. For our highest-resolution run, which retains a disc-like appearance
after $t=9.8\,{\rm Gyr}$ (as easily seen in the left-most column of Fig.~\ref{fig:projections-med}), $c/a$ increases by a factor of roughly 2.5 over the same
time interval, resulting in a final axis ratio of $c/a\approx 0.09$. This is a considerable change despite the galaxy's halo being resolved with
$N_{\rm DM}\gtrsim 10^6$ particles. As the mass resolution decreases the discs become substantially more spheroidal. For example, for
$m_{\rm DM}=10^{7}\,{\rm M_\odot}$ ($N_{\rm DM}=1.8\times 10^5$), the final axis ratio is $c/a\approx 0.3$, and for our lowest-resolution run
($N_{\rm DM}=1.8\times 10^4$), $c/a\approx 0.7$. This galaxy resembles a spheroid more than a disc.
\subsubsection{Stellar kinematic indicators of galaxy morphology}
\label{ssec:kinematic-morphology}
In Fig.~\ref{fig:predicted-kinematics} we plot the evolution of several kinematics-based indicators of galaxy morphology, again
limiting our results to our fiducial runs. From top to bottom, different panels correspond to $\overline{v}_\phi/\sigma_{\rm 1D}$
(i.e. the ratio of rotation-to-dispersion velocities), $\kappa_{\rm rot}$ (i.e. equation~\ref{eq:kappa-data}, which measures the fraction of stellar kinetic energy
in rotation), and $\lambda_r$ (equation~\ref{eq:lambda-data}, the disc spin parameter). In order to more easily compare these results to a simple analytic model
(described later in \Cref{ssec:model}), all quantities were measured in a cylindrical aperture of width
$\Delta\log R=0.2$ centred on the initial value of $R_{1/2}$, although other radii $R_f$ could also have been used. We note, however, that
qualitatively similar results are obtained using all stellar particles rather than those occupying a particular radial shell. As in the previous two figures,
different coloured lines distinguish runs with different mass resolution.
In line with previous results, the morphologies of our disc galaxies -- as quantified by these kinematic
quantities -- move progressively from disc-like to spheroid-like with time. And as before, the rate of morphological transformation
is strongly correlated with mass resolution. All of our models have a relatively high initial ratio of rotation to dispersion velocities, i.e.
$\overline{v}_\phi/\sigma_{\rm 1D}\approx 17$, but collisional
heating converts ordered rotational motion to velocity dispersion causing $\overline{v}_\phi/\sigma_{\rm 1D}$ to decrease steadily with
time, an effect that is weak but noticeable in our highest-resolution runs and substantial in our lowest-resolution runs.
For example, in order of decreasing mass resolution and after approximately 8 Gyr of evolution,
our simulated discs have $\overline{v}_\phi/\sigma_{\rm 1D}\approx 8.8$, 5.4, 2.8, 1.3 and 0.4, respectively.
Our poorest-resolved galaxy becomes dispersion dominated (i.e. has $\overline{v}_\phi/\sigma_{\rm 1D}\lesssim 1$) after only $\approx 3.3\,{\rm Gyr}$.
Even in our intermediate-resolution model (green lines in Fig.~\ref{fig:predicted-kinematics}),
in which the galaxy's DM halo is resolved with roughly $2\times 10^5$ DM particles, $\overline{v}_\phi/\sigma_{\rm 1D}$ drops from $\approx 17$
at $t=0$ to $\approx 2.5$ after $t=9.8\,{\rm Gyr}$. This suggests that resolving thin, rotationally supported stellar discs requires their
DM haloes to be resolved with at least $10^6$ particles, an estimate that we verify quantitatively in \Cref{ssec:cosmosims}.
The results plotted in the middle and lower panels, for $\kappa_{\rm rot}$ and $\lambda_r$, respectively, tell a similar story: Collisional
heating drives galactic discs from kinematically cold structures, dominated by ordered rotational motion, to warm
(or in extreme cases {\em hot}) systems supported substantially by the random motions of their stellar particles.
Our lowest-resolution model, for example, falls below the $\kappa_{\rm rot}\gtrsim 0.5$ \citep[e.g.][]{Sales2012} threshold often used
to classify discs after only $t=4.1\,{\rm Gyr}$; and our second-lowest-resolution model is intermediate between disc- and bulge-dominated
(i.e. $0.7 > \kappa_{\rm rot} > 0.5$), with $\kappa_{\rm rot}=0.52$ after $t=9.8\,{\rm Gyr}$.
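As a concrete illustration of the rotational-energy diagnostic, the sketch below implements a $\kappa_{\rm rot}$ estimator following the \citet{Sales2012} definition (equation~\ref{eq:kappa-data} itself is not reproduced in this section), i.e. the fraction of stellar kinetic energy in ordered rotation about the $z$-axis. The test particles form an idealised cold, co-rotating ring, for which $\kappa_{\rm rot}=1$.

```python
import numpy as np

# Sketch of the kappa_rot diagnostic, following the Sales et al. (2012)
# definition: kappa_rot = sum(0.5*m*(j_z/R)^2) / sum(0.5*m*v^2), with
# the z-axis aligned with the disc angular momentum.  This is not the
# paper's code, only an illustration of the quantity.
def kappa_rot(pos, vel, mass):
    R = np.hypot(pos[:, 0], pos[:, 1])                 # cylindrical radius
    jz = pos[:, 0] * vel[:, 1] - pos[:, 1] * vel[:, 0] # specific z-ang. mom.
    k_rot = 0.5 * mass * (jz / R) ** 2                 # energy in rotation
    k_tot = 0.5 * mass * np.sum(vel ** 2, axis=1)      # total kinetic energy
    return k_rot.sum() / k_tot.sum()

# A perfectly cold, co-rotating ring has kappa_rot = 1.
N = 1000
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)
pos = np.column_stack([np.cos(phi), np.sin(phi), np.zeros(N)])
vel = np.column_stack([-np.sin(phi), np.cos(phi), np.zeros(N)])  # pure rotation
m = np.ones(N)
k = kappa_rot(pos, vel, m)   # ~1.0 for this configuration
```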
Another quantity often used to diagnose galaxy morphology is the orbital circularity parameter, $\varepsilon_{\rm circ}\equiv j_z/j_{\rm c}(E)$.
This quantity can be calculated for each
stellar particle, and can therefore be used to decompose a simulated galaxy into distinct components: e.g., disc stars are commonly
defined by the subset of particles with $\varepsilon_{\rm circ}\gtrsim 0.7$ \citep[e.g.][]{Aumer2013, Grand2017, Joshi2020}, and the mass of
a galaxy's spheroidal component as twice the mass fraction having $\varepsilon_{\rm circ}<0$ \citep[e.g.][]{Abadi2003}.
Although more sophisticated methods to dynamically decompose galaxies exist \cite[see e.g.][]{Scannapieco2009, Domenech-moral2012, Obreja2016},
simple criteria based on circularity thresholds such as those described above are also commonplace, so we choose this straightforward method
to assign disc-to-total ($D/T$) and spheroid-to-total ($S/T$) mass ratios to our simulated discs.
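A minimal sketch of this bookkeeping is given below. It assumes the circularities $\varepsilon_{\rm circ}$ have already been computed for each particle; obtaining $j_{\rm c}(E)$ requires the global potential, which is simulation-specific and not shown. The toy particle distribution is invented for illustration.

```python
import numpy as np

# Circularity-based decomposition using the simple thresholds described
# in the text: disc mass has eps >= 0.7; the spheroid mass is twice the
# counter-rotating (eps < 0) mass.
def decompose(eps, mass):
    total = mass.sum()
    d_over_t = mass[eps >= 0.7].sum() / total       # disc-to-total
    s_over_t = 2.0 * mass[eps < 0.0].sum() / total  # spheroid-to-total
    return d_over_t, s_over_t

# Toy example: 80% of the mass on near-circular orbits, 10% mildly
# heated, 10% counter-rotating.
eps = np.concatenate([np.full(80, 0.95), np.full(10, 0.3), np.full(10, -0.5)])
m = np.ones(100)
dt, st = decompose(eps, m)
print(dt, st)   # -> 0.8 0.2
```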
In Fig.~\ref{fig:circularity} we plot the evolution of the median circularity parameter of stellar particles, $\varepsilon_{\rm circ} = j_z / j_c(E)$,
in our fiducial runs (thick lines), i.e. $V_{200}=200\, {\rm km/s}$, $\mu=5$. Simulations adopting different particle masses are shown in separate panels, but use
the same colour-coding as previous plots. Shaded regions highlight the inter-percentile ranges corresponding to $1-99$
(lightest), $10-90$, $20-80$, $30-70$ and $40-60$ (darkest). Smaller panels to the right of the main ones show the distributions
of $\varepsilon_{\rm circ}$ at $t=0$ and after $t=4.9$ and 9.8 Gyr (darker to lighter lines). Note that at $t=0$, virtually all stellar
particles are co-orbiting, as expected for pure disc galaxies. As time goes on, however, spurious collisional heating disturbs an increasing
number of stellar particle orbits, resulting in a slow but systematic decrease in the median value of $\varepsilon_{\rm circ}$ and a corresponding
increase in its dispersion. After several Gyr of evolution, a substantial fraction of the stellar mass of our poorly-resolved haloes
has been scattered onto counter-rotating orbits, most likely originating from repeated large-angle deflections during encounters with DM particles.
Indeed, a small minority (about 3 per cent) of stellar particles in our lowest-resolution run find themselves on counter-rotating
orbits {\em in the disc plane} (i.e. they have $\varepsilon_{\rm circ} < -0.7$).
The galaxies in our two lowest-resolution simulations
(bottom-most two panels) eventually develop median circularity parameters that dip
below the threshold typically associated with disc stars, i.e. $\varepsilon_{\rm circ} \approx 0.7$. Using traditional diagnostics, a large fraction of their
mass (95 and 76 per cent for the lowest- and second-lowest-resolutions models, respectively)
would therefore not be associated with a disc; it would
instead be associated with a bulge/spheroid or, potentially, a thick disc. We emphasise this in Fig.~\ref{fig:unpredicted-kinematics},
where we plot the time dependence of the disc-to-total ($D/T$; upper panel; defined as the mass-fraction with $\varepsilon_{\rm circ}\geq 0.7$) and
spheroid-to-total ($S/T$; lower panel; defined as two times the mass fraction with $\varepsilon_{\rm circ}\leq 0$) ratios for each model. Only the
two highest-resolution runs retain disc and spheroid fractions of $D/T \approx 1$ and $S/T \approx 0$; our lowest-resolution
run is spheroid-dominated after $\approx 4.5\,{\rm Gyr}$.
\section{An empirical model for spurious collisional heating and its implications for galactic disc morphology}
\label{sec:model}
Many of the numerical results presented in \Cref{ssec:measurements} can be reproduced by a model that describes the
evolution of the velocity dispersion and average azimuthal velocity of stellar disc particles. As we show below, this
can be achieved using an existing analytic description of gravitational scattering \citep[][]{Lacey1985, Ludlow2021}, provided it is
suitably modified to overcome limitations arising from its simplifying assumptions. In Section~\ref{ssec:model} we describe how this can be
achieved, and in Section~\ref{ssec:cosmosims} we discuss the implications of spurious collisional heating for modelling
disc galaxy morphology in cosmological simulations.
\subsection{An empirical model for spurious collisional heating}
\label{ssec:model}
We follow \citet{Ludlow2021} and use the collisional disc heating rates derived analytically by \citet[][]{Lacey1985} as a
starting point for our empirical model. We present below the equations required to implement the model, and refer the interested
reader to those papers and our Appendix \ref{sec:fitting} for a more detailed discussion.
In the limit $\sigma_i(R)\ll\sigma_{\rm DM}(R)$, the local heating rate of stellar particles results in an incremental
increase in their velocity dispersion that may be approximated by (see \citealt{Ludlow2021})
\begin{equation}
\frac{\Delta \sigma_i^2}{\Delta t}=\sqrt{2}\,\pi\,\ln\Lambda\,\frac{G^2\,\rho_{\rm DM}\,m_{\rm DM}}{V_{200}}\times k_i\,\biggr(\frac{\rho_{\rm DM}}{\rho_{200}}\biggl)^{\alpha_i}.
\label{eq:lin}
\end{equation}
Here $\ln\Lambda$ is the Coulomb logarithm, $k_i$ is a dimensionless free parameter that determines the normalisation
of the heating rate, and $\alpha_i$ is a free parameter that governs its dependence on $\rho_{\rm DM}$,
the local density of the DM halo. The ``$i$'' subscripts on $k_i$ and $\alpha_i$ indicate that their values
differ for the different velocity components, but they do not depend on any intrinsic properties of the halo, or on
radius or time. For $\alpha_i=0$, equation~(\ref{eq:lin}) reproduces the linear density dependence of the collisional heating rate
obtained analytically by \citet[][]{Lacey1985}, with one important difference: it implies a local heating rate that depends inversely
on $V_{200}$, which differs from the $\sigma_{\rm DM}^{-1}$ dependence obtained by \citet[][]{Lacey1985} and adopted in the
\citet{Ludlow2021} model.\footnote{The velocity dependence implied by equation~\ref{eq:lin} was determined empirically by
fitting the model not only to our fiducial runs, but also to those that vary the halo's concentration. By including the latter,
our fits were sensitive to a broader range of halo velocity dispersions than would be the case when fitting to our fiducial runs
alone, which was the method employed by \citet{Ludlow2021}. For that reason, we provide in \Cref{table:best-fit-values} updated
model parameters for $\sigma_R$ and $\sigma_z$.}
Although the value of $k_i$ can in principle be calculated explicitly once a velocity distribution for the DM is
specified, we prefer to treat it as a free parameter. This allows us to avoid uncertainties that may
arise due to inaccuracies of the assumed velocity distribution, and offers some freedom when assigning a
value to the Coulomb logarithm, which itself depends weakly on the detailed properties of the disc and halo.
In practice, we treat the combined term $k_i\, \ln\Lambda$ as a free parameter when obtaining our best-fit models.
As discussed by \citet{Ludlow2021}, equation~(\ref{eq:lin}) breaks down when $\sigma_i\approx \sigma_{\rm DM}$, which occurs at late times
for our lowest-resolution runs. Indeed, energy equipartition driven by collisions between stellar and DM particles results
in a maximum asymptotic stellar velocity dispersion of $\sigma_{i,{\rm max}}\approx \sqrt{\mu}\times\sigma_{\rm DM}$. Strictly speaking, this can
only be achieved for $\mu\lesssim 2$ since, according to the virial theorem, $\sigma_{i,{\rm max}}=\sqrt{2}\times \sigma_{\rm DM}$.
\citet{Ludlow2021} verified these analytic expectations, but also showed that the typical timescale over which they are reached
exceeds a Hubble time in most halos of interest (see their Appendix B). In practice, they found that a maximum asymptotic
local velocity dispersion of $\sigma_{i,{\rm max}}(R)=\sigma_{\rm DM}(R)$ provided a sufficiently accurate description of their
numerical results on timescales $\lesssim 10\,{\rm Gyr}$. We adopt this limiting velocity dispersion in what follows.
The finite asymptotic velocity dispersion of stellar particles implies that equation~(\ref{eq:lin}) will break down as
$\sigma_i\rightarrow\sigma_{\rm DM}$. \citet{Ludlow2021} found that a better description of their numerical results was
obtained using
\begin{equation}
\sigma_i^2=\sigma_{\rm DM}^2\biggr[1-\exp\biggr(-\frac{t+t_0}{t_{\sigma_i}}\biggl)\biggl],
\label{eq:exp}
\end{equation}
where $t_{\sigma_i}$ is the timescale at which the linear heating model, equation~(\ref{eq:lin}), predicts $\sigma_i=\sigma_{\rm DM}$, i.e.
\begin{equation}
\frac{t_{\sigma_i}}{t_c}=\biggr[\sqrt{2}\,\pi\,k_i\, \ln\Lambda \biggr(\frac{\rho_{\rm DM}}{\rho_{200}}\biggl)^{\alpha_i}\biggr(\frac{V_{200}}{\sigma_{\rm DM}}\biggl)^2\, \biggl]^{-1},
\label{eq:tvir}
\end{equation}
where $t_c$ is a characteristic timescale defined by
\begin{equation}
t_c=\frac{V_{200}^3}{G^2\,\rho_{\rm DM}\,m_{\rm DM}}.
\label{eq:tc}
\end{equation}
The timescale $t_0$ in equation~(\ref{eq:exp}) is defined such that $\sigma_i(t_0)=\sigma_{i,0}$ is the initial velocity dispersion in
the $i$ direction, and may be calculated using
\begin{equation}
\frac{t_0}{t_{\sigma_i}}=\ln\biggr(\frac{\sigma_{\rm DM}^2}{\sigma_{\rm DM}^2-\sigma_{i,0}^2}\biggl).
\label{eq:t0}
\end{equation}
It is easy to verify that equation~(\ref{eq:exp}) reduces to equation~(\ref{eq:lin}) in the limits $\sigma_{i,0}\rightarrow 0$
and $\sigma_i\ll \sigma_{\rm DM}$.
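The model defined by equations~(\ref{eq:exp})--(\ref{eq:t0}) is straightforward to evaluate numerically. The sketch below does so in arbitrary internal units with $G=1$; all parameter values are illustrative placeholders, not the best-fit values of \Cref{table:best-fit-values}.

```python
import numpy as np

# Sketch of the empirical heating model.  Labels in comments refer to
# the equations in the text; every numerical value below is a
# placeholder, not a fitted parameter.
def sigma_model(t, sigma_dm, sigma_0, v200, rho_dm, rho200, m_dm,
                k_lnLambda, alpha, G=1.0):
    t_c = v200**3 / (G**2 * rho_dm * m_dm)                   # eq. (eq:tc)
    t_sigma = t_c / (np.sqrt(2) * np.pi * k_lnLambda
                     * (rho_dm / rho200)**alpha
                     * (v200 / sigma_dm)**2)                 # eq. (eq:tvir)
    t_0 = t_sigma * np.log(sigma_dm**2
                           / (sigma_dm**2 - sigma_0**2))     # eq. (eq:t0)
    return sigma_dm * np.sqrt(1.0 - np.exp(-(t + t_0) / t_sigma))  # eq. (eq:exp)

t = np.linspace(0.0, 10.0, 101)
sig = sigma_model(t, sigma_dm=1.0, sigma_0=0.05, v200=1.0,
                  rho_dm=1.0, rho200=1.0, m_dm=1e-4,
                  k_lnLambda=5.0, alpha=1.0)
# sigma starts at sigma_0 and rises monotonically toward sigma_dm.
```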
As we discuss below (and show in more detail in \Cref{sec:fitting} and in \citealt{Ludlow2021}), equation~(\ref{eq:exp}) provides
an accurate description of the
velocity dispersion profiles of galactic stellar discs and how they evolve in response to spurious collisional heating. For
example, the dashed lines in Fig.~\ref{fig:kinematic-profiles} show the best-fit radial $\sigma_\phi$ profiles (upper-left panel)
and the time evolution of $\sigma_\phi$ (measured at $R_{1/2}$; upper-right panel) obtained using equation~(\ref{eq:exp}).
The mean azimuthal velocity profiles can be obtained from the axisymmetric Jeans equation \citep[e.g.][\textsection 4.8.2]{Binney2008}.
For an exponential stellar density profile (i.e. equation~\ref{eq:star-density}) that is symmetric about the $z$-axis and has no bulk radial
motions (i.e. $\overline{v}_R=0$), this can be written as
\begin{equation}
\overline{v}_\phi^2=V_c^2-\sigma_\phi^2+\sigma_R^2\biggr(1-\frac{R}{R_d}\biggl)+R\,\frac{\partial\sigma_R^2}{\partial R}.
\label{eq:asymm_drift}
\end{equation}
Note that equation~(\ref{eq:asymm_drift}) also assumes that orbits in the vertical and radial directions are decoupled
(i.e. $\overline{v_R\, v_\phi}\approx 0$), but does not assume $\sigma_i\ll \sigma_{\rm DM}$.
The dotted lines in the lower-left panel of Fig.~\ref{fig:kinematic-profiles} show the radial $\overline{v}_\phi$ profiles
expected from equation~(\ref{eq:asymm_drift}) assuming $\sigma_R$ and $\sigma_\phi$ are given by the best-fit
equation~(\ref{eq:exp}) (for clarity, these lines are plotted over the same radial range resolved by each simulation).
These curves reproduce our numerical results reasonably well, at least in the outer parts of discs;
in the inner parts, however, differences are evident.
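For reference, equation~(\ref{eq:asymm_drift}) can be evaluated numerically as follows. The flat rotation curve and exponential dispersion profiles are toy inputs chosen for illustration, not fits to our simulations; the radial gradient is taken with a finite-difference estimate.

```python
import numpy as np

# Sketch of the asymmetric-drift relation for an exponential disc:
# vphi^2 = Vc^2 - sigma_phi^2 + sigma_R^2*(1 - R/R_d) + R*d(sigma_R^2)/dR.
# All profiles below are toy inputs.
def vphi_asymmetric_drift(R, Vc, sigma_R, sigma_phi, R_d):
    dsigR2_dR = np.gradient(sigma_R**2, R)       # finite-difference gradient
    vphi2 = (Vc**2 - sigma_phi**2
             + sigma_R**2 * (1.0 - R / R_d)
             + R * dsigR2_dR)
    return np.sqrt(np.clip(vphi2, 0.0, None))    # guard against vphi^2 < 0

R = np.linspace(0.5, 5.0, 50)        # radii in units of R_d
Vc = np.full_like(R, 1.0)            # flat rotation curve (toy)
sigma_R = 0.20 * np.exp(-R / 4.0)    # toy dispersion profiles
sigma_phi = 0.15 * np.exp(-R / 4.0)
vphi = vphi_asymmetric_drift(R, Vc, sigma_R, sigma_phi, R_d=1.0)
# vphi lags Vc wherever the dispersions are non-negligible
# (asymmetric drift).
```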
We found that the $\overline{v}_\phi(R)$ profiles are also accurately described by a set of equations
analogous to the one used for $\sigma_i$. Specifically,
\begin{equation}
\overline{v}_\phi=V_c\,\exp\biggr(-\frac{t+t_0}{t_{v_\phi}}\biggl),
\label{eq:exp_phi}
\end{equation}
where $V_c$ includes contributions from the disc and halo, and $t_{v_\phi}$ is a characteristic timescale
over which $\overline{v}_\phi$ is reduced relative to its initial value (i.e. $\overline{v}_{\phi,0}$).
We determine $t_{v_\phi}$ by matching the slopes of equation~(\ref{eq:exp}; for $\sigma_\phi$) and equation~(\ref{eq:exp_phi})
at $t=0$ for an initially thin disc (i.e. $v_{\phi,0}\approx V_c$ and $\sigma_{i,0}\approx 0$), which implies
\begin{equation}
\frac{t_{v_\phi}}{t_{\sigma_\phi}}=-\frac{d(\sigma_\phi^2)}{d v_\phi}\biggr(\frac{V_c}{\sigma_{\rm DM}^2}\biggl).
\label{eq:tvir_phi}
\end{equation}
We identify the first term on the right-hand side of equation~(\ref{eq:tvir_phi}) with the ratio of the second- and
first-order moments of the parallel velocity change $\Delta v_{||}$ of a star particle relative to a background
population of DM particles, which were derived by \citet{Chandrasekhar1960} and applied to disc galaxies by \citet{Lacey1985}
using epicyclic theory. Assuming a relative velocity $v_{\rm rel}$ between
stellar and DM particles, this yields
\begin{equation}
\frac{t_{v_\phi}}{t_{\sigma_\phi}}=\frac{2\,V_c}{v_{\rm rel}}\equiv f,
\label{eq:tvir_phi2}
\end{equation}
where $f$ is a number of order unity that we treat as a free parameter. The value of $t_0$ in equation~(\ref{eq:exp_phi}) is
defined such that $\overline{v}_\phi(t=0)=v_{\phi,0}$ is the initial azimuthal velocity of the disc, i.e.
\begin{equation}
\frac{t_0}{t_{v_\phi}}=\ln\biggr(\frac{V_c}{\overline{v}_{\phi,0}}\biggl).
\label{eq:t0vphi}
\end{equation}
Although the asymmetric drift equation provides a reasonable approximation to the $\overline{v}_\phi$ profiles obtained from
our simulations, we prefer to use equation~(\ref{eq:exp_phi}) because:
1) it predicts $\overline{v}_\phi\rightarrow 0$ for $t\gg t_{v_{\phi}}$ (equation~\ref{eq:asymm_drift}
does not); and 2) it can be calculated from properties of the galaxy's DM halo, with no reference to the structure of the galaxy itself.
We expect the latter point to be beneficial when applying our model to galaxies and DM haloes identified in cosmological
simulations of galaxy formation, whose structural properties may or may not have been affected by spurious collisional heating.
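For concreteness, equations~(\ref{eq:exp_phi}) and (\ref{eq:t0}) can be combined into a single function of time. The following Python sketch is purely illustrative (the function name, units and numerical values are ours, not taken from our analysis code):

```python
import numpy as np

def vphi_model(t, V_c, t_vphi, vphi0):
    """Mean azimuthal velocity vbar_phi(t) = V_c * exp(-(t + t0) / t_vphi),
    with t0 fixed by t0/t_vphi = ln(V_c/vphi0) so that vbar_phi(0) = vphi0."""
    t0 = t_vphi * np.log(V_c / vphi0)      # equation (eq:t0)
    return V_c * np.exp(-(t + t0) / t_vphi)

# A disc that starts slightly sub-circular decays towards vbar_phi -> 0:
vphi_model(0.0, 200.0, 5.0, 190.0)   # -> 190.0 (initial value recovered)
```

Note that $\overline{v}_\phi\rightarrow 0$ for $t\gg t_{v_\phi}$, as stated above, whereas the initial condition is recovered exactly at $t=0$ by construction.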
To obtain the best-fit values of $\alpha_i$ and $k_i\, \ln\Lambda$ (for $\sigma_i$), and $f$ (for $\overline{v}_\phi$),
we fit equations~(\ref{eq:exp}) and (\ref{eq:exp_phi}) to the measured azimuthal velocity dispersions and mean azimuthal
velocities, respectively, obtained from a subset of our disc galaxy simulations.
In practice, we combine results from our fiducial runs (i.e. $V_{200}=200\,{\rm km/s}$, $\mu=5$ and $c=10$)
with those obtained from models with higher ($c=15$) and lower ($c=7$) concentration DM haloes (which
also use $V_{200}=200\,{\rm km/s}$ and $\mu=5$). We also combine measurements made at a few characteristic radii -- specifically,
$R_{1/4}$, $R_{1/2}$ and $R_{3/4}$ enclosing one quarter, one half and three quarters of all stellar particles, respectively.
Doing so allows us to probe a
larger range of DM densities and velocity dispersions than would be possible by fitting our fiducial models alone. We also
limit our fits to timescales over which $\Delta\sigma_i^2/\Delta t$ and $\Delta v_\phi/\Delta t$ evolve approximately linearly with time
(which roughly corresponds to excluding outputs for which $\sigma_i\gtrsim 0.2\, \sigma_{\rm DM}$).
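The fitting step can be illustrated schematically. The sketch below uses synthetic data (the $V_c$, timescale and noise values are illustrative assumptions, not measurements from our runs) and recovers the characteristic timescale $t_{v_\phi}$ by linear least squares in log space:

```python
import numpy as np

# Synthetic vbar_phi(t) "measurements" at a fixed radius; V_c, the timescale
# and the noise level are illustrative assumptions, not values from our runs.
V_c, t_vphi_true = 200.0, 20.0                  # km/s, Gyr
t = np.linspace(0.0, 8.0, 30)                   # Gyr
rng = np.random.default_rng(1)
vphi = V_c * np.exp(-t / t_vphi_true) + rng.normal(0.0, 1.0, t.size)

# ln(vbar_phi) = ln(V_c) - t/t_vphi is linear in t, so an ordinary
# least-squares fit in log space recovers the characteristic timescale.
slope, _ = np.polyfit(t, np.log(vphi), 1)
t_vphi_fit = -1.0 / slope                       # should be close to 20 Gyr
```

In practice we fit the full parametric forms of equations~(\ref{eq:exp}) and (\ref{eq:exp_phi}) rather than this simplified log-linear version.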
The best-fit parameters are listed in \Cref{table:best-fit-values} for $\sigma_\phi$ and $\overline{v}_\phi$,
along with updated parameters\footnote{We do not present results in this paper for $\sigma_z$ and $\sigma_R$, as they
were discussed at length by \citet{Ludlow2021}. We refer to that paper for a detailed discussion of these velocity components.} for
$\sigma_z$ and $\sigma_R$. In \Cref{sec:fitting} we show that our best-fit model also describes the spurious disc heating rates
obtained for galaxies not included in our fits, specifically those corresponding to haloes of different virial mass, different $\mu$
values, as well as discs with different initial scale heights.
The various panels of Fig.~\ref{fig:kinematic-profiles} confirm that our simple empirical model (i.e. equations~\ref{eq:exp} and \ref{eq:exp_phi})
describes the spurious evolution of $\sigma_\phi$ and $\overline{v}_\phi$ reasonably well. The coloured dashed lines
in the left-hand panels, for example, plot the predicted velocity dispersion profiles (upper left) and azimuthal
velocities (lower left) after $t=5\,{\rm Gyr}$. These curves, obtained by extrapolating the corresponding initial
profiles of $\sigma_\phi$ and $\overline{v}_\phi$ (shown as black dot-dashed lines in the corresponding panel),
accurately describe the simulated data at all resolved radii. In the right-hand panels, dashed coloured lines plot the corresponding
quantities as a function of time, but at the initial half-mass radius of the disc. These predictions agree well with our
numerical results over the entire $9.8\,{\rm Gyr}$ simulated.
Equation~(\ref{eq:exp}) for $\sigma_i$ and equation~(\ref{eq:exp_phi}) for $\overline{v}_\phi$ are also sufficient to describe the
evolution of several morphology diagnostics presented in \Cref{sec:heating}, provided the latter are evaluated at fixed radii.
Indeed, the dashed lines plotted in the upper panel of Fig.~\ref{fig:predicted-kinematics} show the predicted values of
$\overline{v}_\phi/\sigma_{\rm 1D}$, and in the lower panel the predicted $\lambda_r$ (i.e. equation~\ref{eq:lambda-data}, but using the
predicted velocities rather than the measured ones). For $\kappa_{\rm rot}$ (middle panels) we plot
\begin{equation}
\kappa_{\rm rot}=\frac{\overline{v}_\phi^2+\sigma_\phi^2}{\overline{v}_\phi^2+\sum \sigma_i^2},
\label{eq:kap_mod}
\end{equation}
which is equivalent to equation~(\ref{eq:kappa-data}) in the limit of large particle numbers provided $\overline{v}_z\approx \overline{v}_R\approx 0$,
which is approximately valid for our simulations. Note that an isotropic stellar velocity distribution corresponds to $\kappa_{\rm rot}=1/3$, which
explains why the blue curve in the middle panel of Fig.~\ref{fig:predicted-kinematics} asymptotes to $1-\kappa_{\rm rot}\approx 2/3$.
Our model reproduces the evolution of all three morphology diagnostics reasonably well.
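As a minimal numerical check of equation~(\ref{eq:kap_mod}) (the function below is illustrative and not part of our analysis pipeline):

```python
def kappa_rot(vphi_mean, sigma_phi, sigma_R, sigma_z):
    """kappa_rot of equation (eq:kap_mod), valid when vbar_R ~ vbar_z ~ 0."""
    num = vphi_mean**2 + sigma_phi**2
    den = vphi_mean**2 + sigma_phi**2 + sigma_R**2 + sigma_z**2
    return num / den

kappa_rot(0.0, 50.0, 50.0, 50.0)    # isotropic, non-rotating -> 1/3
kappa_rot(200.0, 10.0, 10.0, 10.0)  # cold rotating disc -> close to 1
```

The isotropic, non-rotating limit returns $\kappa_{\rm rot}=1/3$ exactly, consistent with the asymptotic behaviour noted above.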
Finally, the spheroid-to-total ratio -- $S/T$, defined as twice the mass fraction of counter-rotating orbits --
can also be predicted, provided the azimuthal velocity {\em distribution} is known. In practice, we find that $v_\phi$ is approximately
normally distributed, with a mean and standard deviation given by equations~(\ref{eq:exp_phi}) and (\ref{eq:exp}), respectively.
Hence, we may use the approximation
\begin{align}
S/T &\approx \frac{2}{\sqrt{2\,\pi}\,\sigma_\phi} \int_{-\infty}^0 \exp\biggl(-\frac{(v_\phi-\overline{v}_\phi)^2}{2\,\sigma_\phi^2}\biggr)\, dv_\phi,\label{eq:ST1} \\
&=1+\erf\biggl(-\frac{\overline{v}_\phi}{\sqrt{2}\,\sigma_\phi}\biggr)\label{eq:ST2},
\end{align}
where $\erf (x)$ is the error function. The various dashed lines in the lower panel of Fig.~\ref{fig:unpredicted-kinematics} show
the evolution of the spheroid-to-total ratios of our discs as anticipated by equation~(\ref{eq:ST2}). We note that
predicting the disc-to-total mass ratio is more challenging, as it requires an accurate model of the distribution of
$\varepsilon_{\rm circ}$ and how it evolves due to spurious collisional heating.
We defer a more detailed analysis of the evolution of stellar particle orbital circularities to future work.
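Equation~(\ref{eq:ST2}) is straightforward to evaluate; the short sketch below (with illustrative velocity values of our choosing) confirms the two limiting cases of a non-rotating system ($S/T=1$) and a dynamically cold rotating disc ($S/T\approx 0$):

```python
from math import erf, sqrt

def spheroid_to_total(vphi_mean, sigma_phi):
    """S/T = 1 + erf(-vbar_phi/(sqrt(2) sigma_phi)) (equation eq:ST2):
    twice the counter-rotating mass fraction of a Gaussian v_phi."""
    return 1.0 + erf(-vphi_mean / (sqrt(2.0) * sigma_phi))

spheroid_to_total(0.0, 50.0)     # non-rotating system -> 1.0
spheroid_to_total(200.0, 50.0)   # cold rotating disc -> ~0
```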
\begin{table}
\caption{Best-fit values for the free parameters $k_i\, \ln\Lambda$ and $\alpha_i$
obtained from equation~(\ref{eq:exp}) (for $\sigma_\phi$, $\sigma_z$, and $\sigma_R$) and the best-fit value of $f$
obtained from equation~(\ref{eq:exp_phi}) (for $\overline{v}_\phi$) for the different velocity components in our model.}
\label{table:best-fit-values}
\centering
\begin{tabular}{c c r r r}
\hline \hline
Vel. Component & eq. & $k_i \, \ln \Lambda$ & $\alpha_i$ & $f$ \\
\hline
$\sigma_z$ & eq.~(\ref{eq:exp}) & 20.19 & -0.308 & $-$ \\
$\sigma_R$ & eq.~(\ref{eq:exp}) & 20.17 & -0.189 & $-$ \\
$\sigma_\phi$ & eq.~(\ref{eq:exp}) & 9.40 & -0.115 & $-$ \\
$\overline{v}_\phi$ & eq.~(\ref{eq:exp_phi}) & $-$ & $-$ & 0.75 \\
\hline
\end{tabular}
\end{table}
\subsection{Implications for disc galaxies in cosmological simulations}
\label{ssec:cosmosims}
The spurious collisional heating of stellar motions in galactic discs may have consequences for the evolution of galaxy morphologies in
cosmological simulations. In this section, we use the best-fit model described above to make predictions for several
morphology diagnostics, as well as how they depend on galaxy age, DM halo structure, and distance from the galaxy centre.
A few results are plotted as a function of the number of DM particles per halo, ${\rm N_{200}}$, in the various
panels of Fig.~\ref{fig:n-dependence}. We focus on three morphology indicators: $\overline{v}_\phi/\sigma_{\rm 1D}$ (top panels),
$\kappa_{\rm rot}$ (middle panels), and $1-S/T$ (bottom panels). When implementing our model, we assume an NFW profile for the DM halo,
and that the half mass radius of the disc can be approximated as $R_{1/2}=0.2\times r_{-2}$,
where $r_{-2}=r_{200}/c$ is the halo's characteristic radius \citep[see][]{Navarro2017, Ferrero2017}.
For haloes with spin parameter $\lambda_{\rm DM}=0.03$ and concentration $c = 10$, this radius corresponds to an angular momentum
retention fraction of $f_j \approx 1.0$. For simplicity, we also assume a massless disc, in which case
the circular velocity term in equation~(\ref{eq:exp_phi}) is due solely to the DM halo with no contribution from the disc.
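As a concrete example of these assumptions (the Hubble constant below is our choice, made only for illustration), the disc half-mass radius follows from the halo parameters via $r_{200}=V_{200}/(10\,H_0)$:

```python
def half_mass_radius(V200, c, H0=70.0):
    """R_1/2 = 0.2 * r_-2 with r_-2 = r200/c, and r200 = V200/(10 H0)
    for a mean overdensity of 200 rho_crit.
    V200 in km/s, H0 in km/s/Mpc (assumed value); returns kpc."""
    r200_kpc = V200 / (10.0 * H0) * 1000.0   # Mpc -> kpc
    return 0.2 * (r200_kpc / c)

half_mass_radius(200.0, 10.0)   # ~5.7 kpc for a Milky Way-mass halo
```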
In the left-hand panels of Fig.~\ref{fig:n-dependence}, we plot our model predictions at $R=R_{1/2}$ and after $t=13.8\,{\rm Gyr}$
for discs with different initial velocity dispersions, $\sigma_{i,0}$\footnote{For simplicity we assume the galaxy is initially an isotropic rotator with
$\sigma_{i,0}=\sigma_{\phi,0}=\sigma_{z,0}=\sigma_{R,0}$, and $\overline{v}_\phi= V_{200} - \sigma_{i,0}$, which is approximately
valid for our fiducial simulations.}; the middle panels plot
results for $\sigma_{i,0}=0$ and $t=13.8\,{\rm Gyr}$ but at different galacto-centric radii (specifically,
at $R_{1/2}/4$, $R_{1/2}$ and $4\, R_{1/2}$); and in the right-hand panels we plot results at $R=R_{1/2}$ but for galaxies with different
ages: $t=2$, $5$ and $13.8\,{\rm Gyr}$ (note that $\sigma_{i,0}=0$ in these cases). All model curves assume
an NFW halo with concentration parameter $c=10$; the grey shaded regions surrounding the solid black curves highlight the impact of
varying the halo's concentration parameter between $c=5$ and 15 (for clarity, this is only shown for the subset of models with
$\sigma_{i,0}=0$, $t=13.8\,{\rm Gyr}$ and at $R=R_{1/2}$).
The left-hand panels of Fig.~\ref{fig:n-dependence} suggest that haloes resolved with fewer than ${\rm a\,\rm few}\times 10^{5}$
DM particles are vulnerable to strong spurious morphological evolution after a Hubble time, regardless of the disc's initial velocity
dispersion. For illustration, consider the results obtained for $c=10$ and $\sigma_i=0$, which are shown as solid black lines in each
of the left-hand panels. For ${\rm N_{200}=10^{5.5}}$, the value of $\overline{v}_\phi/\sigma_{\rm 1D}$, although initially infinite, drops
to $\approx 1.36$ after a Hubble time. Similarly, $\kappa_{\rm rot}$ drops from 1 to $\approx 0.61$ for the same ${\rm N_{200}}$.
The spheroid-to-total ratio, however, appears to be a more robust measure of morphology, at least when measured at $R=R_{1/2}$. Indeed,
we find $S/T \approx 0.18$ after $t=13.8\,{\rm Gyr}$ for ${\rm N_{200}}=10^{5.5}$.
For ${\rm N_{200}}=10^{4.5}$, however, our model predicts that spurious collisional heating will have catastrophic consequences for
simulated discs. In this case, $\overline{v}_\phi/\sigma_{\rm 1D}\approx 0.02$, $\kappa_{\rm rot}\approx 0.34$, and $S/T \approx 0.98$,
after a Hubble time. Note too that the values quoted here differ only slightly
for initially hotter discs (the various curves in the left-hand panels approximately overlap).
The middle panels show, unsurprisingly, that collisional heating drives more substantial morphological evolution near galaxy
centres than in their outskirts. This is due to the strong radial gradients in the number density of DM particles across the
disc (which also explains why galaxies in higher concentration haloes are more strongly affected than those in lower concentration ones).
As a result, larger particle numbers are required to suppress spurious morphological evolution in the central regions of galaxies
than in their outer parts. For example, after $t=13.8\,{\rm Gyr}$ we find that maintaining $\overline{v}_\phi/\sigma_{\rm 1D}\gtrsim 4$
requires ${\rm N_{200}}\gtrsim 2\times 10^7$ at $R=R_{1/2}/4$, but only ${\rm N_{200}}=1.3\times 10^5$ at $4\,R_{1/2}$ (at $R=R_{1/2}$, it requires
${\rm N_{200}=1.6\times 10^6}$).
Interestingly, this implies that spurious collisional heating drives an inside-out evolution of disc morphology, affecting
small radii first and the outer disc structure only later. The
bottom-middle panels of Fig.~\ref{fig:n-dependence} imply that, at least in the simplest
case of an initially cold disc, spurious heating will first transform the central regions into a dispersion supported structure, leaving
the outer disc largely unaffected. This is indeed what our model predicts for DM haloes with ${\rm N_{200}}\approx 10^5$ (and
$c=10$): in this case, after $t=13.8\,{\rm Gyr}$, we find $S/T \approx 0$ at $R=4\, R_{1/2}$, but $S/T \approx 1$ at $R=R_{1/2}/4$.
Of course, the stellar components of galaxies are typically not contemporaneous, but rather formed smoothly over time or through
episodic bursts of star formation. Although these complexities are ignored in our model, the rightmost panels of Fig.~\ref{fig:n-dependence}
give an impression of the expected magnitude of the effect. There, the solid black, dashed orange, and dot-dashed green curves plot the
various morphology diagnostics for galaxies or stellar populations of different age -- specifically, $t=13.8\,{\rm Gyr}$,
$5\,{\rm Gyr}$, and $2\,{\rm Gyr}$, respectively. Clearly younger galaxies (or populations of stars) are less affected by spurious
collisional heating, but they are not invulnerable. Because collisional heating is a cumulative effect, given sufficient time, {\em all}
simulated galaxies will, to some extent, suffer the consequences of their finite resolution.
\section{Discussion and Conclusions}
\label{sec:summary}
We used a suite of idealised simulations of equilibrium stellar discs embedded within ``live'' DM haloes to study the
effects of spurious collisional heating of star particles by DM particles on disc kinematics and morphology. Our simulated discs,
previously presented in \citet[][which we refer to for more details]{Ludlow2021}, are constructed to be in a state of stable
equilibrium, are DM dominated, free from disc instabilities, contain no gaseous component, experience no mergers or accretion of DM
or baryons, and are non-cosmological. Although this presents a highly simplified scenario, it allows us to isolate
the effects of spurious collisional heating on the kinematics of stellar discs, while eliminating other genuine sources of heating.
Our main results can be summarised as follows:
\begin{itemize}
\item The cumulative effects of spurious collisional heating alter the
visual morphology of simulated galactic discs (Figs.~\ref{fig:projections-low} to~\ref{fig:projections-hi}), making them
appear thicker and more spheroidal than they were initially. Both the scale length and scale height of discs increase as
a result, but at fixed mass resolution the increase in the latter occurs more rapidly (Fig.~\ref{fig:shape-global}).
At fixed halo mass, the net effect depends strongly on the mass of DM particles (or equivalently, on the number of DM halo
particles), but also on the local DM density and velocity dispersion. Our fiducial Milky Way-mass systems, for example, initially
have an aspect ratio of $z_{1/2}/R_{1/2}\approx 0.02$, but after
$t=10\,{\rm Gyr}$ have $z_{1/2}/R_{1/2}\approx 0.05$ for $m_{\rm DM}=10^6\,{\rm M_\odot}$ ($N_{\rm DM}=1.8 \times 10^6$) and
$\approx 0.16$ for $m_{\rm DM}=10^7\,{\rm M_\odot}$ ($N_{\rm DM}=1.8 \times 10^5$).
\item Provided $\sigma_\phi\ll\sigma_{\rm DM}$, the azimuthal velocity dispersion of stellar particles (squared) increases
approximately linearly with time, i.e. $\sigma_\phi^2\propto t$, as predicted by the analytic model of \citet{Lacey1985}. In poorly-resolved
systems, however, the azimuthal velocity dispersion reaches a maximum asymptotic value set by the local velocity dispersion
of the DM halo, i.e. $\sigma_{\phi,{\rm max}}\approx \sigma_{\rm DM}$ (a similar result holds for the vertical and radial
velocity dispersion of stellar particles; see \citealt{Ludlow2021} for details), and is better described by equation~(\ref{eq:exp}).
When $\sigma_\phi\ll\sigma_{\rm DM}$, the evolution of the mean azimuthal velocity of the disc is described reasonably well by
asymmetric drift, although the agreement breaks down when $\sigma_\phi\approx\sigma_{\rm DM}$. For that reason, we provided a
parametric model (equation~\ref{eq:exp_phi}; similar to equation~\ref{eq:exp} for $\sigma_i$) that accurately describes the
evolution of the $\overline{v}_\phi(R)$ profiles for all of our simulations, and over all resolved radii.
Equation~(\ref{eq:exp_phi}) has the added benefit that it depends only on the structure of a galaxy's DM halo, which is better
resolved and easier to predict than the structure of the galaxy itself, which evolves due to collisional heating.
We note, however, that the spurious increase in the velocity dispersion of stellar particles is largely
a result of kinetic energy being transferred from their azimuthal motion into vertical and radial motions, with little
change to the total kinetic energy of the stellar component. For example, the total kinetic energy of our
($m_{\rm DM}=10^7\,{\rm M_\odot}$) fiducial disc increased by only about 5 percent over a Hubble time as a result of collisional
heating, but the kinetic energy contributed by velocity dispersion increased by a factor of $\approx 5$ over the same time interval.
\item Spurious collisional heating alters the kinematics of disc stars, as well as the sizes and shapes of discs, resulting in
systematic biases in the location of galaxies in a number of standard galaxy scaling relations, including the
size-mass relation, the \citet{Tully1977} relation, and the \cite{Fall1983} relation
(Fig.~\ref{fig:scaling-relations}). Galaxies that are affected by spurious heating should therefore
be eliminated from simulated galaxy populations before they are compared to observed galaxy samples. Not doing so risks drawing
erroneous conclusions about galaxy formation theory, or about the virtues or weaknesses of galaxy formation models.
\item All of the galaxy morphology diagnostics we have studied are sensitive to the effects of spurious collisional heating: discs become
thicker, more dispersion supported, and rounder (i.e. they develop larger axis ratios) due to its cumulative effects. Spurious heating also
broadens the circularity
distributions of stellar particle orbits, $\varepsilon_{\rm circ}$, resulting in higher spheroid-to-total mass ratios (commonly
estimated as twice the mass fraction with $\varepsilon_{\rm circ}<0$) and lower disc-to-total mass ratios (often defined
as the stellar mass fraction with $\varepsilon_{\rm circ}\geq 0.7$). For quantities measured at the galaxy half mass radius, $R_{1/2}$,
the effects are noticeable for haloes resolved with
fewer than $\approx 10^6$ DM particles, and we predict that those resolved with fewer than $\approx 10^5$ particles are
unlikely to harbour old stellar discs at all (Fig.~\ref{fig:n-dependence}). Suppressing spurious morphological evolution at radii
smaller than $R_{1/2}$ requires haloes to be resolved with even more DM particles.
\end{itemize}
Our results are however obtained from a particular set of highly idealised simulations, and therefore may not represent
the typical simulated galaxy's response to spurious collisional heating. For example, the disc mass fractions are too low ($f_\star=0.01$, whereas observations
suggest $f_\star\approx 0.05$ for the discs of Milky Way-mass galaxies; e.g. \citealt{Posti2021}) and they contain no gas component. This deliberate
choice was made to suppress the formation of local instabilities that can contribute to disc heating, permitting us to isolate the numerical
effects. But instabilities are believed to
be present in real galaxies, and may also be present in those formed in cosmological simulations provided they have sufficient mass
and spatial resolution. Our discs also contain no giant molecular clouds, globular clusters, or spiral density waves, all of which
provide additional sources of gravitational heating \citep[e.g.][]{Hanninen2002,Aumer2016,Gustafsson2016}. Our runs are non-cosmological and therefore
eliminate the possibility of accretion-driven heating, or tidal heating due to mergers or fly-bys, all of which are likely to heat the stellar
components of real galaxies as well as those in cosmological simulations \citep[e.g.][]{Benson2004,Genel2018,Borrow2022}.
This suggests that the heating rates of stellar discs in cosmological simulations may in fact be {\em larger} than those inferred from our
idealised runs, because there are contributions from both numerical and physical effects. However, our discs have no intrinsic stellar age
gradients and no star formation,
whereas those in cosmological simulations can in principle form new stars and thereby replenish their thin discs. Star formation will therefore
combat the cumulative effects of collisional heating since new stars are typically born in thin discs with low velocity dispersion
($\lesssim 10\,{\rm km/s}$).
Thus, it will be difficult to establish the extent to which spurious heating affects the properties of galaxies in cosmological simulations,
and doing so will likely require simulations that specifically aim to suppress the effect. We will present results from such a simulation
in a forthcoming paper.
\section*{Acknowledgements}
ADL and DO acknowledge financial support from the Australian Research Council through their Future Fellowship
scheme (project numbers FT160100250, FT190100083, respectively).
CL has received funding from the ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.
This work has benefited from the following public
{\textsc{python}} packages: {\textsc{scipy}} \citep{scipy}, {\textsc{numpy}} \citep{numpy}, {\textsc{matplotlib}}
\citep{matplotlib} and {\textsc{ipython}} \citep{ipython}.
\section*{Data Availability}
Our simulation data can be made available upon request or can be
generated using publicly available codes. Our model results
can be reproduced using the equations provided in the paper.
\bibliographystyle{mnras}
\bibliography{mybib} %
\appendix
\section{Comparison of the disc heating model to a diverse set of disc galaxy simulations}
\label{sec:fitting}
The results presented in the main body of our paper were obtained from a set of ``fiducial'' disc galaxy simulations.
These models all adopted the same structural properties for the disc and halo -- namely, $V_{200}=200\,{\rm km/s}$, $f_\star=0.01$,
$\lambda_{\rm DM}=0.03$, $c=10$, $f_j=1$, and $\mu=5$ -- but used different dark matter and stellar particle masses to assess
the impact of spurious collisional heating on the disc. Below we present the azimuthal velocity
evolution for models that vary some of the relevant properties of the disc or halo, while holding others fixed.
Because the various morphology diagnostics we considered in Section~\ref{ssec:shapes} can be calculated from
$\sigma_\phi$ and $\overline{v}_\phi$, we do not consider them explicitly below.
In Fig.~\ref{fig:A1}, we plot the $\sigma_\phi$ (upper panels) and $\overline{v}_\phi$ (lower panels) evolution for haloes of different
virial mass, corresponding to $V_{200}=50\,{\rm km/s}$, $100\,{\rm km/s}$ and $400\,{\rm km/s}$ (in addition to our fiducial runs, i.e. $V_{200}=200\,{\rm km/s}$),
with all other dimensionless properties of the disc and halo held fixed. From left-to-right, the columns correspond to measurements made
at three separate characteristic radii, specifically $R_{1/4}$, $R_{1/2}$, and $R_{3/4}$, enclosing one-quarter, one-half, and three-quarters
of each galaxy's initial stellar mass. Curves of different colours correspond to the different particle
masses listed in Table~\ref{table:simulation-list}, which differ for the different values of $V_{200}$. Note that the velocities have been normalised by $V_{200}$
and times by the characteristic timescale $t_c$ (equation~\ref{eq:tc}). In these dimensionless units, the $\sigma_\phi$ and
$\overline{v}_\phi$ evolution obtained for all models evolve similarly regardless of $V_{200}$ or DM particle mass, and all
are accurately described by equations~(\ref{eq:exp}) and (\ref{eq:exp_phi}), respectively.
In Fig.~\ref{fig:A2} we plot the evolution of $\sigma_i$ and $\overline{v}_\phi$ (measured at the initial value of $R_{1/2}$, and normalised
by $V_{200}$) for a suite of models with $V_{200}=200\,{\rm km/s}$ that vary the DM-to-stellar particle mass ratio, $\mu$ (left-hand panels),
the concentration of the DM halo (middle panels), and the initial scale height of the disc, $z_d$ (right-hand panels). As in Fig.~\ref{fig:A1},
different coloured lines correspond to different DM particle masses, as indicated in the legends.
In agreement with \citet{Ludlow2021}, we find that the effects of spurious collisional heating are largely independent of
$\mu$, at least for the range of particle mass ratios ($1\leq\mu\leq 25$) and timescales ($\lesssim 10\,{\rm Gyr}$) we have
considered in our analysis.
In the middle panels of Fig.~\ref{fig:A2} we verify that our model also describes reasonably well the rate of disc heating in
haloes with different concentrations of DM (note that for $c=7$ and $15$ we have adjusted $f_j$ so that the stellar mass profile is
fixed for all models). In this case we plot heating rates measured at $R=R_{1/2}$ for our fiducial ($c=10$) models, but at the radii corresponding
to the same local DM density for the haloes with higher and lower concentration values. By doing so, the initial heating rates -- which are
dominated by $\rho_{\rm DM}$ -- are approximately equal for all models, but the asymptotic values of $\sigma_\phi$ and $\overline{v}_\phi$
are not.
Finally, in the right-hand panels of Fig.~\ref{fig:A2}, we consider discs with varying initial scale heights (or equivalently, different
vertical velocity dispersion). Solid lines show our fiducial models (i.e. $z_d=0.05\,R_d$); discs that are initially two and four
times thicker are shown using dashed and dotted lines, respectively. Despite the small differences in the initial velocity dispersion
of the discs, they rapidly evolve to have a similar kinematic structure that is dominated by the cumulative effects of collisional heating
rather than by the small differences in their initial kinematics.
\bsp %
\label{lastpage}
Title:
Light curve completion and forecasting using fast and scalable Gaussian processes (MuyGPs)
Abstract: Temporal variations of apparent magnitude, called light curves, are
observational statistics of interest captured by telescopes over long periods
of time. Light curves afford the exploration of Space Domain Awareness (SDA)
objectives such as object identification or pose estimation as latent variable
inference problems. Ground-based observations from commercial off the shelf
(COTS) cameras remain inexpensive compared to higher precision instruments,
however, limited sensor availability combined with noisier observations can
produce gappy time-series data that can be difficult to model. These external
factors confound the automated exploitation of light curves, which makes light
curve prediction and extrapolation a crucial problem for applications.
Traditionally, image or time-series completion problems have been approached
with diffusion-based or exemplar-based methods. More recently, Deep Neural
Networks (DNNs) have become the tool of choice due to their empirical success
at learning complex nonlinear embeddings. However, DNNs often require large
training data that are not necessarily available when looking at unique
features of a light curve of a single satellite.
In this paper, we present a novel approach to predicting missing and future
data points of light curves using Gaussian Processes (GPs). GPs are non-linear
probabilistic models that infer posterior distributions over functions and
naturally quantify uncertainty. However, the cubic scaling of GP inference and
training is a major barrier to their adoption in applications. In particular, a
single light curve can feature hundreds of thousands of observations, which is
well beyond the practical realization limits of a conventional GP on a single
machine. Consequently, we employ MuyGPs, a scalable framework for
hyperparameter estimation of GP models that uses nearest neighbors
sparsification and local cross-validation. MuyGPs...
https://export.arxiv.org/pdf/2208.14592
\vspace{-0.8in}
\begin{center}
LLNL-PROC-839253
\end{center}
\vspace{0.5in}
\section{Introduction}
Photometric light curves are a series of observations that track the brightness
of an object over a period of time.
They can be used to characterize the
dynamic properties of a system.
Astronomers frequently analyze light curves to understand a
whole range of astrophysical topics including: detailed physics within a
star~\cite{gaia2019}, the discovery of exoplanets~\cite{kepler2013},
classifying distant supernovae~\cite{des2022}, and characterizing the
population of near earth asteroids~\cite{atlas2018}.
Space Domain Awareness (SDA) involves monitoring, detecting and understanding
the population of earth-orbiting bodies or resident space objects (RSOs).
SDA is of increasing importance due to the incipient growth in the number of
space objects and debris orbiting the earth, driven in large part by the recent
commercial development of satellites.
The number of such RSOs is expected to grow by orders of magnitude in the next
decade.
While SDA is concerned with maintaining custody of orbiting objects, it also
prioritizes the rapid identification of changes to and anomalous behavior of
the many varying orbital systems.
Fortunately, light curves are valuable for inspecting RSOs in much the same way
as astrophysical phenomena, and can enable critical information for SDA.
The brightness of an RSO depends on structural features like size, shape, and
material composition, as well as the geometry between the object, the sun, and
the observer.
The proliferation of low cost commercial-off-the-shelf (COTS) ground-based
telescopes has made light curves of RSOs easier to produce.
Furthermore, many constellations of ground-based telescopes have been tasked
with tracking RSOs for SDA.
These factors have enabled the relatively cheap production of large volumes of
RSO
light curves, which has prompted practitioners to apply automation to analyze
them at scale.
Practitioners have recently applied machine learning to light curves of RSOs to
solve various SDA tasks.
For example, comparing the light curve of an unknown RSO to a catalog of known
RSOs is useful for predicting features like
shape~\cite{linares2014space, furfaro2019shape}, material
composition~\cite{dianetti2019space}, and general categories or
classes~\cite{linares2016space, jia2018space, furfaro2018space}.
Furthermore, forecasting RSO light curves into the future is useful for
detecting deviations from the expected patterns-of-life in near-real-time,
affording the detection of anomalous events such as
maneuvers~\cite{shabarekh2016novel, dupree2021time} or configuration changes.
The goal of machine learning in this context is to learn a function mapping an
input space (typically the time domain) to an observation space, e.g.
magnitude, based upon independent and identically distributed (i.i.d.)
samples from a distribution of input-observation pairs.
We will refer to this data distribution as the ``target distribution''.
A machine learning model is successful when it is able to accurately predict
the observation of an unseen input drawn from the target distribution.
Diffusion-based and exemplar-based methods have traditionally been
used in image and time-series completion problems.
Diffusion-based methods~\cite{sohl-dickstein2015deep} use thermal diffusion
equations to propagate information from surrounding regions into missing
regions; they are most effective when the gaps are small, and tend to smooth
out details in larger problems.
Exemplar-based methods~\cite{criminisi2004region} use greedy algorithms
to apply patches of training data to missing regions, which can produce
implausible patterns.
Deep Neural Networks (DNNs) are an especially popular tool to model light
curves in the literature due to their expressiveness and generalization
capabilities.
A DNN is a type of representation learning model --- a machine learning model
that learns an appropriate feature representation of the data in addition to
producing predictions.
DNNs iteratively transform the input space into latent feature spaces using
linear functions that are ``activated'' by element-wise nonlinear functions.
DNNs also happen to be universal function approximators --- it is possible to
approximate any continuous function to an arbitrary level of precision using a
sufficiently large DNN.
The persistent popularity of DNNs derives from several sources, such as the
widespread availability of hardware accelerators like graphical processing
units (GPUs), advanced stochastic optimization techniques that aid in their
training, and the development of user-friendly software libraries such as
Tensorflow~\cite{Abadi_TensorFlow_Large-scale_machine_2015} and
PyTorch~\cite{NEURIPS2019_9015}.
Although DNNs have many positive features, they also have some drawbacks that
are particularly notable in the light curve problem.
First, modern DNN architectures typically consist of a very large number of
parameters that must be trained.
Training such models generally consumes a large amount of computing resources,
as the model and its gradient are evaluated over many iterations in order to
refine parameter values.
In addition to computational expense, training a large model typically requires
a large amount of data.
The literature has observed a roughly linear relationship between model size
and the amount of labelled data required to train it~\cite{tan2018survey}.
This means that learning an accurate light curve model can require a large
number of observations in general.
Second, while DNN models lend themselves to high dimensional feature spaces
they tend to struggle with very small numbers of dimensions.
Feature engineering can often solve this problem, especially in time series
scenarios where some notion of a moving window usually serves as a feature
space.
However, this strategy is most successful when the observation cadence of the
light curve is high.
Upcoming ground-based surveys of deep space, such as the Legacy Survey of Space
and Time (LSST) are expected to incidentally capture many RSO images from which
light curves can be extracted.
However, these light curves will be irregular, sparse, and have low-dimensional
feature spaces, increasing the challenge of applying existing techniques.
Third, effective training typically requires a large volume of training data
that is complete and representative of the target distribution; i.e., a large
model requires a large amount of training data that is i.i.d. according to the
target distribution.
In addition to informing prediction accuracy, data independence and size helps
complex DNN models avoid overfitting and simply memorizing the training data.
However, collections of physical measurements are often limited by the
realities of sensor availability and physical obstruction.
For example, weather affects optical astronomical measurements by introducing
correlated noise or blocking the desired object from view entirely.
Furthermore, it is in general desirable to design a model that can be
alternatively applied to several different RSO inference problems, i.e. several
distinct target distributions.
A transfer learning approach, in which the model is at least partially trained
on data from a possibly different distribution than the target
distribution~\cite{tan2018survey}, could address this need for a large volume
of training data.
Indeed, transfer learning has been applied to satellite aperture radar
images~\cite{rostami2019deep}, radiofrequency interference
characterization~\cite{lefcourt2022space}, and classifying RSOs using light
curves~\cite{furfaro2018space}.
However, transfer learning ultimately relies on features learned from a
potentially unrelated dataset, which may lead to inefficient or inaccurate
conclusions.
Generative models such as Variational Autoencoders (VAEs) and Generative
Adversarial Networks (GANs) are alternative approaches from the machine
learning literature that address data efficiency.
VAEs are autoencoders that impose structure upon their learned encoder and
decoder functions to ensure that the latent representation of the data encodes
the target distribution.
Researchers have successfully applied VAEs to learn shapes from the light
curves of RSOs~\cite{furfaro2019shape}.
GANs, by contrast, simultaneously train a generator that produces samples
meant to mimic the target distribution and a discriminator that distinguishes
between real and synthetic samples.
Practitioners have recently used GANs to classify stars based upon their light
curves~\cite{garcia2022improving}.
However, GANs can be challenging and expensive to train: when inferring a
distribution from a very small training dataset, they can suffer from mode
collapse, non-convergence, or general instability.
This brings us to the fourth point --- DNNs have difficulty quantifying the
uncertainty in their predictions.
It is generally difficult to determine when a DNN is extrapolating ---
predicting the response on data from a different distribution or in a different
region of the training data.
This is problematic in scientific applications because overconfident
predictions can lead to incorrect inference and unnatural results that are
difficult to interpret.
Furthermore, it is important for decision makers to distinguish between low and
high-confidence predictions when drawing conclusions for SDA.
Although uncertainty quantification methods are an active area of research,
most practical models employed by practitioners provide only point predictions,
with no obvious mechanism to measure model confidence.
Recent attempts to provide uncertainty quantification to DNNs attempt to learn
prediction intervals (i.e. error bars) in addition to point predictions, but
this literature is still developing and there is as yet no consensus
solution~\cite{kabir2018neural}.
Others attempt a hybrid approach where a Gaussian process is appended to the
last layer of a DNN~\cite{bradshaw2017adversarial}.
In this manuscript we propose Gaussian Process (GP) models as alternatives for
the light curve modeling problem.
Like DNNs, GPs are representation learning methods that are data-driven
universal function approximators.
A GP is a type of kernel method that implicitly and non-linearly maps input
features into a possibly infinite-dimensional inner product space.
Kernel methods use a parameterized kernel function to cheaply compute inner
products in this implicit space to predict unknown responses.
GPs can also be thought of as the generalization of a multivariate normal
distribution, where all data within a defined domain is assumed to be jointly
normally distributed with the covariance defined by the kernel function.
Therefore, predictions from GP models are probabilistically defined through
conditional probability.
GPs are attractive for scientific applications due to this inherently Bayesian
inference model, which produces explicit uncertainty quantification (Gaussian
distributions) of its predictions.
Furthermore, GPs have been shown to outperform DNNs in data-starved
regimes~\cite{muyskens2022star} and are ideal models for low-dimensional
feature spaces~\cite{muyskens2021muygps}.
However, conventional GPs have very poor scaling in the number of training
observations, which has previously limited their application to the light curve
modeling problem.
Both realizing the predictions from the GP model and evaluating the likelihood
function in training require cubic computation in the number of data points,
and require quadratic memory to store the kernel matrix.
While this practical drawback has tended to limit the application of GPs to
small data problems, scalable approximate GP methods have proliferated in the
literature~\cite{heaton2019case, liu2020gaussian}.
These methods typically trade off accuracy for speed.
However, the approximate GP method MuyGPs has demonstrated superior accuracy
and computational scaling on several
datasets~\cite{muyskens2021muygps, muyskens2022star, buchanan2022gaussian}.
Therefore, in this paper we use the MuyGPs approximate GP estimation
method~\cite{muyskens2021muygps}.
MuyGPs uses nearest neighbors sparsification and a batched leave-one-out
cross-validation objective function to train a GP-like model in nearly linear
time in the number of data observations.
Investigators have successfully applied MuyGPs to cosmology image processing
problems~\cite{goumiri2020star, muyskens2022star, buchanan2022gaussian}.
In this paper, we propose a method of light curve interpolation and prediction
using MuyGPs that gives both predictions and uncertainty quantification of
those predictions.
We use this method in two modes:
\begin{enumerate}
\item Interpolation, the prediction of unobserved magnitudes in the past, and
\item Forecasting, the prediction of unobserved future magnitudes.
\end{enumerate}
Interpolation is useful as a data preprocessing utility for catalogs.
Interpolation allows us to fill in magnitude observations for sparse, irregular
light curves to make them suitable for downstream comparison tasks such as
shape, pose, size, type, or material estimation.
Forecasting is useful to detect deviations from expected future
patterns-of-life that correspond to anomalies.
The posterior variance provided by GPs is especially useful to determine what
constitutes a significant deviation from expected behavior.
In \autoref{sec:method}, we provide background on the light curve data itself,
data processing, as well as the machine learning methods we utilize in our
comparative study.
Then in \autoref{sec:res} we describe several numerical studies that
demonstrate the comparative performance of several choices of our GP method to
that of a DNN and how the uncertainty quantification provided by our method can
be used to detect anomalies.
Finally, in \autoref{sec:conclusion}, we discuss our conclusions, the
importance of our findings, limitations, and future work associated with our
methodology.
\section{Methodology}
\label{sec:method}
\subsection{Light curves definition}
Light curves are time series of the brightness of resident space objects over
long periods.
They are obtained by observing the same object, night after night, for several
days or even years.
One way to visualize them is in two dimensions with one axis representing the
time of day (or solar phase angle) and the other axis representing the days
while the pseudo-color indicates the brightness, as in
\autoref{fig:illustration:light_curve}.
Light curves of RSOs contain potentially detailed information.
For example, one could detect dust accumulating on highly reflective solar
panels and deduce the age and life expectancy of a satellite.
Higher intensity reflections also have the potential to inform about the shape
of the observed object since different facets of a multi-faceted object would
reflect differently.
In addition, satellites can maneuver or rotate, in which case, the reflected
light will deviate from the expected pattern.
Those deviations can be detected and increase SDA knowledge.
Observations can be limited by the time of day, weather, sensor saturation, and
eclipses (see \autoref{fig:illustration:missing-data}).
Data is only available at night, and the measurements are less accurate near
dusk and dawn.
The weather can preclude observations for hours and sometimes days at a time.
When the reflected light from the sun is particularly bright, sensors can
saturate leading to missing data, though this particular case is usually fairly
obvious when looking at the brightness of surrounding available data.
Lastly, periodically, the light from a satellite will be eclipsed by the earth.
To test our code, we use a light curve dataset of 43 satellites provided by
Dave Monet~\cite{monetgrams}.
All satellites in the dataset are from the public catalog
(www.space-track.org).
We selected 13 distinct RSOs, those manually flagged by Dave Monet as
\emph{nominal}, for our analysis.
There are approximately 500,000 data points per object.
All observations were taken from a single camera in Flagstaff AZ between
September 2014 and September 2018.
The dataset contains the brightness (magnitude; band is approximately Johnson
V) as well as the measurement error in brightness (uncertainty).
\subsection{Processing}\label{sec:processing}
To test our prediction capabilities, we select a portion of each light curve as
test data.
We use time periods ranging from a few hours (3 hours) to multiple weeks (2
weeks).
We either select the test data at a random time within the time series
(interpolation) or at the end (extrapolation).
To guarantee that the selected period does not fall at a time lacking
observations (e.g. during the day or during a cloudy night), we reject
intervals that contain fewer than 90\% of the data points we would expect to
find if the data were uniformly distributed.
In addition, we apply the same condition to the preceding interval to ensure
that we are not accidentally interpolating for longer than we would expect.
Since the light curves have some daily and yearly periodicity, we compare
several embeddings of the light curve data into multi-dimensional spaces.
The reference is the 1D model which is just the original time series.
The 2D model has one dimension representing times of day as real numbers in the
$[0, 1)$ interval, and another dimension representing days as integers,
starting at zero on the first day of observation.
The 3D model is like the 2D model but with an additional dimension for years,
as integers between 0 and 4, and with a modified days dimension modulo 365.
Note that all the input dimensions are eventually normalized to the interval
$[0, 1]$ as is customary.
An alternate 2D model with year and time of year as opposed to day and time of
day was considered but dismissed for simplicity and for not being as intuitive
and interesting.
We observed that day-to-day correlation is stronger than year-to-year in our
dataset.
Note that it is more traditional to model this type of data periodicity using a
periodic kernel, but the MuyGPs estimation framework depends on a kernel
sparsity that is ultimately not validated with that kernel type.
Therefore, these embeddings are a novel way to replace such a periodic kernel
while maintaining the computationally efficient framework.
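As a concrete sketch, the three embeddings can be written as follows. This is our own minimal illustration, which assumes observation times arrive as seconds elapsed since the first observation and omits the final rescaling of every dimension to $[0, 1]$:

```python
SECONDS_PER_DAY = 86400

def embed_1d(t):
    # 1D: the raw time series; the timestamp is the only feature.
    return (t,)

def embed_2d(t):
    # 2D: time of day as a real number in [0, 1), plus an integer day
    # index starting at zero on the first day of observation.
    day = int(t // SECONDS_PER_DAY)
    return ((t - day * SECONDS_PER_DAY) / SECONDS_PER_DAY, day)

def embed_3d(t):
    # 3D: like 2D, but the day index is taken modulo 365 and an integer
    # year index is added.  (All dimensions are later rescaled to [0, 1].)
    tod, day = embed_2d(t)
    return (tod, day % 365, day // 365)
```

The 3D variant folds the day index modulo 365 so that the same time of year lands at the same coordinate across years, encoding the yearly periodicity.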
\subsection{Gaussian processes}
We will consider throughout a light curve to be a univariate response
$Y : \mathcal{X} \rightarrow \mathbb{R}$, where
$\mathcal{X} \subseteq \mathbb{R}^d$
is the observation space along $d$ time dimensions.
In preprocessing we de-trend the data, and so $Y$ therefore has zero mean.
In modeling $Y$ with a GP, we assume that it is drawn from a Gaussian process
distribution.
This means that $Y$ follows a multivariate Gaussian distribution at any finite
set of $n$ points $X = (\mathbf{x}_1, \dots, \mathbf{x}_n) \in \mathcal{X}^n$.
However, in reality measurement noise perturbs our observed values of $Y$ at
locations $X$.
We assume that each measurement is perturbed by homoscedastic noise terms that
are i.i.d. $\mathcal{N}(0, \epsilon)$.
That is,
\begin{align} \label{eq:gp_prior}
Y(X)
& = (Y(\mathbf{x}_1), \dots, Y(\mathbf{x}_n))^\top \sim \mathcal{N}
\left ( \widetilde{0}, Q_\theta(X, X) \right ), \\
Q_\theta(X, X)
& = \sigma^2 \left ( K_\theta(X, X) + \epsilon I_n \right ),
\end{align}
where $\mathcal{N}$ is the multivariate Gaussian distribution, $\widetilde{0}$
is the $n$-dimensional zero vector, $\sigma^2$ is a variance scaling term,
$I_n$ is the $n \times n$ identity matrix, and $K_\theta(X, X)$ is an
$n \times n$ positive definite, symmetric covariance matrix between the
elements of $X$ that is controlled non-linearly through kernel function
$K_\theta(\cdot, \cdot)$ with hyperparameters $\theta$.
$Q_\theta(X, X)$ is $K_\theta(X, X)$ perturbed by the measurement noise
prior $\epsilon$ and scaled by the parameter $\sigma^2$.
Similarly, any additional (possibly unobserved) datum
$\mathbf{x}^* \in \mathcal{X}$ is also jointly normal with observed data $X$ by
the GP assumption.
Thus, the conditional distribution for the response of $Y$ at $\mathbf{x}^*$
given responses observed at $X$ and $\theta$ is also multivariate normal with
mean and variance
\begin{align}
\label{predmean}
Y_\theta(\mathbf{x}^* \mid X)
& =
K_\theta(\mathbf{x}^*, X) Q_\theta(X, X)^{-1} Y(X), \text{ and}
\\
\label{predvar}
\text{Var}(Y_\theta(\mathbf{x}^* \mid X))
& =
K_\theta(\mathbf{x}^*, \mathbf{x}^*) - K_\theta(\mathbf{x}^*, X)
Q_\theta(X, X)^{-1} K_\theta(X, \mathbf{x}^*),
\end{align}
where $K_\theta(\mathbf{x}^*, X) = K_\theta(X, \mathbf{x}^*)^\top$ is the
cross-covariance matrix between $\mathbf{x}^*$ and the elements of $X$.
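These two conditional formulas can be exercised directly on a toy 1-D example. The sketch below is a plain conventional-GP implementation in pure Python, not the MuyGPyS library; the RBF kernel, length scale, noise level, and toy data are our illustrative choices, and $\sigma^2$ is applied only as a variance scale:

```python
import math

def rbf(x, xp, ell=0.2):
    # RBF kernel: exp(-||x - x'||^2 / (2 ell^2)).
    return math.exp(-((x - xp) ** 2) / (2.0 * ell ** 2))

def solve(A, rhs):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(X, y, xstar, ell=0.2, eps=1e-5, sigma2=1.0):
    # Posterior mean and variance at xstar conditioned on (X, y),
    # following the conditional mean/variance formulas above.
    n = len(X)
    K = [[rbf(X[i], X[j], ell) + (eps if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    kstar = [rbf(xstar, X[i], ell) for i in range(n)]
    alpha = solve(K, y)      # (K + eps I)^{-1} y
    v = solve(K, kstar)      # (K + eps I)^{-1} k*
    mean = sum(ks * a for ks, a in zip(kstar, alpha))
    var = sigma2 * (rbf(xstar, xstar, ell) - sum(ks * w for ks, w in zip(kstar, v)))
    return mean, var
```

Near the training data the posterior mean tracks the observations and the variance collapses toward the noise level; far from any observation the mean reverts to zero (the de-trended prior mean) and the variance reverts to $\sigma^2$.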
GPs are typically trained by maximizing the log-likelihood of the observations
$Y(X)$ with respect to $\theta$.
This log-likelihood function possesses the following form:
\begin{equation}
\label{ll}
\log(L(\theta, Y(X))) = - \frac{n}{2}\log(2 \pi) - \frac{1}{2}
\log(|Q_\theta(X, X)|) - \frac{1}{2} Y(X)^\top Q_\theta(X, X)^{-1} Y(X).
\end{equation}
However, evaluating \autoref{ll} requires $O(n^3)$ computation and $O(n^2)$
memory, which is intractable for all but relatively small datasets.
A MuyGPs model rewrites equations~\ref{predmean}~and~\ref{predvar} as
\begin{align}
\label{muygpspred}
\widehat{Y}_\theta(\mathbf{x}^* \mid X_{N^*})
& =
K_\theta(\mathbf{x}^*, X_{N^*}) Q_\theta(X_{N^*}, X_{N^*})^{-1} Y(X_{N^*}),
\textrm{ and}
\\
\label{muygpsvar}
\text{Var}(\widehat{Y}_\theta(\mathbf{x}^* \mid X_{N^*}))
& =
K_\theta(\mathbf{x}^*, \mathbf{x}^*) - K_\theta(\mathbf{x}^*, X_{N^*})
Q_\theta(X_{N^*}, X_{N^*})^{-1} K_\theta(X_{N^*}, \mathbf{x}^*),
\end{align}
where $X_{N^*}$ are the nearest neighbors of $\mathbf{x}^*$ in
$X \setminus{\{\mathbf{x}^*\}}$.
MuyGPs trains $\theta$ in terms of some loss function $\ell(\cdot, \cdot)$ over
a sampled batch $B = (\mathbf{x}_1^*, \dots, \mathbf{x}_b^*) \subseteq X$ of
size $b$ by minimizing an objective function $Q(\theta)$.
Minimizing $Q(\theta)$ allows us to train $\theta$ without evaluating the
expensive determinant in the GP likelihood.
When we build $Q(\theta)$ from leave-one-out cross-validation and set the loss
$\ell(\cdot, \cdot)$ to mean squared error, training $\theta$ reduces to
minimizing the function
\begin{equation} \label{eq:batch_loss}
Q(\theta)
=
\frac{1}{b} \sum_{i \in B} \left (
Y(\mathbf{x}_i^*) - \widehat{Y}_\theta(\mathbf{x}_i^* \vert X_{N_i^*})
\right )^2.
\end{equation}
Note other loss functions can be employed in this framework, but this mean
squared error function has been demonstrated as
performant~\cite{muyskens2021muygps}.
We use Bayesian optimization to optimize \autoref{eq:batch_loss} to train
$\theta$ in our experiments.
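The resulting training loop can be sketched as follows. This toy version (pure Python, not the MuyGPyS implementation; the data, neighbor count, and candidate length scales are our own choices) predicts each batch point from its nearest neighbors and evaluates the leave-one-out mean squared error, with a grid search standing in for the Bayesian optimization used in our experiments:

```python
import math

def rbf(a, b, ell):
    # RBF kernel with length scale ell.
    return math.exp(-((a - b) ** 2) / (2.0 * ell ** 2))

def solve(A, rhs):
    # Gaussian elimination with partial pivoting (small local systems).
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def local_predict(X, y, i, ell, nn=5, eps=1e-5):
    # Predict y[i] from the nn nearest neighbors of X[i] (excluding i),
    # i.e. the nearest-neighbor-sparsified conditional mean.
    idx = sorted((j for j in range(len(X)) if j != i),
                 key=lambda j: abs(X[j] - X[i]))[:nn]
    K = [[rbf(X[a], X[b], ell) + (eps if a == b else 0.0) for b in idx]
         for a in idx]
    kstar = [rbf(X[i], X[j], ell) for j in idx]
    alpha = solve(K, [y[j] for j in idx])
    return sum(ks * a for ks, a in zip(kstar, alpha))

def loo_mse(X, y, batch, ell):
    # Batched leave-one-out objective Q(theta) with squared-error loss.
    return sum((y[i] - local_predict(X, y, i, ell)) ** 2 for i in batch) / len(batch)

def train_ell(X, y, batch, grid):
    # Hyperparameter training reduces to minimizing loo_mse over ell.
    return min(grid, key=lambda ell: loo_mse(X, y, batch, ell))
```

Each prediction touches only an `nn`-point system, so the cost per batch point is $O(\mathrm{nn}^3)$ rather than $O(n^3)$, which is the source of the near-linear overall scaling.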
\subsection{A standard neural network model for benchmarking}
Neural networks have achieved state-of-the-art results on common benchmark
problems and are now ubiquitous in scientific applications.
Although the focal point of this work is the application of Gaussian processes
to light curve modeling, we benchmark the predictive performance of MuyGPs
against a standard deep neural network model to ensure its predictive accuracy.
We select a fully-connected architecture, as the structure of the feature space
is not suited to convolutional models.
Moreover, recurrent neural networks, including LSTMs, are another popular class
of models that are useful in forecasting settings, but not interpolation
problems.
Fully-connected neural networks can be represented as the composition of a
series of affine transformations and nonlinear activation functions.
The composition of an individual affine transformation and nonlinear activation
function constitutes a layer of the fully-connected network.
Each affine transformation in the network is parameterized by weights
$\mathbf{W}_i \in \mathbb{R}^{n_i \times n_{i+1}}$, where $n_i$ is the
dimension of the input to the $i^{th}$ layer and $n_{i+1}$ is the output
dimension, and a bias vector $\mathbf{b}_i \in \mathbb{R}^{n_{i+1}}$.
We label the activation function in the $i^{th}$ layer $\sigma_i$.
Let $\mathcal{F}_{NN}(\mathbf{x}) : \mathbb{R}^{n_{in}} \rightarrow
\mathbb{R}^{n_{out}}$ be a neural network.
Then,
\begin{equation} \label{eq:FCNN}
\mathcal{F}_{NN}(\mathbf{x})
=
\sigma_n \left(
\mathbf{W}_n \sigma_{n-1} \left(
\mathbf{W}_{n-1} \sigma_{n-2} \left(
\mathbf{W}_{n-2} (\cdots) +\mathbf{b}_{n-2}
\right) + \mathbf{b}_{n-1}
\right) + \mathbf{b}_n
\right).
\end{equation}
We train a fully-connected neural network using the day of the year and time of
day as the independent variables (features), with the target set to the
magnitude (normalized to take on values between 0 and 1 by dividing by the
largest magnitude observed).
The model architecture features ReLU activation functions and 5 layers of sizes
$2 \times 200$, $200 \times 200$, $200 \times 100$, $100 \times 20$, and $20
\times 1$.
There are $62,420$ total parameters in the model, roughly $1/8$ to $1/5$ of
the number of training samples depending on the test case.
We train the neural network in batches of size 128 using the Adam optimizer for
100 epochs.
We use the mean-squared error loss function with 20 percent of the training
data withheld for validation, the training rate set to $10^{-3}$, and a
learning rate exponential decay factor of 0.5 applied every 30 training epochs.
We set the ratio of the number of parameters in the model to the number of data
points on the order of $1/10$, a commonly accepted ratio in the deep learning
community.
We selected these training hyperparameters based on common values used in deep
learning to achieve the best combination of training loss decay and validation
error minimization.
\section{Results}
\label{sec:res}
First, we compare the predictive power of Gaussian processes for the different
embeddings of the input data described in \autoref{sec:processing}.
For each of the 13 light curves in our dataset, we randomly select a total of
20 time intervals, 5 for each time duration in \{3 hours, 1 day, 1 week, 2
weeks\}, and use those intervals as test data while using the rest as training
data.
The accuracy of the prediction is measured as the Root Mean Square Error (RMSE)
divided by the extent (max - min) of the magnitude in the test data.
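For clarity, this metric is simply the following (a small helper of ours):

```python
import math

def relative_rmse(y_true, y_pred):
    # RMSE of the predictions, normalized by the extent (max - min)
    # of the magnitudes in the test data.
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    return math.sqrt(mse) / (max(y_true) - min(y_true))
```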
We use the common radial basis function (or RBF) to define our kernel in all of
our experiments.
This means that we use the kernel function
\begin{equation} \label{eq:rbf}
K_\ell(x, x^\prime)
= \exp \left (
-\frac{\|x - x^\prime\|_2^2}{2\ell^2}
\right ).
\end{equation}
The RBF kernel has length scale parameter $\ell$, and our model additionally
has the measurement noise variance prior $\epsilon$ and variance scaling
parameter $\sigma^2$.
We fix $\epsilon = 10^{-5}$ in our experiments.
We train $\ell$ by optimizing \autoref{eq:batch_loss} via Bayesian
optimization.
However, no prediction-based objective function is sensitive to $\sigma^2$,
and so we optimize it according to the analytic procedure described in Section
2 of \citet{muyskens2021muygps}.
\autoref{fig:results:embedings} shows the distribution of the resulting
comparison of the predictions to the truth for each embedding.
With a median relative RMSE of 0.066, the 2D embedding yields predictions more
than 3 times more accurate than the original 1D embedding (median: 0.24).
The better prediction is possible because the distance between similar data
points across days is reduced, thanks to the extra dimension, allowing the GP
to more readily use them during interpolation.
However, we see that adding a third dimension encoding the yearly periodicity
does not yield significant improvement over the 2D embedding, since the
results are quasi-identical (median: 0.065).
Although the distributions of the 2D and 3D embeddings are similar, from this
point on we focus on the 2D embedding and use it exclusively.
Part of the motivation for being able to predict data in light curves is to
complete gaps and missing data points.
A first benchmark is to be able to complete randomly selected data points
throughout a light curve.
We randomly select either 5\%, 10\%, or 20\% of the total number of data
points available in each of the 13 light curves in the dataset, 5 times for
each proportion.
Then, for more realistic interpolation, we select consecutive data points
representing gaps of either 3 hours, 1 day, 1 week, or 2 weeks, with 5
different random starting instants for each of the 13 light curves.
Lastly we repeat the same procedure but by selecting gaps at the end of each
light curve to evaluate how GPs perform for extrapolation.
\autoref{fig:results:interpolation_extrapolation} shows the performance of GP
predictions for all of these interpolation and extrapolation tasks.
For random interpolation, for all percentages (5\%, 10\%, or 20\%), the
performance is similar and very good with a median relative RMSE of about 0.03.
For gaps, either interpolation or extrapolation, there is more variation, both
for a given gap duration, and across durations.
To compare the performance of GPs to that of DNNs, we ran the same task of
predicting one day of missing data in a single 4-year-long light curve, with
training data drawn exclusively from that light curve, on the same machine.
As shown in \autoref{fig:results:nn_comparison}, GPs achieve better accuracy in
only a fraction of the time.
\autoref{fig:results:prediction} illustrates a single example of the quality of
the GP prediction
compared to raw measurements for a day of missing data using the 3D embedding.
The mean prediction follows the trend of magnitudes well as expected.
As is typical in GP models, our kernel model is assumed to be stationary and
homoscedastic, meaning that we learn a single set of hyperparameters that best
models all the training data (across all magnitudes).
In this interval pictured, we see that the data at high magnitudes seem to have
more variance than the data collected at lower magnitudes.
Our model is agnostic to any change in variance in the data itself, but will on
average provide desired uncertainty quantification.
Note that this time gap is a single interval whose magnitudes, and therefore
variance regime, are themselves time-correlated.
Therefore, the coverage of the 95\% interval from this one sample can be
different than the desired overall level, but when we consider many independent
time intervals of this type, the uncertainties will average to the desired 95\%
confidence level.
In future work more flexible and complex GP kernel functions could be designed
with inhomogeneous or non-stationary kernels to correct for this potential
pattern in magnitude to improve uncertainties in each individual sample region.
If the magnitude distribution in our prediction interval differs from that of
our training data then there is a model discrepancy in our approach that could
explain our observed variances.
But the true power of GPs comes from their ability to predict a full posterior
distribution and not just a mean.
The predicted covariance can be used to detect areas where the prediction
differs from the measurement by a significant amount.
One application of particular interest for SDA is the
detection of anomalies --- potential maneuvers or state changes --- in light
curve data.
\autoref{fig:results:anomalies} shows two examples of such anomalies.
In both cases, two factors make the anomalies noticeable.
Firstly, the prediction differs from the measurement for a small but
significant period of time, and the difference exceeds two times the predicted
standard deviation.
And secondly, the measurements from the surrounding days look similar to those
of the focal day --- and to the prediction --- except for that particular
period.
The combination of those two factors indicates that it is indeed the
measurements that are anomalous and not the predictions.
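The two-standard-deviation criterion can be sketched as a simple filter over the posterior (the threshold and names are our illustrative choices):

```python
import math

def flag_anomalies(times, measured, post_mean, post_var, k=2.0):
    # Flag observation times where the measurement deviates from the
    # posterior mean by more than k posterior standard deviations
    # (k = 2 approximates a 95% interval).
    return [t for t, y, m, v in zip(times, measured, post_mean, post_var)
            if abs(y - m) > k * math.sqrt(v)]
```

In practice one would additionally require the deviation to persist over a short but significant period, as described above, to rule out isolated noisy measurements.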
\section{Conclusion}
\label{sec:conclusion}
We have shown that Gaussian processes (GPs) using MuyGPs are a capable and
pertinent tool for analyzing light curve data.
Unlike other machine learning methods that can have millions of parameters,
Gaussian processes are simple in that they do not have a large architecture
search and in that they only have a few parameters to be estimated.
Unlike traditional GP estimation methods, the implementation of the MuyGPs
methodology allows astronomers to apply GP methods to very large data with very
little computational or memory burden.
In addition to their improved accessibility, we have shown that in our designed
experiments on light curves, Gaussian processes are both more accurate and
faster than an example neural network.
Further, the uncertainty quantification we get from Gaussian processes allows
us to identify statistically distinct deviations from the pattern-of-life.
As seen above, while Gaussian processes are able to ingest raw uni-dimensional
time series, encoding the periodicity of the light curve data through
additional dimensions yields much better predictions when we predict large gaps
of observations.
This is likely due to the distinct periodic trends in the light curves.
However, we observed that encoding the daily periodicity was sufficient and
that encoding the yearly periodicity did not further improve prediction.
A first explanation is that surrounding days are typically more similar than
surrounding years, so the majority of ``nearest neighbors" used in the GP model
would naturally be selected from surrounding days rather than from surrounding
years.
We have also shown how Gaussian processes compare advantageously to neural
networks both in terms of computing time during training and prediction
accuracy.
Note that our metric for computing time combines both training and prediction
time
from the two models.
This metric masks a tradeoff: a trained neural network can make additional
predictions efficiently, whereas training a Gaussian process using MuyGPs is
very fast but its prediction time given a trained model is comparatively
slower.
\citet{fastmuygps} demonstrate a way to improve the prediction time
of a similar kernel interpolation with an additional approximation.
However, in the online scenario of the light curve problem where one may want
to perform this analysis in near realtime, one would both retrain and predict
simultaneously so the GP method is significantly preferred.
Both methods can be parallelized to take advantage of High Performance
Computing (HPC).
For DNNs, parallelization techniques are readily available, for instance in
PyTorch, with the advent of GPUs and TPUs.
For MuyGPs, parallelization efforts are in progress and will be released in a
different publication.
We ran all of the experiments in this manuscript on a single core of a
commodity laptop using the software library MuyGPyS~\cite{muygpys2021github}.
Future applications will use MPI~\cite{dalcin2011parallel} and
JAX~\cite{jax2018github} to scale model training and evaluation to multiple
compute nodes using many CPUs and GPUs in parallel.
These scalability features will enable applications that scale to the
observation sizes of LSST, which are anticipated to involve hundreds of
millions of space objects.
Possible future directions of research involve training a GP on one set of
light curves and using the trained parameters to make predictions for a
different set of unseen light curves.
This will determine whether it is necessary to fit parameters to each light
curve.
Finally, since our method gives fully interpolated light curves, future work
could use these de-noised full time series for further SDA studies.
In summary, we believe our GP method is a valuable method for light curve
interpolation
and future prediction for three scientific tasks of particular interest.
First, by de-noising and interpolating missing data, GPs can be part of a
pre-processing pipeline before feeding the data to further algorithms or
software that cannot work with raw data.
As shown in figures~\ref{fig:illustration:light_curve}
and~\ref{fig:illustration:missing-data}, raw light curves can have a lot of
gaps frequently spanning multiple days so being able to interpolate those gaps
is crucial.
Second, by being able to extrapolate data over several hours, days, or even
weeks, GPs can be used to predict the expected behavior of RSOs, which can
serve in forecasting and collision avoidance in SDA.
Last but not least, by quantifying the uncertainty of predictions, GPs allow
for automatic change detection.
Indeed, being able to predict a full posterior distribution enables discerning
anomalies in measurements from anomalies in prediction, since uncertain
predictions can easily lead to false positives.
In future research, it should be possible to use tracking data and/or
documented known maneuvers to fine-tune and benchmark our maneuver detection
capabilities.
\section*{Acknowledgments}
This work was performed under the auspices of the U.S.
Department of Energy by
Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 with IM
release number LLNL-PROC-839253.
Funding for this work was provided by LLNL Laboratory Directed Research and
Development grant 22-ERD-028.
This document was prepared as an account of work sponsored by an agency of the
United States government.
Neither the United States government nor Lawrence
Livermore National Security, LLC, nor any of their employees makes any
warranty, expressed or implied, or assumes any legal liability or
responsibility for the accuracy, completeness, or usefulness of any
information, apparatus, product, or process disclosed, or represents that its
use would not infringe privately owned rights.
Reference herein to any specific
commercial product, process, or service by trade name, trademark, manufacturer,
or otherwise does not necessarily constitute or imply its endorsement,
recommendation, or favoring by the United States government or Lawrence
Livermore National Security, LLC.
The views and opinions of authors expressed
herein do not necessarily state or reflect those of the United States
government or Lawrence Livermore National Security, LLC, and shall not be used
for advertising or product endorsement purposes.
\bibliographystyle{plainnat}
\bibliography{AMOS_2022}
Title:
On the Masses, Age, and Architecture of the VHS J1256-1257AB b System
Abstract: VHS J1256-1257AB is an ultracool dwarf binary that hosts a wide-separation
planetary-mass companion that is a key target of the JWST Exoplanet Early
Release Science program. Using Keck adaptive optics imaging and aperture
masking interferometry, we have determined the host binary's orbit
($a=1.96\pm0.03$ au, $P=7.31\pm0.02$ yr, $e=0.8826^{+0.0025}_{-0.0024}$) and
measured its dynamical total mass ($0.141\pm0.008$ $M_{\odot}$). This total
mass is consistent with VHS J1256-1257AB being a brown dwarf binary or pair of
very low-mass stars. In addition, we measured the orbital motion of VHS
J1256-1257 b with respect to the barycenter of VHS J1256-1257AB, finding that
the wide companion's orbit is also eccentric ($e=0.73^{+0.09}_{-0.10}$), with a
mutual inclination of $116^{\circ}\pm16^{\circ}$ with respect to the central
binary. This orbital architecture is consistent with VHS J1256-1257 b attaining
a significant mutual inclination through dynamical scattering and thereafter
driving Kozai-Lidov cycles to pump the eccentricity of VHS J1256-1257AB. We
derive a cooling age of $140\pm20$ Myr for VHS J1256-1257AB from low-mass
stellar/substellar evolutionary models. At this age, the luminosity of VHS
J1256-1257 b is consistent with both deuterium-inert and deuterium-fusing
evolutionary tracks. We thus find a bimodal probability distribution for the
mass of VHS J1256-1257 b, either $11.8\pm0.2$ $M_{\rm Jup}$ or $16\pm1$ $M_{\rm
Jup}$, from Saumon & Marley (2008) hybrid models. Future spectroscopic data to
measure isotopologues such as HDO and CH$_3$D could break this degeneracy and
provide a strong test of substellar models at the deuterium-fusion mass
boundary.
PDF: https://export.arxiv.org/pdf/2208.08448
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
astrometry -- binaries: visual -- brown dwarfs -- exoplanets -- planetary systems
\end{keywords}
\section{Introduction}
Two fundamental parameters govern the bulk properties of gas-giant planets and brown dwarfs: mass and age. Mass is difficult to measure directly for imaged planets because of their long orbital periods, though there has been progress for a few planets inside 20\,au \citep{2018NatAs.tmp..114S, 2019ApJ...871L...4D, 2022MNRAS.509.4411D, Brandt_2021_beta_Pic_bc, 2021ApJ...915L..16B}. Ages for imaged planets have been largely reliant on an object belonging to a well-studied young association because of the limitations of determining precise ages for field stars. One notable exception is the Y-dwarf WD~0806-661~b \citep{2011ApJ...730L...9L} whose age $1.5^{+0.5}_{-0.3}$\,Gyr is determined by the cooling time of its white-dwarf host.
There are a handful of gas-giant companions with the potential for precise age dating using cooling ages from their low-mass hosts. Such ages are similar to white-dwarf cooling ages but without the need to estimate a stellar progenitor's lifetime. Because both stars and brown dwarfs begin with an initial entropy that is related to their mass, the age of such a low-mass object can be determined by measuring its mass and present-day luminosity \citep[e.g.,][]{2008ApJ...689..436L,2009IAUS..258..317B}. Gas giants in systems where the host stars are binary brown dwarfs or pre-main--sequence stars are amenable to such cooling-age measurements. \vhslong\ (hereinafter \vhs~AB) is one of the few such host binaries.
\citet{2015ApJ...804...96G} used the Visible and Infrared Survey Telescope for Astronomy (VISTA) Hemisphere Survey (VHS) to discover that VHS~J125601.58$-$125730.3 (hereinafter \vhs~b) is a common-proper motion companion to \vhs~AB, at a projected separation of $8\farcs06$. They derived spectral types and gravity classifications for the host and companion of M$7.5\pm0.5$~\textsc{int-g} and L$7.0\pm1.5$~\textsc{vl-g}, respectively. They measured a parallax of $78.8\pm6.4$\,mas, which placed the companion in the same location as HR~8799~b on the color-magnitude diagram. The parallax has since been updated, first by the Hawai`i Infrared Parallax Program \citep[$45.0\pm2.4$\,mas;][]{2020RNAAS...4...54D} and most recently by Gaia~EDR3 \citep[$47.27\pm0.47$\,mas = $21.14\pm0.22$\,pc;][]{2016A&A...595A...1G,2021A&A...649A...1G}, making the system more distant than originally thought. The companion is no longer a direct analog to HR~8799~b, but given the primary's age \citep[150--300~Myr;][]{2020RNAAS...4...54D}, it is still potentially planetary mass. Its cool temperature and wide separation make \vhs~b appealing for direct imaging studies, including being the primary spectroscopy target for {\sl JWST}'s Exoplanet Early Release Science program \citep{2022arXiv220512972H}.
Adaptive optics (AO) imaging revealed that the host is a binary \citep{2016ApJ...818L..12S, 2016ApJ...830..114R}, making \vhs\ a rare triple system potentially composed entirely of substellar objects. The inner binary's projected separation at discovery was $2.62\pm0.03$\,au (using the latest distance), which would correspond to a $\sim$10-year orbital period. The likelihood of obtaining dynamical masses on such a relatively short time scale motivated us to begin an orbit monitoring campaign, as we have done for other substellar binaries \citep[e.g.,][]{2017ApJS..231...15D}. We present here the dynamical masses for the binary components and a corresponding cooling age that suggests the directly imaged companion \vhs~b may be below the deuterium-fusing mass limit.
\begin{table}
\centering
\caption[]{Keck/NIRC2 Relative Astrometry of \vhs~AB.} \label{tbl:relast}
\begin{tabular}{lccrr}
\hline
Epoch & Sep (mas) & PA (\degree) & Corr & $\Delta{K_{\rm MKO}}$ (mag) \\
\hline
2016.059 & $128.21\pm 0.14$ & $168.07\pm0.05$ & $ 0.01$ & $ 0.021 \pm0.006$ \\
2017.050 & $139.7 \pm 2.8 $ & $163.5 \pm1.9 $ & $-0.20$ & $-0.06 \pm0.16 $\phn \\
2017.220 & $140.5 \pm 0.7 $ & $162.9 \pm0.4 $ & $ 0.35$ & $ 0.031 \pm0.023$ \\
2018.017 & $135.84\pm 0.29$ & $158.37\pm0.15$ & $ 0.83$ & $ 0.026 \pm0.026$ \\
2019.259 & $109.6 \pm 0.4 $ & $150.36\pm0.11$ & $ 0.32$ & $ 0.032 \pm0.028$ \\
2021.018 & \phn$ 35.02\pm 0.26$ & $112.5 \pm0.6 $ & $ 0.00$ & $ 0.033 \pm0.005$ \\
2022.066 & \phn$ 70.0 \pm 1.8 $ & $181.0 \pm1.7 $ & $ 0.64$ & $ 0.19 \pm0.04 $\phn \\
2022.271 & \phn$ 85.6 \pm 0.7 $ & $177.2 \pm0.4 $ & $-0.52$ & $ 0.25 \pm0.05 $\phn \\
\hline
\end{tabular}
\end{table}
\section{Observations} \label{sec:obs}
We obtained astrometry for the \vhs\ system from the Keck~II Telescope using the facility AO system. We began monitoring on 2016~Jan~22~UT using the Maunakea Observatories $K$-band filter \citep{2002PASP..114..180T} and NIRC2's narrow-camera mode, which has a pixel scale of $9.971\pm0.004$\,mas\,\perpix \citep{2016PASP..128i5004S} and field-of-view of $10\farcm2\times10\farcm2$. Most of our measurements were made using the standard laser guide star (LGS) AO system \citep{2006PASP..118..297W}, which uses a Shack-Hartmann wavefront sensor to measure the LGS and a separate red-optical sensor observing \vhs~AB itself as a tip-tilt reference. At one epoch, 2022~Jan~24~UT, we instead used the infrared pyramid wavefront sensor \citep{2020JATIS...6c9003B}, with \vhs~AB providing natural guide star AO correction.
Our first imaging from 2016 was obtained less than a year after the discovery imaging using MagAO/Clio2 and NIRC2 from \citet{2016ApJ...818L..12S}, and it was consistent with their measurement of increasing projected separation. In the following, we use the earliest data that comes from MagAO along with our own NIRC2 data. By 2018 the projected separation began decreasing, and eventually, the binary was unresolved on 2021~Jan~5~UT. We obtained aperture masking interferometry data the following night, using the 9-hole mask \citep{Ireland:2008yq}, and successfully resolved \vhs~AB at 35\,mas separation. In addition, starting with our first observation in 2016, we regularly obtained our imaging in a way that captured both \vhs~AB as well as \vhs~b. This allowed us to measure astrometry for \vhs~AB and the orbital motion of \vhs~b.
Our methodology for reducing NIRC2 imaging and masking data is described extensively in our previous work \citep[e.g.,][]{2017ApJS..231...15D}. Briefly, we perform standard calibrations (dark subtraction and flat-fielding with dome flats) and then measure the separation, position angle (PA), and flux ratio in individual images. This is done using StarFinder \citep{2000A&AS..147..335D} when possible, but when this fails at the closest separations we use an analytical, multi-component Gaussian PSF model optimized using the Levenberg-Marquardt algorithm implemented in IDL by the \textsc{mpfit} routine \citep{2009ASPC..411..251M}. We also tested a Moffat PSF model because \citet{2012CardosoC} and \citet{2022AJ....163..288C} showed it is the optimal profile for NACO data, but it did not significantly alter our results. We correct our measured pixel positions for NIRC2's distortion using the \citet{2016PASP..128i5004S} astrometric calibration, which also provides the pixel scale and orientation of the images. Final measurements at an epoch are the mean and standard deviation of values from individual images. For masking data, we obtain binary parameters by fitting the closure phases using the Sydney pipeline \citep{Ireland:2008yq}.\footnote{\url{https://github.com/mikeireland/idlnrm}}
Table~\ref{tbl:relast} presents our relative astrometry for \vhs~AB, including the linear Pearson correlation coefficient (Corr) for separation and PA. Table~\ref{tbl:absast} presents astrometry of \vhs~b we derived from imaging epochs where \vhs~AB was sufficiently well resolved for StarFinder analysis and that conformed to a standard configuration with the NIRC2 $y$-axis ${\rm PA}\approx0$\degree\ and \vhs~AB at NIRC2 $(x,y) \approx (250,800)$\,pix. By keeping all three components in approximately the same location on NIRC2, the astrometry should be minimally impacted by the $\approx$1\,mas uncertainty in the distortion solution. We follow convention in referring to relative declination as $\Delta\delta$ and right ascension as $\Delta\alpha^* \equiv \Delta\alpha\cos\delta$.
\begin{table}
\centering
\caption[]{Keck/NIRC2 Relative Astrometry of \vhs~b.} \label{tbl:absast}
\begin{tabular}{lcc}
\hline
Epoch & $\Delta{\alpha^*}_{\rm b-A}$ (mas) & $\Delta{\delta}_{\rm b-A}$ (mas) \\
\hline
2016.059 & $-4974.0\pm2.8$ & $-6409.8\pm2.3$ \\
2017.220 & $-4977.6\pm2.2$ & $-6412.8\pm2.1$ \\
2018.017 & $-4982.9\pm2.5$ & $-6406.5\pm2.6$ \\
2022.271 & $-5049.0\pm1.3$ & $-6387.5\pm1.1$ \\
\hline
\end{tabular}
\begin{list}{}{}
\item[Note.]-- Astrometry relative to \vhs~A not the barycenter of AB. As described in Section~\ref{sec:orbit}, we determine the position of \vhs~b relative to the \vhs~AB barycenter to be $\Delta(\alpha^*,\delta)_{\rm b-AB} = (-5025.6\pm2.0,-6350\pm9)$\,mas at the mean epoch 2019.853.
\end{list}
\end{table}
Our multi-epoch NIRC2 data precisely constrain the $K$-band flux ratio of \vhs~AB. We used only StarFinder and masking results, as these should be less prone to systematic errors (epochs 2017.05, 2018.02, 2019.26, and 2021.02). The flux ratios are in excellent agreement, with $\chi^2=0.44$ and 3 degrees of freedom (dof), so we adopt the weighted average $\Delta{K_{\rm MKO}} = 0.033\pm0.004$\,mag.
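As a consistency check, the weighted average and $\chi^2$ quoted above can be reproduced directly from the Table~\ref{tbl:relast} flux ratios (a minimal sketch; small rounding differences from the published $0.033\pm0.004$\,mag are expected, since the tabulated inputs are themselves rounded):

```python
import numpy as np

# Delta-K flux ratios (mag) from the StarFinder and masking epochs in
# Table 1: 2017.050, 2017.220, 2018.017, 2021.018.
dk = np.array([-0.06, 0.031, 0.026, 0.033])
err = np.array([0.16, 0.023, 0.026, 0.005])

w = 1.0 / err**2
mean = np.sum(w * dk) / np.sum(w)      # inverse-variance weighted average
mean_err = 1.0 / np.sqrt(np.sum(w))
chi2 = np.sum(w * (dk - mean)**2)      # agreement check, 3 dof

print(f"dK = {mean:.3f} +/- {mean_err:.3f} mag, chi2 = {chi2:.2f} (3 dof)")
```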
The integrated-light spectrum of \vhs~AB was obtained with IRTF/SpeX in prism mode on 2016~Feb~19~UT as part of NASA IRTF program 2016A079 (PI: Bardalez Gagliuffi). The target was observed at airmass 1.19 with the 0$\farcs$5 slit and $6\times60$\,s exposures. The A0 star HD~112304 was observed immediately after the target for flux calibration and telluric correction. Internal flat fields and argon arc lamps followed the standard observations for wavelength calibration. All data were reduced with the IDL package SpeXtool v4.1. Further details on the observations and instrument settings can be found in \citet{2014ApJ...794..143B} and \citet{2010ApJ...710.1142B}.
\section{Orbit analysis} \label{sec:orbit}
For the three-body system of \vhs~AB~b, we separate our orbital analysis into the relative orbit of the host binary ($\sim$2\,au) and the orbital motion of the wide companion ($\sim$170\,au) relative to the AB barycenter. Dynamical interactions are negligible for such a wide, low mass-ratio ($M_{\rm comp}/M_{\rm host} < 0.1$) system.
To fit the relative orbit of \vhs~AB we used {\sc orvara} \citep[v1.0.4;][]{2021AJ....162..186B}. {\sc orvara} utilizes a novel, highly-efficient eccentric anomaly solver and determines posteriors of orbital parameters using the affine-invariant \citep{2010CAMCS...5...65G} Markov-Chain Monte Carlo (MCMC) sampler {\sc emcee} \citep{2013PASP..125..306F} with parallel-tempering \citep{Vousden_2016_PT}. We provide our {\sc orvara} configuration files as supplementary data here, but briefly, we fitted all eight standard parameters for a relative astrometric fit with their default priors (linear-flat in eccentricity $e$ and viewing angles, except inclination $p(i)\propto\sin{i}$, and log-flat in mass and semimajor axis $a$). Relative astrometry only constrains the total mass, $\Mtot \equiv M_{\rm A}+M_{\rm B}$, so we placed no limiting priors on the component masses. Thus, $M_{\rm A}$ and $M_{\rm B}$ varied freely in the {\sc orvara} MCMC analysis, but were always constrained implicitly to follow a consistent \Mtot. Our results are based on a run with 100 walkers and $10^5$ steps for the MCMC and 5 temperatures for parallel tempering. We thinned our chains, retaining every 50th step, and discarded the first 50\% as burn-in, yielding $10^5$ final samples in our posterior.
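The chain bookkeeping above works out as follows (a schematic of the thin-then-burn step, not {\sc orvara}'s internal code): thinning $10^5$ steps by a factor of 50 and discarding the first half leaves $10^3$ steps per walker, i.e.\ $10^5$ samples across 100 walkers.

```python
import numpy as np

def postprocess(chain, thin=50, burn_frac=0.5):
    """Thin an MCMC chain of shape (n_steps, n_walkers, n_params),
    discard the leading burn-in fraction, and flatten to samples."""
    thinned = chain[::thin]
    n_burn = int(burn_frac * thinned.shape[0])
    return thinned[n_burn:].reshape(-1, chain.shape[-1])

# Paper's bookkeeping: 1e5 steps, 100 walkers -> 1e5 posterior samples.
n_steps, n_walkers = 100_000, 100
assert (n_steps // 50) // 2 * n_walkers == 100_000

# Small demonstration chain (zeros; only the shapes matter here).
print(postprocess(np.zeros((1000, 10, 2))).shape)  # (100, 2)
```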
\begin{table}
\setlength{\tabcolsep}{3pt}
\centering
\caption[]{\vhs~AB orbital parameters derived from {\sc orvara}.} \label{tbl:orbit}
\begin{tabular}{lcc}
\hline
Property & Median $\pm$1$\sigma$ & 95.4\% c.i. \\
\hline
Total mass, \Mtot\ (\Msun) & $0.141\pm0.008$ & 0.125, 0.157 \\[3pt]
Semimajor axis, $a$ (au) & $1.96\pm0.03$ & 1.89, 2.03 \\[3pt]
Eccentricity, $e$ & $0.8826_{-0.0024}^{+0.0025}$& 0.8776, 0.8875 \\[3pt]
Inclination, $i$ (\degree) & $118.7\pm1.0$\phn\phn & 116.7, 120.7 \\[3pt]
PA of ascending node, $\Omega$ (\degree) & $4.4\pm0.5$ & 3.5, 5.3 \\[3pt]
Argument of periastron, $\omega$ (\degree) & $44.9\pm1.0$\phn & 42.8, 46.9 \\[3pt]
Mean longitude at $t_{\rm ref}$, $\lambda_{\rm ref}$ (\degree) & $-163.5\pm1.6$\phs\phn\phn &$-$166.7, $-$160.1\\
\hline
Period, $P$ (yr) & $7.307_{-0.024}^{+0.023}$ & 7.262, 7.357 \\[3pt]
Time of periastron, $t_p$ (yr) & $2021.537_{-0.014}^{+0.015}$\phn\phn\phn&2021.507, 2021.566\\[3pt]
$\tau \equiv (t_p-t_{\rm ref})/P$ & $0.579\pm0.007$ & 0.565, 0.592 \\
\hline
\end{tabular}
\begin{list}{}{}
\item[*] Reference epoch $t_{\rm ref} = 2010.0$ (55197\,MJD).
\item[Note.] Free parameters in the MCMC are shown in the top section. These were used to compute the parameters in the bottom section. All posterior distributions are nearly Gaussian.
\end{list}
\end{table}
Our measured total mass of $0.141\pm0.008$\,\Msun\ suggests that the components of \vhs~AB are possibly brown dwarfs with masses of $74\pm4$\,\Mjup\ if their mass ratio is near unity. Their orbital eccentricity of $0.8826^{+0.0025}_{-0.0024}$ is the highest ever measured for a very low-mass binary \citep[e.g., see][]{2011ApJ...733..122D,2017ApJS..231...15D}.
Independent of the orbit analysis of \vhs~AB, we measured the orbital motion of \vhs~b relative to its host's barycenter (denoted b--AB). As mentioned in Section~\ref{sec:obs}, at some epochs we obtained imaging of all three objects in individual NIRC2 images. The relative positions of A and B are typically measured $\sim$10$\times$ more precisely than the position of either A or B relative to the companion b. This allowed us to treat the errors in \vhs~AB relative astrometry as negligible compared to those of \vhs~b. Under this assumption, the position of \vhs~b relative to \vhs~A can be written as
\begin{equation}
\hfill
\Delta{\alpha^*}_{\rm b-A} = \Delta{\alpha^*}_{\rm b-AB} + (\mu_{\alpha^*, {\rm b-AB}} \times t) - [(M_{\rm A}/\Mtot) \times \Delta{\alpha^*}_{\rm A-B}]
\hfill
\end{equation}
\vskip -0.25in
\begin{equation}
~\Delta{\delta}_{\rm b-A} = \Delta{\delta}_{\rm b-AB} + (\mu_{\delta, {\rm b-AB}} \times t) - [(M_{\rm A}/\Mtot) \times \Delta{\delta}_{\rm A-B}],
\end{equation}
where the left-hand side corresponds to the measurements in Table~\ref{tbl:absast}, the $\Delta_{\rm A-B}$ values on the far right side can be derived from Table~\ref{tbl:relast}, and the rest are free parameters.
We used {\sc mpfit} to find the best-fit solution and then used a Monte Carlo approach to derive the uncertainties in the fit by randomly drawing simulated measurements from the best-fit model with scatter equal to the individual input measurements. Unfortunately, the mass ratio is poorly constrained in this analysis ($M_{\rm A}/\Mtot = 0.45\pm0.08$), likely due to the eccentric orbit and the fact that the measurements used here (i.e., when all three components are well resolved) happen to come from a similar phase of the orbit. In contrast, the orbital motion of the companion relative to the \vhs~AB barycenter is well detected at $\mu_{\alpha^*, {\rm b-AB}} = -10.7\pm0.6$\,\masyr\ and $\mu_{\delta, {\rm b-AB}} = 0.6\pm0.7$\,\masyr.
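Because Equations (1)--(2) are linear in all five free parameters, the fit can also be posed as a linear least-squares problem. The sketch below builds the design matrix and verifies parameter recovery on noiseless synthetic data; the `truth' vector is illustrative, the A--B offsets are computed from the Table~\ref{tbl:relast} separations and PAs under the usual PA convention, and the actual analysis used {\sc mpfit} with measurement weights and Monte Carlo error propagation.

```python
import numpy as np

def design_matrix(t, dab_ra, dab_dec, t0=2019.853):
    """Rows for Eqs. (1)-(2); unknowns are
    [dra0, ddec0, mu_ra, mu_dec, f], with f = M_A / M_tot."""
    rows = []
    for ti, ra_ab, dec_ab in zip(t, dab_ra, dab_dec):
        rows.append([1, 0, ti - t0, 0, -ra_ab])   # Delta alpha* (b-A)
        rows.append([0, 1, 0, ti - t0, -dec_ab])  # Delta delta  (b-A)
    return np.array(rows)

# Epochs where all three components are resolved (Table 2), with A-B
# offsets derived from the Table 1 separations (mas) and PAs (deg).
t = np.array([2016.059, 2017.220, 2018.017, 2022.271])
sep = np.array([128.21, 140.5, 135.84, 85.6])
pa = np.radians([168.07, 162.9, 158.37, 177.2])
dab_ra, dab_dec = sep * np.sin(pa), sep * np.cos(pa)

# Synthetic test: recover an illustrative parameter vector exactly.
truth = np.array([-5025.6, -6350.0, -10.7, 0.6, 0.45])
A = design_matrix(t, dab_ra, dab_dec)
fit, *_ = np.linalg.lstsq(A, A @ truth, rcond=None)
print(fit)  # recovers truth to numerical precision
```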
To fit the orbit of \vhs~b relative to the AB barycenter we used the python package {\sc lofti\_gaia} \citep{2020ApJ...894..115P}.\footnote{\url{https://github.com/logan-pearce/lofti_gaia}} {\sc lofti\_gaia} is based on the Orbits-For-The-Impatient \citep[OFTI; ][]{2017AJ....153..229B} rejection-sampling method and fits orbital parameters of resolved binaries in \Gaia using their proper motions and radial velocities if available. Here we adopted the architecture of {\sc lofti\_gaia} to use our measured proper motion of b relative to the AB barycenter at the mean observation epoch 2019.853, rather than \Gaia astrometry at the mean \Gaia epoch. We fitted six orbital parameters: semimajor axis ($a_{\rm b}$), eccentricity ($e_{\rm b}$), inclination ($i_{\rm b}$), argument of periastron ($\omega_{\rm b}$), longitude of ascending node ($\Omega_{\rm b}$), and time of periastron passage ($t_{\rm p, b}$). Total system mass and distance were drawn from normal distributions of $0.152\pm0.010$\,\Msun\ and $21.14\pm0.22$~pc. This system mass is based on our measured mass for \vhs~AB and an estimated mass of $0.011\pm0.006$\,\Msun\ for \vhs~b from our evolutionary model analysis in Section~\ref{sec:evol}. OFTI rejection sampling generates trial orbits by drawing random values for four orbital parameters from priors in $e_{\rm b}$: Uniform on [0,1); $\cos(i_{\rm b})$: Uniform on [-1,1]; $\omega_{\rm b}$: Uniform on [0,2$\pi$]; orbit phase, $(t_{\rm p, b}-2019.853)/P_{\rm b}$: Uniform on [0,1]. OFTI then scales the semimajor axis and rotates $\Omega_{\rm b}$ to match the input data and determines whether to accept or reject a trial by comparing its proper motion in RA and Dec to our measured values. There is no prior on $a_{\rm b}$ or $\Omega_{\rm b}$.
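The accept/reject step at the heart of OFTI can be illustrated in isolation. In this toy version the ``model'' proper motion is simply the trial draw itself (the real pipeline derives it from a full Keplerian orbit after the scale-and-rotate step), so the accepted samples just reproduce the measurement posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Measured proper motion of b relative to the AB barycenter (mas/yr).
pm_obs = np.array([-10.7, 0.6])
pm_err = np.array([0.6, 0.7])

# Toy trials drawn from a broad uniform proposal.
trials = rng.uniform([-15.0, -3.0], [-5.0, 3.0], size=(200_000, 2))

chi2 = np.sum(((trials - pm_obs) / pm_err)**2, axis=1)
p_accept = np.exp(-(chi2 - chi2.min()) / 2.0)      # OFTI acceptance rule
accepted = trials[rng.uniform(size=len(trials)) < p_accept]

print(accepted.mean(axis=0), accepted.std(axis=0))
```

With enough trials the accepted sample means and scatters converge to the measured proper motions and their uncertainties, which is the sanity check one expects from this acceptance rule.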
We ran {\sc lofti\_gaia} on our measured proper motions until $10^5$ trial orbits were accepted. Table~\ref{tbl:orbit-comp} reports the output probability distributions of orbital parameters of \vhs~b around its host, and Figure~\ref{fig:AB-b-orbit} shows these orbits on the sky.
\begin{table}
\centering
\caption[]{\vhs~b orbital parameters derived from {\sc lofti\_gaia}.} \label{tbl:orbit-comp}
\begin{tabular}{lcc}
\hline
Property & Median $\pm$1$\sigma$ & 95.4\% c.i. \\
\hline
Semimajor axis $a_{\rm b}$ (au) & $350_{-150}^{+110}$ & \phn150, 1020 \\[3 pt]
Eccentricity $e_{\rm b}$ & $0.68_{-0.10}^{+0.11}$ & 0.49, 0.91 \\[3 pt]
Inclination $i_{\rm b}$ (\degree) & $24_{-15}^{+10}$ & \phn3, 48 \\[3 pt]
Argument of periastron $\omega_{\rm b}$ (\degree) & $180_{-130}^{+100}$ &\phn\phn0, 330 \\[3 pt]
PA of the ascending node $\Omega_{\rm b}$ (\degree) & $40_{-160}^{+50}$ & $-$140, 190\phs\\[3 pt]
Time of periastron $t_p, {\rm b}$ (yr) & $1240\pm90$\phn\phn & \phn980, 1480 \\[3 pt]
\hline
Period $P_{\rm b}$ (kyr) & $16_{-10}^{+7}$ & \phn4, 82 \\[3 pt]
$\tau_{\rm b} \equiv (t_p-2019.853)/P$ & $0.047_{-0.039}^{+0.022}$ & 0.000, 0.117 \\[3 pt]
\hline
\end{tabular}
\begin{list}{}{}
\item[Note.] Our {\sc lofti\_gaia} analysis adopted a system mass of $M_{\rm A}+M_{\rm B}+M_{\rm b}=0.152\pm0.010$\,\Msun. Free parameters from the fit are shown in the top section. These were used to compute the parameters in the bottom section.
\end{list}
\end{table}
\section{Luminosities} \label{sec:lbol}
We computed the combined-light bolometric luminosity of \vhs~AB by direct integration of its unresolved optical to mid-infrared (MIR) spectral energy distribution (SED). Our assembled SED consists of available Pan-STARRS-1 \citep[PS1;][]{2016arXiv161205560C} optical photometry ($g$, $r$, $y$), the near-infrared (NIR) IRTF/SpeX prism spectrum from Section~\ref{sec:obs}, NIR photometry from 2MASS \citep{2003tmc..book.....C}, and MIR photometry from the CatWISE catalog \citep[$W1$ and $W2$ bands;][]{2020ApJS..247...69E,2021ApJS..253....8M} and AllWISE catalog \citep[$W3$ and $W4$ bands;][]{2013wise.rept....1C}. We began by flux-calibrating the SpeX spectrum using the weighted average of calibrations derived from PS1 $y$ and 2MASS $JHK_s$ photometry, assuming a systematic noise floor of 0.01~mag for all the filters. We then integrated the flux-calibrated SpeX spectrum to determine the NIR contribution to the bolometric flux, with an error that accounts for the uncertainties in the spectral data points and the overall flux calibration. We determined the optical and MIR contributions to the bolometric flux by simultaneously fitting BT-Settl model atmospheres \citep[CIFIST2011/2015;][]{2012RSPTA.370.2765A, 2015A&A...577A..42B} to the PS1 and WISE photometry (computing synthetic photometry from the models) and the SpeX spectrum (with the models degraded to the non-linear spectral resolution of the 0$\farcs$5 slit). We found the best-fitting BT-Settl model had $\Teff = 2700$\,K and $\logg = 5.0$\,dex. Our final bolometric flux was found by adding the NIR contribution to the integration of the model outside the wavelength range of the SpeX spectrum. The uncertainty in the optical+MIR contribution was obtained from the standard deviation of the corresponding measurements derived using the four model spectra adjacent in \Teff\ and \logg\ to the best-fitting model. Our final bolometric flux of \vhs~AB is $1.47\pm0.04\times10^{-13}$\,W\,m$^{-2}$. 
Using its parallactic distance of $21.14 \pm 0.22$\,pc, we calculated a bolometric luminosity $\log((\Lbol_{\rm, A}+\Lbol_{\rm, B})/\Lsun) = -2.687\pm0.021$\,dex.
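This number can be checked directly from the bolometric flux and distance (using the IAU nominal solar luminosity and the parsec in SI units):

```python
import math

PC_M = 3.0857e16        # parsec in metres
L_SUN = 3.828e26        # IAU nominal solar luminosity, W

f_bol = 1.47e-13        # W m^-2, integrated SED of VHS J1256-1257 AB
d = 21.14 * PC_M        # Gaia EDR3 parallactic distance

L = 4.0 * math.pi * d**2 * f_bol
log_l = math.log10(L / L_SUN)
print(f"log(L/Lsun) = {log_l:.3f}")  # -2.687, matching the text
```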
To derive component luminosities for \vhs~AB, we used the empirical relation between $K_s$-band absolute magnitude and \Lbol\ from \citet{2017ApJS..231...15D}. We assumed that $\Delta{K_{\rm MKO}} = \Delta{K_{\rm 2MASS}}$ here because of the near-unity flux ratio. Using a Monte Carlo method, we drew random absolute magnitudes representative of \vhs~A ($9.1\pm0.2$\,mag, truncated at 8.7\,mag, the upper limit of the empirical relation). We then simulated \vhs~B by adding $0.033\pm0.004$\,mag to this absolute magnitude and computed the difference in derived \Lbol\ values from the relation. We found $\log(\Lbol_{\rm,B}/\Lbol_{\rm,A}) = -0.012\pm0.002$\,dex. We therefore calculated component luminosities of $\log(\Lbol_{\rm,A}/\Lsun) = -2.982\pm0.021$\,dex and $\log(\Lbol_{\rm,B}/\Lsun) = -2.994\pm0.021$\,dex.
For \vhs~b, we used the value of $\log(\Lbol_{\rm, b}/\Lsun) = -4.568\pm0.009$\,dex derived by \citet{2022arXiv220900620M} integrating over the whole 1--20\,\micron\ spectrum observed by {\sl JWST}, where gaps were covered with BT-Settl models, and the Gaia~EDR3 parallactic distance was used.
\section{Evolutionary model analysis} \label{sec:evol}
Substellar objects with well-determined luminosities enable precise evolutionary model-derived cooling ages (when mass is known) and masses (when age is known). Some key aspects of evolutionary models are quite uncertain, such as the treatment of clouds. The relatively sparse tests of models with objects of known mass, age, and luminosity have found a mixed bag of agreement and potential problems \citep[e.g.,][]{2014ApJ...790..133D,2018AJ....156..168B,Brandt2021_Six_Masses}, so we note that any mass or age derived from evolutionary models should be treated with corresponding uncertainty. In the following, we use our dynamical mass measurement of \vhs~AB to determine a substellar cooling age for the system and then use this cooling age to estimate the mass of \vhs~b.
For \vhs~AB, the most appropriate evolutionary models are from \citet{2015A&A...577A..42B}. As in our previous work \citep[e.g.,][]{2017ApJS..231...15D}, we used a Monte Carlo rejection-sampling approach to derive an age probability distribution from input luminosity and mass prior distributions. We assumed a linear-flat prior in age and a log-flat prior in $M_{\rm A}$, drawing random, uniformly distributed values, while simultaneously drawing random values of \Mtot\ from our MCMC posterior. We calculated $M_{\rm B}$ as the difference between \Mtot\ and $M_{\rm A}$. For each age-mass pair, for each component, we computed a model luminosity from bilinear interpolation of the model grid. The probability of any sample being accepted was $p=e^{-(\chi^2-\chi_{\rm min}^2)/2}$, where $\chi^2$ was computed as the sum of comparing our measured luminosities to the model-calculated ones, and $\chi^2_{\rm min}$ was the lowest value among the ensemble of trial values. A sample was accepted if a randomly drawn number $0<u<1$ for a given trial satisfied $p>u$. We then computed other model-derived properties, such as \Teff, using the accepted mass and age samples.
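The rejection-sampling machinery can be sketched as follows, with a smooth analytic toy cooling law standing in for the \citet{2015A&A...577A..42B} grid (the functional form and coefficients below are illustrative, not the actual tracks, and the real analysis interpolates the grid bilinearly for both components):

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_log_lum(mass, age):
    """Stand-in for the evolutionary grid: a smooth analytic cooling
    law (NOT the Baraffe et al. 2015 tracks). Mass in Msun, age in Myr."""
    return -3.0 + 2.2 * np.log10(mass / 0.07) - 1.3 * np.log10(age / 100.0)

log_l_obs, log_l_err = -2.99, 0.021     # one component's luminosity (dex)

n = 200_000
mass = 10**rng.uniform(np.log10(0.05), np.log10(0.09), n)  # log-flat prior
age = rng.uniform(20.0, 500.0, n)                          # linear-flat, Myr

chi2 = ((toy_log_lum(mass, age) - log_l_obs) / log_l_err)**2
p = np.exp(-(chi2 - chi2.min()) / 2.0)  # acceptance probability per trial
keep = rng.uniform(size=n) < p
resid = toy_log_lum(mass[keep], age[keep]) - log_l_obs

print(keep.sum(), "accepted;", np.mean(np.abs(resid) < 3 * log_l_err))
```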
We found a cooling age of $140\pm20$\,Myr for \vhs~AB, with an approximately Gaussian probability distribution. This age is consistent with the nondetection of lithium in its spectrum \citep{2015ApJ...804...96G}, as extreme lithium depletion ($>10^{-4}$) corresponds to the older part of the age posterior at $<2$$\sigma$ according to the \citet{2015A&A...577A..42B} models.
We used \vhs~AB's age posterior to perform a rejection-sampling analysis of the companion's properties. The only evolutionary model grid that reaches \vhs~b's luminosity and accounts for cloud evolution is the ``hybrid'' grid of \citet{2008ApJ...689.1327S}. As seen in Figure~\ref{fig:evol}, \citet{2008ApJ...689.1327S} models predict that objects with the luminosity and age of \vhs~b should be rare because they fall in a gap between low-mass objects that cannot fuse deuterium and more massive objects that are either fusing deuterium now (and thus more luminous at this age) or have already fused their deuterium (and thus are older at this luminosity). Our rejection sampling analysis correspondingly results in a bimodal posterior distribution. The slightly less probable outcome, with 40\% of the posterior, is that \vhs~b is a deuterium-bearing object of $12.0\pm0.1$\,\Mjup. The slightly more probable outcome is that \vhs~b has already depleted its deuterium and is an object of $16\pm1$\,\Mjup.
The radius of \vhs~b, according to these models, is 1.30\,\Rjup\ in the lower-mass scenario and 1.22\,\Rjup\ in the higher-mass scenario. This translates into slightly different effective temperatures of $1153\pm5$\,K and $1194\pm9$\,K, respectively, as well as surface gravities of $4.268\pm0.006$\,dex and $4.45\pm0.03$\,dex.
The bimodality in \vhs~b's mass drives the properties we derived, so alternative model assumptions could potentially shift the balance significantly in favor of one or the other possibility. The \citet{2008ApJ...689.1327S} models, in particular, have such a wide gap in the \Lbol--age diagram because the onset of the L/T transition, which these models assume begins at $\Teff=1400$\,K, happens to occur at nearly the same age as deuterium fusion for objects near the deuterium-fusion mass boundary. The L/T transition slows cooling, so objects stay luminous both because of cloud disappearance and deuterium fusion. The L/T transition probably occurs at lower temperatures for low-gravity objects like \vhs~b \citep[e.g.,][]{2006ApJ...651.1166M,2009ApJ...699..168D,2015ApJ...810..158F}, which may significantly impact the size and shape of the deuterium-fusing gap in \Lbol--age space. Naively, such a delayed and lower-\Teff\ L/T transition might be expected to make even lower-mass isochrones have higher luminosities in Figure~\ref{fig:evol}, which would in turn make it more likely that \vhs~b is indeed below the deuterium-fusion mass boundary.
\section{Orbital architecture \& Origins} \label{sec:arch}
We have astrometrically determined the three-dimensional orbits of both the inner host binary (A--B) and the outer companion about its barycenter (AB--b). This allows us to constrain the orbital architecture of the system and thus, potentially, shed light on its origin. One crucial measurement that our orbit determinations enable is the true mutual inclination of the A--B and AB--b orbital planes,
\begin{equation}
\hfill
\cos{i_{\rm AB-b}} = \cos{i_{\rm AB}}\cos{i_{\rm b}} + \sin{i_{\rm AB}}\sin{i_{\rm b}}\cos(\Omega_{\rm AB}-\Omega_{\rm b}).
\hfill
\label{eq:cosphi}
\end{equation}
For more detail on how to derive this mutual inclination angle, we point the reader to \citet{2011ApJ...743...61S}. Propagating all measurement uncertainties from {\sc orvara} and {\sc lofti} analyses, we find $i_{\rm AB-b} = 115\pm14$\degree. This reveals that the angular momentum vectors of the two orbital planes are misaligned (8$\sigma$) and also possibly pointing in opposite directions (1.8$\sigma$).
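Equation (3) is straightforward to evaluate. Note that plugging in the median elements from Tables~\ref{tbl:orbit} and \ref{tbl:orbit-comp} gives $\approx$98\degree; the quoted $115\degree\pm14\degree$ comes from propagating the full (skewed) posterior samples, and the median of a derived quantity need not equal the quantity evaluated at the medians.

```python
import numpy as np

def mutual_inclination(i1, omega1, i2, omega2):
    """Mutual inclination (deg) of two orbital planes from their
    inclinations and PAs of ascending node, via Eq. (3)."""
    i1, o1, i2, o2 = np.radians([i1, omega1, i2, omega2])
    cos_im = (np.cos(i1) * np.cos(i2)
              + np.sin(i1) * np.sin(i2) * np.cos(o1 - o2))
    return np.degrees(np.arccos(np.clip(cos_im, -1.0, 1.0)))

print(mutual_inclination(90, 0, 90, 180))      # 180: opposite planes
print(mutual_inclination(118.7, 4.4, 24, 40))  # medians -> ~98.5 deg
```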
\vhs~AB's orbit is highly eccentric, and one possible explanation for this is that it has been pumped up by Kozai-Lidov cycles \citep{1962AJ.....67..591K,1962P&SS....9..719L}. The observed mutual inclination is consistent with the range of critical inclinations for this mechanism to operate, $39.2\degree < i_{\rm AB-b} < 140.8\degree$. The masses, eccentricities, and orbital periods also imply Kozai-Lidov oscillation periods less than the age of the system, $\log(\tau_{\rm KL}/{\rm Myr}) = 1.7^{+0.4}_{-0.5}$\,dex (Eq.~1; \citealp{2007ApJ...669.1298F}). Under the conservative assumption that \vhs~AB's initial eccentricity was zero, its maximum eccentricity attainable from Kozai-Lidov cycles is $[1-(5/3)\cos^2{i_{\rm AB-b, initial}}]^{1/2}$ \citep{2007ApJ...669.1298F}. To achieve the observed $e_{\rm AB} = 0.883$ would thus have required an initial mutual inclination between 68.7\degree\ and 111.3\degree, consistent within uncertainties with our measured mutual inclination.
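The maximum-eccentricity relation and its inversion for the required initial inclination can be checked numerically (a minimal sketch of the quadrupole-order, circular-start expression used above):

```python
import math

def e_max(i_deg):
    """Maximum eccentricity reached by Kozai-Lidov cycles starting from
    a circular inner orbit with initial mutual inclination i (deg)."""
    c = math.cos(math.radians(i_deg))
    return math.sqrt(max(0.0, 1.0 - (5.0 / 3.0) * c * c))

def i_required(e):
    """Initial mutual inclination needed to reach eccentricity e;
    returns the prograde and retrograde branches (deg)."""
    c = math.sqrt(0.6 * (1.0 - e * e))
    i = math.degrees(math.acos(c))
    return i, 180.0 - i

print(e_max(90.0))          # 1.0 at i = 90 deg
print(i_required(0.883))    # ~(68.7, 111.3) deg, as quoted in the text
```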
Whatever might have caused an initial misalignment between the two orbital planes may also be responsible for the unusual configuration of this system. The companion mass ratio relative to the inner binary is quite low ($M_{\rm b}/M_{\rm AB} = 0.08$--0.10), especially for a multiple system with total mass $<$0.2\,\Msun\ \citep[e.g.,][]{2007prpl.conf..427B}. At higher masses, the formation of such systems has been suggested to be due to the disintegration of high-order multiples at young ages \citep[e.g.,][]{2009MNRAS.392..413S,2015AJ....149..145R}, although such systems should be quite rare \citep[e.g.,][]{2012MNRAS.419.3115B}.
One final clue to the origins of the \vhs\ system is the eccentricity of the wide companion's orbit. Its periastron distance of $112^{+21}_{-25}$\,au is consistent with a more compact initial configuration for the system that led to \vhs~b being scattered onto a wide, eccentric, and misaligned orbit.
\section{Summary}
We measured high-precision relative astrometry of all three components of the \vhs\ system. Our orbital analysis yields a total dynamical mass of the inner binary ($M_{\rm A}+M_{\rm B} = 0.141\pm0.008$\,\Msun), high eccentricities for both the inner and outer orbits, and a mutual inclination of $116\pm16$\degree\ between them. We thus confirmed that the host binary may be a pair of brown dwarfs, derived their integrated-light luminosity from SED-fitting, and measured a cooling age of $140\pm20$\,Myr from \citet{2008ApJ...689.1327S} hybrid evolutionary models. We found that at such a young age, \vhs~b has a sufficiently low luminosity that it may be below the deuterium-fusing mass boundary or, only slightly more likely, that it is massive enough to have depleted its deuterium long ago. Regardless of the mass of \vhs~b, the orbital architecture implies a dynamical origin, perhaps from the disintegration of a high-order multiple or scattering within a protostellar disk.
If \vhs~b is indeed below the D-fusion mass boundary, then molecular absorption bands from D-bearing isotopologues of water (HDO) and methane (CH$_3$D) may be detectable in high-S/N 3--5~\micron\ {\sl JWST} spectra \citep[e.g.,][]{2019ApJ...882L..29M}. We also anticipate that similar observations will be possible for the other rare triple systems with planetary-mass companions for which substellar cooling ages are possible \citep[e.g., 2MASS~J0249$-$0557;][]{2018AJ....156...57D}.
\section*{Acknowledgements}
We are grateful to the anonymous referee for prompt and thoughtful comments that improved our manuscript.
T.~Dupuy acknowledges support from UKRI STFC AGP grant ST/W001209/1.
This research was funded in part by the Gordon and Betty Moore Foundation through grant GBMF8550 to M.~Liu.
A.~Sanghi acknowledges support from the Research Experience for Undergraduate program at the Institute for Astronomy, University of Hawaii, Manoa funded through NSF grant \#2050710.
We thank Spencer Hurt for the BT-Settl models used in the bolometric luminosity calculation.
The data presented herein were obtained at the W.M.\ Keck Observatory, which is operated as a partnership between the California Institute of Technology, the University of California, and NASA. The Observatory was made possible by the generous financial support of the W.M.\ Keck Foundation.
This work has made use of data from the European Space Agency (ESA) mission Gaia, processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.
\section*{Data Availability}
All of our NIRC2 data are available on the Keck Observatory Archive (KOA), which is operated by the W.\ M.\ Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration.
We include configuration files for our orbit analysis in the supplemental data.
\label{lastpage}
Title:
Transient Simulations for Radio Surveys
Abstract: Several new radio facilities have a field of view and sensitivity well suited for transient searches. This makes it more important than ever to accurately determine transient rates in radio surveys. The work presented here seeks to do this task by using Monte-Carlo simulations. In particular, the user inputs either a real or simulated observational setup, and the simulations code calculates the transient rate as a function of transient duration and peak flux. These simulations allow for simulating a wide variety of scenarios including observations with varying sensitivities and durations, multiple overlapping telescope pointings, and a wide variety of light curve shapes, with the user having the ability to easily add more. While the current scientific focus is on the radio regime, with examples given here from the MeerKAT telescope in South Africa, the simulations code can be easily adapted to other wavelength regimes.
| https://export.arxiv.org/pdf/2208.00965 |
\begin{frontmatter}
\title{Transient Simulations for Radio Surveys}
\author[GWU,APSIS]{Sarah I Chastain\corref{cor1}}
\affiliation[GWU]{organization={Department of Physics, The George Washington University},%
city={Washington},
postcode={20052},
state={DC},
country={USA}}
\cortext[cor1]{Corresponding author}
\affiliation[APSIS]{organization={Astronomy, Physics and Statistics Institute of Sciences (APSIS), The George Washington University},%
city={Washington},
postcode={20052},
state={DC},
country={USA}}
\affiliation[UVI]{organization={University of the Virgin Islands},
city={Charlotte Amalie},
postcode={00802},
state={USVI},
country={USA}}
\author[GWU,APSIS]{Alexander J van der Horst}
\author[UVI]{Dario Carbone}
\begin{keyword}
Transients \sep Radio Astronomy \sep Simulations
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{sec:intro}
We are entering an exciting era in time-domain astronomy. New and upgraded facilities such as the Vera C. Rubin Observatory \cite{2019ApJ...873..111I} and Zwicky Transient Facility \cite{2019PASP..131a8002B} in the optical, and the MeerKAT \cite{2016mks..confE...1J} and Australian Square Kilometre Array Pathfinder (ASKAP) \cite{2021PASA...38...54M} radio telescopes, have been finding, or are expected to find, transients and variables in images at rates that are orders of magnitude higher than ever before. This is in addition to exciting new transients found in time series data, such as the wealth of Fast Radio Bursts (FRBs) found using the Canadian Hydrogen Intensity Mapping Experiment (CHIME) \cite{2019Natur.566..235C}.
Many transients are discovered in blind searches, found by examining large portions of the sky for new sources or known sources that display significant flux changes. There are also transients found in a targeted way, such as those associated with gravitational wave events \cite{2017PhRvL.119p1101A,2017Natur.551...71T,2018ApJ...868L..11M}, gamma ray bursts \cite{1997Natur.389..261F,2018MNRAS.473.1512A}, tidal disruption events \cite{2011Sci...333..199L,2016Sci...351...62V}, and outbursts from X-ray binaries \cite{2004MNRAS.355.1105F,2017MNRAS.469.3141T}. Considering that transients can be found in both blind and targeted searches brings up important questions: if we are doing a targeted transient search, what is the chance that a detection may be a different transient source that happens to be in the same area of the sky, even within the same uncertainty region of the transient of interest? How many transients of a certain type or with a specific light curve shape would we expect to find in a given survey? Finding the answers to these questions is important for a variety of applications in time-domain astronomy and requires calculating transient rates with high accuracy.
The most straightforward way to calculate a transient rate is to use the Poisson distribution to find a rate given the number of detections in a survey, but this simplified approach has shortcomings \cite{2016MNRAS.459.3161C}. It does not account for a number of important factors, such as the relative timescales of the transients and the observations, and confounding observational effects such as gaps within an observation or a survey. In addition, it does not account for the distribution of sensitivities present in the observations of a real survey. These effects can be partly mitigated in an analytical approach \cite{2016MNRAS.459.3161C}, but Monte-Carlo simulations provide a way to more easily account for the issues presented by real observations and surveys in transient rate calculations \cite{2017MNRAS.465.4106C}.
\citet{2017MNRAS.465.4106C} examined two light curve shapes: the tophat light curve, a light curve that instantaneously rises to its peak flux and at some point in time instantaneously decays; and the fast rise exponential decay (FRED) light curve, a light curve that instantaneously rises to its peak flux and exponentially decays thereafter. The differences between the resulting transient rates from these two light curve shapes indicate how the wide variety of real light curves can affect transient rate calculations. This is in addition to the previously mentioned observational effects that should be accounted for.
Observational radio surveys present a number of challenges for computing transient rates. Although some radio observations can be calibrated using a sky model, many require calibrator fields that need to be observed at certain time intervals before, after, and during an observation of a science target field, so the telescope does not point continuously at the target for an entire observation. When using a calibrator source, a radio observation is often broken down into observing a very bright, well-known source to calibrate the bandpass, followed by alternately observing a bright source close to the target for gain calibration and the science target itself. The time on target is therefore less than the total observing time, and there are gaps in the target observations; it also means that calibrator fields can be searched for transients \citep{2011ApJ...728L..14B}. Furthermore, typical radio observations can be broken down into shorter timescales for imaging. In addition, in order to explore a wider field of view, a survey may consist of multiple adjacent pointings on the sky with some degree of overlap between pointings. These pointings may have different limits on transient rates due to differing observing cadences, and the overlap regions will provide different transient rates as well.
The goal of this work is to calculate transient rates while accounting for the aforementioned features and complexities of radio surveys. Corrections for observational effects such as gaps in observations, systematic errors in flux measurements, different kinds of transient light curves, multiple overlapping pointings, and a distribution of observational sensitivities are accounted for. Mitigating all the aforementioned effects makes the transient rate calculations more accurate. In addition, the publicly available simulations code is relatively simple in its use, has Python 3 support, and is designed for modularity so that the user can easily add new items such as other light curve shapes than those already provided.
In section~\ref{design}, we go into detail on how the code is written and how its features are implemented. In section~\ref{sec:results} we present and discuss results from several example radio surveys illustrating the various features. In section~\ref{performance} we examine the computational performance of the simulations code, in section~\ref{futureapplications} we discuss ways in which this code can be expanded in the future, and in section~\ref{conclusions} we draw conclusions.
\section{Design} \label{design}
\subsection{Language and Libraries}
The code was written in Python\footnote{http://www.python.org} and designed for modern versions ($>$3.6). It uses several libraries: Astropy \cite{2013A&A...558A..33A} to provide accurate angular source separation calculations and any necessary coordinate system changes; Scipy \cite{2020NatMe..17..261V} for a few special mathematical functions; Bitarray\footnote{https://github.com/ilanschnell/bitarray} for storing large amounts of information efficiently; tqdm\footnote{https://tqdm.github.io} for easy-to-use progress bars; and Numpy\footnote{https://numpy.org/} for the vast majority of the numerical computations. In addition, the script that assists with creating input files uses Common Astronomy Software Applications (CASA) \cite{2007ASPC..376..127M} to read and extract metadata from radio measurement sets.
By using an interpreted language that allows for the use of classes, the code is easy to modify or extend for different use cases, or to increase accuracy. Adding new light curves can be done by creating a Python file with the name of the light curve and a class with the essential information. Using Numpy partially makes up for Python's lack of speed compared to a compiled language such as C. The information on whether a simulated source is detected is stored and written using bit arrays, in order to reduce memory usage so that these computations can be performed on a regular desktop or laptop.
\subsection{Input}
In order to accurately simulate transient rates, it is necessary to provide detailed information on the survey that will be simulated. This information includes observation times, pointings, field of view, sensitivity, and any gaps in the observations. This information is either supplied by the user as a comma-separated values (CSV) file or it can be generated using a separate script that extracts information from the metadata in the measurement sets of the survey observations.
In addition, the simulations code base contains a configuration file with settings that can be adjusted depending on the use case. These settings include items such as number of transients to simulate, number of transient detections in the survey, flux and duration ranges to simulate, detection threshold, light curve type, confidence level, output filename, and options for simulating a survey such as number of observations, sensitivity, mean and standard deviation of simulated normally-distributed error in sensitivity, interval between observations, and the duration of the observations. The light curve type can be any of the included ones or a new light curve created by the user.
\subsection{Light Curves}
In Figure~\ref{multilc} we show the light curves included in the simulations: the tophat, fast rise exponential decay (FRED), exponential rise fast decay (Wilma), exponential rise exponential decay (ERED), Gaussian, and parabola. The tophat light curve rises instantaneously to the peak flux and, at some later time, decays instantaneously. It represents the classic case of a transient that turns on and off, and is the simplest form of transient light curve. The FRED light curve rises instantaneously to the peak flux and decays exponentially; it is commonly observed in a variety of X-ray and gamma-ray transients. The Wilma is simply the time-reversed FRED: it rises exponentially to the peak flux and then decays instantaneously. Including the Wilma light curve is a convenient way to introduce the simplest form of light curve with no definite start or end. The ERED combines the previous two: it rises exponentially to a peak flux and then decays exponentially. The Gaussian light curve has the shape of a Gaussian function, reaching the peak flux at its mean, with the duration given by the standard deviation; similar light curves arise from, for example, binary systems and magnetar bursts. The parabolic light curve is a concave-down parabola that reaches the peak flux at the vertex, with the duration defined as the range of time over which the flux is positive; its inclusion provides an example of a light curve with a definite duration but with a profile that rises to and falls from the peak flux symmetrically (i.e., one step more complex than the tophat). More details about these light curves, including their mathematical definitions, can be found in~\ref{sec:lc:appendix}.
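To illustrate how such light-curve classes can be structured (a hypothetical sketch in the spirit of the code's modular design, not its actual implementation), each shape only needs to provide the mean flux over an observation window, which is what the detection step compares to the sensitivity:

```python
import numpy as np

class Tophat:
    """Instantaneous rise at t0; instantaneous decay at t0 + tau."""
    def fluxint(self, f_peak, t0, tau, obs_start, obs_end):
        # Overlap of the 'on' interval [t0, t0 + tau] with the observation
        overlap = np.maximum(0.0, np.minimum(obs_end, t0 + tau)
                             - np.maximum(obs_start, t0))
        return f_peak * overlap / (obs_end - obs_start)

class Fred:
    """Instantaneous rise at t0 followed by exponential decay (e-folding tau)."""
    def fluxint(self, f_peak, t0, tau, obs_start, obs_end):
        t1 = np.maximum(obs_start, t0)   # no flux before the transient starts
        t2 = np.maximum(obs_end, t0)
        integral = f_peak * tau * (np.exp(-(t1 - t0) / tau)
                                   - np.exp(-(t2 - t0) / tau))
        return integral / (obs_end - obs_start)
```

A new shape would be added by creating a file with one such class, mirroring the plug-in mechanism described above.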
For a given radio survey, the simulated light curve has implications on the part of parameter space that the survey probes. One of the clearest ways to examine these implications is by looking at probability contour plots. Figures~\ref{tophat}-\ref{fred} show the probability contours for example light curves included in the simulations code. The horizontal axis shows the characteristic duration of the transient, which is defined slightly differently for each light curve shape: the tophat and parabolic light curves' characteristic duration is the duration that the transient's flux is non-zero; the characteristic duration for the Gaussian is the standard deviation; and the duration of the FRED, Wilma, and ERED light curves is the e-folding time. The vertical axis shows the characteristic flux, which is the peak flux for all light curves that are currently implemented (but could vary for more exotic light curve shapes). The color legend shows the probability of detecting a source as a transient at a given duration and flux. Note that a source that is detected in every observation would not be a transient. A probability of 1 means that the survey detects every transient source at the particular flux and duration. Note how the region where the transient is always detected changes for the different light curve shapes. The reason for some of these differences is discussed in detail in section~\ref{lcdiscussion}.
\subsection{Main Detection Algorithm}
In the transient simulations, a large number of sources need to be generated based on the user's settings. Parameters such as the source flux, duration, and the start, end, or critical time (depending on the light curve type) are generated uniformly at random in $\log_{10}$ space via the random number generator in Numpy.
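A minimal sketch of this source-generation step (the function and parameter names are ours, not the configuration keys of the actual code):

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_sources(n, flux_range, dur_range, survey_start, survey_end):
    """Draw n transients with peak flux and duration log10-uniform over the
    given ranges, and start times uniform over the survey (padded by the
    longest duration so sources can straddle the survey start)."""
    log_f = rng.uniform(np.log10(flux_range[0]), np.log10(flux_range[1]), n)
    log_d = rng.uniform(np.log10(dur_range[0]), np.log10(dur_range[1]), n)
    flux, dur = 10.0**log_f, 10.0**log_d
    t_start = rng.uniform(survey_start - dur_range[1], survey_end, n)
    return flux, dur, t_start
```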
The main detection algorithm tests whether or not the simulated sources will be detected in the observations. For this step, the code iterates over each observation, calculating the integrated flux for all the simulated sources, and testing if these integrated fluxes are greater than the sensitivity of the observation multiplied by a user-specified detection threshold. After this detection step, the sources that are detected in every observation are removed from the detection list, since they are constant sources and not transients.
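The detection step can be sketched as follows (tophat integration shown for brevity; the function and variable names are illustrative, not those of the real code):

```python
import numpy as np

def tophat_fluxint(flux, t_start, dur, obs_start, obs_end):
    # Mean flux of a tophat transient over one observation window
    overlap = np.maximum(0.0, np.minimum(obs_end, t_start + dur)
                         - np.maximum(obs_start, t_start))
    return flux * overlap / (obs_end - obs_start)

def transient_mask(flux, t_start, dur, observations, threshold=5.0):
    """observations: list of (obs_start, obs_end, rms) tuples. A source is a
    transient if detected above threshold*rms in some, but not all, epochs."""
    det = np.array([tophat_fluxint(flux, t_start, dur, o1, o2) > threshold * rms
                    for (o1, o2, rms) in observations])
    return det.any(axis=0) & ~det.all(axis=0)
```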
The number of detected transient sources together with the number of simulated sources are used to generate probabilities of detection for each flux and duration bin. Assuming that transients are distributed as a Poisson distribution, the probabilities are used to calculate limits on transient surface densities and rates. In the case of no transient detections in a survey, the Poisson probability mass function can be inverted to give an upper limit. In case of transient detections in the survey, the code uses the $\chi^2$ distribution \citep{12005udd3.inbook.....JKK} to calculate the upper and lower limits on the transients rates, by inputting the user-provided confidence level and the number of transient detections in the survey.
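One common way to implement these Poisson limits with the $\chi^2$ quantile function is sketched below (the conventions here, e.g.\ a two-sided interval for $n>0$, are illustrative and may differ in detail from the code). Dividing the returned mean counts by the surveyed area and time converts them into surface-density or rate limits.

```python
from scipy.stats import chi2

def poisson_limits(n_det, confidence=0.95):
    """Limits on the Poisson mean number of transients given n_det detections,
    using the chi-squared representation of Poisson confidence bounds."""
    alpha = 1.0 - confidence
    if n_det == 0:
        # One-sided upper limit; equivalent to mu = -ln(1 - confidence)
        return 0.0, 0.5 * chi2.ppf(confidence, 2)
    lower = 0.5 * chi2.ppf(alpha / 2.0, 2 * n_det)
    upper = 0.5 * chi2.ppf(1.0 - alpha / 2.0, 2 * n_det + 2)
    return lower, upper
```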
\subsection{Gaps}
An important ingredient in calculating transient rates accurately is taking into account gaps of varying sizes during observations and surveys. These gaps may exist for a variety of reasons. In the case of radio observations, a long observation on a particular source has to be broken up into scans that are briefly interrupted by observations of a calibrator source. For measuring the flux of a particular source of interest, these gaps are usually unimportant, but for the purposes of calculating transient rates, especially transient rates in a regime where the transients may be shorter than the size of the gaps, it is important to account for these gaps.
Gaps are accounted for in the simulations code base by specifying a gaps file. This file contains all the sub-observations, also known as scans, that make up the full length observation. By running the simulations over the scans, and averaging together the measured flux in each scan, we are able to account for realistic gaps in observations. By addressing the issue of gaps in observations, this allows the transient rate calculations to account for multiple different timescales and different sensitivities present in the same survey in an accurate way that would not be possible, or at least very challenging, to do in an analytic fashion \citep{2016MNRAS.459.3161C}.
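Schematically, the scan-averaged flux can be computed as below (a simplified tophat-only version; the real code handles each supported light curve):

```python
import numpy as np

def fluxint_with_gaps(flux, t_start, dur, scans):
    """Time-weighted mean tophat flux over an observation broken into scans
    [(scan_start, scan_end), ...]; calibrator gaps contribute no exposure."""
    overlaps = [np.maximum(0.0, np.minimum(s2, t_start + dur)
                           - np.maximum(s1, t_start)) for s1, s2 in scans]
    on_time = sum(s2 - s1 for s1, s2 in scans)
    return flux * np.sum(overlaps, axis=0) / on_time
```

A transient that falls entirely inside a gap then yields zero integrated flux, which is how short transients can be missed even during an "observation".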
\subsection{False Detections}
False transient detections are an issue that affects real transient searches and should therefore be included in transient simulations. When an astronomical source is close to the detection threshold, any small amount of measurement error, either statistical or systematic, can change it from a detection to a non-detection or vice versa. Since this can be true for every observation in the survey, there can exist a fairly wide distribution of false transient detections, governed by the sensitivities of the observations. These sources will be flagged as transients, which is an issue because they are not real transients but merely faint sources of constant flux. Therefore, it is necessary to find a way to eliminate these sources from consideration as transients.
In order to solve this problem, a second run through the detection algorithm is performed using sources with tophat light curves along with durations and start times that ensure that they ought to be constant sources. After the false detections of these constant sources are calculated, the number of sources detected is counted from the minimum flux simulated until 99\% of the falsely detected sources are accounted for. We define the flux level at which 99\% is reached to be the false detection limit. It is shown as a horizontal line in all of the probability contour plots, such as figures~\ref{tophat}-\ref{fred}, and in the transient rate plots.
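The counting step maps onto a few lines of Numpy (a sketch; the function name is ours):

```python
import numpy as np

def false_detection_limit(false_det_fluxes, fraction=0.99):
    """Flux level below which `fraction` of the falsely detected constant
    sources fall, counting upward from the faintest simulated flux."""
    fluxes = np.sort(np.asarray(false_det_fluxes))
    idx = int(np.ceil(fraction * fluxes.size)) - 1
    return fluxes[idx]
```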
\subsection{Multiple Pointings}
Real surveys can involve multiple pointings that overlap, resulting in an uneven probing of the sky. This creates opportunities and challenges for determining transient rates in these regions of the sky, due to the differences in timescales and observed area.
In order to account for this, the simulations accurately calculate the area of each region on the sky, and then determine the transient rates for each region separately. This is currently implemented for a maximum of three overlapping pointings with possibly varying observing timescales and cadences, but expanding this is straightforward.
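The region areas can be estimated in several ways; a simple Monte-Carlo approach in a flat-sky approximation is sketched below (illustrative only; true on-sky areas require proper spherical geometry):

```python
import numpy as np

rng = np.random.default_rng(1)

def region_areas(centers, radius, n_samples=200_000):
    """Monte-Carlo estimate of the areas covered by exactly 1, 2, ... pointings.
    centers: (n, 2) array of (x, y) field centres (flat-sky approximation)."""
    centers = np.asarray(centers, dtype=float)
    lo = centers.min(axis=0) - radius
    hi = centers.max(axis=0) + radius
    pts = rng.uniform(lo, hi, size=(n_samples, 2))
    box_area = np.prod(hi - lo)
    # Squared distance of every sample point to every field centre
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    counts = (d2 < radius**2).sum(axis=1)
    return {k: box_area * np.mean(counts == k) for k in range(1, len(centers) + 1)}
```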
Figure \ref{threepointings} shows a simulated example of three overlapping pointings. Each red circle represents a pointing of the telescope. It can be seen that there are also three double overlap regions and one triple overlap region. For each of these regions, the transient rates will be different due to the differences in observing cadence and time.
\section{Results and Discussion}\label{sec:results}
\subsection{A Realistic Survey}
For the purpose of demonstrating the capabilities of the transient simulations code, a survey setup similar to that in \citet{10.1093/mnras/stz3027} is used: 46 weekly observations of 13 minutes in duration, with the rms noise of the observations varied as a Gaussian with mean 35 $\mu$Jy and standard deviation 5 $\mu$Jy, and a detection threshold of $5\sigma$. Given that transients were detected in \citet{10.1093/mnras/stz3027}, we also demonstrate the ability of the simulations code to calculate transient rates based on detections. Assuming Poisson statistics, one can calculate the upper and lower limits on the transient rate, as explained in the previous section. In the configuration for this simulations run, two transient detections are used as input to calculate the rates along with a 95\% confidence interval. The light curve type used for this example is the Gaussian.
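In lieu of real measurement sets, such a survey can be specified directly; an equivalent setup in a few lines of Python (illustrative variable names, with times in days and fluxes in Jy) would be:

```python
import numpy as np

rng = np.random.default_rng(0)

n_obs = 46
obs_start = np.arange(n_obs) * 7.0            # one epoch per week, in days
obs_end = obs_start + 13.0 / (24 * 60)        # 13-minute observations
rms = rng.normal(35e-6, 5e-6, size=n_obs)     # Jy: mean 35 uJy, sigma 5 uJy
threshold = 5.0                               # 5-sigma detection threshold
observations = list(zip(obs_start, obs_end, rms))
```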
Figure \ref{realscen} shows the results of these simulations. The left plot shows the lower limits on the transient rate, the middle plot shows the upper limits, and the right plot shows the transient rate limits at 0.424 Jy as a function of transient duration. The horizontal red lines indicate the 99\% false detection limit. From these plots we can see that at 0.424 Jy and a transient duration of around 10 days, the transient rate is between $7\times10^{-4}$ and $5\times10^{-3}$ transients per day per square degree.
\subsection{Light Curves}\label{lcdiscussion}
Figures~\ref{tophat}-\ref{fred} show the probability contours for the tophat, parabolic, Gaussian, and FRED light curves. These probabilities are the ratios of detections to total simulated sources for each duration and flux bin. Each plot has a region of parameter space where all of the transients are detected. As shown by \citet{2017MNRAS.465.4106C}, in the tophat case this is bounded on the left by the duration of the longest gap between consecutive observations. The boundary on the right corresponds to the longest-duration transient that will still be considered a transient and not a constant source. In other words, this duration is slightly less than the length of the entire survey, since a transient of this length would be detected in every observation except for one. For the FRED, Gaussian, Wilma, and ERED light curves, we observe that the boundaries around this same region are curves. In~\ref{sec:lc:appendix}, we go into detail on finding the equations for these curves.
Examining the probability contours for the parabolic light curve in figure~\ref{parabolic} shows a plot that looks closer to the tophat than the other light curves do, because of the vertical boundaries on the region where the probability is equal to 1. While this may seem counter-intuitive, a similarity between the parabolic and tophat light curves is that they both have a fixed start and stop time at which the flux drops to zero. All the other example light curves approach but never reach zero. For this reason, we use a value to characterize the duration, such as the e-folding time for the FRED or the standard deviation for the Gaussian light curve. Using these values to characterize the duration is what causes the difference in these probability contours. As an example, for the FRED light curve, if the transient has a low flux compared to the sensitivity of the observations, then the duration over which the transient is detectable might be close to its e-folding time; in contrast, a very bright transient would be detected well past its e-folding time. This is why these light curves seem to curve away to the left as flux increases in these probability contour plots: the actual duration that is detected in the survey becomes longer. If we were able to define the duration of a transient by the duration that is actually detectable in the survey, then all of the probability contour plots would have the kinds of vertical boundaries that we see for the tophat and parabolic light curves. However, defining the durations this way would make the simulations prohibitively complex, both computationally and mathematically.
\subsection{False Detections} \label{fddiscussion}
Figure~\ref{fig9} shows an example of a survey with a large number of false detections of transients. This can happen when a survey includes images grouped around very different sensitivity scales. In the example shown here, the survey included observations on three very different timescales and sensitivities: 4-hour observations with an rms noise of around $9~\mu$Jy; 15-minute observations with an rms noise of around $30~\mu$Jy; and 8-second observations with an rms noise of around $350~\mu$Jy. Including all of these images in one run of the simulations creates many false detections, so in this example it is better to run simulations of the three different time and sensitivity scales separately. In figures~\ref{sample4hr}-\ref{sampleint}, the probability contours for the three different timescales are shown separately. From these plots, we can clearly see that the false detection limit is much lower on two of the three timescales and slightly higher on the shortest timescale.
\subsection{Gaps} \label{Gaps}
In order to demonstrate the capability of including observations with gaps, an observation file was created with weekly 4-hour observations (instead of 13 minutes), containing gaps within each weekly observation, and a total survey duration of 46 weeks (as in the previous example). The gaps were typical for a target-gain calibration loop in radio observations: 5 minutes on a calibrator field followed by 15 minutes on a science target field. The noise in the 15-minute target scans was simulated as before, with a mean of 35 $\mu$Jy and a standard deviation of 5 $\mu$Jy. For the full 4-hour observations, the noise was scaled as $1/\sqrt{\mathrm{time}}$ and simulated as a Gaussian with a mean of 8 $\mu$Jy and a standard deviation of 1 $\mu$Jy. Because of the bright calibrator source in the field, the noise for the calibrator observations was higher than a $1/\sqrt{\mathrm{time}}$ scaling would suggest: the noise of the 5-minute calibrator scans was simulated to be 100 $\mu$Jy with a standard deviation of 15 $\mu$Jy, and the noise of the combined image of the calibrator scans was simulated to be 25 $\mu$Jy with a standard deviation of 4 $\mu$Jy. For this example we assume that there are no detected transients in this simulated survey.
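The square-root-of-time noise scaling used for the target field amounts to (a quick check using the values quoted above):

```python
import numpy as np

rms_15min = 35e-6                        # Jy, mean noise of a 15-minute scan
n_scans = (4 * 60) // 15                 # 16 scans of 15 min in 4 hours
rms_4hr = rms_15min / np.sqrt(n_scans)   # thermal noise ~ 1/sqrt(time)
# -> 8.75e-6 Jy, consistent with the 8 uJy mean adopted for the full observation
```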
Using these simulations, we can show how accounting for gaps results in more accurate transient rate calculations. This is particularly important on timescales close to the length of the gap itself. In order to test the gaps algorithm, three observational scenarios were used: the calibrator field with 15 minute gaps, the target field with 5 minute gaps, and a full four hour observation with no gaps. These scenarios provide a comparison between different extremes of gaps in observations. These simulations were done with both tophat and FRED light curves. Figures \ref{fig3}-\ref{fig8} show the results of these scenarios. Figures~\ref{fig3} and~\ref{fig4} show upper limits on transient rates in the color legend, with transient duration on the horizontal axis and characteristic flux on the vertical axis. Figure~\ref{fig3} is for a tophat light curve and figure~\ref{fig4} is for a FRED light curve. The three plots in each figure show the difference in transient rates when not accounting for gaps in a weekly survey with 4 hour observations (top), when accounting for 5 minute gaps in a 4 hour science target observation (middle), and when accounting for 15 minute gaps in calibrator observations (bottom).
In figures~\ref{fig3} and~\ref{fig4} we can see a diagonal trend at the shortest durations below which there are no colored contours. This boundary marks the transients that are shortest in duration and lowest in flux to possibly be detected. It is a diagonal because it is the fluence that determines if a transient is detected \citep{2017MNRAS.465.4106C}; and in the FRED case, for short durations the integrated flux becomes identical to the tophat case. The blank space in the bottom left of the plots represents the region of transient parameter space that cannot be probed by the simulated survey. The red vertical lines on these plots mark 5 minutes, the length of the gaps in the target observations.
Differences in the transient rate between the target-gap and no-gap plots in figures~\ref{fig3} and~\ref{fig4} are small and difficult to distinguish. The calibrator gap departs somewhat from the others: close examination reveals a slightly different trend to the left of the red line for both light curves. This departure from the case of having no gaps, or of a smaller gap, only appears in the part of parameter space with the shortest-duration transients. Transients of longer duration are unlikely to fall entirely within the gaps and are therefore more likely to be detected in an observation.
Figures~\ref{fig5} through~\ref{fig8} show the differences between the different gaps in a different way. Figures~\ref{fig5} and~\ref{fig6} show the transient rates from figures~\ref{fig3} and~\ref{fig4} on the vertical axis at a constant flux of 0.464 Jy. Figures~\ref{fig7} and~\ref{fig8} show the percent difference in transient rate between the gaps and no gaps cases. The top panel of figures~\ref{fig7} and~\ref{fig8} shows the difference between the target gap and no gap cases, and the bottom plot shows the difference between the calibrator gap and no gap cases. As we can see there is an appreciable difference when accounting for 5 minute gaps in a target observation, and a significant difference of nearly 300\% when accounting for 15 minute gaps in the calibrator observation.
\subsection{Multiple Pointings}
Calculating transient rates for multiple overlapping pointings gives a more complete picture of how a survey can probe transient parameter space. An example of such a survey is used here and illustrated in figure~\ref{threepointings}: three circular fields of view each with a radius of 1.4 degrees. The details are summarized in tables~\ref{simsurvalt} and~\ref{simsurvseq} below. This setup has seven different regions: three that are probed only by one of the pointings, three that are probed by two pointings, and one that is probed by all three pointings. The three different fields may be observed at various cadences that affect the transient rates in the different areas. If one calculates the probability contours for a tophat transient, as is shown in figure~\ref{threeregion}, one can compare the single pointing (left) with a double overlapping pointing (middle) and a triple overlapping pointing (right). Note how regions with more overlap have a larger region in the transient duration space where the probability of detecting the transient is equal to 1.
For a comparison of different survey cadences, one example survey (table~\ref{simsurvalt}) alternates between the three pointings each week, while another (table~\ref{simsurvseq}) observes each pointing exclusively before moving to the next. Figure~\ref{multirgnprob} shows that the probability contours for the triple overlapping region, labelled 0\&1\&2, are the same for both scenarios, as expected. We do, however, see slight differences in the regions with no overlapping pointings in the part of parameter space where transients are best detected. This difference is due to the variations in the maximum gap and survey length in the two survey setups. One region with two overlapping pointings, labelled 0\&1, shows the most striking differences between the setups. The sequential setup produces two double-overlap regions with good limits on the transient rate and one with poor limits: the region observed in both the first and the last observed fields contains an extremely large gap. The alternating setup produces much more consistent detection regions, which suggests that it is the better choice if more uniform transient rate limits are the goal.
\begin{center}
\begin{table*}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
RA (J2000) & DEC & ID & Area (deg$^2$) & Duration (days) & Start (MJD) & End (MJD) \\
\hline
274.7345 & 7.7974 & 0 & 6.1572 & 294.01 & 58703.53 & 58997.54 \\
275.0913 & 7.1851 & 1 & 6.1572 & 294.01 & 58717.53 & 59011.54 \\
275.4482 & 7.7974 & 2 & 6.1572 & 294.01 & 58710.53 & 59004.54 \\
274.9130 & 7.4913 & 0\&1 & 4.1987 & 308.01 & 58703.53 & 59011.54 \\
275.0913 & 7.7975 & 0\&2 & 4.1987 & 301.01 & 58703.53 & 59004.54 \\
275.2696 & 7.4913 & 1\&2 & 4.1987 & 301.01 & 58710.53 & 59011.54 \\
275.0913 & 7.5926 & 0\&1\&2 & 3.4360 & 308.01 & 58703.53 & 59011.54 \\
\hline
\end{tabular}
\caption{Simulated survey alternating between pointings weekly}
\label{simsurvalt}
\end{table*}
\begin{table*}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
RA (J2000) & DEC & ID & Area (deg$^2$) & Duration (days) & Start (MJD) & End (MJD) \\
\hline
274.7345 & 7.7974 & 0 & 6.1572 & 98.16 & 58703.53 & 58801.70 \\
275.0913 & 7.1851 & 1 & 6.1572 & 98.16 & 58913.53 & 59011.70 \\
275.4482 & 7.7974 & 2 & 6.1572 & 98.16 & 58808.53 & 58906.70 \\
274.9130 & 7.4913 & 0\&1 & 4.1987 & 308.16 & 58703.53 & 59011.70 \\
275.0913 & 7.7975 & 0\&2 & 4.1987 & 203.16 & 58703.53 & 58906.70 \\
275.2696 & 7.4913 & 1\&2 & 4.1987 & 203.16 & 58808.53 & 59011.70 \\
275.0913 & 7.5926 & 0\&1\&2 & 3.4360 & 308.16 & 58703.53 & 59011.70 \\
\hline
\end{tabular}
\caption{Simulated survey moving between pointings sequentially}
\label{simsurvseq}
\end{table*}
\end{center}
\section{Performance}\label{performance}
Figure~\ref{fig10} shows how the simulations scale in execution time as a function of the number of sources simulated, for the example of 46 observations of 13 minutes. Figure~\ref{fig11} shows scaling in execution time as a function of the number of observations when the number of sources is held constant at $4.3\times10^{5}$. All of the simulations for this example were performed on a 2020 Apple Macbook Pro 13 inch model with the M1 chip. In addition to the total execution time, key portions of the code are shown as well. The conditionals and flux filtering steps take place when the algorithm is determining whether the sources are detected in observations. These steps are so named because they filter sources based on the calculated integrated flux, and perform a large number of boolean and bit operations to calculate and store each transient's state as either detected or not detected. The stats step aggregates the detections into probabilities. Finally, the plotting step is where all of the detection statistics, observation information, and false detection information are plotted. Since the data are binned onto a grid in order to plot, the plotting step does not scale with the number of sources and takes a constant amount of time.
Not shown here are the impacts of a few features and algorithms. The false detection algorithm re-runs the conditionals, flux filtering, and stats steps. In the case of a tophat light curve, this means the false detection algorithm slightly less than doubles the run time (the plotting step is not repeated). In the case of other light curves, the impact could differ, and is usually lower, since the false detection algorithm always uses tophat light curves to simulate constant sources. Another factor not shown here is the impact of having multiple pointings. For example, a survey setup with two overlapping pointings has three regions; the run time will therefore be about three times longer than for a single region, assuming equal numbers of observations in each region.
\section{Future Applications} \label{futureapplications}
\citet{2017MNRAS.465.4106C} shows the existence of a region of 100\% detection probability for the FRED and tophat light curves. The work presented here shows that this region of 100\% detection can be found in a wide variety of light curves. Optimizing the survey parameters to maximize this region of 100\% detection has many potential applications for future surveys. In addition, transient searches can use the false detection calculations to decide the best manner to plan a transient search in a given survey. The wide variety of data outputs can assist with a number of scientific goals that one might have for a survey, and the flexibility of the code can be easily adapted to those purposes.
One application of the code is using the transient simulations to optimize resource allocation. Such optimizations can be done both in terms of survey design and also the data reduction after the observation itself. Simulating a variety of survey setups, such as in section~\ref{Gaps}, is one way to ensure that the survey will accomplish its goals most effectively. In addition, simulations can also be done for varying aspects of the data reduction, such as the timescale on which images are made. By simulating the combining or splitting of the observations in a survey into multiple different timescales, one can find an optimal way to search for transients on these multiple timescales with a minimum of re-imaging. These optimizations are becoming increasingly important with radio facilities such as MeerKAT \cite{2016mks..confE...1J} and the LOw-Frequency Array (LOFAR) \cite{2013A&A...556A...2V}, which can easily use terabytes of disk space and considerable other computing resources as well. The Square Kilometer Array \cite{2009IEEEP..97.1482D}, the next-generation Very Large Array \cite{2018ASPC..517....3M}, and other upcoming facilities will surely require even more resources.
The previously mentioned tools for planning surveys have potential to be expanded to be even more helpful in the future. A potential future update to these simulations could include a tool to help calculate optimal pointings for smoothly probing a large area of the sky in both space and time.
The simulations code presented in this paper accounts for a large number of realistic effects that complicate transient searches and calculate transient rates from surveys. For research into particular kinds of sources, future upgrades can be made for particular light curves and population numbers that reflect certain sources of interest.
Finally, even though this simulations code has been designed for surveys in the radio regime, and the examples in this paper are based on this particular use case, it can easily be adapted and applied to other spectral regimes.
\section{Conclusions} \label{conclusions}
Simulating transients, following the methodology and code presented here, allows for calculating transient rates that are highly accurate due to the implementation of a variety of observational effects. The simulations presented here account for a variety of observing sensitivities, pointings, survey cadences, and gaps within observations and surveys. Furthermore, the code has been made easy to obtain, since it will be freely available for download through GitHub, and easy to use through its modular design, the inclusion of scripts to extract metadata from observations, and updates for modern versions of Python.
\section{Acknowledgements}
The authors would like to thank the referee for their constructive comments that helped improve the paper.
The authors would like to acknowledge the ThunderKAT collaboration for the valuable sharing of knowledge and resources, and Michael Moss for his helpful comments and feedback on this manuscript.
This work was completed in part with resources provided by the High Performance Computing Cluster
at The George Washington University, Information Technology, Research Technology Services.
\appendix
\section{Included Light Curves}
\label{sec:lc:appendix}
Here we present and briefly discuss the light curve shapes that are currently included in the simulations code base.
\subsection{Tophat}
The tophat is the simplest transient light curve in concept:
\[F=F_{pk}~\text{for}~t_{start} \le t \le t_{end}\]
It is simply at the peak flux for the entire duration of the transient. The probability contour plot shown in figure~\ref{tophat} has a region in parameter space in which all transients are always detected, which can be referred to as a region of guaranteed detection. This region has vertical boundaries that can be found to have a quite straightforward interpretation \citep{2017MNRAS.465.4106C}. The left-most boundary is the longest gap in the observations or, in other words, the longest duration of a tophat transient that could go undetected. The right-most vertical bounding line corresponds to the longest time scale that a transient could have while not being detected as a constant source. This quantity is the duration of the entire survey minus either the first or last observation.
\subsection{Fast Rise Exponential Decay}
The fast rise exponential decay (FRED) light curve is defined as instantaneously rising to the peak flux and exponentially decaying with a characteristic duration $\tau$, defined as its e-folding time:
\[F=F_{pk}\,\exp\left[\frac{-(t-t_{start})}{\tau}\right]~\text{for}~t\ge t_{start}\]
This light curve produces a slightly different probability contour, seen in figure~\ref{fred}, in which the bounding lines for the region of guaranteed detection can be interpreted as follows. The left boundary corresponds to the boundary due to the longest gap, like the tophat. However, unlike the tophat, the flux of the FRED light curve approaches but never actually reaches zero as time progresses. Therefore, brighter transients can be detected for longer than the characteristic duration of the transient, making this boundary a curve instead of a vertical line. The boundary condition can be expressed as $F_{int}=S_{gap}$, i.e. the integrated flux of the transient needs to be equal to the sensitivity of the observation the transient would be detected in, which would be the observation after the gap. We can find the integrated flux:
\[F_{int} = F_{pk}\,\tau\,\frac{\exp\left[-\frac{\max(T_{start},t_{start})}{\tau}\right] - \exp\left[-\frac{T_{end}}{\tau}\right]}{T_{end}-T_{start}}\]
Since we consider the case where the transient starts in the gap, the start of the observation that detects the transient is equal to the length of time from the start of the transient until the end of the gap, which we label $T_{gap}$: $T_{start}=T_{gap}$.
Therefore, $T_{end}=T_{start}+\Delta T_{gap}$, where $\Delta T_{gap}$ is the duration of the observation. Inserting the integrated flux into the previous equation and solving for $F_{pk}$ yields:
\[F_{pk}(\tau) = \frac{S_{gap}\,\Delta T_{gap}}{\tau\,\left(\exp\left[-\frac{T_{gap}}{\tau}\right] - \exp\left[-\frac{(T_{gap} + \Delta T_{gap})}{\tau}\right]\right)}\]
The right boundary is the boundary for the longest timescale. We can follow the same procedure as the left boundary, finding that $S_{obs} = S_{last}$, the sensitivity of the last observation in the survey. We also find the following modifications:
\[T_{start} = \tau_{survey} - \Delta T_{last}\]
\[T_{end} = \tau_{survey}\]
\[F_{pk}(\tau) = \frac{S_{last}\,\Delta T_{last}}{\tau\,\left(\exp\left[-\frac{(\tau_{survey} - \Delta T_{last})}{\tau}\right] - \exp\left[-\frac{\tau_{survey}}{\tau}\right]\right)}\]
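Both FRED boundaries are easy to evaluate numerically. The sketch below (hypothetical helpers, with both exponentials written in decaying form) also makes the limiting behaviour explicit: for $\tau$ much longer than the survey, $F_{pk}\to S$, i.e. the boundary approaches the plain sensitivity limit.

```python
import math

def fred_fpk_gap(tau, s_gap, t_gap, dt_obs):
    """Left boundary: peak flux needed for a FRED that started a time
    t_gap before the detecting observation (length dt_obs,
    sensitivity s_gap)."""
    denom = tau * (math.exp(-t_gap / tau)
                   - math.exp(-(t_gap + dt_obs) / tau))
    return s_gap * dt_obs / denom

def fred_fpk_survey(tau, s_last, tau_survey, dt_last):
    """Right boundary: same expression applied to the last observation
    of a survey of total length tau_survey."""
    denom = tau * (math.exp(-(tau_survey - dt_last) / tau)
                   - math.exp(-tau_survey / tau))
    return s_last * dt_last / denom
```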
\subsection{Exponential Rise Fast Decay (Wilma)}
In light of the FRED light curve, a natural extension would be to examine the reverse FRED light curve. The light curve ends at the peak flux and has no definite start:
\[F=F_{pk}\,\exp\left[{\frac{(t-t_{end})}{\tau}}\right]\text{ for }t\le t_{end}\]
The probability contour plot for this light curve is shown in figure~\ref{wilma}. As one can see, it is identical to the FRED light curve in figure~\ref{fred}. This makes sense when one realizes that if the entire survey were time-reversed, the light curve would be a FRED. For this reason, the lines bounding the region of guaranteed detection are the same as for the FRED light curve.
\subsection{Exponential Rise Exponential Decay}
The Exponential Rise Exponential Decay (ERED) light curve (figure~\ref{ered}) has neither a definite beginning nor end, only a characteristic time $t_{char}$ at which the flux equals the peak flux, and an e-folding time $\tau$:
\[F=F_{pk}\,\exp\left[{\frac{(t-t_{char})}{\tau}}\right]\text{ for }t< t_{char}\]
\[F=F_{pk}\text{ for }t=t_{char}\]
\[F=F_{pk}\,\exp\left[{\frac{-(t-t_{char})}{\tau}}\right]\text{ for }t> t_{char}\]
The transients from this light curve also behave similarly to the previous two cases, since this light curve is a Wilma light curve immediately followed by a FRED light curve. Therefore, if we use the integrated flux to find the curve marking the boundary corresponding to the shortest duration transient that will always be detected, we find:
\[F_{pk}(\tau) = \frac{S_{gap}\,\Delta T_{gap}}{\frac{\tau}{2}\,\left(\exp\left[-\frac{T_{gap}}{\tau}\right] - \exp\left[-\frac{2\Delta T_{gap} + T_{gap}}{\tau}\right]\right)}\]
Similarly, for the flux limit on the longest timescale, we have:
\[F_{pk}(\tau) = \frac{S_{first/last}\,\Delta T_{first/last}}{\frac{\tau}{2}\,\left(\exp\left[-\frac{\tau_{survey}}{\tau}\right] - \exp\left[-\frac{2\Delta T_{first/last} + \tau_{survey}}{\tau}\right]\right)}\]
In this equation, $S_{first/last}$ and $\Delta T_{first/last}$ would correspond to either the first or last observation in the survey depending on which is more sensitive.
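The same sanity check applies to the ERED boundaries; a sketch of the shortest-duration boundary (hypothetical helper, decaying exponentials, hypothetical numbers), which again tends to the sensitivity $S_{gap}$ for very long $\tau$:

```python
import math

def ered_fpk_gap(tau, s_gap, t_gap, dt_gap):
    """Shortest-duration boundary for the ERED light curve."""
    denom = 0.5 * tau * (math.exp(-t_gap / tau)
                         - math.exp(-(2.0 * dt_gap + t_gap) / tau))
    return s_gap * dt_gap / denom
```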
\subsection{Parabola}
Also included is a parabolic light curve defined as follows:
\[F = F_{pk}\,\left(1 - \frac{4}{\tau^2}\left(t - \frac{\tau}{2} - t_{crit}\right)^2\right)\]
Here $t_{crit}$ marks the start of the light curve, and the peak flux $F_{pk}$ occurs at $t = t_{crit} + \tau/2$, i.e. at half of the duration. Since the parabolic light curve starts and ends at zero flux, rather than approaching zero like the exponential or Gaussian light curves, this light curve has a definite duration like the tophat. The light curves with a definite duration all have boundaries in the probability contour plots that are derived in the same way as those for the tophat light curve. The boundaries for the exponential or Gaussian light curves instead arise from the difference between the characteristic duration and the duration over which the transient is actually detectable in the observations.
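A direct evaluation of the parabolic light curve (a hypothetical helper; the flux is clipped to zero outside the transient's duration) confirms the start, peak, and end times:

```python
def parabola_flux(t, f_pk, tau, t_crit):
    """Parabolic light curve: zero outside [t_crit, t_crit + tau],
    peaking at t = t_crit + tau/2."""
    if t < t_crit or t > t_crit + tau:
        return 0.0
    return f_pk * (1.0 - 4.0 / tau**2 * (t - tau / 2.0 - t_crit)**2)
```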
\subsection{Gaussian}
The final light curve included is a Gaussian-shaped one:
\[F=F_{pk}\,\exp\left[\frac{-(t-t_{crit})^2}{2\left(\frac{\tau}{2}\right)^2}\right]\]
In order to find the boundaries for the region of parameter space where the transients are always detected, we follow the same process as we did for the FRED light curve, and find an equation for the integrated flux (with erf being the Gauss error function):
\begin{equation*}
\begin{split}
F_{int} = F_{pk}\,\tau\,\sqrt{\frac{\pi}{8}}\,(\text{erf}\left[\frac{\sqrt{2}(T_{end}-t_{crit})}{\tau}\right]\\
-\text{erf}\left[\frac{\sqrt{2}(T_{start}-t_{crit})}{\tau}\right])/(T_{end}-T_{start})
\end{split}
\end{equation*}
We can define the boundary on the transient flux needed to be detected as $F_{int}=S_{gap}$.
Using the equation for integrated flux, we find the boundary for the shortest possible duration transient that will always be detected:
\begin{equation*}
\begin{split}
F_{pk} = \frac{S_{obs}\,\Delta T_{obs}}{\tau\, \sqrt{\frac{\pi}{8}}}\frac{1}{\text{erf}\left[\frac{-\Delta T_{gap}}{\sqrt{2}\,\tau}\right] - \text{erf}\left[\frac{(-2\Delta T_{obs} - \Delta T_{gap})}{\sqrt{2}\,\tau}\right]}
\end{split}
\end{equation*}
We also find the boundary on the right, for the longest possible duration transient before it is considered a constant source:
\begin{equation*}
\begin{split}
F_{pk} = \frac{S_{obs}\,\Delta T_{obs}}{\tau\, \sqrt{\frac{\pi}{8}}}\frac{1}{\text{erf}\left[\frac{-(\Delta T_{survey} + 2\Delta T_{obs})}{\sqrt{2}\,\tau}\right] - \text{erf}\left[\frac{-\sqrt{2}\,\Delta T_{survey}}{\tau}\right]}
\end{split}
\end{equation*}
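The Gaussian integrated-flux expression can be verified numerically: for an observation window much wider than the transient, $F_{int}\,(T_{end}-T_{start})$ approaches the total fluence $F_{pk}\,\tau\,\sqrt{\pi/2}$. A sketch (hypothetical helper):

```python
import math

def gaussian_fint(f_pk, tau, t_crit, t_start, t_end):
    """Mean flux of the Gaussian light curve over [t_start, t_end]."""
    pref = f_pk * tau * math.sqrt(math.pi / 8.0)
    a = math.erf(math.sqrt(2.0) * (t_end - t_crit) / tau)
    b = math.erf(math.sqrt(2.0) * (t_start - t_crit) / tau)
    return pref * (a - b) / (t_end - t_start)

# wide window: mean flux times window length -> F_pk * tau * sqrt(pi/2)
fluence = gaussian_fint(1.0, 1.0, 0.0, -50.0, 50.0) * 100.0
```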
\bibliographystyle{elsarticle-num-names}
\bibliography{thesis}
Title:
Gravitational Waves from Long Gamma-Ray Bursts and Supernovae
Abstract: Gamma-ray bursts (GRBs) are produced during the propagation of
ultra-relativistic jets. While our understanding of these jets has improved
notably during the last decades, it is currently impossible to study the jet
directly close to the central source, due to the high opacity of the medium. In
this paper, we present numerical simulations of relativistic jets propagating
through a massive, stripped-envelope star associated with long GRBs, breaking out
of the star and accelerating into the circumstellar medium. We compute the
resulting gravitational wave (GW) signal, showing that several key parameters
of the jet propagation can be directly determined from the associated GW signal.
The signal presents two peaks, the first corresponding to the jet duration
and the second to the end of the acceleration phase.
Depending on the observer location (with respect to the jet axis), this second peak
corresponds to the break-out time for observers located close to the jet axis
(which in turn depends on the stellar size), or to much later times
(corresponding to the end of the acceleration phase) for off-axis observers. We
also show that the slope of the GW signal before and around the first peak
tracks the jet luminosity history and the structure of the progenitor star. The
amplitude of the GW signal is $h_+D \sim$ hundreds to several thousands.
Although this signal, for extragalactic sources, is outside the range of
detectability of current GW detectors, it can be detected by future detectors
such as BBO, DECIGO and ALIA. Our results illustrate that future detections of GWs
associated with GRB jets will represent a revolution in our understanding of this
phenomenon.
https://export.arxiv.org/pdf/2208.00129
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
relativistic processes --
methods: numerical --
gamma-ray burst: general --
stars: jets --
gravitational waves
\end{keywords}
\section{Introduction}
Gamma-ray bursts (GRBs) are extremely luminous pulses of gamma-rays (with an isotropic energy of 10$^{51}-10^{54}$ erg) lasting typically from $\sim$ a fraction of a second to $\sim$ hundreds of seconds. GRBs are classified based on their duration. Short GRBs (SGRBs), lasting $\lesssim 2$~s, are typically produced during the coalescence of neutron stars (NS), while long GRBs (LGRBs), lasting $\gtrsim 2$~s, are in several cases associated with the collapse of massive stars and their explosion as type Ic supernovae (SNe) (for a review, see, e.g., \citealt{Kumar_2015}). Recent observations of a kilonova associated with GRB 211211A showed that the usual identification of different progenitors, based mainly on the GRB duration, can be misleading \citep[][]{Gao2022,Troja2022}.
The gamma-ray emission observed in these events is produced by highly relativistic jets, moving with Lorentz factors $\Gamma_j \sim$ 100 - 1000.
These jets are ejected from a black hole or a magnetar (the so-called ``central-engine'') formed during the collapse of a massive star (see, e.g. \citealt{HorthBloom2012,Cano-etal2017}) or as a result of the coalescence of a binary NS system (see, e.g., \citealt{Berger2014}).
Once the jet is ejected from the central engine, it propagates through the dense, optically thick surrounding medium formed by the progenitor star or the debris of the binary NS system, before breaking out at distances of $\sim 10^{10}-10^{11}$~cm. Theoretical studies show that, during this phase, the jet moves with sub-relativistic velocities ($\sim 0.1-0.5\,c$, where $c$ is the speed of light) \citep[e.g.,][]{Bromberg2011,NakarPiran2017,Decolle2018a}.
When the jet breaks out from the dense environment, it accelerates to large jet Lorentz factors $\Gamma_j$ ($\sim E_j/M_j c^2$ where $E_j$ and $M_j$ are the jet energy and mass), before emitting the observed gamma radiation at larger distances from the central engine ($\gtrsim 10^{13}-10^{15}$~cm), once the hot plasma becomes optically thin to gamma-ray radiation.
The prompt gamma-ray emission is followed by a multi-wavelength afterglow emission covering the full electromagnetic spectrum, from radio to X-rays, and lasting from minutes to several years. Thus, the late phases of evolution of the relativistic jets (from $\sim 10^{13}$ cm to $\gtrsim 10^{18}$ cm) can be studied by analyzing these rich electromagnetic signatures (see, e.g., \citealt{Kumar_2015} and references therein). On the other hand, it is much more difficult to study the early phases of evolution of the jet, corresponding to distances $\lesssim 10^{10}-10^{11}$ cm, as the high densities make the jet plasma optically thick to electromagnetic radiation. In particular, only neutrinos (e.g., \citealt{Kimura2022}) and GWs could probe directly the behaviour of the jet while it is crossing the dense environment.
In addition to oscillating GW signals associated with the coalescence of compact objects \citep{abbot2017NSmerger}, the possibility of detecting non-oscillating, low-frequency signals (the so-called ``memory'' signal, produced by unbound material over timescales $\gtrsim 1$~s) was proposed long ago \citep{1987Natur.327..123B}.
These ``memory'' signals have been studied extensively, e.g., in the context of supernovae (SNe) explosions \citep[e.g.][]{Kotake:2005zn,Murphy2009,muller2012,Muller2013,Wongwathanarat2015,Yakunin2015,Powell2019,Hubner2020,Mezzacappa2020,Richardson_2022}.
The focus of these studies was to discuss under which circumstances (in terms of specific instrument and signal morphology) the memory component of the signal spectral density is above the interferometric noise spectral density \citep[see, e.g.,][]{Moore2014}. This is a semiquantitative measure of the detectability of the memory (in the sense that it is an important metric, but it is not related to a specific algorithm). It is also worth stressing that, for detectability, the whole spectrum of the memory development over time matters, not just the zero-frequency component produced by the asymptotic value.
Previous studies of the GWs produced by GRB jets have focused on the propagation of the jet through the dense envelope, or to the acceleration of the jet after the break-out \citep{Segalis_2001,Sago_2004,Sun2012,Akiba2013,piran13,Du_2018,Yu_2020, piran21}.
These studies have shown that the amplitude of the GW increases with time due to the continuous injection of energy into the jet from the central engine, or due to the jet acceleration once it expands through the environment.
Previous studies \citep{Segalis_2001,Sago_2004,Sun2012,Akiba2013,piran13,Du_2018,Yu_2020,piran21} estimating the GW memory from GRB jets were based on simple analytical and/or semi-analytical estimations. Although these calculations provide a qualitative understanding of the GW memory, quantitative estimations can be obtained only by detailed numerical calculations.
In this work, we study the propagation of relativistic jets associated with LGRBs through the progenitor star, and their subsequent propagation through the wind of the progenitor star up to large distances ($10^{13}$~cm). We compute the resulting GW signal as a function of time and observer angle (with respect to the main axis of the jet). We also consider the possible presence of a supernova component, and how its GW signal is affected by the presence of the jet. As we will discuss below, although the simulations presented here refer to the LGRB case (in which the jet propagates through a massive progenitor star), the expected GW signal will be qualitatively similar in short GRBs.
The paper is structured as follows: in Section \ref{sec:methos} we discuss the initial conditions of the hydrodynamic simulations, and the methods used to compute the GW directly from the simulations. Section \ref{sec:results} presents the results of the calculations, in particular, the jet dynamics as the jets propagate through the progenitor and its environment, and the calculation of the resulting GW. In section \ref{sec:discussion} we discuss our results, in the context of present and future GW detectors. Our conclusions are presented in section \ref{sec:conclusions}.
\section{Methods}\label{sec:methos}
\subsection{Numerical simulations}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Scenario & $t_{\rm inj}$ (s) & Energy (erg) & Progenitor \\ \hline \hline
Successful Jet 1 & 10 & $10^{51}\,$ & 12TH \\
Successful Jet 2 & 2.5 & $10^{52}\,$& 16TH \\
Failed Jet & 10 & $10^{51}\,$ & 12TH\\
Supernova & 1 & $10^{52}$ & 12TH\\
Jet + Supernova & 10 & $10^{51}$ & 12TH \\
\hline
\end{tabular}
\caption{Numerical simulations presented in this paper. The columns refer to: the scenarios considered, the time during which the jet/SN is injected into the computational box, its energy, and the progenitor star (see the main text for a detailed description of each model). The progenitors 12TH and 16TH correspond to 12~M$_{\odot}$ and 16~M$_{\odot}$ initial masses, respectively.}
\label{tab:models}
\end{table}
We study the first 300 s of evolution of relativistic GRB jets, associated with massive stellar collapse, by running a series of numerical simulations. The simulations employ the adaptive mesh refinement code {\it Mezcal} \citep[]{decolle12}, which integrates the special relativistic, hydrodynamics equations by using a second-order (both in space and time), shock-capturing scheme.
We consider five scenarios (summarised in Table \ref{tab:models}): an asymmetric supernova (the ``supernova'' model), two successful jets without a SN associated (the ``successful jet 1'' and ``successful jet 2'' models), differing by their duration and total energy, a successful jet associated to a SN (the ``jet + supernova'' model), and a failed jet not associated to a SN (the ``failed jet'' model).
The numerical simulations (see Table \ref{tab:models}) employ two dimensional (2D), cylindrical (axisymmetric) coordinates. In all the models, the computational box extends from $(r,z) = 0$~cm to ($r_{\rm max},z_{\rm max})= 10^{13}\,$~cm, and is resolved by employing $40 \times 40$ cells at the coarsest level of refinement and $17$ levels of refinement, corresponding to a maximum resolution of $\Delta r_{\rm min}=\Delta z_{\rm min} = 3.8 \times 10^6\,$cm. We set the density in the computational box by considering the pre-collapse stellar models 12TH and 16TH taken from \citet[]{WoosleyHeger_2006}. These models\footnote{Long GRBs are associated with broad-line, type Ic SNe, which are produced during the collapse of massive, compact Wolf-Rayet stars.} correspond to stripped-envelope progenitor stars with stellar masses $M_\star=$ 9.23 $M_\odot$ and 11.45 $M_\odot$ and stellar radii $R_\star=4.5\times 10^{10}$~cm and $9\times 10^{10}$~cm for the 12TH and 16TH models, respectively. For radial distances $r > R_{\star}$, we consider a medium shaped by the wind of the Wolf-Rayet progenitor, i.e. with a density
\begin{equation}
\rho(r) = \frac{\dot{M}_w}{4\pi r^2 v_w},
\end{equation}
being $\dot{M}_w=10^{-5}\,$ M$_\odot$ yr$^{-1}$ and $v_{\rm w}=10^3\,$ km s$^{-1}$ typical values for the mass-loss rate and the velocity of the wind from a Wolf-Rayet star \citep[e.g.,][]{Vink2011}.
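For reference, the wind density profile is easy to evaluate in CGS units; the sketch below (a hypothetical helper, with assumed standard values for the solar mass and the year) gives $\rho \approx 2.5\times10^{-10}$~g~cm$^{-3}$ just outside the surface of the 12TH progenitor:

```python
import math

# assumed CGS constants (not quoted in the text)
M_SUN = 1.989e33  # g
YR = 3.156e7      # s

def wind_density(r_cm, mdot_msun_yr=1e-5, v_w_kms=1e3):
    """rho(r) = Mdot_w / (4 pi r^2 v_w) for a steady Wolf-Rayet wind."""
    mdot = mdot_msun_yr * M_SUN / YR  # g/s
    v_w = v_w_kms * 1e5               # cm/s
    return mdot / (4.0 * math.pi * r_cm**2 * v_w)

# density just outside the 12TH progenitor surface (R_star = 4.5e10 cm)
rho_surface = wind_density(4.5e10)
```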
The pressure in both the star and the wind is dynamically negligible (in a strong shock it does not affect the shock dynamics) and is set to $p=10^{-5} \rho c^2$.
In all except the ``supernova'' model, the relativistic jet is injected from an inner boundary located at $r_{\rm in}=5 \times 10^8\,$cm, with a jet Lorentz factor $\Gamma_{j}=$10.
The jet energy is largely dominated by thermal energy, with the jet pressure given as,
\begin{equation}
p_j= \frac{\rho_j c^2}{4} \left( \frac{\Gamma_{\infty}}{\Gamma_j} -1 \right),
\end{equation}
where $\rho_j$ is the jet mass density and $\Gamma_{\infty}=100$ the asymptotic jet Lorentz factor, eventually achieved once the jet breaks out of the star and accelerates by converting its thermal into kinetic energy. In two of the simulations (differing by the presence of a SN and indicated in Table \ref{tab:models} as ``successful jet 1'' and ``jet + supernova''), we inject the jet during $t_j=10$~s, such that its total energy is $E_j = 10^{51}$~erg and its luminosity is $L_j=10^{50}$~erg~s$^{-1}$, while in one model (the ``successful jet 2'' model) we inject the jet during $t_j=2.5$~s with a total energy of $E_j = 10^{52}$~erg, corresponding to a much larger luminosity $L_j=4\times10^{51}$~erg~s$^{-1}$. In all these cases the jet opening angle is $\theta_j=0.1\,$rad and, as we will discuss in detail below, the jet successfully breaks out of the star and accelerates to highly relativistic speeds through the progenitor wind. We also consider a simulation in which the jet also lasts for $t_j=10$~s, with a total energy $E_j = 10^{51}$~erg, but with a larger jet opening angle $\theta_j=0.2\,$rad (the ``failed jet'' model). In this case, the jet will not be able to break out successfully from the star. We refer to this case as the choked or failed GRB case.
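With the parameters adopted here ($\Gamma_j = 10$, $\Gamma_\infty = 100$), the injected jet pressure is a fixed fraction of the rest-mass energy density; a one-line check of the expression above (hypothetical helper):

```python
def jet_pressure_ratio(gamma_j=10.0, gamma_inf=100.0):
    """p_j / (rho_j c^2) for the thermally dominated injected jet."""
    return 0.25 * (gamma_inf / gamma_j - 1.0)

ratio = jet_pressure_ratio()  # 2.25 for the fiducial parameters
```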
To study how the GW memory signal is affected by the presence of both a SN and a GRB, we also inject, in two of the five simulations (``supernova'' and ``jet + supernova'' models, see table \ref{tab:models}), a supernova shock front from the same inner boundary at $t=0$~s.
Following \citet{DeColle2021} and \citet{Urrutia2022a}, we inject, from $r_{\rm in}$, a SN shock front during $t_{\rm sn}=0.1$~s, with a total energy of $E_{\rm sn}=4 \times 10^{51}$~erg and a mass $M_{\rm sn}=0.1M_{\odot}$. We assume that 10\% of the SN energy is thermal, while 90\% is kinetic. Type Ic, broad-line SNe associated with long GRBs present a certain degree of asymmetry (as inferred from polarization measurements, see, e.g., \citealt{Maund2007,Tanaka_2017}, or from the analysis of line emission during the nebular phase, see, e.g., \citealt{Taubenberger2009}). To qualitatively reproduce this asymmetry, we set an angular dependence for the energy injected in the SN as $E_{\rm SN}(\theta) \propto \cos^2 \theta$, where $\theta$ is the polar angle measured with respect to the $z$-axis.
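The $\cos^2\theta$ weighting can be normalized so that its integral over the sphere returns the total SN energy; the constant $3E_{\rm sn}/(4\pi)$ below is our assumption, as the text only states the proportionality:

```python
import numpy as np

# Sketch: angular distribution dE/dOmega = A cos^2(theta), with A = 3 E_sn/(4 pi)
# chosen (our assumption) so that the integral over the full sphere equals E_sn.
E_sn = 4e51                                    # total SN energy [erg]
A = 3.0 * E_sn / (4.0 * np.pi)

theta = np.linspace(0.0, np.pi, 200001)
# Axisymmetric solid-angle element: dOmega = 2 pi sin(theta) dtheta
integrand = A * np.cos(theta)**2 * 2.0 * np.pi * np.sin(theta)

# Trapezoidal integration over theta recovers the total energy
dtheta = theta[1] - theta[0]
total = float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * dtheta)
```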
In the ``jet + supernova'' model, in which both the SN and the jet are present, the jet is injected with a delay of 1 s with respect to the SN. The origin of the SN associated with GRBs is debated. Proposed models include a wind from a collapsar disk \citep{MacFadyenWoosley_1999}, energy injection from a magnetar \citep[e.g.,][]{Metzger2015}, and the jittering-jet mechanism \citep[e.g.,][]{papishSoker2014}; see also the discussion by \citet{DeColle2021}. Thus, the time delay between the SN and the jet is uncertain.
\subsection{Gravitational wave signals}\label{sec:2.2} %
We consider a system of reference centered on the central engine, with the $z$-axis along the main axis of propagation of the jet (see Figure \ref{fig1}). The direction of the observer is defined by the unit vector $\hat{n}=(\sin \theta_{\rm obs}, 0, \cos \theta_{\rm obs})$, where $\theta_{\rm obs}$ is the angle between the direction of the observer and the $z$-axis. We rotate the $x$ and $y$ axes such that $\hat{n}$ is located in the $x,z$ plane. Thus, the axes $\hat{n}$, $y$ and $x'$ (rotated by an angle $\theta_{\rm obs}$ with respect to $x$) define a system of reference in the observer frame.
We consider a fluid element $P$, at the position $\hat{r}=(\sin\theta \cos \phi, \sin\theta \sin \phi, \cos\theta)$, moving with a velocity $\vec{v}=(v_R \cos\phi, v_R \sin\phi, v_z)$, where $v_R,v_z$ are the fluid velocities along the radial and vertical axes of the cylindrical system of reference (see Figure \ref{fig1}). While in previous studies the velocity of the fluid element has been fixed as purely vertical or radial, in this paper we leave it completely general and determine it directly from the numerical simulations.
\citet{1987Natur.327..123B} and \citet{Segalis_2001} obtained explicit expressions for the GW memory polarization components $h_+$ and $h_\times$ in the transverse-traceless (TT) gauge:
\begin{eqnarray}
h_+\equiv h_{xx}^{TT}=-h_{yy}^{TT}&=&\frac{2G}{c^4}\frac{E}{D} \frac{\beta^2\sin^2\theta_v}{1-\beta \cos \theta_v}\cos 2 \Phi \;,
\label{eqn:new_h+}
\\
h_\times \equiv h_{xy}^{TT}=h_{yx}^{TT}&=&\frac{2G}{c^4}\frac{E}{D} \frac{\beta^2\sin^2\theta_v}{1-\beta \cos \theta_v}\sin 2 \Phi \;,
\label{eqn:new_hx}
\end{eqnarray}
where $G$ is the gravitational constant, $D$ the distance between the object and the observer, $\beta=v/c$ is the velocity normalized with respect to the speed of light, $\theta_v$ is the angle between the direction of the observer and the direction of the velocity vector, i.e.
\begin{equation}
\cos\theta_v = \hat{n}\cdot \hat{\beta} = (\beta_R \sin \theta_{\rm obs} \cos\phi + \beta_z \cos \theta_{\rm obs})/\beta \;,
\label{eq:nv}
\end{equation}
$E = (\rho H \gamma^2 c^2-p)\Delta V$ is the energy of the fluid element, where $\rho$ is the mass density, $\gamma$ the Lorentz factor, $p$ the pressure, $H=1+4p/(\rho c^2)$ the specific enthalpy (assuming a hot plasma with an adiabatic index $\Gamma_{\rm ad}=4/3$), and $\Delta V$ the volume of the fluid element which induces the metric perturbation; $\Phi$ is the azimuthal coordinate measured in the observer frame.
To find the value of $\Phi$, we consider the following geometric relations between the angles evaluated in the observer frame (indicating the azimuthal and polar directions by the capital Greek letters $\Phi$ and $\Theta$, respectively) and those in the laboratory frame (i.e., the frame centered on the central engine; see \citealt{Akiba2013}):
\begin{eqnarray}
\cos{\Theta} = \hat{n}\cdot\hat{r} &=& \sin\theta\cos \phi\sin\theta_{\rm obs} + \cos\theta\cos\theta_{\rm obs}, \\
\sin\theta \sin\phi &=& \sin\Theta \sin\Phi, \\
\sin\theta \cos\phi &=& \sin\Theta \cos\Phi\cos\theta_{\rm obs} + \cos\Theta \sin\theta_{\rm obs}\:,
\end{eqnarray}
which lead to
\begin{eqnarray}
\sin(2\Phi) = \nonumber \\ 2 \sin\theta
\sin\phi \left(\frac{
\sin\theta\cos \phi \cos\theta_{\rm obs} - \cos\theta\sin\theta_{\rm obs}
}{\sin^2\Theta}\right),
\label{eqn:angle_rel_1}
\\
\cos(2\Phi) = \nonumber\\ \frac{
(
\sin\theta \cos\phi\cos\theta_{\rm obs} - \cos\theta\sin\theta_{\rm obs} )^2 -\sin^2\theta \sin^2\phi }{\sin^2\Theta} .
\label{eqn:angle_rel_2}
\end{eqnarray}
In the case of an on-axis observer, i.e., located along the $z$-axis, $\theta_{\rm obs} = 0$ and we recover the expected result $\Phi = \phi$. In this case, by the symmetry of the problem, we get $h_+=h_{\times}=0$.
On the other hand, in the case of a particle moving along the $z$-axis, we have $\theta=0$, which implies $\sin(2\Phi) = 0$, $\cos(2\Phi) = 1$, and $h_{\times}=0$. Also, since $\beta = \beta_z$ in this case, we get $\cos \theta_v = \cos \theta_{\rm obs}$, and
\begin{equation}
\frac{\beta^2\sin^2\theta_v}{1-\beta \cos \theta_v} = \frac{\beta^2(1- \cos^2 \theta_{\rm obs})}{1-\beta \cos \theta_{\rm obs}}.
\end{equation}
This function has a maximum $\left(=2(\gamma-1)/\gamma\right)$ at $\cos \theta_{\rm obs}= \beta \gamma/(\gamma+1)$. In particular, for an ultra-relativistic flow, $\gamma \gg 1$, the maximum ($=2$) is located at $\theta_{\rm obs}^2 \sim 2/\gamma$. Thus, the GW signal determined from equation \eqref{eqn:new_h+} is only weakly beamed along the direction of motion, except for observers located nearly along the jet axis (in which case $h_+=0$, as shown above).
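A quick numerical scan (illustrative, not from the paper's pipeline) confirms the location and value of this maximum:

```python
import numpy as np

# Scan the angular factor beta^2 sin^2(theta_v) / (1 - beta cos theta_v) for motion
# along the z-axis (so theta_v = theta_obs), and locate its maximum numerically.
gamma = 10.0
beta = np.sqrt(1.0 - 1.0 / gamma**2)

x = np.linspace(-1.0, 1.0, 2_000_001)      # x = cos(theta_obs)
f = beta**2 * (1.0 - x**2) / (1.0 - beta * x)

x_max = float(x[np.argmax(f)])             # expected: beta*gamma/(gamma+1)
f_max = float(f.max())                     # expected: 2*(gamma-1)/gamma = 1.8
```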
In practice, the calculation of the GW signals proceeds as follows. We save a large number of snapshots of our two-dimensional, axisymmetric simulations at $t=t_i$, with $i=1,\ldots,600$ (i.e., 600 outputs, spaced by 0.5 s, during the total integration time of 300 s). The data files include the positions $R, z$ and the volume $\Delta V$ of each cell, in addition to the thermal pressure, mass density and velocity vector. We then remap each cell along the azimuthal $\phi$ direction and compute the values of $h_+$ and $h_{\times}$ (the latter to verify that it remains $\sim 0$ at all times). Finally, we compute the arrival time of the GW signal generated by each cell, that is,
\begin{equation}
t_{\rm obs} = t_i - (R/c) \cos\phi \sin \theta_{\rm obs} - (z/c) \cos \theta_{\rm obs} \;.
\label{eq:tobs}
\end{equation}
We divide the time axis in the observer frame into $N_{\rm obs}$ equally spaced time bins. Then, we add the contribution of each cell to the corresponding time bin to determine $h_+$ as a function of the observer time.
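A minimal sketch of this binning step (with illustrative array names; the actual pipeline reads the simulation snapshots) might look as follows:

```python
import numpy as np

C_LIGHT = 2.998e10   # speed of light [cm/s]

# Accumulate per-cell GW contributions into observer-time bins, following the
# arrival-time formula above. All array names here are illustrative.
def bin_gw_signal(t_snap, R, z, phi, dh_plus, theta_obs, n_bins=600):
    """Sum each cell's contribution dh_plus into its observer-time bin."""
    t_obs = (t_snap
             - (R / C_LIGHT) * np.cos(phi) * np.sin(theta_obs)
             - (z / C_LIGHT) * np.cos(theta_obs))
    edges = np.linspace(t_obs.min(), t_obs.max(), n_bins + 1)
    idx = np.clip(np.digitize(t_obs, edges) - 1, 0, n_bins - 1)
    h_plus = np.zeros(n_bins)
    np.add.at(h_plus, idx, dh_plus)        # unbuffered in-place accumulation
    return edges, h_plus
```

Summing over all snapshots then yields $h_+(t_{\rm obs})$; by construction the total signal is conserved by the binning.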
\subsection{Calculation of the amplitude spectral density}\label{sec:asd_calc}
When a GW passes through an interferometer, it produces time-series data, i.e., a succession of data points measured at certain times. The measured data $s(t)$ are a combination of the detector noise $n(t)$ and the GW signal $h(t)$ \citep{Moore2014}:
\begin{equation}
s(t) = h(t)+n(t) ,
\label{signal}
\end{equation}
where $h(t)= F_+ h_+ + F_\times h_\times$, with $F_+$ and $F_\times$ the antenna response patterns. For an optimally oriented source, $F_+=1$ and $h(t)\simeq h_+$.
The sensitivity of a detector to these polarizations depends upon the relative orientations of the source and detector. The challenge in the data analysis is to separate the GW signal from the noise for a given observation.
In the frequency domain, the characteristic GW strain $h_c(f)$ is defined as:
\begin{equation}
[h_c(f)]^2 = 4 f^2 |\tilde{h}(f)|^2,
\end{equation}
where $\tilde{h}(f)$ is the Fourier transform of the strain $h(t)$, and the noise amplitude $h_n(f)$ is:
\begin{equation}
[h_n(f)]^2 = f^2 S_n(f),
\end{equation}
where the function $S_n(f)$ is called the power spectral density (PSD) of the noise, and the signal-to-noise ratio (SNR) is defined by:
\begin{equation}
{\rm SNR}^2 = \int_{0}^{\infty} df \frac{4 |\tilde{h}(f)|^2}{S_n(f)}\;.
\label{eqn:SNR_c}
\end{equation}
This characteristic strain for an astrophysical source is the amplitude of the wave times the square root of the number of periods observed. Furthermore, the amplitude spectral density (ASD) is computed as
\begin{equation}
{\rm ASD} = h_c(f)\, f^{-1/2} = 2 f^{1/2} |\tilde{h}(f)| \;.
\label{eq:17}
\end{equation}
The ASD is a crucial element for characterizing the detection strain during the data analysis.
In this paper, the ASD and SNR are computed from the strain $h(t)$ obtained as described in Section \ref{sec:2.2}, by taking its Fourier transform and applying equations \eqref{eqn:SNR_c} and \eqref{eq:17}.
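A minimal sketch of the ASD computation (assuming a uniformly sampled strain; the FFT is multiplied by the sampling interval to approximate the continuous Fourier transform):

```python
import numpy as np

# Compute the ASD, 2 f^{1/2} |h~(f)|, of a real, uniformly sampled strain h(t).
def asd(h, dt):
    n = len(h)
    h_tilde = np.fft.rfft(h) * dt          # approximate continuous Fourier transform
    f = np.fft.rfftfreq(n, dt)
    return f[1:], 2.0 * np.sqrt(f[1:]) * np.abs(h_tilde[1:])   # drop the f = 0 bin
```

For a sine of amplitude $A$ and duration $T$ sampled over an integer number of periods, the ASD at the signal frequency $f_0$ is $A\,T\,\sqrt{f_0}$, which provides a simple sanity check of the normalization.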
The SNR of the binary black holes detected by the LIGO/VIRGO network is between 6 and 26, with most events detected with an SNR of 10-20\footnote{See, e.g., \url{https://www.gw-openscience.org/eventapi/html/allevents/}}. In this paper we consider a conservative value of SNR = 10 as the detectability limit of the GW signal computed from a template-based analysis.
\section{Results}\label{sec:results}
\subsection{Jet dynamics}
In this section, we describe the dynamics of the system in the different numerical simulations. Figure
\ref{fig2} shows three evolutionary times (at 7 s, 14 s and 300 s, from the top to the bottom panels) for, from left to right, a successful jet without and with an associated SN (the ``successful jet 1'' and ``jet + supernova'' models), the choked jet (the ``failed jet'' model) and a SN-like explosion (the ``supernova'' model). The ``successful jet 2'' model is qualitatively similar to the ``successful jet 1'' model (although the jet breaks out on a shorter timescale, as we will discuss below) and is not shown in the figure.
As shown in Figure \ref{fig2} (top panels), the jets in the ``successful jet 1'' and ``jet + supernova'' models expand through the stellar material. At the shock front, the stellar material is heated and accelerated by the forward shock, while (in the lab frame) the jet material, launched from the central engine and propagating through the jet channel, is heated and decelerated by the reverse shock. The hot, entropy-rich post-shock material expands sideways into the progenitor star, producing an extended cocoon \citep[see, e.g.,][]{Bromberg2011, Gottlieb_2018}, which helps collimate the jet. Despite this extra collimation, the jet velocity remains sub-relativistic while the jet moves through the star (see Figures \ref{fig2} and \ref{fig3}).
Once the jet breaks out from the stellar surface (Figure \ref{fig2}, for the ``successful jet 1'' and ``jet + supernova'' models), the cocoon expands laterally, quickly engulfing the low-density region surrounding the progenitor star, while the entropy-rich material close to the jet axis accelerates, converting thermal into kinetic energy. The cocoon material remains strongly stratified along both the radial and the polar direction, moving at mildly relativistic speeds close to the jet axis and at sub-relativistic speeds close to the equatorial plane.
Once the jet expands to larger distances (Figure
\ref{fig2}, left-bottom panel), the fast-moving material remains confined to a thin shell with size $\gtrsim t_j c$ ($\sim 3\times 10^{11}$ cm in the successful jet simulations shown in the figure), where $t_j$ is the time during which the jet is injected by the central engine. On the other hand, the cocoon begins to decelerate, especially close to the equatorial plane where the cocoon energy is lower, as indicated by the presence of Rayleigh-Taylor instabilities visible in Figure \ref{fig2}.
The simulation of the jet associated with a SN (the ``jet + supernova'' model) is qualitatively similar to the one without the SN (the ``successful jet 1'' model). In this simulation, the jet is launched with a delay of 1 s with respect to the SN. After a few seconds, the jet head reaches the SN shock front, breaks out of it and expands through the progenitor star. The late phases are also similar to those of a jet without a SN discussed above, except that, at late times, the SN shock front breaks out from the progenitor star into the jet cocoon.
We notice that the general outcome of the system depends on the time when the jet breaks out from the SN. If, for instance, the jet energy, opening angle and duration are such that the SN shock front breaks out first from the stellar surface, then the jet will remain trapped inside the expanding SN, depositing its energy in the deep layers of the SN ejecta. The result of the interaction between the SN, the jet and its cocoon leads to a rich landscape of scenarios which have not been studied in detail yet \citep[see][for a qualitative description]{DeColle2021}.
The third column of Figure \ref{fig2} shows the case of a choked jet (the ``failed jet'' model). In this case, the jet opening angle is larger by a factor of $\sim 2$, so that the luminosity per unit solid angle drops by a factor of $\sim 4$. The jet duration (10 s) is then not long enough for the jet to break through the progenitor star. Once the jet power is switched off, the relativistically moving material crosses the jet channel in a time $R_h/c\sim \beta_h t_j$, where $R_h$ and $\beta_h c$ (with $\beta_h \sim 0.1-0.3$) are the position and velocity of the jet head, and $t_j$ is the jet injection time. Once all the jet material arrives at the head of the jet, the jet quickly expands laterally and decelerates. It can then break out from the stellar surface as a more spherical explosion (see the bottom panel of the figure).
The last column of Figure \ref{fig2} shows a nearly spherical explosion, qualitatively representing a SN explosion (the ``supernova'' model). In this case, the shock breakout is also nearly spherical. Nevertheless, we notice that realistic 3D simulations of SN explosions show a much more asymmetric, turbulent behaviour not captured in these 2D simulations.
Figure \ref{fig3} shows the evolution of the position of the jet head ($z_{\rm sh}$ hereafter) and its average velocity, as a function of time, for the different models. As discussed above, the velocity of the shock front is sub-relativistic inside the progenitor star. Once the shock front approaches the stellar surface, it quickly accelerates due to the large density gradients. This is visible both in the top panel of Figure \ref{fig3}, where the slope of the curves showing $z_{\rm sh}$ vs $t$ becomes steeper just after the breakout (marked by the vertical dotted lines), and in the bottom panel, where the average velocity increases quickly after the breakout. The SN and the choked-jet cases then reach a velocity of $\sim 0.2$ c, while the successful jets (with or without an associated SN) continue accelerating until the end of the simulation. As mentioned before, the acceleration process is related to the conversion of thermal into kinetic energy. At the end of the process, the jet head will reach a terminal Lorentz factor $\Gamma_j\sim E_j/M_j c^2 \gg 1$.
Finally, we notice that the high-luminosity model (``successful jet 2'') is qualitatively similar to the ``successful jet 1'' model, the main difference being the timescales of the different phases. As the luminosity is larger, the jet duration shorter, and the progenitor star smaller, the jet breaks out from the stellar surface on a much shorter timescale, and it accelerates faster to its final velocity (see Figure \ref{fig4}).
\subsection{GW emission}
To understand where the GW signal originates, we show in Figure \ref{fig4} the amplitude of the GW signal $h_+$ as a function of $z$, at different times, i.e., integrated over the radial and azimuthal directions. During the first 10 s, the jet is continuously injected into the computational box, and the jet energy increases along the jet channel (see Figure \ref{fig3}). As shown by the black curve, corresponding to $t=10$ s, the GW signal is produced along most of the jet channel. The small fluctuations correspond to the presence of recollimation shocks. As the jet pressure is larger than the cocoon pressure, the jet expands laterally into the cocoon until both pressures are approximately equal. Then, a recollimation shock is created, pinching the jet onto the jet axis. This produces strong fluctuations in the jet velocity and energy, which lead to the fluctuations in the GW signal seen in Figure \ref{fig4}.
Once the jet breaks out from the star, the energy and velocity in the emitting region become more uniform. As discussed above, the jet velocity increases strongly, reaching a Lorentz factor close to the terminal value (set to 100 in the simulation, see Section \ref{sec:methos}). While a fraction of the total energy is stored in the cocoon, the cocoon does not contribute significantly to the GW signal, as it moves at most at mildly relativistic speeds. This can be seen in the red curve shown in Figure \ref{fig4} (corresponding to $t=140$ s), in which it is evident that the region emitting the GW signal is limited to the fast-moving jet material.
Figure \ref{fig5} shows $h_{+} D$ as a function of time. $h_{\times} D$, not shown in the figure, remains close to zero (at machine precision) at all times, given that all simulations are axisymmetric. To illustrate the effect of the arrival times on the shape of the GW signal, we show the GW amplitude in the lab frame (top panel), i.e., computed assuming $t_{\rm obs} = t$ in equation \eqref{eq:tobs}, and in the observer frame (center and bottom panels) for the successful jet model without an associated SN.
In the lab frame, the GW signal presents two peaks, the first one at $t=t_j$, i.e., corresponding to the time when the jet power is switched off from the central engine, and the second at the very end of the simulation, corresponding to the acceleration of the jet to its terminal velocity.
Equation \eqref{eqn:new_h+} implies that a constantly powered jet with constant velocity (along the $z$-axis) and $E_j=L_j t$, with also $L_j(t)=L_j$ constant, would produce a GW signal increasing linearly with time (see also \citealt{Yu_2020}).
Figure \ref{fig5} shows that the increase before the first peak is not linear, due to the jet acceleration as it approaches the stellar surface and moves through a progressively thinner medium (see Figure \ref{fig3}, bottom panel). As soon as the jet luminosity starts dropping\footnote{The jet injection time is $t_j=10$ s, but, to avoid numerical problems related to the strong rarefaction wave produced once the jet is switched off, we set a jet luminosity dropping linearly between 9 s and 10 s.} at $t=9$ s, the GW amplitude quickly drops with time. At larger distances from the central engine, the GW amplitude increases again due to the acceleration of the jet material. Once the jet achieves its terminal velocity, that is, after converting most of its thermal into kinetic energy, the GW amplitude reaches a second peak before dropping again with time. Unfortunately, the second peak is not completely resolved in our simulations, as it occurs (in the lab frame) at times larger than the simulated 300 s. The value of the GW signal at the second peak should therefore be taken as a lower limit to the real value.
In the lab frame, the dependence on the observing angle is weak. Except for observers located exactly on the jet axis, for which $h_+=0$, the values of $h_+$ computed at different observer angles differ by a factor $\lesssim 2$.
The central and bottom panels of Figure \ref{fig5} show the same calculations, but in the observer frame. A qualitative understanding of the behaviour of $h_+$ in this case can be attained by assuming that all the GW signal comes from a region very close to the jet axis. In this case, $R=0$, and equation \eqref{eq:tobs} reduces to
\begin{equation}
t_{\rm obs} = t - (z/c) \cos \theta_{\rm obs} \;.
\end{equation}
Then, assuming that the emission comes from a single point source moving with constant velocity $\beta$, we get
\begin{equation}
t_{\rm obs} = t \left(1 - \beta \cos \theta_{\rm obs}\right) \;.
\end{equation}
For observers located at large observing angles, $\theta_{\rm obs}\gg 0$, $t_{\rm obs} \sim t$ and the GW arrival time is the same as the time when the signal is produced (except of course for the time $D/c$ needed for the signal to propagate from the source to the Earth). On the other hand, for observers located at small observing angles,
\begin{equation}
\cos \theta_{\rm obs}\sim 1-\frac{\theta_{\rm obs}^2}{2},
\end{equation}
and
\begin{equation}
t_{\rm obs}\sim t \left(1 - \beta + \frac{\beta \theta_{\rm obs}^2}{2}\right) \sim t\;\frac{1 + \Gamma^2 \theta_{\rm obs}^2}{2\Gamma^2} .
\end{equation}
Then, for
\begin{equation}
\theta_{\rm obs} \ll \frac{1}{\Gamma}\sim 6^\circ \left(\frac{\Gamma}{10}\right)^{-1} ,
\end{equation}
we have
\begin{equation}
t_{\rm obs} \sim \frac{t}{2\Gamma^2},
\end{equation}
and the GW signal arrival time is reduced by a factor of a few hundred with respect to the GW signal as seen in the lab frame, while for $\theta_{\rm obs} \gg 1/\Gamma $, we have
\begin{equation}
t_{\rm obs} \sim \frac{t \, \theta_{\rm obs}^2}{2}.
\end{equation}
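These limits are easy to verify numerically (illustrative sketch, not from the paper's pipeline):

```python
import math

# Arrival-time compression for a point source moving along the z-axis with
# Lorentz factor Gamma: t_obs = t (1 - beta cos theta_obs).
Gamma = 10.0
beta = math.sqrt(1.0 - 1.0 / Gamma**2)

def t_obs(t, theta_obs):
    return t * (1.0 - beta * math.cos(theta_obs))

on_axis = t_obs(1.0, 0.0)      # ~ 1/(2 Gamma^2) = 5.0e-3 for on-axis observers
off_axis = t_obs(1.0, 0.5)     # ~ theta_obs^2/2 = 0.125 for theta_obs >> 1/Gamma
```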
As shown in Figure \ref{fig5}, the GW signal in the observer frame is very different from that in the lab frame. Consistently with the discussion above, the second peak moves to increasingly smaller observer times for smaller observer angles. At $\theta_{\rm obs} = 5^\circ$, the arrival time of the second peak drops so much that it overlaps the first peak; as the simulation output files are saved every 0.5 s, this implies that, for this observer angle, the two peaks are separated by less than 0.5 s. The second peak instead moves to $\sim 12$ s and $\sim 22$ s for observers located at $\theta_{\rm obs} = 10^\circ$ and $20^\circ$, respectively. As more GW radiation arrives during a shorter time, the amplitude of the two peaks increases substantially, especially for small observer angles.
The bottom panel shows that the maximum of the GW signal is obtained between $\theta_{\rm obs} = 3^\circ$ and $\theta_{\rm obs} = 7^\circ$, i.e., for observers located at the edge of the jet. Although it is barely visible due to the size of the time bins (0.5 s, as mentioned before), the breakout from the progenitor star produces a small change in the slope of the curves.
Figure \ref{fig6} shows the GW amplitude $h_+ D$ for the other models considered. The ``successful jet 1'' and ``jet + supernova'' models produce similar results (compare the upper panel of Figure \ref{fig6} with the middle panel of Figure \ref{fig5}). The GWs produced by the luminous ``successful jet 2'' model, shown in the second panel, present a similar behaviour, but with peaks located at shorter times and a much larger peak amplitude ($\sim 13000$ cm vs $\sim 650$ cm). In the case of the ``failed jet'', $h_+$ increases for $t\leq t_j$, to then drop on a short timescale ($\lesssim 0.5$ s). The peak reached in this model is $\sim 2-3$ orders of magnitude smaller than in the other cases.
Finally, the GW signal produced by a SN is several orders of magnitude smaller, as the velocity of the SN shock front always remains sub-relativistic. We note, however, that our simulations do not capture the initial, larger GW signal produced by the early propagation of the SN shock front immediately after the collapse, because we follow the propagation only far away from the central engine.
\begin{table*}
\centering
\begin{tabular}{cccccccc}
\hline
Detector & \multicolumn{2}{c}{SNR } & \multicolumn{2}{c}{Distance [Mpc]} & \multicolumn{3}{c}{Rate [yr$^{-1}$]} \\
& $5^\circ$ & $70^\circ$ & $5^\circ$ & $70^\circ$& $0^\circ-10^\circ$ & $10^\circ-40^\circ$ & $40^\circ-90^\circ$ \\ \hline
LIGO O4 & $3.8\times 10^{-3}$ & $1.3 \times 10^{-2}$ & $1.5\times 10^{-2}$ & $5.1\times 10^{-2}$ & $1.5 \times 10^{-12}$ & $1.9\times 10^{-10}$ & $4.2\times 10^{-10}$ \\ \hline
VIRGO O4 & $2.0\times 10^{-3}$ & $5.5 \times 10^{-3}$ & $2.2\times 10^{-2}$ & $2.2 \times 10^{-2}$ & $7.3 \times 10^{-13}$ & $1.8\times 10^{-11}$ & $3.6\times 10^{-11}$ \\ \hline
KAGRA & $8.9\times 10^{-3}$ & $2.8\times 10^{-3}$ & $7.3\times 10^{-3}$ & $2.3\times 10^{-2}$ & $1.6\times 10^{-14}$ & $2.1\times 10^{-12}$ & $5.0\times 10^{-12}$ \\ \hline
Einstein Telescope & $4.4\times 10^{-2}$ & $6.2\times 10^{-2}$ & $3.5\times 10^{-1}$ & $5.0\times 10^{-1}$ & $3.9\times 10^{-10}$ & $2.3\times 10^{-8}$ & $5.3\times 10^{-8}$ \\ \hline
Cosmic Explorer & $3.8\times 10^{-2}$ & $6.7\times 10^{-2}$ & $3.0\times 10^{-1}$ & $5.3\times 10^{-1}$ & $3.4\times 10^{-10}$ & $2.8 \times 10^{-8}$ & $6.4 \times 10^{-8}$ \\ \hline
eLISA & $2.1\times 10^{-2}$ & $3.9\times 10^{-3}$ & $8.5\times 10^{-2}$ & $1.5\times 10^{-2}$ & $5.5 \times 10^{-11}$ & $3.7\times 10^{-10}$ & $4.0\times 10^{-11}$ \\ \hline
ALIA & $1.6$ & $9.3\times 10^{-2}$ & $6.4$ & $3.7\times 10^{-1}$ & $1.3\times 10^{-5}$ & $1.2\times 10^{-5}$ & $4.5\times 10^{-7}$ \\ \hline
DECIGO & $1.5\times 10^{2}$ & $4.7$ & $6.0\times 10^2 $ & $1.8\times 10^{1}$ & $7.5$ & $2.2$ & $1.0 \times 10^{-1}$ \\ \hline
BBO & $1.5\times 10^2$ & $5.4$ & $6.0\times 10^2$ & $2.1\times 10^1$ & $ 7.9$ & $2.5$ & $1.2\times 10^{-1}$ \\ \hline
\end{tabular}
\caption{The columns refer to:
the observatories considered (see Figure \ref{fig7}), the signal-to-noise ratio (SNR) for a jet seen at an observer angle $\theta_{\rm obs}=5^\circ, 70^\circ$ and at a distance of 40 Mpc, the distance where SNR = 10, and the number of events detected per year along different solid angles. The values refer to the ``successful jet 2'' model. }
\label{tab:SNR}
\end{table*}
\section{Discussion}\label{sec:discussion}
In this paper, we have presented numerical simulations of the propagation of relativistic jets through a massive progenitor star, their breakout, and their expansion up to distances of $\sim 10^{13}$~cm, and we have computed the resulting GW signal as a function of the observer angle.
Previous studies of the GW memory from GRB jets have focused on the neutrinos produced by the central engine during jet formation \citep{Hiramatsu2005,Suwa2009,Kotake2012}, on internal shocks and shock deceleration during late stages of the evolution \citep{Akiba2013}, and on the jet acceleration \citep{piran13,Yu_2020,piran21}. These studies have used an analytic description of the jet, often taken as an accelerating point mass. In our study, we compute the GW signal using the dynamics of the jet as it crosses the progenitor star and accelerates through the circumstellar medium.
Although our results qualitatively confirm previous findings, our numerical simulations allow us to give a quantitative prediction of the expected GW signal.
The GW signal computed by \citet{Akiba2013} during the shock-deceleration phase is about $\sim 1000$ times smaller than the one determined from our simulations, although the two calculations sample different distances: our simulations extend up to $10^{13}$ cm, while \citet{Akiba2013} studied the propagation of the jet during the prompt emission, i.e., at $R_{\rm sh} \sim 10^{13}-10^{15}$ cm.
\citet{piran13,piran21} studied the acceleration of the jet up to ultra-relativistic speeds. They showed that the jet acceleration produces a peak in the GW signal, which depends on the observer angle.
Their study can be applied, in our context, to the acceleration of the jet when it breaks out from the star. Thus, the peak they observe in their calculations is equivalent to the second peak seen in Figures \ref{fig5} and \ref{fig6}.
\citet{Yu_2020} employed an analytical model for the dynamics of the jet through the progenitor star (applying it also to sGRBs). They computed the acceleration of the shock front as it approaches the stellar surface. Although the results are qualitatively similar, the temporal evolution of $h_+D$ is different (compare, e.g., their Figure 3 with our Figures \ref{fig5} and \ref{fig6}). As they mention, observing the GW signal would probe the jet propagation and the interior of the progenitor star. Nevertheless, we argue in this paper that numerical models are needed to get a proper quantitative prediction.
The GW signal is ``anti-beamed'' \citep{Segalis_2001,Sago_2004,piran13,piran21}. Nevertheless, we notice that the GW signal is strongly suppressed only for observers located at $\theta_{\rm obs} \approx 0^\circ$. As shown in the bottom panel of Figure \ref{fig5}, it increases for larger observer angles (relative to the jet opening angle $\theta_j$), peaking at $\theta_{\rm obs}\sim \theta_j$ (e.g., the GW signal is $\sim 1/2$ of the peak value at $\theta_{\rm obs} = \theta_j/2$).
In contrast with the prediction obtained from analytical models, then, we expect to see GWs associated with GRBs seen nearly on-axis. We also expect that in three-dimensional numerical simulations, in which the symmetry with respect to the main axis of propagation of the jet is broken, the propagating jet would produce a GW signal on-axis as well.
The other clear feature resulting from our models is the presence of a double-peak structure in the GW signal, due to two characteristic acceleration phases: a) inside the progenitor star, as the jet moves through a progressively lower-density medium while approaching the stellar surface; and b) after the breakout, as the jet accelerates by converting thermal into kinetic energy. The timescales of the two peaks directly reflect the duration of the jet $t_j$ (the first peak) and the observer angle (with larger timescales corresponding to larger $\theta_{\rm obs}$, see Figures \ref{fig5} and \ref{fig6}).
As discussed above, the slope of the GW signal before and after the first peak (see, e.g., Figure \ref{fig6}) depends on the stellar structure and on the jet luminosity. For instance, we can expect a shallower increase for a jet with a luminosity decreasing with time. Thus, GW observations by future detectors may provide direct information on the central engine activity (e.g., jet duration and luminosity history), the stellar structure, the observer angle and the acceleration process after breakout.
Figure \ref{fig7} shows the amplitude spectral density computed from the numerical simulation of the ``successful jet 2'' model, employing the methods described in Section \ref{sec:asd_calc}. The figure covers the frequency range $10^{-2}-10^{3}$ Hz and ASD values of $10^{-26}-10^{-10}$ Hz$^{-1/2}$, showing the sensitivity curves of several interferometers together with the astrophysical signal analyzed in our study.
The LIGO and VIRGO detectors have completed the science runs O1, O2 and O3, and are currently being upgraded for O4, which will start taking data in February 2023. The KAGRA \citep{Aso2013eba} interferometer will join the LIGO/VIRGO network during 2023. Future interferometers include \citep{Moore2014} the evolved Laser Interferometer Space Antenna (eLISA), the Advanced Laser Interferometer Antenna (ALIA) \citep{Sathyaprakash:2009xs}, DECIGO, the Big Bang Observer (BBO, \citealt{Yagi:2011wg}), and the Einstein Telescope (ET)/Cosmic Explorer (CE) \citep{Hild:2010id}. The ASDs of all these interferometers are included in Figure \ref{fig7}.
Figure \ref{fig7} shows the ASDs computed from the simulation assuming a GRB jet at 1 Mpc. The signal peaks at low frequencies ($\sim 0.1$ Hz) and depends strongly on the observer angle, with a peak between $5\times 10^{-21} (D/1\; {\rm Mpc})^{-1}$ at $\theta_{\rm obs} = 5^\circ$ and $2\times 10^{-22} (D/1\; {\rm Mpc})^{-1}$ at $\theta_{\rm obs} = 70^\circ$. At larger frequencies, the signal drops to much smaller values, lying $\sim$ one order of magnitude below the ASD of LIGO/VIRGO. However, our time series is sampled every 0.5 s, corresponding to a Nyquist frequency of 1 Hz, so that results above this frequency should be taken with caution.
In Table \ref{tab:SNR} we estimate the detectability of the ``successful jet 2'' model (i.e., a relativistic jet with a total energy of $10^{52}$ erg lasting 2.5 s) at a distance of 40 Mpc (second and third columns of Table \ref{tab:SNR}), using equation \ref{eqn:SNR_c}, for present and planned interferometers (first column) at two characteristic observer angles ($\theta_{\rm obs}=5^\circ, 70^\circ$). The SNR is very low for ground-based interferometers ($\lesssim 4.4\times 10^{-2}$), is $\approx 1$ for ALIA and $\gg 1$ for DECIGO and BBO for a nearly on-axis observer (at $\theta_{\rm obs}=5^\circ$), and drops to smaller values for off-axis observers.
The third and fourth columns of Table \ref{tab:SNR} show the distance (in Mpc) at which SNR = 10, obtained from the relation Distance = (SNR$_{40 \; \rm Mpc}$/10) $\times$ 40 Mpc\footnote{It is easy to rescale the detectability range for different SNR thresholds, as the SNR is inversely proportional to distance.}. Only galactic GRBs can be detected (while the jet crosses the progenitor star) by LIGO/VIRGO (with SNR=10 at $1.5-5.1\times10^{-2}$ Mpc = 15-51 kpc depending on $\theta_{\rm obs}$) and KAGRA (with SNR=10 at $7.3-23\times10^{-3}$ Mpc = 7.3-23 kpc), while DECIGO and BBO can detect GRBs with SNR=10 up to 18-600 Mpc depending on the observer angle.
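The rescaling in the footnote, SNR $\propto 1/D$, can be sketched as follows (the SNR value used here is illustrative, not one quoted in Table \ref{tab:SNR}):

```python
def horizon_distance(snr_at_ref, snr_threshold=10.0, d_ref_mpc=40.0):
    """Distance (in Mpc) at which the signal reaches snr_threshold,
    using the fact that the SNR scales inversely with distance."""
    return (snr_at_ref / snr_threshold) * d_ref_mpc

# Illustrative SNR of 150 at the 40 Mpc reference distance:
print(horizon_distance(150.0))  # -> 600.0
```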
The (uncertain) expected GRB rate is 100-1000 Gpc$^{-3}$ yr$^{-1}$ \citep[see, e.g.,][]{Fryer2002,WandermanPiran2010,Cao2011,Abbott17c}.
The sixth and seventh columns of Table \ref{tab:SNR} show the expected GRB/GW detection rate assuming an (optimistic) GRB rate of 1000 Gpc$^{-3}$ yr$^{-1}$. We compute the volume corresponding to an SNR of 10 for each solid angle, and the expected GRB rate within this solid angle\footnote{This is an order-of-magnitude estimate. A more precise calculation would require including the GRB energy and duration distributions. We leave it for a future study.}.
The expected rate is very low for ground-based interferometers, while $\sim 8$ LGRB jets per year are expected to be detected by future spaced-based interferometers at small observer angles ($\theta_{\rm obs} \lesssim 10^\circ$), and $\sim 2$ LGRB jets per decade for GRB jets observed at $\theta_{\rm obs} = 40-90^\circ$.
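The order-of-magnitude estimate described above (volumetric rate times horizon volume, weighted by the solid-angle fraction) can be sketched as follows; the horizon distance and angular bin below are illustrative, not the paper's tabulated values:

```python
import math

def grb_rate_per_year(horizon_mpc, solid_angle_fraction, n_grb_per_gpc3_yr=1000.0):
    """Order-of-magnitude detection rate: volumetric GRB rate times the
    comoving volume inside the SNR=10 horizon, weighted by the fraction
    of the sky over which that horizon applies."""
    volume_gpc3 = (4.0 / 3.0) * math.pi * (horizon_mpc / 1000.0) ** 3
    return n_grb_per_gpc3_yr * volume_gpc3 * solid_angle_fraction

# Illustrative numbers: a 600 Mpc horizon applying to observer angles
# within 10 degrees of either jet axis (both cones over the full sky):
frac = 1.0 - math.cos(math.radians(10.0))
print(round(grb_rate_per_year(600.0, frac), 1))  # -> 13.7
```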
In agreement with previous estimates \citep{Sago_2004,Hiramatsu2005,Suwa2009,Kotake2012,Sun2012,Akiba2013,piran13,Du_2018,Yu_2020,piran21}, the LGRB memory from jets crossing their progenitor stars is expected to be undetectable with LIGO/VIRGO and KAGRA. Given the (uncertain) expected GRB rate of 100-1000 Gpc$^{-3}$ yr$^{-1}$ \citep[see, e.g.,][]{Fryer2002,WandermanPiran2010,Cao2011,Abbott17c}, the GW memory from jet/shock propagation is eventually detectable with LIGO/VIRGO only for very rare galactic GRB jets.
Future space-based low-frequency instruments, such as DECIGO and BBO, will easily detect the GW memory from GRB jets located at distances $\lesssim 600$ Mpc, as shown in Table \ref{tab:SNR}.
In addition to successful jets, which produce the observed gamma-ray emission, other high-energy transients are likely associated with central engine activity and the propagation of a relativistic jet, including low-luminosity GRBs \citep{Campana2006,Soderberg2006,Starling2011,Margutti2013}, relativistic SNe \citep{Soderberg2010,Margutti2014,Milisavljevic2015}, and X-ray flashes \citep{pian2006,BrombergNakarPiran2011,NakarSari2012}. Furthermore, it has been suggested that SNe (in particular, broad-line type Ic) could be produced by the propagation of a choked jet \citep[e.g.,][]{Piran2019,soker2022}.
These events could be detectable at shorter distances. Our results show that the GW strain depends mainly on the jet luminosity and the jet velocity.
Jets choked deep inside their progenitor stars, like the one simulated in this paper, will produce a very weak signal (see Figure \ref{fig6}, third panel), as their velocity is only mildly relativistic when the central engine switches off. Nevertheless, jets lasting longer, i.e., reaching closer to the stellar surface before being choked, will accelerate to relativistic speeds and produce signals similar to those of successful jets. The quoted detection distances may also be optimistic if template-based searches cannot be used (and, consequently, the SNR threshold for detection is raised).
Finally, we note that, while we have simulated relativistic jets leading to LGRBs (i.e., associated with the collapse of massive stars), a similar outcome is expected for SGRBs, associated with the coalescence of compact objects. These jets are expected to last for shorter times and have smaller total energies, and they can move through lower-density media, so they could achieve relativistic velocities on shorter timescales. Detailed numerical simulations are needed to understand whether the expected signal would be larger for jets associated with LGRBs or SGRBs.
\section{Conclusions}\label{sec:conclusions}
In this paper, we have presented numerical simulations of relativistic jets associated with long GRBs. We have computed the resulting GW signal for successful jets, choked jets, and jets associated with a SN. In successful jets (whether or not accompanied by a SN), the GW signal is characterised by a double-peak structure, with amplitudes $h_+ D$ ranging from hundreds to several thousand. The first peak corresponds to the jet injection from the central engine, while the second peak corresponds to the jet acceleration as it breaks out from the star. In addition, the slope of the GW signal directly tracks the luminosity history of the GRB jet and the structure of the progenitor star.
As GRBs are the product of collimated jets seen nearly on-axis, given the detected GRB rate, the volumetric rate depends on the jet angle and on the jet structure. Thus, the GRB volumetric rate is highly uncertain ($\sim$ 100-1000 Gpc$^{-3}$ yr$^{-1}$). As illustrated in Figures \ref{fig5} and \ref{fig6}, the GW signal presents a second peak which depends strongly on the observer angle. Thus, the observer angle can be determined precisely by observing the GW signal. In addition, by observing the associated multi-wavelength afterglows, the jet structure can be determined.
Thus, observations of the GW signal may provide us with a precise estimate of the volumetric rate of GRBs.
The predicted GW signal is below the detection limits of LIGO/VIRGO, KAGRA, and similar Earth-based detectors, but is expected to be seen by lower-frequency space-based detectors such as BBO and DECIGO. Future detections of GWs from GRBs may provide information on optically thick regions that are impossible to explore with electromagnetic radiation, clarifying the jet duration, the structure of the progenitor star, and the jet acceleration process.
It is also worth pointing out that the GW detectability can be improved with a network of interferometers: as a rough rule, the SNR achievable with a network of identical interferometers is the single-interferometer SNR multiplied by the square root of the number of interferometers in the network.
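A minimal sketch of this square-root network rule:

```python
import math

def network_snr(single_snr, n_detectors):
    """Rough rule: a network of identical interferometers improves the
    SNR by the square root of the number of detectors."""
    return single_snr * math.sqrt(n_detectors)

# Four identical detectors double the single-detector SNR:
print(network_snr(10.0, 4))  # -> 20.0
```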
\section*{Acknowledgements}
We acknowledge the anonymous referee for a careful reading of the manuscript and for suggestions that improved it substantially.
We acknowledge the computing time granted by DGTIC UNAM on the supercomputer Miztli (project LANCAD-UNAM-DGTIC-281). GU and FDC acknowledge support from the UNAM-PAPIIT grant AG100820 and IG100422. GU acknowledges support from a CONACyT doctoral scholarship. This work was supported by the CONACyT Network Project No. 376127: {\it Sombras, lentes y ondas gravitatorias generadas por objetos compactos astrof\'{\i}sicos}. C.M. thanks PROSNI-UDG support.
\section*{Data availability}
The data underlying this article will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\bibliography{main} %
\bsp %
\label{lastpage} |
Title:
Zwicky Transient Facility and Globular Clusters: The Period-Luminosity and Period-Wesenheit Relations for Anomalous Cepheids Supplemented with Large Magellanic Cloud Sample |
Abstract: We present the first gri-band period-luminosity (PL) and period-Wesenheit
(PW) relations for the fundamental mode anomalous Cepheids. These PL and PW
relations were derived from a combined sample of five anomalous Cepheids in
globular cluster M92 and the Large Magellanic Cloud, both of which have
distance accurate to ~1% available from literature. Our g-band PL relation is
similar to the B-band PL relation as reported in previous study. We applied our
PL and PW relations to anomalous Cepheids discovered in dwarf galaxy Crater II,
and found a larger but consistent distance modulus than the recent measurements
based on RR Lyrae. Our calibrations of gri-band PL and PW relations, even
though less precise due to small number of anomalous Cepheids, will be useful
for distance measurements to dwarf galaxies.
| https://export.arxiv.org/pdf/2208.13950 |
\shorttitle{Anomalous Cepheid PL \& PW relations}
\shortauthors{Ngeow et al.}
\title{Zwicky Transient Facility and Globular Clusters: The Period-Luminosity and Period-Wesenheit Relations for Anomalous Cepheids Supplemented with Large Magellanic Cloud Sample}
\correspondingauthor{C.-C. Ngeow}
\email{[email protected]}
\author[0000-0001-8771-7554]{Chow-Choong Ngeow}
\affil{Graduate Institute of Astronomy, National Central University, 300 Jhongda Road, 32001 Jhongli, Taiwan}
\author[0000-0001-6147-3360]{Anupam Bhardwaj}
\affil{INAF-Osservatorio astronomico di Capodimonte, Via Moiariello 16, 80131 Napoli, Italy}
\author[0000-0002-3168-0139]{Matthew J. Graham}
\affiliation{Division of Physics, Mathematics, and Astronomy, California Institute of Technology, Pasadena, CA 91125, USA}
\author[0000-0001-5668-3507]{Steven L. Groom}
\affiliation{IPAC, California Institute of Technology, 1200 E. California Blvd, Pasadena, CA 91125, USA}
\author[0000-0002-8532-9395]{Frank J. Masci}
\affiliation{IPAC, California Institute of Technology, 1200 E. California Blvd, Pasadena, CA 91125, USA}
\author[0000-0002-0387-370X]{Reed Riddle}
\affiliation{Caltech Optical Observatories, California Institute of Technology, Pasadena, CA 91125, USA}
\section{Introduction}\label{sec1}
Anomalous Cepheids (hereafter ACep; also known as BLBOO type variable stars) are evolved core-helium burning stars that cross the instability strip with masses of $\sim1$ to $\sim2\ M_\odot$ and pulsation periods in between $\sim 0.5$ and $\sim 2.5$~days. Some of the theoretical work and recent reviews on ACep can be found in \citet{cox1988}, \citet{bono1997}, \citet{marconi2004}, \citet{fiorentino2006}, \citet{sandage2006}, and \citet{monelli2022}. Similar to classical Cepheids and RR Lyrae, ACep can pulsate in fundamental mode or first-overtone mode, and follow period-luminosity (PL) relations. On the PL plane, ACep are fainter than classical Cepheids but brighter than Type II Cepheids at a given period \citep[for example, see Figure 5 in][]{soszynski2015}.
The earliest optical PL(Z, where Z represents metallicity) relations for ACep in $B$- and/or $V$-band were derived by \cite{nemec1988}, \citet{nemec1994}, \citet{bono1997}, \citet{pritzl2002}, and \citet{marconi2004}. Later, ACep PL relations were extended to include other filters, such as $I$- and $K$-band, as well as the period-Wesenheit (PW) relations,\footnote{By construction, Wesenheit index is extinction-free \citep{madore1982,madore1991}.} in \citet{marconi2004}, \citet{ripepi2014}, \citet{soszynski2015}, \citet{groenewegen2017}, and \citet{iwanek2018}. Recently, \citet{ripepi2019} and \citet{ripepi2022} derived the PL and PW relations in the {\it Gaia} filters using ACep located in both the Large and Small Magellanic Cloud (LMC and SMC, respectively).
To date, empirical ACep PL and PW relations in the optical band are only available in the Johnson-Cousin $BVI$ filters and {\it Gaia} $GB_pR_p$ filters. On the other hand, the Sloan Digital Sky Survey (SDSS) and the SDSS-variant $ugriz$ filters (or a subset of them) are now becoming more popular in a number of time-series synoptic sky surveys, such as the representative Vera C. Rubin Observatory Legacy Survey of Space and Time \citep[LSST,][]{lsst2019}. Similar to our previous work on contact binaries \citep{ngeow2021}, RR Lyrae \citep{ngeow2022a} and Type II Cepheids \citep{ngeow2022b}, we aimed to derive the $gri$-band PL and PW relations for ACep using homogeneous $gri$-band data obtained from the Zwicky Transient Facility \citep[ZTF,][]{bellm2017,bellm2019,gra19,dec20}. ZTF repeatedly observes the northern sky in customized $gri$ filters with a dedicated $47$-squared-degree wide-field mosaic CCD camera mounted on the Palomar 48-inch Samuel Oschin Schmidt telescope. The high-level surveys carried out by ZTF can be divided into public surveys, partner surveys, and California Institute of Technology (Caltech) surveys. All ZTF imaging data, regardless of the surveys, were processed via the same dedicated reduction pipeline \citep{mas19}, and the final catalog products were calibrated to the Pan-STARRS1 \citep[Panoramic Survey Telescope and Rapid Response System 1,][]{chambers2016,magnier2020} AB magnitude system.
ACep are very rare in globular clusters (hereafter GC) but a number of ACep are known in nearby dwarf galaxies \citep[for example, see][]{nemec1994,marconi2004,monelli2022}, including the LMC and SMC. Therefore, the empirical PL and PW relations derived in the literature are exclusively based on dwarf galaxies and/or the Magellanic Clouds. However, with the exception of a few cases, dwarf galaxies are in general more distant than GCs, which limits the accurate calibration of ACep PL and PW relations. The absolute calibration of $gri$-band PL and PW relations for ACep will be useful for obtaining independent distances to ACep host dwarf galaxies in the era of LSST. Since there are only a few ACep in GCs (Section \ref{sec2.1}), we also utilized the LMC sample (Section \ref{sec2.2}) to derive the PL and PW relations in Section \ref{sec3}. Note that very accurate distances to nearby GCs are now known from \citet{baumgardt2021}, and a percent-level precise distance to the LMC is available based on late-type eclipsing binaries \citep{pie2019}. We apply our relations to a dwarf galaxy in Section \ref{sec4}, followed by the discussion and conclusions of this work in Section \ref{sec5}.
\section{Sample and Data} \label{sec2}
\subsection{GC Samples with ZTF Data} \label{sec2.1}
We searched the literature for ACep in GCs that are located north of $-30^\circ$ declination (i.e., within the footprint of ZTF), and identified four such ACep. For M9 V12, even though \citet{af2013} argued that it is an ACep, \citet{soszynski2020} later re-classified this cluster variable star as a Type II Cepheid. Hence, we discarded this variable star, which leaves three GC ACep in our sample. ZTF light curves for these three ACep were extracted from the PSF (point-spread function) catalogs using a matching radius of $1\arcsec$. These ZTF PSF catalogs include the ZTF partner surveys data until 31 May 2022 and the ZTF Public Data Release 11.\footnote{See \url{https://www.ztf.caltech.edu/ztf-public-releases.html}}
NGC5466 V19 \citep{zinn1976,mccarthy1997}, a.k.a. BL BOO (the prototype of the BLBOO type variable stars), is a well-known ACep which has been classified as a first-overtone variable star \citep{nemec1988,nemec1994}. We first used the {\tt LombScargleMultiband} module from the {\tt astroML/gatspy}\footnote{\url{https://github.com/astroML/gatspy}, also see \citet{vdp2016}.} package \citep{vdp2015} to search for the period of this ACep on the ZTF $gri$-band light curves. A low-order Fourier expansion was used to fit the folded light curves \citep[for more details, see][and reference therein]{ngeow2022b}, and outliers beyond $3s$ from the fitted light curves were identified and removed (where $s$ is the dispersion of the fitted light curve). We then ran a second pass of the {\tt LombScargleMultiband} module to determine the final adopted period, $P=0.821322$~days, for NGC5466 V19. The low-order Fourier expansion was fitted to the folded light curves (with outliers removed) for a second time, determining the following $gri$-band intensity mean magnitudes: $14.748$, $14.738$, and $14.806$~mag, respectively. Due to the large number of available data points per light curve (see Figure \ref{fig_lc}), we estimated negligible errors on these mean magnitudes. We note that this two-step process is identical to the one adopted for the Type II Cepheids studied in \citet{ngeow2022b}. The left panel of Figure \ref{fig_lc} presents the folded ZTF light curves for this ACep.
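The two-pass fitting with $3s$ outlier rejection can be sketched as follows (a numpy-only illustration on synthetic data; the actual analysis used {\tt gatspy} and the real ZTF light curves):

```python
import numpy as np

def fourier_design(phase, order=4):
    """Design matrix for a low-order Fourier series in phase (0..1)."""
    cols = [np.ones_like(phase)]
    for k in range(1, order + 1):
        cols += [np.cos(2 * np.pi * k * phase), np.sin(2 * np.pi * k * phase)]
    return np.vstack(cols).T

def fit_with_clipping(phase, mag, order=4, nsigma=3.0):
    """Least-squares Fourier fit; points beyond nsigma times the residual
    dispersion s are removed before a second fit (the two-pass idea)."""
    A = fourier_design(phase, order)
    coef, *_ = np.linalg.lstsq(A, mag, rcond=None)
    resid = mag - A @ coef
    keep = np.abs(resid) < nsigma * resid.std()
    A2 = fourier_design(phase[keep], order)
    coef2, *_ = np.linalg.lstsq(A2, mag[keep], rcond=None)
    return coef2, keep

# Synthetic folded light curve with one injected outlier:
rng = np.random.default_rng(0)
ph = rng.uniform(0, 1, 200)
mag = 14.75 + 0.3 * np.sin(2 * np.pi * ph) + rng.normal(0, 0.01, 200)
mag[0] += 1.0                      # inject an outlier
coef, keep = fit_with_clipping(ph, mag)
print(keep.sum())                   # the outlier is rejected, 199 points kept
```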
M92 V7 is another {\it bona fide} ACep located in the GC \citep{osborn2012,yepez2020}, which pulsates in the fundamental mode. Based on the same two-step process as in the case of NGC5466 V19, we determined a period of $1.061403$~days for M92 V7, and the intensity mean magnitudes of $14.214$, $13.989$, and $13.929$~mag in the $gri$-band, respectively. The ZTF light curves for this ACep are shown in the right panel of Figure \ref{fig_lc}.
M15 V142 was re-classified as an ACep in \citet{bhardwaj2021}. However, ZTF light curves for this ACep exhibit strong evidence of blending (i.e. the light curves are ``flat''), therefore we removed this ACep from our sample.
\subsection{LMC Samples with Archival Data} \label{sec2.2}
\begin{deluxetable*}{lcclllllllr}
\tabletypesize{\scriptsize}
\tablecaption{Mean magnitudes and extinction for the five LMC ACep\label{tab_lmc}}
\tablewidth{0pt}
\tablehead{
\colhead{OGLE-IV ID} &
\colhead{ID in \citet{sebo2002}} &
\colhead{Period (days)\tablenotemark{a}} &
\colhead{$B$} &
\colhead{$V$} &
\colhead{$I$} &
\colhead{$g$} &
\colhead{$r$} &
\colhead{$i$} &
\colhead{$E(V-I)$\tablenotemark{b}} &
\colhead{$\Delta$\tablenotemark{c}}
}
\startdata
OGLE-LMC-ACEP-019 & Ogle114046 & 0.9094064 & 18.15 & 17.83 & 17.33 & 17.88 & 17.77 & 17.72 & $0.08\pm0.07$ & $\cdots$ \\
OGLE-LMC-ACEP-021 & Ogle194404 & 1.2958507 & 18.11 & 17.85 & 17.25 & 17.87 & 17.81 & 17.63 & $0.10\pm0.07$ & $-0.00$ \\
OGLE-LMC-ACEP-026 & Ogle132771 & 1.7387480 & 18.13 & 17.61 & 16.94 & 17.76 & 17.46 & 17.35 & $0.11\pm0.08$ & $+0.03$ \\
OGLE-LMC-ACEP-046 & Ogle272276 & 1.2637156 & 18.48 & 17.98 & 17.37 & 18.12 & 17.84 & 17.78 & $0.09\pm0.08$ & $+0.03$ \\
OGLE-LMC-ACEP-050 & Ogle243639 & 1.0446956 & 17.60 & 16.95 & 16.61 & 17.16 & 16.75 & 17.04 & $0.10\pm0.09$ & $-0.12$ \\
\enddata
\tablenotetext{a}{Periods adopted from the OGLE-IV LMC ACep catalog \citep{soszynski2015}.}
\tablenotetext{b}{Extinction retrieved from \citet{sko2021}.}
\tablenotetext{c}{$\Delta = (V-I)_{SEBO}-(V-I)_{OGLE}$ are the difference of the colors between \citet{sebo2002} and \citet{soszynski2015}.}
\end{deluxetable*}
The fourth phase of the Optical Gravitational Lensing Experiment (OGLE-IV) found 141 ACep in the LMC \citep{soszynski2015}. The $(B-V)$ colors are required to transform the Johnson-Cousin $BVI$-band photometry to the Pan-STARRS1 $gri$-band photometry \citep{tonry2012}. Hence, we cannot use the OGLE-IV data, which only cover the $V$- and $I$-bands. Therefore, we cross-matched the OGLE-IV LMC ACep catalog with the sample of $BVI$ photometry presented in \citet{sebo2002}. In total, we found four common fundamental mode ACep (OGLE-LMC-ACEP-019, 021, 026, and 046) and one first-overtone ACep (OGLE-LMC-ACEP-050). The $BVI$-band mean magnitudes of these five LMC ACep were then transformed to the $gri$-band using the relations provided in \citet{tonry2012}. Both the $BVI$-band mean magnitudes and the transformed $gri$-band mean magnitudes for these LMC ACep are provided in Table \ref{tab_lmc}.
\section{The PL and PW Relations} \label{sec3}
Mean magnitudes of the two GC ACep were converted to absolute magnitudes by adopting the GC distances presented in \citet{baumgardt2021}, together with extinction corrections based on the {\tt Bayerstar2019} 3D reddening map \citep{green2019}.\footnote{See \url{http://argonaut.skymaps.info/usage}; the reddening values were queried via the {\tt dustmaps} package \citep{green2018} available at \url{https://dustmaps.readthedocs.io/en/latest/}.} Similarly, we adopted the most precise LMC distance from \citet{pie2019} and the LMC extinction map of \citet{sko2021} to convert the mean magnitudes of the five LMC ACep to absolute magnitudes. We note that the accuracy of the adopted distances to NGC5466, M92, and the LMC is at the $\sim1\%$~level. As there are only two first-overtone ACep in our sample, we fit only the four LMC and one GC fundamental mode ACep (hereafter collectively referred to as the calibrating ACep) and obtain the following PL relations:
\begin{eqnarray}
M_g & = & -2.36[\pm0.99]\log P - 0.39[\pm0.04],\sigma=0.36, \\
M_r & = & -2.06[\pm0.76]\log P - 0.63[\pm0.04],\sigma=0.23, \\
M_i & = & -2.03[\pm0.59]\log P - 0.72[\pm0.05],\sigma=0.18,
\end{eqnarray}
\noindent where $\sigma$ is the dispersion of the fitted PL relations. Similarly, we obtained the following PW relations \citep[see][]{ngeow2021} based on the calibrating ACep:
\begin{eqnarray}
W^{gr}_r & = & r - 2.905 (g-r), \nonumber \\
& = & -2.64[\pm0.51]\log P - 1.05[\pm0.07],\sigma = 0.35, \\
W^{ri}_r & = & r - 4.051 (r-i), \nonumber \\
& = & -2.01[\pm0.43]\log P - 0.95[\pm0.06],\sigma = 0.22, \\
W^{gi}_g & = & g - 2.274 (g-i), \nonumber \\
& = & -2.34[\pm0.17]\log P - 0.08[\pm0.02], \sigma = 0.11.
\end{eqnarray}
\noindent Figure \ref{fig_plw} presents the $gri$-band PL (left panel) and PW (right panel) relations derived in this work.
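As an illustration of how the fitted relations are applied, the sketch below evaluates the Wesenheit index $W^{gr}_r$ and the $g$-band PL relation with the coefficients quoted in equations (1)--(6); the inputs are the Table \ref{tab_lmc} mean magnitudes of OGLE-LMC-ACEP-019 and the period of M92 V7, used purely as examples:

```python
import math

# Fitted PL coefficients (slope, zero-point) from equations (1)-(3):
PL = {"g": (-2.36, -0.39), "r": (-2.06, -0.63), "i": (-2.03, -0.72)}

def wesenheit_gr(g, r):
    """Extinction-free Wesenheit index W^{gr}_r = r - 2.905 (g - r)."""
    return r - 2.905 * (g - r)

def absolute_mag(band, period_days):
    """Absolute magnitude predicted by the fitted PL relation."""
    slope, zp = PL[band]
    return slope * math.log10(period_days) + zp

# W^{gr}_r for OGLE-LMC-ACEP-019 (g = 17.88, r = 17.77), and the predicted
# absolute g magnitude at P = 1.061403 d (the period of M92 V7):
print(round(wesenheit_gr(17.88, 17.77), 3))   # -> 17.45
print(round(absolute_mag("g", 1.061403), 3))  # -> -0.451
```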
Since PW relations are in general expected to exhibit a smaller dispersion than PL relations, the large dispersions seen in equations (4) and (5) could be due to the less accurate colors for some of the LMC ACep derived in \citet{sebo2002}. The last column in Table \ref{tab_lmc} compares the $(V-I)$ colors listed in the OGLE-IV catalog \citep{soszynski2015} with those derived from the mean magnitudes given in \citet{sebo2002}. The color difference for OGLE-LMC-ACEP-050 is $\sim4$~times larger than for the other ACep listed in Table \ref{tab_lmc}, suggesting that the less accurate color might cause this ACep to be brighter than the fitted relation in the $W^{gr}_r$ PW relation but fainter in the $W^{ri}_r$ PW relation (see the right panels of Figure \ref{fig_plw}, where OGLE-LMC-ACEP-050 is labeled as LMC AC1).
In Figure \ref{fig_compare}, we compare the empirical PL relations for fundamental mode ACep in various filters to equations (1)--(3). In the case of the PL slopes, we notice that: (a) in contrast to other pulsating stars (classical Cepheids, Type II Cepheids, and RR Lyrae), there are no clear trends for the PL slopes as a function of wavelength (i.e., filters), which could be due to the large uncertainties in the slopes determined in most studies; (b) for a given filter, the PL slopes can be either in agreement (such as in the {\it Gaia} filters) or in disagreement (such as in the $V$-band) among different studies; and (c) the PL relations from \citet{ripepi2014} have the steepest slopes and disagree with previous works, for the reasons discussed in their study. Despite the large errors due to small number statistics, our $g$-band slope agrees well with the $B$-band PL slope from \citet{pritzl2002}. The PL zero-points (ZP) were found to be more consistent between different studies and follow the expected trend with wavelength.
\section{An Example of Application: Distance to Crater II} \label{sec4}
\citet{vivas2020} reported the discovery of seven ACep in the dwarf galaxy Crater II, providing us with an opportunity to test our derived PL and PW relations. As shown in Figure \ref{fig_cr2}, V108 (with the shortest period) is much brighter than the rest of the ACep in Crater II, suggesting this ACep could either be a foreground object or be mis-classified as an ACep. Nevertheless, it is clear that V108 should not be used to derive the distance to Crater II. In the top panel of Figure \ref{fig_cr2}, we overlaid the $B$-band PL relations adopted from \citet[][without any photometric transformation]{pritzl2002} together with the $g$-band PL relation derived in equation (1). It is clear that the three longest period ACep (V1, V26, and V109) can be fit well with either the $B$- or $g$-band fundamental mode ACep PL relation, hence they are most likely pulsating in the fundamental mode. Similarly, the three shortest period ACep (V86,\footnote{V86 is located on the $i$-band PL relation and the $W^{gi}_g$ PW relation (see the middle and bottom panels of Figure \ref{fig_cr2}) based on the fundamental mode ACep, implying that it could be a fundamental mode pulsator. However, as pointed out by the referee, the pulsation period of this ACep is very short, and it is unlikely to be pulsating in the fundamental mode. Unless the PL (and PW) relations for fundamental and first-overtone ACep are parallel, using the locations of ACep on the PL (and PW) plane might not be a reliable method to determine their pulsation mode, because non-parallel PL (and PW) relations for the two pulsation modes could intersect at short periods.} V107, and V110) should be first-overtone pulsators.
The mean magnitudes provided in \citet{vivas2020} are in the SDSS photometric system. Therefore, following the procedures described in \citet{ngeow2022a}, we transformed our $gi$-band PL relations to the SDSS system by adding a small correction term to our PL relations. These correction terms were determined using a subset of calibrating ACep with the same mean $(g-i)_{SDSS}$ colors as the fundamental mode ACep in Crater II. Similarly, we transformed our $W^{gi}_g$ PW relation to the SDSS system, adopting $W^{gi}_{SDSS}= g_{SDSS}-2.058(g_{SDSS}-i_{SDSS})$. We fit the three longest period fundamental mode ACep with our derived PL and PW relations, and obtained $\mu_g = 20.45\pm0.25$, $\mu_i = 20.55\pm0.29$, and $\mu_W = 20.54\pm0.30$~mag. Errors on these distance moduli $\mu$ were based on the small number statistics \citep[][p. 202]{dean1951,keeping1962} for only three ACep. Extinction corrections were applied to the $g$- and $i$-band mean magnitudes by using the {\tt Bayerstar2019} 3D reddening map \citep{green2019} when deriving the distance moduli for these ACep.
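The distance-modulus step can be sketched as follows; the apparent magnitude, period, and extinction below are hypothetical, and the PL coefficients are those of equation (1) without the SDSS correction term:

```python
import math

def distance_modulus(apparent_mag, period_days, slope, zero_point, extinction=0.0):
    """mu = m - A - M, with M = slope*log10(P) + zero_point from a PL relation."""
    absolute = slope * math.log10(period_days) + zero_point
    return apparent_mag - extinction - absolute

def distance_kpc(mu):
    """D = 10^(mu/5 + 1) pc, returned in kpc."""
    return 10.0 ** (mu / 5.0 + 1.0) / 1000.0

# Hypothetical ACep: m_g = 20.1 mag, P = 1.8 d, A_g = 0.05 mag,
# with the g-band PL slope and zero-point of equation (1):
mu = distance_modulus(20.1, 1.8, -2.36, -0.39, extinction=0.05)
print(round(mu, 2), round(distance_kpc(mu), 1))  # -> 21.04 161.6
```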
Using RR Lyrae and a theoretical PL relation, \citet{vivas2020} derived a distance modulus of $20.33\pm0.01$~mag to Crater II. On the other hand, based on the same RR Lyrae sample but with the latest empirical PL relations, \citet{ngeow2022a} argued that the distance modulus of Crater II should be (slightly) larger. Albeit with a larger error (due to the small sample size), the distance modulus derived from the ACep is consistent with the RR Lyrae-based distance modulus, and supports a farther distance to Crater II.
\section{Discussions and Conclusions} \label{sec5}
Despite the small number of calibrating ACep, we derived for the first time the fundamental mode ACep PL and PW relations in the $gri$-band, based on the most precise distances to date to the GCs and LMC available in the literature. Though the errors on the slopes of our derived PL relations are large, our PL relations agree fairly well with published PL relations in other optical filters; in particular, our $g$-band PL relation is similar to the $B$-band PL relation. Comparison of the PL slopes from various studies also revealed that these PL slopes have not ``converged'' to a common trend, implying that more data are needed to improve the ACep PL relations in the future. We applied our PL and PW relations to the ACep found in the dwarf galaxy Crater II, and found that half of the ACep in Crater II are pulsating in the fundamental mode. Based on these fundamental mode ACep, we derived a distance to Crater II which agrees with those reported in the literature.
\citet{monelli2022} compiled a list of nearby dwarf galaxies together with the number of pulsating stars found in each galaxy. As can be seen from their list, almost all nearby dwarf galaxies host at least one RR Lyrae, and about half of them also host ACep. Certainly, for a dwarf galaxy with both RR Lyrae and ACep, RR Lyrae are the preferred distance indicator, because in general RR Lyrae are more abundant than ACep and the RR Lyrae PL and PW relations are well-developed. On the other hand, ACep are in general brighter than RR Lyrae, by $\sim 0.4$ to $\sim 2.5$~mag above the horizontal branch \citep[][and reference therein]{vivas2020}, because they are more massive. This implies that newly discovered but distant dwarf galaxies near the detection limit of a synoptic sky survey (such as LSST) could lack detected RR Lyrae while still yielding detectable ACep. In such cases, ACep can provide an independent distance to these dwarf galaxies, as well as a cross-check for other distance measurements, such as the tip of the red-giant branch (TRGB) method.
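The brightness advantage translates directly into distance reach at a fixed survey limiting magnitude, via $D_2/D_1 = 10^{\Delta m/5}$; a minimal sketch:

```python
def reach_factor(delta_mag):
    """Distance-limit gain for a source brighter by delta_mag at fixed
    survey limiting magnitude: D2/D1 = 10^(delta_mag/5)."""
    return 10.0 ** (delta_mag / 5.0)

# An ACep 2.5 mag above the horizontal branch can be detected ~3.2 times
# farther than an RR Lyrae at the same survey depth:
print(round(reach_factor(2.5), 2))  # -> 3.16
```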
\acknowledgments
We are thankful for the useful discussions and comments from an anonymous referee that improved the manuscript. We are thankful for funding from the Ministry of Science and Technology (Taiwan) under the contracts 107-2119-M-008-014-MY2, 107-2119-M-008-012, 108-2628-M-007-005-RSP and 109-2112-M-008-014-MY3.
Based on observations obtained with the Samuel Oschin Telescope 48-inch Telescope at the Palomar Observatory as part of the Zwicky Transient Facility project. ZTF is supported by the National Science Foundation under Grants No. AST-1440341 and AST-2034437 and a collaboration including current partners Caltech, IPAC, the Weizmann Institute of Science, the Oskar Klein Center at Stockholm University, the University of Maryland, Deutsches Elektronen-Synchrotron and Humboldt University, the TANGO Consortium of Taiwan, the University of Wisconsin at Milwaukee, Trinity College Dublin, Lawrence Livermore National Laboratories, IN2P3, University of Warwick, Ruhr University Bochum, Northwestern University and former partners the University of Washington, Los Alamos National Laboratories, and Lawrence Berkeley National Laboratories. Operations are conducted by COO, IPAC, and UW.
This research has made use of the SIMBAD database and the VizieR catalogue access tool, operated at CDS, Strasbourg, France. This research made use of Astropy,\footnote{\url{http://www.astropy.org}} a community-developed core Python package for Astronomy \citep{astropy2013, astropy2018}. This research has made use of the Spanish Virtual Observatory\footnote{\url{https://svo.cab.inta-csic.es}} project funded by MCIN/AEI/10.13039/501100011033/ through grant PID2020-112949GB-I00.
\facility{PO:1.2m}
\software{{\tt astropy} \citep{astropy2013,astropy2018}, {\tt dustmaps} \citep{green2018}, {\tt gatspy} \citep{vdp2015}, {\tt Matplotlib} \citep{hunter2007}, {\tt NumPy} \citep{harris2020}, {\tt SciPy} \citep{virtanen2020}.}
|
Title:
Space-borne atom interferometric gravitational wave detections. Part III. Eccentricity on dark sirens |
Abstract: Eccentricity of the inspiraling compact binaries can greatly improve the
distance inference and source localization of dark sirens. In this paper, we
continue the research for the space-borne atom interferometric
gravitational-wave detector AEDGE and investigate the effects of eccentricity
on the dark sirens observed by AEDGE in the mid-band. We simulate five types of
typical compact binaries with component mass ranging from $1-100~M_{\odot}$.
The largest improvement for both distance inference and localization can be as
much as 1.5--3 orders of magnitude. We then construct the catalogs of dark
sirens observed by AEDGE in five years. We find eccentricity is crucial to the
detection of golden binary black holes (BBH) whose host galaxy can be uniquely
identified. With only 5--10 golden dark BBHs one can obtain a 2 percent
precision measurement of $H_0$ which is sufficient to arbitrate the Hubble
tension. Regardless of eccentricity, AEDGE can also observe tens of golden
binary neutron stars (BNS) and neutron star--black hole binaries (NSBH) with
unique host galaxies. These golden dark sirens can serve as early warnings for
the follow-up observations of gravitational waves in the high frequency band as
well as the search of their electromagnetic counterparts. Our results show
eccentricity is a crucial factor in the detection, data analysis, and
application of GWs with the atom interferometers in the mid-band.
https://export.arxiv.org/pdf/2208.10998
\flushbottom
\section{Introduction}
The ground-based gravitational wave (GW) detectors LIGO and Virgo have achieved great success in observing GWs from the mergers of binary neutron stars (BNS), binary black holes (BBH), and neutron star--black hole binaries (NSBH)~\cite{LIGOScientific:2016aoc,LIGOScientific:2018mvr,LIGOScientific:2020ibl,LIGOScientific:2021usb,LIGOScientific:2021djp}. The third-generation ground-based detectors such as ET~\footnote{\url{http://www.et-gw.eu/}} and CE~\footnote{\url{https://cosmicexplorer.org/}} are under design and construction. Together with the space-borne LISA~\footnote{\url{https://www.lisamission.org/}} and the Chinese proposed projects Taiji~\cite{Hu:2017mde,Ruan:2018tsw} and TianQin~\cite{TianQin:2015yph}, they will be ready for GW detections around 2035. All of these are laser interferometers (LIs), and they will form a network of GW detectors from ground to space.
A novel type of GW detector, the atom interferometer (AI), was proposed a decade ago~\cite{Dimopoulos:2007cj,Dimopoulos:2008sv,Graham:2012sy,Hogan:2015xla}. In the AI concept, gravitational radiation is sensed through precise measurement of the light flight time between two distantly separated atomic inertial references, each in a satellite in medium Earth orbit (MEO). Ensembles of ultra-cold Sr atoms at each location serve as precise atomic clocks. Light flight time is measured by comparing the phase of laser beams propagating between the two satellites with the phase of lasers referenced to the Sr optical transitions~\cite{Graham:2017pmn}. Compared to LIs, AIs use only a single baseline, so their design and construction should be easier and cheaper. AI projects such as the ground-based ZAIGA~\cite{Zhan:2019quq} in China, AION~\cite{Badurina:2019hst} in the UK, MIGA~\cite{Geiger:2015tma} in France, and ELGAR~\cite{Canuel:2019abg} in Europe, as well as the space-borne MAGIS~\cite{Graham:2017pmn} and AEDGE~\cite{AEDGE:2019nxb}, have been proposed and are in preparation.
AIs are proposed to probe not only gravitational waves but also dark matter. For GWs, AIs focus on the deci-Hz gap between LIGO/Virgo and LISA.
In this mid-frequency range, one can observe the long inspiral phase of BNS, BBH, and NSBH. During such long observations, the motion of the space-borne detector around the Sun, as well as in Earth orbit, induces large Doppler and reorientation effects, providing precise angular resolution. Based on the space-borne AEDGE, we have composed a series of papers focusing on GW detections by AIs.
In the first paper~\cite{Cai:2021ooo} (hereafter Paper I) we forecast the bright sirens detected by AEDGE and their applications to cosmology. A specific analysis of the source localization of dark sirens was conducted in the second paper~\cite{Yang:2021xox} (hereafter Paper II). The single baseline of AEDGE reorients on a time scale that is rapid compared to the observation duration. As the detector reorients and/or moves, the observed waveform and phase are modulated and Doppler-shifted, allowing efficient determination of the sky position and polarization~\cite{Graham:2017lmg,Graham:2017pmn}. In Paper II, we showed that AEDGE can even localize dark sirens in such a small comoving volume that a unique host galaxy can be identified. These dark sirens are called ``golden dark sirens''. Measurements of the Hubble constant from the simulated golden dark BNS and BBH were also performed.
Many investigations suggest compact binaries that emit GWs can have non-negligible eccentricities and may contribute observational features in the sensitivity band of ground and space-based detectors~\cite{Antonini:2012ad,Samsing:2013kua,Thompson:2010dp,East:2012xq}.
Different mechanisms of the dynamic formation of the compact binaries of black holes and neutron stars have been proposed to study their eccentricities~\cite{Rodriguez:2017pec,Samsing:2017xmd,Samsing:2017oij,Samsing:2018ykz,Wen:2002km,Pratten:2020fqn,OLeary:2008myb,Lee:2009ca}.
Orbital eccentricity is arguably the most robust discriminator for distinguishing between isolated and dynamical BBH formation scenarios~\cite{Zevin:2021rtf}. Some studies indicate that a fraction of binaries possess eccentricities larger than 0.1 at 10 Hz~\cite{Wen:2002km,Silsbee:2016djf,Antonini:2017ash,Liu:2019gdc}. The improvement in source localization due to eccentricity for ground-based detector networks has been investigated in some detail in~\cite{Sun:2015bva,Ma:2017bux,Pan:2019anf}. These studies found that eccentricity has a more distinct effect on localization for higher-mass binaries than for lower-mass ones. For a $100~M_\odot$ BBH, the improvement factor is about 2 in general when the eccentricity changes from 0.0 to 0.4~\cite{Pan:2019anf}. Such an improvement is not adequate to considerably shrink the uncertainty on the host galaxies (redshifts) of dark sirens in the LIGO/Virgo band, or to provide a conclusive measurement of the expansion of the Universe (e.g. the Hubble constant). However, as shown in~\cite{Yang:2022tig}, eccentricity can significantly improve the distance estimation and source localization in the mid-band. The multiple harmonics induced by eccentricity can break the degeneracies between parameters in the waveform. In addition, the higher modes enter the detector band much earlier than the dominant mode, providing more angular information. At some specific orientations (inclination angles), the typical compact binaries can achieve $\mathcal{O}(10^2-10^4)$ improvement in distance inference and 1.5--3.5 orders of magnitude improvement in sky localization. Such a huge improvement in the 3-D localization could dramatically shrink the uncertainty on the host galaxies of dark sirens. Up to now, only GW190521 has been reported to be eccentric in the latest GW catalog GWTC-3~\cite{Romero-Shaw:2020thy,Gayathri:2020coq}.
Considering that nonvanishing eccentricity is more likely to exist at lower frequency, we expect the dark sirens observed by a mid-band detector to have greater potential for probing the cosmic expansion history, the dynamics of dark energy, and gravity theories.
In this paper, we extend our research in Paper II and take eccentricity effects into account for the dark sirens observed with AEDGE. In Sec.~\ref{sec:typical}, we follow the methodology of~\cite{Yang:2022tig} to check the improvement in distance estimation and source localization due to eccentricity for the typical BNS, NSBH, and BBH with AEDGE. In Sec.~\ref{sec:mock}, we adopt a method similar to Paper II to construct the catalogs of GWs for AEDGE. We first update the construction of the catalogs of dark sirens of Paper II in the case of vanishing eccentricity. Compared to Paper II, we update the waveform and the merger rates of BNS and BBH, and we also include NSBH in the simulation. We refine the Fisher matrix calculation to ensure the convergence of the numerical derivatives. These updates and improvements make the simulation more realistic and reliable than that of Paper II. We then take eccentricity effects into account to construct the catalogs of dark sirens. We show how many potential host galaxies fall within the 3-D localization volumes of the GW sources, with and without eccentricity. We estimate the population of eccentric dark sirens and pick out the ones whose host galaxies can be best identified. We randomly select the golden dark sirens that AEDGE can track given the limit of its operational time. The corresponding measurements of the Hubble constant are presented in Sec.~\ref{sec:Hubble}. We give conclusions and discussions in Sec.~\ref{sec:conclusion}.
\section{The improvement of distance estimation and localization from eccentricity \label{sec:typical}}
By adopting a strategy similar to~\cite{Yang:2022tig}, we mock up five types of typical compact binaries in GWTC-3~\cite{LIGOScientific:2021djp} with component masses ranging from $\mathcal{O}(1)$ to $\mathcal{O}(100)~M_{\odot}$: a GW170817-like BNS with $(m_1,m_2)=(1.46,1.27)~M_{\odot}$, a GW200105-like NSBH with $(9.0,1.91)~M_{\odot}$, a GW191129-like light-mass BBH with $(10.7,6.7)~M_{\odot}$, a GW150914-like medium-mass BBH with $(35.6,30.6)~M_{\odot}$, and a GW190426-like heavy-mass BBH with $(106.9,76.6)~M_{\odot}$. Note that light, medium, and heavy mass are meant in the context of the stellar-mass binaries in GWTC-3. The redshifts (distances) are likewise consistent with the real events in the catalog. We sample 1000 random sets of angular parameters from a uniform and isotropic distribution for each typical binary and assign six discrete initial eccentricities, $e_0=0$, 0.01, 0.05, 0.1, 0.2, and 0.4, at $f_0=0.1$ Hz. This gives $5\times 6\times 1000=3\times10^4$ cases. For each case, we perform a Fisher matrix calculation to infer the errors of distance and sky location.
We use {\sc PyCBC}~\cite{alex_nitz_2021_5347736} to generate the waveform with the non-spinning, inspiral-only EccentricFD waveform approximant available in {\sc LALSuite}~\cite{lalsuite}. EccentricFD corresponds to the enhanced post-circular (EPC) model in~\cite{Huerta:2014eca}.
To the zeroth order in the eccentricity, the model recovers the TaylorF2 PN waveform at 3.5 PN order~\cite{Buonanno:2009zt}. To the zeroth PN order, the model recovers the PC expansion of~\cite{Yunes:2009yz}, including eccentricity corrections up to order $\mathcal{O}(e^8)$.
The strain can be written as~\cite{Huerta:2014eca}
\begin{equation}
\tilde{h}(f)=-\sqrt{\frac{5}{384}}\frac{\mathcal{M}_c^{5/6}}{\pi^{2/3}d_L}f^{-7/6}\sum_{\ell=1}^{10}\xi_{\ell}\left(\frac{\ell}{2}\right)^{2/3}e^{-i\Psi_{\ell}} \,.
\label{eq:epc}
\end{equation}
The waveform keeps up to 10 harmonics, corresponding to a consistent expansion in eccentricity to $\mathcal{O}(e^8)$ in both the amplitude and the phase~\cite{Yunes:2009yz}. In the vanishing-eccentricity case, only the dominant (quadrupole) mode $\ell=2$ remains, which is identical to the circular TaylorF2 model. With nonvanishing eccentricities, the induced multiple harmonics make the distance and angular parameters nontrivially coupled, enabling us to break the degeneracies among these parameters. In addition, the frequency of each harmonic is $\ell F$, with $F$ the orbital frequency, so the higher harmonics ($\ell>2$) enter the detector band much earlier than the dominant mode ($\ell=2$) and can provide more angular information. The $\xi_{\ell}$ depend on the antenna pattern functions (also called detector response functions) $F_{+,\times}$. For the space-borne AEDGE, we must account for the motion of the detector, so $F_{+,\times}$ are functions of time. We give the detailed calculation of the antenna pattern functions in appendix~\ref{app:F}.
We have 11 parameters in the waveform: the chirp mass $\mathcal{M}_c$, the symmetric mass ratio $\eta$, the luminosity distance $d_L$, the inclination angle $\iota$, the sky location ($\theta$, $\phi$), the polarization $\psi$, the time and phase at coalescence ($t_c$, $\phi_c$), the initial eccentricity $e_0$ at frequency $f_0$, and the longitude of ascending nodes $\beta$ (the azimuthal component of the inclination angle). To estimate the uncertainties and covariances of the waveform parameters, we adopt the Fisher matrix technique
\begin{equation}
\Gamma_{ij}=\left(\frac{\partial h}{\partial P_i},\frac{\partial h}{\partial P_j}\right)\,,
\end{equation}
with $P_i$ one of the 11 waveform parameters.
The inner product is defined as
\begin{equation}
(a,b)=4\int_{f_{\rm min}}^{f_{\rm max}}\frac{\tilde{a}^*(f)\tilde{b}(f)+\tilde{b}^*(f)\tilde{a}(f)}{2 S_n(f)}df\,.
\label{eq:innerp}
\end{equation}
For the noise power spectral density (PSD) $S_n(f)$, we adopt the sensitivity curve of AEDGE in the resonant modes (see the envelope in figure 1 of~\cite{Ellis:2020lxl}).
Then the covariance matrix of the parameters is $C_{ij}=(\Gamma^{-1})_{ij}$, from which the uncertainty of each parameter is $\Delta P_i=\sqrt{C_{ii}}$. The sky localization error is~\cite{Cutler:1997ta}
\begin{equation}
\Delta \Omega=2\pi |\sin(\theta)|\sqrt{C_{\theta\theta}C_{\phi\phi}-C_{\theta\phi}^2}\,.
\end{equation}
We calculate the partial derivatives $\partial \tilde{h}/\partial P_i$ numerically as $[\tilde{h}(f,P_i+dP_i)-\tilde{h}(f,P_i)]/dP_i$, with $dP_i=10^{-n}$. For each parameter, we optimize $n$ so that the derivative converges and the Fisher matrix calculation is reliable.
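To make this numerical recipe concrete, the sketch below applies the one-sided finite difference with a step-halving convergence test to a toy two-parameter waveform (an amplitude $A$ and coalescence time $t_c$; deliberately not the EccentricFD model), a flat placeholder PSD, and the noise-weighted inner product of Eq.~(\ref{eq:innerp}); all function names and values are illustrative.

```python
import numpy as np
from scipy.integrate import trapezoid

# Toy frequency-domain waveform standing in for the EPC model:
# h(f) = A * f**(-7/6) * exp(2j*pi*f*tc), parameters A and tc.
def h_model(f, A, tc):
    return A * f**(-7.0 / 6.0) * np.exp(2j * np.pi * f * tc)

f = np.linspace(0.1, 3.0, 4000)   # AEDGE-like band, Hz
Sn = np.full_like(f, 1e-2)        # flat toy PSD (not the AEDGE curve)

def inner(a, b):
    # Noise-weighted inner product (a,b) = 4 Int Re(a* b)/Sn df
    integrand = (np.conj(a) * b + np.conj(b) * a).real / Sn
    return 2.0 * trapezoid(integrand, f)

def dh_dP(params, i, dP):
    # One-sided finite difference, as in the text
    p1 = list(params)
    p1[i] += dP
    return (h_model(f, *p1) - h_model(f, *params)) / dP

def converged_derivative(params, i, dP0=1e-2, tol=1e-4):
    # Halve the step until the derivative stops changing (relative L2 sense);
    # this mimics optimizing n in dP = 10**(-n)
    d_old = dh_dP(params, i, dP0)
    for _ in range(40):
        dP0 /= 2.0
        d_new = dh_dP(params, i, dP0)
        if np.linalg.norm(d_new - d_old) <= tol * np.linalg.norm(d_new):
            return d_new
        d_old = d_new
    return d_old

params = [1.0, 0.3]  # A, tc
derivs = [converged_derivative(params, i) for i in range(2)]
Gamma = np.array([[inner(da, db) for db in derivs] for da in derivs])
C = np.linalg.inv(Gamma)
errors = np.sqrt(np.diag(C))  # 1-sigma uncertainties
```

With the real parameter set, the same covariance matrix feeds the sky localization error $\Delta\Omega=2\pi|\sin\theta|\sqrt{C_{\theta\theta}C_{\phi\phi}-C_{\theta\phi}^2}$ quoted above.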
For each typical event, the chirp mass $\mathcal{M}_c$, symmetric mass ratio $\eta$, and distance $d_L$ are calculated from the component masses and redshift. The angular parameters $P_{\rm ang}=\{\iota,~\theta,~\phi,~\psi,~\beta\}$ are sampled from a uniform and isotropic distribution, with 1000 sets for each typical event. We use the inclination angle $\iota$ to represent the angular parameters, since we find it is the most relevant to the results. Without loss of generality, we fix the coalescence time and phase to $t_c=\phi_c=0$. We choose the frequency band of AEDGE to be [0.1, 3] Hz, where the detector is the most sensitive. This range corresponds to the lower and upper bounds of the frequency integral in Eq.~(\ref{eq:innerp})~\footnote{This also differs from Paper II, in which we naively set the lower frequency bound to 0.2 Hz for BNS and 0.05 Hz for BBH.}. However, we should account for the limited operation time of AEDGE for tracking the GWs. We set the quadrupole ($\ell=2$) as the reference mode; its frequency is twice the orbital frequency, $f_{\ell=2}=2F$. The evolution of the binary orbit can then be expressed in terms of the quadrupole frequency. To ensure that the observational time of AEDGE for all harmonics is around 400 days ($\sim1$ year), we set the starting quadrupole frequency $f_{\rm start}(\ell=2)$ to 0.2, 0.1, 0.059, 0.026, and 0.0105 Hz for the typical BNS, NSBH, light BBH, medium BBH, and heavy BBH, respectively. That is, in the strain of Eq.~(\ref{eq:epc}) we neglect the contribution of all harmonics whenever the quadrupole frequency is below $f_{\rm start}(\ell=2)$~\footnote{Note that in the mid-band we only observe the inspiral phase of these binaries, so we need not consider the upper frequency limit at the innermost stable circular orbit.}. Thus we multiply the strain by the step function
\begin{equation}
\tilde{h}_{\rm AEDGE}(f)=\tilde{h}(f)\mathcal{H}(2f-\ell f_{\rm start}) \,,
\label{eq:hAEDGE}
\end{equation}
with the unit step function
\begin{equation}
\mathcal{H}(x)=
\begin{cases}
1 & {\rm if}~x\geq0 \,, \\
0 & {\rm otherwise} \,.
\end{cases}
\end{equation}
For the orbital phase evolution, we numerically solve Eqs.~(3.11) and (4.24) of~\cite{Yunes:2009yz} to obtain the time to coalescence $t(f)$ for nonvanishing $e_0$. The time to coalescence at a given frequency is smaller for larger eccentricity, so for fixed $f_{\rm start}(\ell=2)$ the observational time is shorter for larger eccentricity.
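As a rough cross-check of these starting frequencies, the circular-limit ($e_0=0$) Newtonian chirp time already reproduces the quoted $\sim400$-day span; the sketch below is a simplification (for $e_0>0$ the text solves the full equations of~\cite{Yunes:2009yz}), and the helper encoding the truncation of Eq.~(\ref{eq:hAEDGE}) is our own illustrative naming.

```python
import numpy as np

G_MSUN_OVER_C3 = 4.925491e-6  # G*Msun/c^3 in seconds

def chirp_mass(m1, m2):
    # Component masses in solar masses
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def time_to_coalescence(f_gw, m1, m2):
    # Newtonian (0 PN), circular-limit time to coalescence at quadrupole
    # GW frequency f_gw; eccentric binaries coalesce faster than this.
    mc = chirp_mass(m1, m2) * G_MSUN_OVER_C3  # chirp mass in seconds
    return (5.0 / 256.0) * (np.pi * f_gw) ** (-8.0 / 3.0) * mc ** (-5.0 / 3.0)

# GW170817-like BNS at the paper's starting quadrupole frequency 0.2 Hz
tau_days = time_to_coalescence(0.2, 1.46, 1.27) / 86400.0  # roughly 400 days

def harmonic_in_band(f, ell, f_start_l2=0.2):
    # Truncation of Eq. (hAEDGE): harmonic ell contributes at frequency f
    # only once the quadrupole has chirped past f_start, i.e. 2f >= ell*f_start
    return 2.0 * f >= ell * f_start_l2
```

This also makes the truncation intuitive: since harmonic $\ell$ sits at $f_\ell=\ell F=\ell f_{\ell=2}/2$, the $\ell=1$ mode of the BNS case is in band down to 0.1 Hz, while for the NSBH choice of $f_{\rm start}$ it falls below the band edge.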
We collect the Fisher matrix results for all $3\times10^4$ cases. As in~\cite{Yang:2022tig}, for each typical event with a specific orientation, we define the ratios
\begin{equation}
R_{\Delta d_L}=\frac{\Delta d_L|_{e_0={\rm nonzero}}}{\Delta d_L|_{e_0=0}}~{\rm and}~R_{\Delta \Omega}=\frac{\Delta \Omega |_{e_0={\rm nonzero}}}{\Delta\Omega|_{e_0=0}} \,,
\end{equation}
to quantify the improvement induced by eccentricity in that orientation. If $R<1$, there is an improvement in the relevant parameter, and a smaller $R$ indicates a larger improvement. We show scatter plots of $\Delta d_L/d_L$, $R_{\Delta d_L}$, $\Delta \Omega$, and $R_{\Delta \Omega}$ against $\iota$. To give statistical results, we denote the minimum, mean, and maximum values of a quantity $x$ over the 1000 orientations by $\min(x)$, $\mathbb{E}(x)$, and $\max(x)$, respectively.
In figure~\ref{fig:rep}, we only show the distance inference of GW170817-like BNS and source localization of GW190426-like heavy BBH to represent
our main results. We compare only the cases with $e_0=0$, 0.1, and 0.4 for conciseness; the complete results can be found in appendix~\ref{app:sup}. As shown in the left panel of figure~\ref{fig:rep}, a nonvanishing eccentricity can significantly improve the distance inference in near face-on orientations (small inclination angles). Among all 1000 orientations, $\max(\Delta d_L/d_L)$ for the GW170817-like BNS is reduced from 27.74 ($e_0=0$) to 0.82 ($e_0=0.1$) and 0.35 ($e_0=0.4$). Compared to the $e_0=0$ case, the largest improvement ($\min(R_{\Delta d_L})$) corresponds to constraints 47 and 115 times tighter with $e_0=0.1$ and $e_0=0.4$, respectively. This huge improvement of the distance inference in near face-on orientations holds for all the typical events. Binaries with larger component masses and eccentricities achieve greater improvement. As shown in appendix~\ref{app:sup}, for the heavy BBH with $e_0=0.4$, $\min(R_{\Delta d_L})=0.0012$, corresponding to an 833-fold improvement. We also find that for the heavy BBH there is an overall improvement of the distance inference in all orientations. Our results indicate that eccentricity effects are more distinct for larger-mass compact binaries.
For the source localization, we find eccentricity can lead to significant improvement for the BBH cases, which have larger component masses than the BNS and NSBH cases. As shown in the right panel of figure~\ref{fig:rep}, the localization of the heavy BBH is significantly improved by eccentricity in almost all orientations. The largest improvement is $\min(R_{\Delta \Omega})= 8.16\times 10^{-4}$, corresponding to a localization $1.23\times10^3$ times tighter. As with the distance inference, heavier binaries benefit more from eccentricity for source localization.
The details of the improvement by eccentricity can also be found in the figures summarized in appendix~\ref{app:sup}.
To illustrate the improvement of the distance inference and localization for these typical binaries with variable eccentricities, we show the largest improvement ($\min(R)$ over 1000 orientations) for each case in figure~\ref{fig:Rwe}. Generally, a heavier binary with higher eccentricity achieves more improvement in distance inference and source localization. With eccentricity $e_0=0.4$, these typical binaries can achieve at most
1.5--3 orders of magnitude
improvement in distance inference (from BNS to heavy BBH). As for source localization, BNS and NSBH cannot benefit much from eccentricity, while BBHs can achieve at most 1.5--3 orders of magnitude improvement (from light BBH to heavy BBH). We should note some anomalies in figure~\ref{fig:Rwe}: 1) for distance inference, the typical BNS benefits more from eccentricity than the typical NSBH and light BBH do; 2) for source localization, the BNS behaves similarly to the NSBH, and both show almost no improvement from eccentricity; 3) the light BBH's tendency is very close to that of the medium BBH for $e_0<0.2$, and they diverge for larger eccentricity; 4) in the BNS, NSBH, and especially the light BBH cases, the localization achieves its largest improvement at $e_0=0.2$, and a higher eccentricity ($e_0=0.4$) can even worsen the performance. These anomalies are caused by several factors. On the one hand, eccentricity adds more harmonics to the GWs. These harmonics can increase the SNR and improve the parameter estimation, and the higher modes, which enter the detector band much earlier, provide more angular information. On the other hand, eccentricity shortens the inspiral time within the frequency band, which can lower the SNR and hence worsen the parameter estimation and localization. In addition, due to the different starting frequencies, the detector band covers a different set of harmonics for each binary. For instance, in the BNS case, at the starting frequency $f_{\rm start}(\ell=2)=0.2$ Hz, all harmonics including $\ell=1$ fall inside the detector band (0.1--3 Hz). But in the NSBH case, at $f_{\rm start}(\ell=2)=0.1$ Hz, the $\ell=1$ mode's frequency is 0.05 Hz, falling outside the detector band and hence truncated. Moreover, we have two more parameters, $e_0$ and $\beta$, in the eccentric waveform, which could degrade the overall performance of the parameter estimation.
All of the above factors compete with each other and make the parameter estimation (distance inference and localization) differ from case to case.
Here we provide some further explanation for the opposite tendencies of the distance and localization errors versus the inclination angle. As seen in figure~\ref{fig:rep}, the distance error is generally larger at smaller orbital inclination; on the contrary, the source localization is better when the inclination is smaller. For the distance, this is due to the degeneracy between distance and inclination angle. In the GW amplitude $h\sim \mathcal{A}_++\mathcal{A}_\times$, the distance $d_L$ and inclination angle $\iota$ enter the plus and cross polarizations in different forms, $\mathcal{A}_+\sim\frac{1}{d_L}\frac{1+\cos^2\iota}{2}$ and $\mathcal{A}_\times\sim\frac{1}{d_L}\cos\iota$. To identify the inclination of the binary system using the polarizations of the gravitational wave, we must distinguish the contributions of the plus and cross polarizations. At small $\iota$, the two amplitudes have nearly identical contributions to the overall gravitational-wave amplitude. This is the main factor that leads to the strong degeneracy in the measurement of the distance and inclination~\cite{Usman:2018imj}. We therefore expect a larger degeneracy between $d_L$ and $\iota$ in near face-on orientations and hence larger errors for both distance and inclination angle. As for the source localization, there is no obvious degeneracy between the sky location parameters $(\theta,\phi)$ and the inclination angle $\iota$; however, at smaller $\iota$ the SNR is larger, so the parameter estimation is better than at larger $\iota$.
We have shown that eccentricity, which is more likely to be present in the mid-band than in the LIGO/Virgo band, can significantly improve the distance inference and source localization of dark sirens with AEDGE. Note that GWs are best localized at the smallest orbital inclinations, where the distance is worst determined, and eccentricity happens to improve the distance inference most significantly exactly there. In addition, one of the main targets for a mid-band detector like AEDGE is intermediate-mass black holes (IMBH), and we have shown that the heaviest BBHs benefit most from eccentricity in both distance inference and source localization. These facts suggest that eccentricity is the perfect ingredient for AEDGE dark sirens as precise probes of the Universe.
\section{The construction of dark sirens catalogs and the host galaxy identification \label{sec:mock}}
Considering that eccentricity plays an important role in the distance inference and source localization of dark sirens with the mid-band detector AEDGE, we should take eccentricity effects into account in the construction of the dark siren catalogs. In this section, we first update the construction of the catalogs of dark sirens of Paper II, which did not consider eccentricity effects, i.e., $e_0=0$. We adopt the EccentricFD waveform, whose $e_0=0$ case is equivalent to TaylorF2 at 3.5 PN order, whereas in Paper II we expanded the waveform phase only to 2 PN order. We also update the BNS and BBH merger rates from the latest GWTC-3, as well as the BBH population, and add an NSBH catalog. More importantly, we refine the numerical derivatives in the Fisher matrix calculation, which makes the results more stable and thus more robust and reliable. We then include eccentricity effects in the construction of the catalogs to assess their influence on the population and localization of the binaries.
We follow Paper II and assume the formation of compact binaries tracks the star formation rate.
The merger rate per comoving volume at a specific redshift, $R_m(z_m)$, is related to the formation rate of massive binaries and the time delay distribution $P(t_d,\tau)=\frac{1}{\tau}\exp(-t_d/\tau)$ with an e-fold time $\tau=100$ Myr~\cite{Vitale:2018yhm},
\begin{equation}
R_m(z_m)=\int_{z_m}^{\infty}dz_f\frac{dt_f}{dz_f}R_f(z_f)P(t_d) \,.
\label{eq:Rm}
\end{equation}
Here $t_m$ (or the corresponding redshift $z_m$) and $t_f$ are the look-back time when the systems merged and formed. $t_d=t_f-t_m$ is the time delay. $R_f$ is the formation rate of massive binaries and we assume it is proportional to the Madau-Dickinson (MD) star formation rate~\cite{Madau:2014bja},
\begin{equation}
\psi_{\rm MD}=\psi_0\frac{(1+z)^{\alpha}}{1+[(1+z)/C]^{\beta}} \,,
\label{eq:psiMD}
\end{equation}
with parameters $\alpha=2.7$, $\beta=5.6$ and $C=2.9$. The normalization factor $\psi_0$ is determined by the local merger rates. We adopt the local merger rates of BNS, NSBH, and BBH inferred from GWTC-3, with $\mathcal{R}_{\rm BNS}=105.5^{+190.2}_{-83.9}~\rm Gpc^{-3}~\rm yr^{-1}$, $\mathcal{R}_{\rm NSBH}=45^{+75}_{-33}~\rm Gpc^{-3}~\rm yr^{-1}$, and $\mathcal{R}_{\rm BBH}=23.9^{+14.3}_{-8.6}~\rm Gpc^{-3}~\rm yr^{-1}$~\cite{LIGOScientific:2021psn}. Note we assume the observed NSBH GW200105 and GW200115 are representatives of the population of NSBH. Then we convert the merger rate per comoving volume in the source frame to merger rate density per unit redshift in the observer frame
\begin{equation}
R_z(z)=\frac{R_m(z)}{1+z}\frac{dV(z)}{dz} \,,
\label{eq:Rz}
\end{equation}
where $dV/dz$ is the comoving volume element.
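A minimal numerical sketch of Eq.~(\ref{eq:Rm}) with the Madau--Dickinson rate of Eq.~(\ref{eq:psiMD}), normalized to the GWTC-3 median local BNS rate, might look as follows; it assumes the flat $\Lambda$CDM parameters quoted in the text ($H_0=67.72~\rm km~s^{-1}~Mpc^{-1}$, $\Omega_m=0.3104$), and the grid spacing and function names are our own choices.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid, trapezoid

H0 = 67.72 / 3.0857e19  # km/s/Mpc -> 1/s
Om = 0.3104
GYR = 3.1557e16         # seconds per Gyr

# Lookback time t(z) on a grid, in Gyr, for flat LambdaCDM
zg = np.linspace(0.0, 10.0, 20001)
Ez = np.sqrt(Om * (1 + zg) ** 3 + 1 - Om)
dtdz = 1.0 / ((1 + zg) * Ez) / H0 / GYR
t_lb = cumulative_trapezoid(dtdz, zg, initial=0.0)

def psi_MD(z, alpha=2.7, beta=5.6, C=2.9):
    # Madau-Dickinson star formation rate, Eq. (psiMD), unnormalized
    return (1 + z) ** alpha / (1 + ((1 + z) / C) ** beta)

TAU = 0.1  # e-fold time delay, 100 Myr in Gyr

def Rm_unnorm(zm):
    # Eq. (Rm): formation rate convolved with the exponential delay P(t_d)
    m = zg >= zm
    tm = np.interp(zm, zg, t_lb)
    td = t_lb[m] - tm
    integrand = psi_MD(zg[m]) * np.exp(-td / TAU) / TAU * dtdz[m]
    return trapezoid(integrand, zg[m])

# Normalize so the local rate matches the GWTC-3 median BNS value
R_BNS_LOCAL = 105.5  # Gpc^-3 yr^-1
norm = R_BNS_LOCAL / Rm_unnorm(0.0)

def Rm(z):
    return norm * Rm_unnorm(z)
```

Multiplying $R_m(z)/(1+z)$ by the comoving volume element $dV/dz$ then gives the observer-frame rate per unit redshift of Eq.~(\ref{eq:Rz}), from which the redshifts of the catalog events can be sampled.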
Having the merger rate as a function of redshift, we can sample the redshift distributions of BNS, NSBH, and BBH. As in Paper II, we use the median merger rates to construct the catalogs. We have 11 parameters in the waveform (for vanishing eccentricity there are 9, excluding $e_0$ and $\beta$). The luminosity distance $d_L$ is calculated from the sampled redshift by assuming a fiducial $\Lambda$CDM cosmology with $H_0=67.72~\rm km~s^{-1}~Mpc^{-1}$ and $\Omega_m=0.3104$, corresponding to the mean values obtained from the latest \textit{Planck} experiment~\cite{Planck:2018vyg}. The sky location ($\theta$, $\phi$), inclination angle $\iota$, and polarization $\psi$ are drawn from an isotropic distribution. Without loss of generality we set the time and phase at coalescence to $t_c=\phi_c=0$. As for the chirp mass and symmetric mass ratio, we consider different strategies for the three binary types. In the BNS case, we assume a uniform distribution of mass in [1, 2.5] $M_{\odot}$, consistent with the assumption used for the prediction of the BNS merger rate in GWTC-3~\cite{LIGOScientific:2021psn}. In the NSBH case, since the merger rate is inferred by assuming the observed NSBH GW200105 and GW200115 are representatives of the NSBH population, we randomly choose the component masses of these two events. As for the BBH case, we adopt the same strategy as in Paper II with the BBH population in GWTC-3. We draw the component masses of BBH from the histograms of the BBH mass distribution in GWTC-3~\footnote{We first infer the histograms of the primary mass $m_1$ and mass ratio $q$ from GWTC-3. The distributions of $m_1$ and $q$ are sampled accordingly; the secondary mass is then $m_2=m_1q$, and we require $m_2\ge3~M_{\odot}$.}. The primary mass and mass ratio peak around 30--40 $M_{\odot}$ and 0.7, respectively.
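The footnote's two-step mass sampling can be sketched as below; the histogram bin edges and weights are placeholders standing in for the actual GWTC-3 distributions, and the helper names are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder histograms for the GWTC-3 primary-mass and mass-ratio
# distributions (bin edges in Msun; weights are illustrative, not catalog values)
m1_edges = np.array([3.0, 10.0, 20.0, 30.0, 40.0, 60.0, 100.0])
m1_w = np.array([0.15, 0.20, 0.25, 0.25, 0.10, 0.05])
q_edges = np.array([0.1, 0.3, 0.5, 0.7, 0.9, 1.0])
q_w = np.array([0.05, 0.15, 0.25, 0.35, 0.20])

def draw_hist(edges, weights, n):
    # Pick a bin with probability proportional to its weight,
    # then draw uniformly inside that bin
    idx = rng.choice(len(weights), size=n, p=weights / weights.sum())
    return rng.uniform(edges[idx], edges[idx + 1])

def sample_bbh(n):
    m1 = draw_hist(m1_edges, m1_w, n)
    q = draw_hist(q_edges, q_w, n)
    m2 = m1 * q
    keep = m2 >= 3.0  # the footnote's m2 >= 3 Msun cut
    return m1[keep], m2[keep]

m1_s, m2_s = sample_bbh(10000)
```

The $m_2\ge3~M_\odot$ cut simply discards draws that would fall below the black-hole mass boundary, so the effective sample is somewhat smaller than the requested size.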
We sample the mergers of BNS, NSBH, and BBH over 5 years, since the operation time of AEDGE is expected to be 5--10 years~\cite{AEDGE:2019nxb}. We set the frequency band and starting frequencies to be the same as in section~\ref{sec:typical}, so the observational time for each event is around 1 year. For each sampled merger, we assume four discrete eccentricities, i.e., $e_0=0$, 0.1, 0.2, and 0.4 at $f_0=0.1$ Hz. We select the mergers with ${\rm SNR}>8$ as the candidate events that could be detected (within the detection range) by AEDGE in 5 years. For each event, we use the Fisher matrix to derive the distance error and source localization. By assigning a uniform eccentricity to each event, we assess the influence of eccentricity on the population and localization of the GWs that could be detected by AEDGE. We discuss the distribution of eccentricity and the realistic population of eccentric binaries later.
Figure~\ref{fig:hist} shows the cumulative histogram of events within the detection range of AEDGE in 5 years. The highest redshifts AEDGE can reach for BNS and NSBH are around 0.13 and 0.45, respectively. For BBH the horizon is much larger, but we set a cut-off at $z=2$, since at higher redshift we usually cannot obtain a spectroscopic measurement of the redshift; in addition, the large localization uncertainty makes BBH at high redshift useless for our purpose in this paper. In the circular case, the total numbers are 106, 1105, and 95369 for BNS, NSBH, and BBH ($z\leq2$), respectively. The numbers of BNS and BBH are smaller than in Paper II, owing to the different choice of merger rates and of the lower frequency limit of AEDGE (we adopted $f_{\rm min}=0.05$ Hz for BBH in Paper II, while here $f_{\rm min}=0.1$ Hz). We note that a larger eccentricity leads to a smaller population of events. This is because eccentricity reduces the inspiral (orbital evolution) time of binaries in the frequency band (0.1--3 Hz). A shorter observational time leads to a smaller accumulated SNR, especially for the dominant quadrupole mode, so GWs whose SNR is just above the detection threshold at $e_0=0$ may not be detected if they have nonvanishing eccentricities. In the NSBH case, the largest redshift AEDGE can reach is smaller for eccentric events. Compared to BNS and BBH, the NSBH population decreases the most with eccentricity. The reason is that we choose GW200105 and GW200115 as the representatives of the NSBH population: the component masses in the NSBH catalog are fixed to those of these two typical events, so the high-redshift eccentric events necessarily fall below the SNR threshold, while for BNS and BBH a larger sampled component mass may compensate for the low SNR.
Figures~\ref{fig:err_dL} and~\ref{fig:err_Omega} show the distance errors and localizations of the binaries within the detection range of AEDGE in 5 years. Eccentricity can significantly improve the overall distance inference of the binaries in the catalogs. For the source localization, BNS and NSBH do not benefit noticeably from eccentricity, and the localizations of eccentric events are even worse in some cases. However, the source localizations of BNS and NSBH are at the $\mathcal{O}(10^{-4})~\rm deg^{2}$ level even without eccentricity, while the BBH localization is considerably improved by eccentricity, with the optimal localization at low redshift improved to better than $\mathcal{O}(10^{-3})~\rm deg^{2}$. We find that, in some cases, the binaries achieve the largest improvements with $e_0=0.2$. All of these features can be anticipated from the results of section~\ref{sec:typical}.
To assess the galaxy identification of the binaries in the catalogs, we calculate their 3-D localization volumes, which can be obtained from the distance and localization errors in figures~\ref{fig:err_dL} and~\ref{fig:err_Omega}. We follow the method of~\cite{Yu:2020vyy} to convert $\Delta d_L$ and $\Delta\Omega$ into the 99\% confidence ellipsoid of the localization, whose 3-D volume we denote $V_{\rm loc}$. To estimate the number of potential host galaxies in the localization volume, we assume galaxies are uniformly distributed in comoving volume with number density $n_g= 0.01~\rm Mpc^{-3}$. This number is derived by taking the Schechter function parameters in the B-band, $\phi_*=1.6\times 10^{-2} h^3 {\rm Mpc^{-3}}$, $\alpha=-1.07$, and $L_*=1.2\times 10^{10} h^{-2} L_{B,\odot}$ with $h=0.7$, integrating down to $0.12 L_*$, and comprising 86\% of the total luminosity~\cite{Chen:2016tys}. The threshold localization volume is then $V_{\rm th}=100~\rm Mpc^3$: if $V_{\rm loc}\leq V_{\rm th}$, the host galaxy of the dark siren can be identified uniquely, and we call such events golden dark sirens.
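A simplified version of this volume cut can be sketched as follows; the exact ellipsoid construction of~\cite{Yu:2020vyy} is not reproduced, the $\mathcal{O}(1)$ geometric prefactor is schematic, and the example numbers are illustrative rather than catalog values.

```python
import numpy as np
from scipy.stats import chi2

N_GAL = 0.01  # galaxies per Mpc^3 (the B-band Schechter estimate in the text)
V_TH = 100.0  # Mpc^3: below this, a unique host galaxy is expected

def localization_volume_99(dL, delta_dL, delta_Omega_sr):
    """Schematic 99% C.L. localization ellipsoid volume in Mpc^3.
    dL and delta_dL in Mpc, delta_Omega_sr in steradians; the exact
    geometric prefactor of the full construction is omitted."""
    s = np.sqrt(chi2.ppf(0.99, df=3))  # per-axis 99% scale for 3 Gaussian dof
    # transverse cross-section dL^2 * dOmega times radial extent delta_dL
    return s ** 3 * dL ** 2 * delta_Omega_sr * delta_dL

def n_potential_hosts(dL, delta_dL, delta_Omega_deg2):
    # Returns the expected galaxy count and whether the event is "golden"
    dOmega_sr = delta_Omega_deg2 * (np.pi / 180.0) ** 2
    V = localization_volume_99(dL, delta_dL, dOmega_sr)
    return N_GAL * V, V <= V_TH

# Illustrative nearby, well-localized event: 200 Mpc, 1% distance error,
# 1e-4 deg^2 sky patch -> far fewer than one galaxy expected, i.e. golden
n_hosts, is_golden = n_potential_hosts(200.0, 2.0, 1e-4)
```

Under this toy geometry, the nearby event above lands well below $V_{\rm th}$, while a distant, poorly localized BBH (say 2 Gpc with a $\rm deg^2$-scale patch) would contain thousands of candidate hosts, matching the qualitative picture in the text.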
Figure~\ref{fig:V_loc} shows the 99\% confidence level (C.L.) 3-D localization of the events that are within the detection range of AEDGE in 5 years. In the circular case, several BNS and NSBH events at low redshift can be localized within $V_{\rm th}$, while for BBH only a few events can be localized with one potential host galaxy. With the improvement from eccentricity, however, eccentric BBH at low redshift can be localized well enough to become golden dark sirens. The number of golden dark sirens in the catalogs is summarized in table~\ref{tab:np}, where we also show the number of dark sirens with $1<n_p\leq 10$ potential host galaxies. The results show that BBH benefit the most from eccentricity: in the circular case it is almost impossible to detect a golden dark BBH, while nonvanishing eccentricities significantly increase the possibility of detecting golden BBH at low redshift. Note that in the NSBH case, eccentricity can worsen the results compared to the circular case. The $e_0=0.2$ case gives an overall better result than the other cases. All of these results are consistent with the expectations in section~\ref{sec:typical}.
\begin{table}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& \multicolumn{2}{c|}{$e_0=0$} & \multicolumn{2}{c|}{$e_0=0.1$} & \multicolumn{2}{c|}{$e_0=0.2$} & \multicolumn{2}{c|}{$e_0=0.4$} \\
\hline
Binary type & Golden & $1<n_p\leq 10$ & Golden & $1<n_p\leq 10$ & Golden & $1<n_p\leq 10$ & Golden & $1<n_p\leq 10$ \\
BNS & 8 & 17 & 11 & 23 & 12 & 26 & 11 & 25 \\
NSBH & 46 & 69 & 38 & 61 & 40 & 61 & 29 & 48 \\
BBH & 1 & 7 & 6 & 14 & 10 & 22 & 10 & 9 \\
\hline
Total & 55 & 93 & 55 & 98 & 62 & 109 & 50 & 82 \\
\hline
\end{tabular}
}
\caption{The number of golden dark sirens within the detection range of AEDGE in 5 years. We also list the number of dark sirens with $1<n_p\leq 10$ potential host galaxies. We assume a galaxy number density $n_g= 0.01~\rm Mpc^{-3}$.}
\label{tab:np}
\end{table}
As shown in table~\ref{tab:np}, the numbers of golden dark BNS and BBH in the $e_0=0$ case are smaller than those of Paper II. The difference arises from several factors. (1) We adopt updated median merger rates from GWTC-3, which are lower than the GWTC-2 rates used in Paper II. (2) We use the EPC waveform, which is identical to TaylorF2 at 3.5 PN order when the eccentricity is 0, while Paper II used a waveform with only 2 PN orders. (3) We set the lower frequency limit of AEDGE to 0.1 Hz for BBH, so the inspiral time (and thus the accumulated SNR) of the quadrupole mode is smaller than in Paper II, where we set $f_{\rm low}=0.05$ Hz. (4) We refine the Fisher matrix calculation to make the numerical derivatives more stable and reliable. All of these factors make the numbers in this paper more realistic and conservative compared to the optimistic estimates in Paper II. However, the application of AEDGE only depends on the few (5--10) golden dark sirens that it can track during the mission time, and we find that AEDGE can still observe 50--60 golden dark sirens regardless of the eccentricity. So the difference in the numbers of golden dark BNS and BBH will not influence the main result for the application of AEDGE to cosmology. Indeed, the constraints on the Hubble constant from 5--10 dark sirens in this paper are consistent with those in Paper II.
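The effect of raising $f_{\rm low}$ from 0.05 Hz to 0.1 Hz on the in-band inspiral time can be illustrated with the leading-order (Newtonian) chirp time, $t=\frac{5}{256}(G\mathcal{M}_c/c^3)^{-5/3}(\pi f)^{-8/3}$ (a sketch; the $30+30~M_\odot$ masses are illustrative, not taken from the text):

```python
# Sketch: Newtonian time to coalescence from a starting frequency f_low.
# Shows why f_low = 0.1 Hz gives a shorter observed inspiral (and less
# accumulated SNR) than f_low = 0.05 Hz for the quadrupole mode.
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def chirp_mass(m1, m2):
    # chirp mass in solar masses
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

def time_to_merger(f_low, m1=30.0, m2=30.0):
    mc = chirp_mass(m1, m2) * Msun
    tau = G * mc / c**3                       # chirp mass in seconds
    return 5.0 / 256.0 * tau**(-5.0/3.0) * (np.pi * f_low)**(-8.0/3.0)

day = 86400.0
print(f"f_low = 0.05 Hz: {time_to_merger(0.05)/day:.0f} days in band")
print(f"f_low = 0.10 Hz: {time_to_merger(0.10)/day:.0f} days in band")
```

Doubling $f_{\rm low}$ shortens the inspiral by the fixed factor $2^{8/3}\approx 6.3$, independent of the masses.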
In this section, we construct the catalogs of GWs that could be detected by AEDGE in 5 years. However, as discussed in Paper II, in the resonant mode AEDGE can only track one event at a time, so only a small fraction of the total events in the catalogs can be observed by AEDGE in 5 years, and we must take the priority of observation into account. In this paper, we focus on the dark sirens with the best localization; golden dark sirens are therefore the sources we would track first with AEDGE. Since we set the observational time for one event to be around 1 year, the total number of dark sirens AEDGE can track during its mission time (5--10 years) is 5--10. As shown in figure~\ref{fig:V_loc}, the probability of observing golden dark sirens at $z\leq0.05$ is quite high. The total number of golden dark sirens in table~\ref{tab:np} is 50--60, for either circular or eccentric cases. We can rely on real-time data analysis in the tracking process to quickly identify the golden dark sirens; we need to predict the quality (and properties) of each event as early as possible to improve the chance of capturing the golden events.
When constructing the catalogs of eccentric GWs, we assume a uniform (average) eccentricity for all the events. By doing so we simply assess the influence of eccentricity on the population of GWs (and golden dark sirens). However, this is not a realistic assumption: the formation channels of compact binaries and the distribution of eccentricity are still under debate~\cite{Wen:2002km,Kowalska:2010qg,Takatsy:2018euo}. Simulations suggest that 10\% of the BBH population is formed dynamically and at least half of them have eccentricity larger than 0.1 at 10 Hz~\cite{Samsing:2013kua,Samsing:2017xmd,Samsing:2017rat} (and references therein). As a rough estimate, most of these dynamical BBH should retain considerable eccentricity at 0.1 Hz. For the isolated formation scenarios, even binaries born with high eccentricity could be fully circularized when entering the LIGO/Virgo band, since GW emission efficiently damps orbital eccentricity through angular momentum loss. However, the eccentricity of field binaries in the mid-band is still uncertain. Nevertheless, the probability of a nonvanishing eccentricity should be much higher in the mid-band than in the LIGO/Virgo band. To summarize, we expect at least 10\% of the binary population to be eccentric in the mid-band. To detect golden dark BBH with AEDGE, a nonvanishing eccentricity is crucial. From table~\ref{tab:np}, the total number of golden dark sirens is around 50--60 for either vanishing or nonvanishing eccentricity. Therefore, tracking 5--10 golden dark sirens with AEDGE in 5 years should be ensured regardless of the distribution of eccentricity.
\section{The Hubble constant from golden dark sirens \label{sec:Hubble}}
In this section we estimate the ability of the eccentric dark sirens detected by AEDGE in 5--10 years to constrain cosmological parameters. Since the golden dark sirens observed by AEDGE reside only at low redshift ($z<1$), we forecast only the measurement of the Hubble constant. We assume a conservative eccentricity $e_0=0.2$ at 0.1 Hz. For BNS and NSBH, the eccentricity mainly helps the distance inference, while for BBH the source localization is improved so significantly that AEDGE can observe golden dark BBH. We randomly select 5 and 10 golden dark sirens from the BNS, NSBH, and BBH catalogs (assuming $e_0=0.2$). Assuming AEDGE tracks at least one golden event per year, 5 and 10 golden dark sirens correspond to 5- and 10-year data-taking periods.
To measure the Hubble constant we need to assess the total distance errors of the golden dark sirens. We can safely neglect the weak lensing contribution since the golden dark sirens mainly reside in the low redshift region. However, the peculiar velocity is prominent at small $z$.
We use the fitting formula~\cite{Kocsis:2005vv},
\begin{equation}
\left(\frac{\Delta d_L(z)}{d_L(z)}\right)_{\rm pec}=\left[1+\frac{c(1+z)^2}{H(z)d_L(z)}\right]\frac{\sqrt{\langle v^2\rangle}}{c} \,,
\end{equation}
where we set the peculiar velocity to 500 km/s, in agreement with average values observed in galaxy catalogs. The final uncertainty of $d_L$ is the quadrature sum of the error from GW parameter estimation and that from the peculiar velocity.
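As a quick numerical illustration of this fitting formula (a sketch; the flat $\Lambda$CDM parameters $H_0=70$ km/s/Mpc and $\Omega_m=0.3$ are assumed here, not taken from the text):

```python
# Sketch: peculiar-velocity fractional distance error for a low-z source,
# (dL err/dL)_pec = [1 + c(1+z)^2/(H(z) dL(z))] * v/c, with v = 500 km/s,
# then combined in quadrature with an illustrative 2% GW-inference error.
import numpy as np
from scipy.integrate import quad

c_kms, H0, Om = 2.998e5, 70.0, 0.3   # assumed cosmology

def H(z):
    return H0 * np.sqrt(Om * (1 + z)**3 + 1 - Om)

def d_L(z):                           # luminosity distance in Mpc
    dc, _ = quad(lambda zp: c_kms / H(zp), 0.0, z)
    return (1 + z) * dc

def pec_frac(z, v=500.0):
    return (1 + c_kms * (1 + z)**2 / (H(z) * d_L(z))) * v / c_kms

z = 0.03
print(f"(dL error)_pec at z={z}: {100*pec_frac(z):.1f}%")
total = np.hypot(pec_frac(z), 0.02)   # quadrature sum with a 2% GW error
print(f"total fractional dL error: {100*total:.1f}%")
```

At the redshifts of the golden dark sirens the peculiar-velocity term is at the few-percent level and can dominate the distance budget, which is why it cannot be neglected.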
We assume the $\Lambda$CDM model with two free parameters, $H_0$ and $\Omega_m$; at low redshift, however, $\Omega_m$ is poorly constrained. To obtain the posteriors of $H_0$, we run a Markov-Chain Monte-Carlo (MCMC) analysis using the package {\sc Cobaya}~\cite{Torrado:2020dgo,2019ascl.soft10019T}. The marginalized statistics of the parameters and the plots are produced by the Python package {\sc GetDist}~\cite{Lewis:2019xzd}.
Figure~\ref{fig:H0} shows the measurements of the Hubble constant from 5--10 golden dark BNS, NSBH, and BBH, randomly selected from the catalogs with $e_0=0.2$ in table~\ref{tab:np}. To avoid bias introduced by the random selection, for each case we repeat the process (selection, random scattering, and MCMC) 10 times and choose the median among the 10 repetitions as the representative result. Our results suggest that with 5 (10) golden BNS, NSBH, and BBH, AEDGE can constrain $H_0$ at the 6.8\% (4.6\%), 4.6\% (3.9\%), and 2.4\% (1.8\%) precision levels, respectively. Note that the total number of golden dark NSBH is around 40; if AEDGE could somehow track all of them during the observational time, the constraint on the Hubble constant from NSBH would improve to $\sim 2\%$, following $\sigma\sim 1/\sqrt{N}$, where $N$ is the number of events. We find the golden dark BBH are more efficient than BNS and NSBH in constraining the Hubble constant: with 5--10 golden dark BBH one can obtain a 2 percent measurement of $H_0$, which is sufficient to arbitrate the Hubble tension. To obtain tens of golden dark BBH, eccentricity is of great importance, as shown in table~\ref{tab:np}. Our results show that eccentricity, which is more likely to be present in the mid-band, has great significance for dark sirens with AEDGE as probes of the Universe.
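The $\sigma\sim 1/\sqrt{N}$ extrapolation quoted above can be reproduced in two lines (a sketch; the 3.9\% input is the 10-event NSBH precision from the text):

```python
# Sketch: 1/sqrt(N) scaling of the H0 precision, extrapolating the quoted
# 3.9% from 10 golden dark NSBH to all ~40 golden NSBH in the catalog.
import math

sigma_10 = 3.9                                 # percent, with N = 10
sigma_40 = sigma_10 * math.sqrt(10 / 40)       # scaled to N = 40
print(f"projected H0 precision with 40 NSBH: {sigma_40:.1f}%")
```

The result is consistent with the $\sim 2\%$ figure stated in the text.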
\section{Conclusions and discussions \label{sec:conclusion}}
In this paper, we first investigate the eccentricity effects on the distance inference and source localization of the typical compact binaries observed by the atom interferometer AEDGE. We simulate 5 types of typical compact binaries in GWTC-3 with component masses ranging from $1-100~M_{\odot}$. We find that eccentricity can significantly improve the distance inference of all the typical binaries in the near face-on orientations, in which the distance strongly degenerates with the inclination angle. The largest improvements amount to 1.5--3 orders of magnitude for $e_0=0.4$. More importantly, eccentricity can greatly improve the localization of the typical BBH, by up to 1.5--3 orders of magnitude. Generally, heavier binaries with higher eccentricity achieve larger improvements, consistent with the results in the LIGO/Virgo band~\cite{Sun:2015bva,Ma:2017bux,Pan:2019anf}. However, we find the improvements in the mid-band are much more significant than those in the LIGO/Virgo band.
To predict the eccentricity effects on the GWs detected by AEDGE in the future, we simulate catalogs of dark sirens that are within the detection range of AEDGE in 5 years, assuming different eccentricities. Our results show that with no eccentricity, the numbers of BNS, NSBH, and BBH ($z<2$) that pass the detection threshold of AEDGE are of the order of $\mathcal{O}(10^2)$, $\mathcal{O}(10^3)$, and $\mathcal{O}(10^5)$, respectively. With nonvanishing eccentricities the numbers are slightly reduced due to the shrinkage of the inspiral phase. However, eccentricity improves the overall distance inference of the GWs in the catalogs. In particular, the improved localization for BBH makes it possible to detect golden dark BBH whose unique host galaxy can be identified. Regardless of eccentricity, the total number of golden dark sirens in the catalogs is around 50--60. This also updates the forecast in Paper II (with $e_0=0$) by adopting the latest inferred merger rates and some refinements in the calculation. We forecast the constraints on the Hubble constant from 5--10 golden dark sirens with AEDGE. We find BBH are more efficient at measuring the Hubble constant than BNS and NSBH: with 5--10 golden dark BBH one can obtain a 2 percent measurement of $H_0$, which is sufficient to arbitrate the Hubble tension. Since eccentricity is crucial to the detection of golden dark BBH, our results convey an important message to the community: eccentricity has great significance for dark sirens as precise probes of the Universe.
Though inferior to BBH for constraining the Hubble constant, golden BNS and NSBH are far more likely to be detected regardless of the eccentricity. On the one hand, if 40 golden dark NSBH can be tracked by AEDGE, a $2\%$ Hubble constant measurement can also be achieved to arbitrate the Hubble tension. On the other hand, golden dark BNS and NSBH can serve as early warnings for follow-up observations of the EM counterparts. In addition, the host galaxy indicated by the EM counterpart can be compared with the one previously identified through the precise localization. This provides a validity check of the host-galaxy identification of the golden dark sirens, which is informative for the golden dark BBH.
When applying the golden dark sirens in the catalogs to measure the Hubble constant, we adopt the catalogs with a uniform eccentricity $e_0=0.2$. This is a conservative value we assume for the (average) eccentricity at 0.1 Hz; the exact distribution of eccentricity in the mid-band is still very uncertain. However, we find that the choice of eccentricity does not affect the ability to detect golden dark BNS and NSBH. For BBH, except in the circular case, the detection of golden dark BBH is guaranteed, and the particular choice of nonvanishing eccentricity (from $e_0=0.1$ to 0.4) does not seriously affect it.
In this paper, when constructing the catalogs of GWs we adopt the median values of the merger rates inferred from GWTC-3. Taking the uncertainties of the merger rates into account could enlarge or reduce the population of GWs in the catalogs for AEDGE. As discussed in Paper II, since we can only track a very limited number of events in the resonant mode of AEDGE, the uncertainty of the GW population has little influence on our final results. This argument also holds for different assumptions about the number density of galaxies: even assuming a very large $n_g= 0.1~\rm Mpc^{-3}$, we can still observe more than 5 golden dark sirens for each type of binary. In addition, as discussed in Paper II, the clustering and grouping of galaxies makes it much easier to infer the redshift of GWs from the cluster or group of the host galaxy instead of the host galaxy itself~\cite{Yu:2020vyy}. This means that our estimation is rather conservative.
From table~\ref{tab:np}, AEDGE can observe hundreds of dark sirens with at most 10 potential host galaxies within the localized region. These dark sirens are also very useful, by weighting the probability of hosting the GW source among the few potential galaxies. Though the constraints on cosmological parameters from one such event could be looser than those from a golden dark siren, a large number of these events combined may still provide comparable or better measurements of the Hubble constant. Moreover, with these events at higher redshift we can measure not only the local Hubble constant but also the matter density parameter, the equation of state of dark energy, etc.
\appendix
\section{Derivation of AEDGE antenna pattern functions \label{app:F}}
The GW strain tensor can be decomposed in terms of
\begin{equation}
h_{ij}(t)=h_+(t)\mathbf{e}_{ij}^++h_\times(t)\mathbf{e}_{ij}^{\times}\,,
\label{eq:hij}
\end{equation}
where $\mathbf{e}_{ij}^{+,\times}$ are the polarization tensors, with $\mathbf{e}_{ij}^+=\mathbf{u}_i\mathbf{u}_j-\mathbf{v}_i\mathbf{v}_j$ and $\mathbf{e}_{ij}^\times=\mathbf{u}_i\mathbf{v}_j+\mathbf{v}_i\mathbf{u}_j$.
We first assume the polarization angle $\psi=0$. For the source located at $(\theta,\phi)$ in the heliocentric ecliptic coordinate system, the bases of GW polarization tensors are
\begin{align}
\mathbf{u}=&(\cos\theta\cos\phi, \cos\theta\sin\phi, -\sin\theta) \,,\\
\mathbf{v}=&(\sin\phi, -\cos\phi, 0) \,.
\label{eq:uv}
\end{align}
For the single-baseline detector AEDGE, the detector response tensor $D_{ij}$, built from the unit vector $\mathbf{a}(t)$ along the baseline, is
\begin{equation}
D_{ij}=\frac{1}{2}\mathbf{a}_i(t)\mathbf{a}_j(t) \,.
\label{eq:Dij}
\end{equation}
We parameterize the detector location on the orbit around the earth by a unit vector in the geocentric coordinates,
\begin{equation}
\mathbf{r_{AI}}_{,0}(t)=(\cos\phi_a(t),\sin\phi_a(t),0) \,,
\end{equation}
where $\phi_a(t)=2\pi t /T_{\rm AI}+\phi_0$ is the azimuthal orbit angle around the Earth. $T_{\rm AI}=10$ days is the orbit period of AEDGE around the Earth. Then the baseline direction of AEDGE in geocentric coordinates is
\begin{equation}
\mathbf{a}_0(t)=(-\sin\phi_a(t),\cos\phi_a(t),0) \,.
\end{equation}
We need to transform $\mathbf{a}_0(t)$ in the geocentric coordinates to $\mathbf{a}(t)$ in the heliocentric coordinates,
\begin{align}
\mathbf{a}(t)=
\begin{pmatrix}
\cos\phi_{\rm Ea}(t) & -\sin\phi_{\rm Ea}(t) & 0 \\
\sin\phi_{\rm Ea}(t) & \cos\phi_{\rm Ea}(t) & 0 \\
0 & 0 & 1
\end{pmatrix} \cdot
\begin{pmatrix}
\cos\theta_{\rm inc} & 0 & -\sin\theta_{\rm inc} \\
0 & 1 & 0 \\
\sin\theta_{\rm inc} & 0 & \cos\theta_{\rm inc}
\end{pmatrix}
\cdot
\mathbf{a}_0(t) \,.
\end{align}
The azimuthal angle of the Earth’s orbit around the Sun is $\phi_{\rm Ea}(t)=2\pi t/(1 \rm yr)+\phi_0'$. The inclination $\theta_{\rm inc}=28.5^{\circ}$ is the angle between the orbit plane of AEDGE around the Earth and the ecliptic.
Then the observed waveform is given by
\begin{equation}
h(t)\equiv D_{ij}h_{ij}=h_+(t)F_+(t)+h_\times(t)F_\times(t) \,.
\end{equation}
Since we assume $\psi=0$ above, we get $F_+(t,\psi=0)=D_{ij}(t)\mathbf{e}_{ij}^+$ and $F_\times(t,\psi=0)=D_{ij}(t)\mathbf{e}_{ij}^\times$. Then taking the polarization angle into account we get
\begin{align}
F_+(t)=&\cos(2\psi)F_+(t,\psi=0)-\sin(2\psi)F_\times(t,\psi=0) \,, \\
F_\times(t)=&\sin(2\psi)F_+(t,\psi=0)+\cos(2\psi)F_\times(t,\psi=0) \,.
\end{align}
These are the antenna pattern functions of AEDGE.
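The appendix formulas can be assembled into a short numerical sketch (angles, time, and initial phases below are illustrative, not values from the paper; NumPy is assumed available):

```python
# Sketch: AEDGE baseline a(t), detector tensor D_ij = a_i a_j / 2, and the
# antenna patterns F+, Fx for a source at (theta, phi) with polarization psi,
# following the rotation-matrix construction in the appendix.
import numpy as np

T_AI = 10 * 86400.0            # AEDGE orbital period around Earth (10 days)
T_E = 365.25 * 86400.0         # Earth orbital period around the Sun
theta_inc = np.radians(28.5)   # inclination of the AEDGE orbit plane

def baseline(t, phi0=0.0, phi0p=0.0):
    phi_a = 2 * np.pi * t / T_AI + phi0
    a0 = np.array([-np.sin(phi_a), np.cos(phi_a), 0.0])  # geocentric baseline
    Ry = np.array([[np.cos(theta_inc), 0, -np.sin(theta_inc)],
                   [0, 1, 0],
                   [np.sin(theta_inc), 0, np.cos(theta_inc)]])
    phi_E = 2 * np.pi * t / T_E + phi0p
    Rz = np.array([[np.cos(phi_E), -np.sin(phi_E), 0],
                   [np.sin(phi_E), np.cos(phi_E), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ a0        # baseline in heliocentric coordinates

def antenna_patterns(t, theta, phi, psi):
    u = np.array([np.cos(theta)*np.cos(phi),
                  np.cos(theta)*np.sin(phi), -np.sin(theta)])
    v = np.array([np.sin(phi), -np.cos(phi), 0.0])
    a = baseline(t)
    Fp0 = 0.5 * ((a @ u)**2 - (a @ v)**2)   # D_ij e+_ij  (psi = 0)
    Fx0 = (a @ u) * (a @ v)                 # D_ij ex_ij  (psi = 0)
    Fp = np.cos(2*psi)*Fp0 - np.sin(2*psi)*Fx0
    Fx = np.sin(2*psi)*Fp0 + np.cos(2*psi)*Fx0
    return Fp, Fx

Fp, Fx = antenna_patterns(t=1.0e5, theta=1.0, phi=2.0, psi=0.3)
print(Fp, Fx)   # each bounded by 1/2 for a single baseline
```

Since $(\mathbf{a}\cdot\mathbf{u})^2+(\mathbf{a}\cdot\mathbf{v})^2\leq 1$, both patterns satisfy $F_+^2+F_\times^2\leq 1/4$, the expected single-baseline bound.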
In the frequency domain, we need the relation between time and GW frequency, $t(f)$. We numerically solve the orbital evolution equations in~\cite{Yunes:2009yz} for the eccentric cases. We use the frequency of the dominant quadrupole mode $\ell=2$ as the variable in $t(f)$, so the antenna pattern functions for $\ell=2$ are $F_{+,\times}(t(f))$. Since different modes enter the frequency band at different times, the time at which the $\ell$ mode reaches frequency $f$ equals the time at which the quadrupole mode reaches $2f/\ell$, i.e.\ $t_\ell(f)=t(2f/\ell)$. Therefore, the antenna pattern functions for the $\ell$ mode are $F_{+,\times}(t(2f/\ell))$. The total strain is $h=\sum_\ell h_\ell=\sum_\ell \left[h_{\ell+}F_+(t(2f/\ell))+h_{\ell\times}F_\times(t(2f/\ell))\right]$.
\section{Supplementary results for the typical events\label{app:sup}}
Here we summarize some additional results as a supplement to section~\ref{sec:typical}. For the eccentric cases, we only show the results with $e_0=0, 0.1, 0.2$, and 0.4. (1) We first show the distance inference and source localization of all five typical binaries in figures~\ref{fig:err_dL_sum} and~\ref{fig:err_Omega_sum}. (2) We pointed out in the main text that eccentricity can largely break the degeneracy between distance $d_L$ and inclination $\iota$ in the near face-on orientations; we therefore expect the errors of the inclination angle to be significantly reduced at small $\iota$ as well. As shown in figure~\ref{fig:diota}, the improvements of $\iota$ in the near face-on orientations are indeed significant. (3) To compare the SNR of these typical binaries, we choose $e_0=0.2$ as representative, shown in figure~\ref{fig:SNR_e02}. As expected, the SNR is lower at larger inclination. In the main text, we concluded that a heavier binary achieves larger improvements from eccentricity; this is not correlated with the SNR of the typical binaries: the heavy BBH gets the largest improvement in distance inference and localization, but its SNR is not the highest. (4) Though in this paper we do not show the error of eccentricity in the main text, it is interesting to ask how precisely the eccentricity can be constrained by AEDGE in the mid-band. In figure~\ref{fig:erre0_e02}, we plot the error of $e_0$ against the inclination angle for the five typical binaries, assuming $e_0=0.2$ as representative. We find the eccentricity can be constrained very precisely, with $\Delta e_0\sim 10^{-5}-10^{-7}$ for these binaries.
\acknowledgments
This work is supported by National Research Foundation of Korea 2021R1A2C2012473 and 2021M3F7A1082056.
RGC is supported by the National Key Research and Development Program of China Grant No.~2020YFC2201502, the National Natural Science Foundation of China Grants No.~11821505, No.~11991052, and No.~11947302, the Strategic Priority Research Program of the Chinese Academy of Sciences Grant No.~XDB23030100, and the Key Research Program of Frontier Sciences of CAS.
\bibliographystyle{JHEP}
\bibliography{ref}
Title:
Bounds on ultralight bosons from the Event Horizon Telescope observation of Sgr A$^*$
Abstract: Recent observation of Sagittarius A$^*$ (Sgr A$^*$) by the Event Horizon
Telescope (EHT) collaboration has uncovered various unanswered questions in
black hole (BH) physics. Besides, it may also probe various beyond the Standard
Model (BSM) scenarios. One of the most profound possibilities is the search for
ultralight bosons (ULBs) using BH superradiance (SR). EHT observations imply
that Sgr A$^*$ has a non-zero spin. Using this observation, we derive bounds on
the mass of ULBs with purely gravitational interactions. Considering
self-interacting ultralight axions, we constrain new regions in the parameter
space of decay constant, for a certain spin of Sgr A$^*$. Future observations
of various spinning BHs can improve the present constraints on ULBs.
https://export.arxiv.org/pdf/2208.03530
\section{Introduction}
There is ample observational evidence, along with several theoretical problems, indicating the presence of physics beyond the Standard Model (SM) \cite{Craig:2022uua,Asadi:2022njl, Barrow:2022gsu, Adams:2022pbo, Antypas:2022asj}. In order to address these, various BSM models have been proposed in the literature\,\cite{Lee:2019zbu, Preskill:1982cy, Abbott:1982af}. The ongoing lab-based experiments are leaping forward to probe, and possibly discover, these models\,\cite{Essig:2013lka, Alexander:2016aln, Rappoccio:2018qxp, Essig:2022yzw}. It is possible that the next breakthrough in particle physics may come from astrophysical and/or cosmological observations, notably in scenarios where BSM particles couple very weakly to the SM states. In such circumstances, extreme astrophysical environments might give us clues to answer some of the fundamental questions. For example, our current knowledge of various astrophysical objects like supernovae, neutron stars, BHs, etc.\,\,enables us to place leading constraints on the properties of light relics in large regions of the parameter space\,\cite{Raffelt:1990yz, Raffelt:1996wa, Baryakhtar:2022hbu}.
In this paper, we use a spinning BH as a laboratory to test the presence of ULBs. We utilize the phenomenon called SR that occurs in a large class of dissipative systems\,\cite{Brito:2015oca, Baryakhtar:2022hbu}. For instance, SR in electromagnetism can be understood by considering an electromagnetic wave incident on an axisymmetric, conducting cylinder rotating at a constant angular velocity\,\cite{1971JETPL..14..180Z, 1986RaF....29.1008Z}. After scattering, the wave extracts angular momentum from the cylinder provided that
\begin{eqnarray}
\label{SRcondGeneral}
\frac{\omega_\gamma}{m} < \Omega\,,
\end{eqnarray}
where $\omega_\gamma$ and $m$ are the energy and angular momentum of the incident wave with respect to the rotation axis of the cylinder, respectively, and $\Omega$ is the angular velocity of the cylinder. The fact that the outgoing wave has a larger amplitude than the incident one is known as SR. A similar process can happen for a rotating BH, sometimes known as the Penrose process\,\cite{Penrose:1969pc}. For a BH, the horizon acts as the source of dissipation. In this case, a ULB with a Compton wavelength comparable to the size of the BH horizon may efficiently extract angular momentum from the rotating BH. Note that these ULBs arise from vacuum fluctuations around the BH and can form a bound state with it. These bound states exhibit hydrogen-atom-like behavior. The ULBs need not have any cosmic abundance; they must only be present in the Lagrangian.
Using SR, BH spin measurements are used to probe ULB particles\,\cite{Arvanitaki:2009fg, Arvanitaki:2010sy, Yoshino:2012kn, Arvanitaki:2014wva, Gruzinov:2016hcq, Baryakhtar:2017ngi, Davoudiasl:2019nlo, Siemonsen:2019ebd, Stott:2020gjj, Baryakhtar:2020gao, Unal:2020jiy, Herdeiro:2021znw, Mehta:2020kwu, Mehta:2021pwf, Ghosh:2021zuf, Du:2022trq, Cannizzaro:2022xyw, Cheng:2022ula, Blas:2020nbs, Blas:2020kaa, Caputo:2021efm, Chung:2021roh, Payne:2021ahy, Roy:2021uye, Baumann:2021fkf, Yuan:2022bem, Chen:2022nbb, Baumann:2022pkl}\footnote{Refs.\,\cite{Day:2019bbh, Chadha-Day:2022inf} have explored SR phenomena for stars.}. We use the recent imaging of the Milky-Way supermassive black hole (SMBH), Sgr A$^*$, by the EHT collaboration\,\cite{EventHorizonTelescope:2022xnr, EventHorizonTelescope:2022vjs, EventHorizonTelescope:2022wok, EventHorizonTelescope:2022exc, EventHorizonTelescope:2022urf, EventHorizonTelescope:2022xqj} to constrain the properties of ULBs. In particular, EHT has demonstrated that their models with dimensionless spin parameters $0.5$ and $0.94$ have passed all the tests\,\cite{EventHorizonTelescope:2022xnr}. We use these two values of the spin to put constraints on the existence of scalar, vector, and tensor particles. Further, self-interaction among the scalar particles may suppress the BH spin-down capability of these ULBs. In this context, we place the leading constraint on the axion decay constant in some regions of the parameter space using the recent EHT observations. In particular, if Sgr A$^*$ has a spin parameter of 0.94, it constrains a new region of the QCD axion parameter space.
The paper is organized as follows: in sec.\,\ref{sec:SR-review}, we give a brief overview of BH SR. In sec.\,\ref{sec:bound}, we present our constraints. In sec.\,\ref{sec:results}, we briefly discuss our results and associated uncertainties. Then, we conclude in sec.\,\ref{sec:conclusion}.
\section{Brief review of Black Hole Superradiance}
\label{sec:SR-review}
BH SR is a phenomenon of a rotating BH losing its angular momentum and energy due to the existence of a massive bosonic particle\,\cite{Zeldovich:1972spj,Zeldovich:1971a,1971NPhS..229..177P,PhysRevLett.28.994,Starobinsky:1973aij,PhysRevD.22.2323}. Under certain conditions, the ULB will extract angular momentum and energy via superradiant instabilities. As a result, the BH will spin down. Superradiant instabilities lead to an exponential growth of the ULB field around the BH, forming a bound state with the BH, and the resulting configuration is sometimes referred to as the gravitational atom\,\cite{Arvanitaki:2009fg,PhysRevD.22.2323}. The effect of superradiant instabilities is maximal when the Compton wavelength of the particle is comparable to the gravitational radius of the BH. The ratio of the gravitational radius of the BH ($r_g = G_N M_{\rm BH}$) to the ULB's Compton wavelength ($\lambda_c=1/\mu_b$) defines the gravitational fine structure constant, $\alpha \equiv r_g/\lambda_c= G_N M_{\rm BH} \mu_b$, where $\mu_b$, $M_{\rm BH}$, and $G_N$ are the ULB mass, BH mass, and Newton's gravitational constant, respectively. The gravitational fine structure constant determines the efficiency of the superradiant instability\,\cite{Arvanitaki:2014wva,Baumann:2019eav}.
Growth of the ULB field around the BH occurs only if the angular phase velocity of the field, $\omega_b/m$ with $\omega_b\sim\mu_b$, is smaller than the angular velocity of the BH event horizon ($\Omega_H$),
\begin{equation}\label{eq:SRcond}
\frac{\omega_b}{m} < \Omega_H\,.
\end{equation}
Here $m$ represents the azimuthal angular quantum number, and $\Omega_H$ is defined as
\begin{equation}
\Omega_H = \frac{1}{2 r_g}\frac{a_*}{1 + \sqrt{1 - a^{*2}}}\,,
\label{OmegaH}
\end{equation}
where $a_*$ is the dimensionless spin parameter defined as $a_* = J_{\rm BH}/ (G_N M_{\rm BH}^2)$ with $J_{\rm BH}$ being the magnitude of the BH angular momentum. The spin parameter lies in the range $0 \leq a_*\leq 1$.
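Combining $\Omega_H$ with the SR condition for the $m=1$ level, $\mu_b < \Omega_H$, gives the rough upper end of the constrained mass window (a sketch; the Sgr A$^*$ mass $4.0\times 10^6~M_\odot$ and the physical constants are assumptions here, not values stated in this text):

```python
# Sketch: m = 1 superradiance condition mu_b < Omega_H for Sgr A*,
# evaluated for the two EHT-favored spins a* = 0.5 and 0.94.
import numpy as np

hbar_c_eV_m = 1.973e-7       # hbar*c in eV * m
rg_m_per_Msun = 1.477e3      # G*Msun/c^2 in meters

def omega_H_eV(M_Msun, a_star):
    rg = M_Msun * rg_m_per_Msun            # gravitational radius in meters
    inv_rg_eV = hbar_c_eV_m / rg           # 1/r_g converted to eV
    return 0.5 * inv_rg_eV * a_star / (1 + np.sqrt(1 - a_star**2))

M = 4.0e6                    # assumed Sgr A* mass in solar masses
for a in (0.5, 0.94):
    print(f"a* = {a}: mu_b < {omega_H_eV(M, a):.2e} eV (m = 1 level)")
```

This lands in the $\sim 10^{-18}$--$10^{-17}$ eV range characteristic of SMBH superradiance bounds, with the higher spin allowing the heavier bosons to superradiate.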
Besides satisfying Eq.\,\eqref{eq:SRcond}, the energy extraction via SR should be faster than the competing accretion processes that feed the BH. This is satisfied if
\begin{equation}\label{SR:cond2}
\tau_{\rm SR}<\tau_{\rm BH}\,\,.
\end{equation}
Here $\tau_{\rm BH}$ is the characteristic timescale of the BH, and the instability timescale for SR, $\tau_{\rm SR}$, is
\begin{eqnarray}\label{eq:tau_Sr}
\tau_{\rm SR}=\frac{\ln N_{\rm max}}{\Gamma^b}\,\,,
\end{eqnarray}
where $\Gamma^b$ is the superradiant instability growth rate of the ULB cloud, and $N_{\rm max}$ is the maximum occupation number of the cloud after the BH spins down by $\Delta a_*$. The maximum occupation number is
\begin{eqnarray}\label{eq:Nmax}
N_{\rm max}=\frac{G_N M_{\rm BH}^2\Delta a_*}{m}\,\,,
\end{eqnarray}
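In natural units $G_N=1/M_{\rm Pl}^2$, so $N_{\rm max}=(M_{\rm BH}/M_{\rm Pl})^2\,\Delta a_*/m$; a quick numerical sketch for Sgr A$^*$ (the mass $4.0\times 10^6~M_\odot$ and $a_*=0.94$ are assumptions for illustration):

```python
# Sketch: maximum occupation number N_max = (M_BH/M_Pl)^2 * Delta a* / m
# for Sgr A*-like parameters, with Delta a* = 1 - a* and m = 1.
M_BH_kg = 4.0e6 * 1.989e30   # assumed Sgr A* mass
M_Pl_kg = 2.176e-8           # Planck mass
a_star, m = 0.94, 1

N_max = (M_BH_kg / M_Pl_kg)**2 * (1 - a_star) / m
print(f"N_max ~ {N_max:.1e}")
```

The enormous occupation number ($\sim 10^{88}$) is what makes $\tau_{\rm SR}$ depend on $N_{\rm max}$ only logarithmically in Eq.~\eqref{eq:tau_Sr}.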
where we conservatively take $\Delta a_*=(1-a_*)$\,\cite{Brito:2015oca,Arvanitaki:2014wva,Arvanitaki:2016qwi}. The superradiant instability growth rate ($\Gamma^b$) is different for scalar\,\cite{Arvanitaki:2010sy,Ternov:1978gq,ZOUROS1979139,Detweiler:1980uk,Dolan:2007mj,Yoshino:2013ofa,Arvanitaki:2014wva,Brito:2015oca,Brito_2015,Arvanitaki:2016qwi,Davoudiasl:2019nlo,Unal:2020jiy,Stott:2020gjj}, vector\,\cite{Rosa:2011my,PhysRevD.86.104017,Pani:2012vp,East:2017mrj,PhysRevD.96.035019,Baumann:2019eav,Davoudiasl:2019nlo,Unal:2020jiy,Stott:2020gjj}, and tensor fields\,\cite{Brito:2013wya,PhysRevLett.124.211101,Unal:2020jiy,Stott:2020gjj}. In the following section, we provide the expressions of $\Gamma^b$ for these three cases and use them to constrain the ULB particle mass using the recent EHT result.
A conservative choice for the characteristic timescale is the Salpeter time, the timescale for BH accretion when the compact object radiates at the Eddington limit: $\tau_{\rm Salpeter} \sim 4.5\times10^7$ yr. For BHs radiating at the super-Eddington limit, the characteristic time is $\sim \tau_{\rm Salpeter}/10$. On the other hand, for accretion onto a BH at a fraction of the Eddington limit, the rate of BH mass increase is
\begin{eqnarray}
\label{accretion}
\dot{M}_{\rm acc}=f_{\rm Edd}\dot{M}_{\rm Edd}\sim0.02f_{\rm Edd}\frac{M_{\rm BH}}{10^6M_\odot}\,M_\odot \text{yr$^{-1}$}\, ,
\end{eqnarray}
where $M_{\rm BH}$ is given in solar masses. The above equation assumes a radiative efficiency $\eta \approx 0.1$\,\cite{osti_4778507,PhysRevD.89.104059,Brito_2015,Brito:2015oca}. The Eddington ratio $f_{\rm Edd}$ depends on the detailed properties of the accretion disk surrounding the BH; in the case of Sgr A$^*$, $f_{\rm Edd}\sim10^{-9}$\,\cite{Brito:2015oca,Wislocka:2019efh}. For an accreting BH with the accretion rate given by Eq.~\eqref{accretion}, the mass grows exponentially with an e-folding time of $\sim\tau_{\rm Salpeter}/f_{\rm Edd}$. Therefore, for Sgr A$^*$, the timescale relevant for gas accretion is $\sim10^{16}$ yr, far greater than the Hubble time, $\tau_{\rm Hubble} \sim10^{10}$ yr. We therefore make a conservative choice for the characteristic timescale of Sgr A$^*$ and fix it at $\tau_{\rm BH}=5\times10^9$ yr. A timescale shorter than this would increase the required superradiant instability rate; our choice is thus conservative.
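The accretion e-folding time quoted above follows directly from Eq.~\eqref{accretion} (a sketch; note that $M_{\rm BH}/\dot{M}_{\rm acc}$ is independent of the BH mass):

```python
# Sketch: e-folding time M / Mdot implied by
# Mdot = 0.02 * f_Edd * (M / 1e6 Msun) Msun/yr,
# i.e. tau = 1e6 / (0.02 * f_Edd) yr, with f_Edd ~ 1e-9 for Sgr A*.
f_Edd = 1e-9
tau_efold_yr = 1e6 / (0.02 * f_Edd)
print(f"accretion e-fold time: {tau_efold_yr:.1e} yr")
```

This reproduces the $\sim 10^{16}$ yr figure used in the text, far exceeding the Hubble time.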
\section{Bounds on bosonic particles}
\label{sec:bound}
In the last section, we summarized BH SR and discussed the necessary conditions for BH spin depletion via SR. For a given observation of the BH parameters, namely the BH mass and spin, we can use the SR conditions (Eqs.\,\eqref{eq:SRcond} and \eqref{SR:cond2}) to put upper and lower bounds on the mass of the ULB particles ($\mu_b$), assuming that the BH spin has not been depleted by $\Delta a_*$ via SR. In this section, we use the recent observation of Sgr A$^*$ by EHT to constrain the masses of ULB particles with spins 0, 1, and 2. We also consider the case of a self-interacting scalar field and constrain its self-interaction strength using the recent EHT measurement of Sgr A$^*$.
\subsection{Case I: Non-interacting particles}\label{subsec:bound-Ni}
First, we consider the case where ULB particles do not have any interaction other than the gravitational interaction. We refer to this as the non-interacting scenario. For this case, the superradiant instability rates ($\Gamma^b$) for the ULB particles with spins 0, 1, and 2 have already been discussed in the literature. In the following subsections, we summarize those results briefly.
\subsubsection{Spin-0}\label{subsubsec:bound-Ni_s0}
A massive scalar field ($\Phi$) obeys the Klein-Gordon (KG) equation of motion in a spacetime defined by the metric $g^{\mu \nu}$:
\begin{equation}
(g^{\mu \nu} \nabla_\mu \nabla_\nu - \mu_S^2)\Phi = 0\,.
\end{equation}
For BH SR, $g^{\mu \nu}$ is the Kerr metric of the BH under consideration and $\mu_S$ is the mass of the ultralight scalar. An analytical solution to the KG equation can be obtained using Detweiler's approximation~\cite{Detweiler:1980uk}, which gives the superradiant instability rate for a scalar field as~\cite{Detweiler:1980uk,Baumann:2019eav}
\begin{equation}
\Gamma^S_{n \ell m} = 2 \tilde{r}_+ C_{n \ell} \, g_{\ell m}(a_*, \alpha,\omega) \, (m \Omega_H - \omega_{n \ell m})\hskip 1pt \alpha^{4\ell+5} \, , \label{eqn:ScaRate}
\end{equation}
where
\begin{align}
C_{n \ell} &\equiv \frac{2 ^{4\ell+1} (n+\ell)!}{ n^{2\ell+4} (n-\ell-1)! } \left[ \frac{\ell !}{(2\ell)! (2\ell+1)!} \right]^2 \, , \label{eqn:Cnl} \\
g_{\ell m}(a_*, \alpha, \omega) &\equiv \prod^{\ell}_{k=1} \left( k^2 \left( 1-a_*^2 \right) + \left( a_* m - 2 r_+ \hskip 1pt \omega \right)^2 \right) . \label{eqn:glm}
\end{align}
In the above expressions, $\tilde{r}_{+} = \frac{r_g + \sqrt{r_g^2 - a^2}}{r_g} $ and
\begin{equation}\label{eq:wnlm}
\omega_{n \ell m} \approx \mu_S\left[1- \frac{1}{2} \left(\frac{\alpha}{n+\ell+1}\right)^2\right] \sim \mu_S
\end{equation}
where $n$, $\ell$, and $m$ are the principal, orbital angular momentum, and azimuthal angular momentum quantum numbers, respectively. The dominant growing mode for a scalar field is the dipole mode, $|n\ell m\rangle =|211\rangle$. We note that the coefficient $C_{n \ell}$ for the dominant mode obtained using Eq.~\eqref{eqn:Cnl} is $C_{21} = 1/48$, which differs by a factor of 2 from that of Ref.~\cite{Detweiler:1980uk}. This mismatch was also pointed out in Refs.~\cite{Pani:2012bp,Baryakhtar:2017ngi}. Therefore, the growth rate for the dominant mode is
\begin{equation}\label{eq:Gammas_dom}
\Gamma^S_{211} =\frac1{48}a_*\,r_g^8\mu_S^9\,.
\end{equation}
Using the growth rate for the dominant mode together with Eqs.~\eqref{SR:cond2} and \eqref{eq:tau_Sr}, we can put an upper limit on the mass of the scalar particle by demanding that the BH spin is not depleted by SR. The upper limit for the dominant mode is
\begin{equation} \label{eq:dom_up_lim}
\mu_S<\left(\frac{48\ln N_{\rm max}}{a_*\, r_g^8\tau_{\rm BH}}\right)^{1/9}\, .
\end{equation}
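For orientation, Eq.~\eqref{eq:dom_up_lim} can be evaluated numerically. The sketch below is indicative only: the unit conversions and the form $N_{\rm max}\sim \Delta a_* (M_{\rm BH}/M_P)^2$ (one unit of angular momentum per boson) are assumptions of this sketch, not taken from the excerpt.

```python
# Order-of-magnitude evaluation of Eq. (dom_up_lim) for Sgr A*.
# Assumption: N_max ~ Delta a_* (M_BH / M_P)^2.
import math

hbar_eVs  = 6.582119569e-16       # hbar in eV s
hbarc_eVm = 1.973269804e-7        # hbar c in eV m
GMsun_c2  = 1.4766e3              # G M_sun / c^2 in m
Msun_eV   = 1.116e66              # solar mass in eV
M_P       = 1.22e28               # Planck mass in eV

M_BH, a_star = 4.0e6, 0.94                    # Sgr A* mass (M_sun) and spin
r_g    = M_BH * GMsun_c2 / hbarc_eVm          # gravitational radius, eV^-1
tau_BH = 5.0e9 * 3.156e7 / hbar_eVs           # 5e9 yr in eV^-1
lnNmax = math.log((1 - a_star) * (M_BH * Msun_eV / M_P)**2)

mu_S_max = (48.0 * lnNmax / (a_star * r_g**8 * tau_BH))**(1.0 / 9.0)
print(f"scalar threshold ~ {mu_S_max:.1e} eV")   # ~ 1.6e-18 eV
```

The result lands close to the $16.7\times10^{-19}$ eV entry of the table below; the small residual reflects the paper's exact $N_{\rm max}$, which is not reproduced here.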
In this work, we use the recent spin measurement of Sgr A$^*$ by the EHT collaboration \cite{EventHorizonTelescope:2022xnr, EventHorizonTelescope:2022vjs, EventHorizonTelescope:2022wok, EventHorizonTelescope:2022exc, EventHorizonTelescope:2022urf, EventHorizonTelescope:2022xqj} and constrain the scalar particle mass.
\subsubsection{Spin-1}\label{subsubsec:bound-Ni_s1}
A massive vector field obeys the Proca equations of motion on a spacetime defined by metric $g^{\mu \nu}$:
\begin{equation}
\nabla_\mu F^{\mu\nu} = \mu_V^2 A^\nu\,,
\end{equation}
where the Proca field strength $F^{\mu\nu}$ is defined in terms of the vector potential $A^\mu$ as $F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$, and $\mu_V$ denotes the mass of the vector field. An analytical solution to the Proca equation has been obtained in the literature and used to derive the SR instability rate for a spin-1 particle, which is given as~\cite{Baumann:2019eav}
\begin{equation}\label{eqn:VecRate}
\Gamma^V_{n\ell jm} = 2 \tilde{r}_+ C_{n \ell j } \, g_{j m}(a_*, \alpha, \omega) \left( m \Omega_H - \omega_{n \ell j m} \right) \alpha^{2 \ell + 2 j + 5} \, ,
\end{equation}
where $j$ is the total angular momentum quantum number, $\omega_{n \ell j m}$ denotes the energy levels, and the coefficients are
\begin{align}
C_{n \ell j} \equiv \frac{2^{2\ell + 2 j +1} (n+\ell)!}{n^{2\ell+4}(n-\ell-1)!} &\left[ \frac{(\ell)!}{(\ell + j )!(\ell + j+1)!}\right]^2 \left[ 1 + \frac{ 2 \left( 1+ \ell - j \right) \left(1 - \ell + j \right) }{\ell + j}\right]^2 \, , \label{eqn:VectCoeff} \\
g_{j m}(a_*, \alpha, \omega) &\equiv \prod^{j}_{k=1} \left( k^2 \left( 1-a_*^2 \right) + \left( a_* m - 2 r_+ \hskip 1pt \omega \right)^2 \right) . \label{eqn:coeffProd_v}
\end{align}
The growth rate for the vector field is valid for the modes with $j = \ell, \ell \pm 1$. The dominant growing mode for a vector field is $|n\ell jm\rangle =|1011\rangle$, and the corresponding growth rate is
\begin{equation}
\Gamma^V_{1011} =4a_*\, r_g^6\mu_V^7\,.
\end{equation}
Using the dominant mode, an upper limit on $\mu_V$ can be obtained as
\begin{equation}\label{eq:muv_dom_lim}
\mu_V <\left(\frac{\ln N_{\rm max}}{4a_* \, r_g^6\tau_{\rm BH}}\right)^{1/7}\, .
\end{equation}
\subsubsection{Spin-2}
Spin-0 and spin-1 particles are familiar from the SM of particle physics. Particles with higher spin are proposed in several theoretical models \cite{Sorokin:2004ie, Bouatta:2004kk, Sagnotti:2011jdy}, and General Relativity is arguably the simplest theory of a spin-2 field.
The SR instability rate for spin-2 field is\,\cite{Brito:2020lup}
\begin{equation}
\Gamma^T_{n \ell m j} = -C_{j\ell}\frac{{\cal P}_{jm}(a_*)}{{\cal P}_{jm}(0)}\alpha^{2(\ell+j)+5}(\omega_{nlm}-m\Omega_{\rm
H})\,, \label{Eq:GammaSpin2}
\end{equation}
where
\begin{equation}
{\cal
P}_{jm}(a_*)=(1+\Delta)\Delta^{2j}\prod_{q=1}^j\left[1+4M_{\rm BH}^2\left(\frac{\omega_{nlm}-m\Omega_{\rm
H}}{q\kappa}\right)^2\right]
\end{equation}
is proportional to the BH absorption probability. Here $\Delta=\sqrt{1-{a_*}^2}$ and $\kappa=\Delta/(1+\Delta)$. The total angular momentum is represented by $j$, and the numerical values of the constant $C_{j\ell}$ for different modes are given in Ref.\,\cite{Brito:2020lup}. Compared to the scalar and vector cases, there are more superradiant instability modes, with the only requirement that the modes be nonaxisymmetric. There are two leading candidates, namely the dipole ($j=\ell=1$) and the quadrupole ($j=2, \ell=0$); the numerical values of $C_{j\ell}$ indicate that the quadrupole is the dominant unstable mode. Its leading-order rate is
\begin{equation}\label{eq:j2l0}
\Gamma_{0022}^T
\simeq \frac{64}{45} {a_*}r_g^8 \mu_T^9\,.
\end{equation}
Using Eq.\,\eqref{eq:j2l0} and assuming that SR has not depleted the BH spin, an upper limit on the mass of the spin-2 particle can be obtained as follows
\begin{equation}
\label{eq:ulspi2}
\mu_T < \left( \frac{45 \, {\rm ln}N_{\rm max}}{64 a_* r_g^8 \tau_{\rm BH}} \right)^{1/9}.
\end{equation}
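The vector and tensor limits, Eqs.~\eqref{eq:muv_dom_lim} and \eqref{eq:ulspi2}, together with the $m=1$ superradiance ceiling $\mu < \Omega_H$, can also be evaluated numerically. As before, the form $N_{\rm max}\sim \Delta a_* (M_{\rm BH}/M_P)^2$ and the unit conversions are assumptions of this sketch:

```python
# Indicative evaluation of Eqs. (muv_dom_lim) and (ulspi2) for Sgr A*,
# plus the m = 1 superradiance ceiling mu < Omega_H.
# Assumption: N_max ~ Delta a_* (M_BH / M_P)^2.
import math

hbar_eVs, hbarc_eVm = 6.582119569e-16, 1.973269804e-7   # eV s, eV m
GMsun_c2, Msun_eV, M_P = 1.4766e3, 1.116e66, 1.22e28    # m, eV, eV
M_BH, a_star = 4.0e6, 0.94
r_g = M_BH * GMsun_c2 / hbarc_eVm               # gravitational radius, eV^-1
tau_BH = 5.0e9 * 3.156e7 / hbar_eVs             # 5e9 yr in eV^-1
lnNmax = math.log((1 - a_star) * (M_BH * Msun_eV / M_P)**2)

mu_V = (lnNmax / (4.0 * a_star * r_g**6 * tau_BH))**(1.0 / 7.0)
mu_T = (45.0 * lnNmax / (64.0 * a_star * r_g**8 * tau_BH))**(1.0 / 9.0)
# Horizon angular velocity: Omega_H = a_* / (2 r_+), r_+ = r_g (1 + sqrt(1 - a_*^2))
Omega_H = a_star / (2.0 * r_g * (1.0 + math.sqrt(1.0 - a_star**2)))
print(f"mu_V ~ {mu_V:.1e} eV, mu_T ~ {mu_T:.1e} eV, ceiling ~ {Omega_H:.1e} eV")
```

The three numbers land within roughly 15% of the $3.52$, $8.81$, and $117$ (in units of $10^{-19}$ eV) entries of the table below; the residual again reflects the paper's exact $N_{\rm max}$.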
\subsection{Case II: Self-interacting particles}
The presence of gravitational interaction alone can give rise to BH SR. However, many well-motivated theories beyond the SM predict ultralight scalar particles having interactions among themselves and with SM states, for example, the QCD axion \cite{Weinberg:1977ma, Wilczek:1977pj, Peccei:1977hh, Preskill:1982cy} and Kaluza-Klein (KK) modes in the string axiverse \cite{Svrcek:2006yi,Arvanitaki:2009fg}. In this section, we explore the effect of scalar self-interaction on BH SR in light of the recent EHT results. In particular, our focus will be on an axion of mass $\mu_a$ and decay constant $f_a$.
A sufficiently strong attractive self-interaction may lead to the collapse of the scalar cloud developed through SR, an event sometimes known as a bosenova\,\cite{Arvanitaki:2014wva}. This would suppress the spin-down capability of the SR cloud. The cloud collapses when the number of particles in it reaches $N_{\rm BOSE}$,
\begin{equation}
N_{\rm BOSE}= c \times 10^{94} \frac{n^4}{r_g \mu_a}\frac{M_{\rm BH}}{10^9 M_{\odot}} \frac{f_a}{M_P},
\end{equation}
where $c \sim 5$ is obtained through numerical analysis and $M_P$ is the Planck mass.
Therefore, in the presence of self-interaction, SR can spin down the BH only if the SR rate is sufficiently large:
\begin{equation}
\label{eq:SI}
\Gamma^b \tau_{\rm BH} \frac{N_{\rm BOSE}}{N_{\rm max}} > {\rm ln}N_{\rm BOSE}\,.
\end{equation}
Using the measured BH mass and spin, we obtain an upper bound on the axion mass and decay constant through Eq.\,\eqref{eq:SI}. It has been argued in Ref.\,\cite{Baryakhtar:2020gao} that, due to self-interaction, the energy exchange between different SR levels may lead to a quasi-equilibrium state that prevents further growth of the SR levels. For small enough self-coupling, however, the rate of exchange cannot compete with the SR rate, and SR can still spin down the BH. The constraint obtained from this consideration would be similar to the bosenova one.
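The condition of Eq.~\eqref{eq:SI} can be sketched at a single sample point. Everything beyond the quoted formulas is an assumption here: the sample point $(\mu_a, f_a)$, the $n=2$ (211) level entering $N_{\rm BOSE}$, and $N_{\rm max}\sim \Delta a_* (M_{\rm BH}/M_P)^2$.

```python
# Sketch of the spin-down condition, Eq. (SI), at one sample point.
# Assumptions: sample (mu_a, f_a); n = 2 (211) level in N_BOSE;
# N_max ~ Delta a_* (M_BH / M_P)^2.
import math

hbar_eVs, hbarc_eVm = 6.582119569e-16, 1.973269804e-7   # eV s, eV m
GMsun_c2, Msun_eV, M_P = 1.4766e3, 1.116e66, 1.22e28    # m, eV, eV
M_BH, a_star = 4.0e6, 0.94
r_g = M_BH * GMsun_c2 / hbarc_eVm               # eV^-1
tau_BH = 5.0e9 * 3.156e7 / hbar_eVs             # eV^-1

mu_a = 1.0e-17                                  # sample axion mass, eV
f_a = 1.0e14 * 1.0e9                            # sample decay constant, eV
Gamma = a_star * r_g**8 * mu_a**9 / 48.0        # dominant mode, Eq. (Gammas_dom)
N_max = (1.0 - a_star) * (M_BH * Msun_eV / M_P)**2
N_BOSE = 5.0e94 * 2**4 / (r_g * mu_a) * (M_BH / 1.0e9) * (f_a / M_P)

can_spin_down = Gamma * tau_BH * N_BOSE / N_max > math.log(N_BOSE)
print(can_spin_down)
```

For this sample point ($\mu_a = 10^{-17}$ eV, below the $m=1$ superradiance ceiling, and $f_a = 10^{14}$ GeV) the left-hand side exceeds $\ln N_{\rm BOSE}$ by many orders of magnitude, so the bosenova collapse does not prevent spin-down there.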
\begin{table}[h!]
\centering
\begin{tabular}{ |p{4cm}|p{3.5cm}|p{3.5cm}| p{3.5cm}| }
\hline
Sgr A$^*$ spin &\multicolumn{3}{c|}{Bounds on ULB mass in units of $10^{-19}$ eV}\\ \cline{2-4}
($a_*$) & Scalar & Vector & Tensor \\
\hline
$0.94$ & $16.7 \le\mu_{19}\le 117$ & $3.52 \le\mu_{19}\le 117$ & $8.81 \le\mu_{19}\le 117 $ \\
\hline
$0.5$ & $18.6 \le\mu_{19}\le 44.7$ & $3.88 \le \mu_{19} \le 44.7$ & $10.4 \le \mu_{19} \le 44.7$ \\
\hline
\end{tabular}
\caption{Constraints on non-interacting spin-0, spin-1, and spin-2 particles from Sgr A$^*$, with $\mu_{19}=\mu_b/(10^{-19}\, {\rm eV})$. We have taken $M_{\rm BH}=4\times10^6M_\odot$ and $\tau_{\rm BH}=5\times10^9$ yr.}
\label{Tab: table}
\end{table}
\section{Results and Discussion}
\label{sec:results}
Using the expressions above, we place conservative bounds on the masses of spin-0, spin-1, and spin-2 ULBs from the EHT observation of Sgr A$^*$. Our results for non-interacting ULBs are shown in Fig.\,\ref{fig:bound_svt} and listed in Table\,\ref{Tab: table}. The bounds are derived for two spin values ($a_*=0.5$ and $0.94$) of Sgr A$^*$. Following Refs.\,\cite{Do:2019txf, abuter2019geometric, reid2020proper}, we take $M_{\rm BH}\, = \, 4\times10^6M_\odot$. From the figure, we see that the constraints on spin-1 and spin-2 ULBs are stronger than in the spin-0 case. This follows from the fact that spin-1 and spin-2 particles couple more strongly to the BH, so their superradiant instabilities grow faster, yielding stronger bounds in these two cases.
The constraints from BH SR depend on the BH spin. In Fig.\,\ref{fig:bound_svt1}, we show the dependence of the constraints on the BH spin for Sgr A$^*$. For example, when $a_*=0.8$, the bounds on scalar, vector, and tensor ULBs are $\mu_S\in(1.6\times10^{-18},8.3\times10^{-18})$\,eV, $\mu_V\in(3.2\times10^{-19},8.3\times10^{-18})$\,eV, and $\mu_T\in(8.8\times10^{-19},8.3\times10^{-18})$\,eV, respectively. The lower limits of the constrained region depend only mildly on the spin parameter, unlike the upper limit. The smallest values of $a_*$ for which we have a constraint on scalar, vector, and tensor ULBs are 0.22, 0.05, and 0.13, respectively. Below these spin values, the lower bounds exceed the upper bounds, so there is no constrained region. We also note that our constraints overlap with those found in Ref.\,\cite{Unal:2020jiy}, where the spin measurements have some uncertainties associated with them. Future EHT measurements will accurately determine the spin parameter of Sgr A$^*$, which will make these bounds much more robust.
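The closing of the constrained band at small spin can be sketched by scanning $a_*$: the band exists only while the lower edge of Eq.~\eqref{eq:dom_up_lim} lies below the $m=1$ superradiance ceiling $\Omega_H$. The $N_{\rm max}$ form below is an assumption of this sketch, so the crossing spin is indicative only.

```python
# Sketch: smallest a_* at which the scalar band still exists, i.e. the
# Eq. (dom_up_lim) threshold lies below the m = 1 ceiling Omega_H.
# Assumption: N_max ~ Delta a_* (M_BH / M_P)^2.
import math

hbar_eVs, hbarc_eVm = 6.582119569e-16, 1.973269804e-7   # eV s, eV m
GMsun_c2, Msun_eV, M_P = 1.4766e3, 1.116e66, 1.22e28    # m, eV, eV
M_BH = 4.0e6
r_g = M_BH * GMsun_c2 / hbarc_eVm               # eV^-1
tau_BH = 5.0e9 * 3.156e7 / hbar_eVs             # eV^-1

def band_exists(a):
    lnNmax = math.log((1.0 - a) * (M_BH * Msun_eV / M_P)**2)
    mu_low = (48.0 * lnNmax / (a * r_g**8 * tau_BH))**(1.0 / 9.0)
    omega_H = a / (2.0 * r_g * (1.0 + math.sqrt(1.0 - a * a)))
    return mu_low < omega_H

a_min = next(a / 1000 for a in range(1, 1000) if band_exists(a / 1000))
print(f"scalar band opens at a_* ~ {a_min:.2f}")
```

The scan lands near the $a_*\simeq0.22$ value quoted in the text for the scalar case.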
Self-interactions between axions can prohibit superradiant growth around a BH. Therefore, the absence of SR also constrains the axion decay constant\,\cite{Arvanitaki:2014wva, Baryakhtar:2020gao,Unal:2020jiy,Stott:2020gjj,Mehta:2020kwu}. We show this in Fig.\,\ref{fig:selfInteract}. For the SR instability rate, we use the dominant mode given in Eq.\,\eqref{eq:Gammas_dom}. In the shaded region of the parameter space, SR can spin down the BH. Light red and green shaded regions correspond to Sgr A$^*$ with spins $0.5$ and $0.94$, respectively. The grey shaded region is the combination of previous bounds in the literature, taken from Refs.\,\cite{Baryakhtar:2020gao,Mehta:2020kwu,Unal:2020jiy,Mehta:2021pwf}. Interestingly, the latest EHT observation of Sgr A$^*$ allows us to probe new regions of parameter space for $a_*=0.94$. From Fig.\,\ref{fig:selfInteract} we see that if Sgr A$^*$ has a spin of 0.94 and has not been spun down by SR, then this observation probes a new part of the QCD axion parameter space in the mass range $1.73\times10^{-17}$ eV to $2.33\times10^{-17}$ eV. It should be noted that, unlike laboratory experiments, SR spin-down probes smaller couplings. We have presented our results assuming the presence of an axion; however, the scenario is also applicable to scalar particles having a quartic coupling ($\lambda \leftrightarrow \mu_a^2/f_a^2$) \cite{Arvanitaki:2014wva}.
Several works have investigated the possible effects of BH environment on the superradiant cloud growth \cite{Arvanitaki:2014wva, Brito:2015oca, Ng:2019jsx, Ng:2020ruv, Cardoso:2020hca, Takahashi:2021eso, Takahashi:2021yhy}. As material from the accretion disk falls into the BH, the spin of the BH increases. This is opposite to the effect of SR, where the ULB cloud spins down the BH. Similarly, for a BH with a small spin, accretion can spin it up and make it subject to superradiant instability growth by satisfying Eq.\,(\ref{eq:SRcond}). If another BH or neutron star merges with the host BH, it can cause substantial perturbation to the superradiant cloud. To estimate all these effects in the case of Sgr A$^*$, we need to know its exact evolution history. We leave this for future work.
\section{Conclusion}
\label{sec:conclusion}
In this work, using the latest observation of Sgr A$^*$ by the EHT collaboration, we constrain ULB masses, assuming that the BH spin has not been depleted via SR. The presence of a superradiant instability would contradict the observation of old, highly spinning BHs. This in turn puts bounds on purely gravitationally interacting ULBs (see Figs.\,\ref{fig:bound_svt} and \ref{fig:bound_svt1}). We also take into account the possibility that a non-zero self-interaction can hinder superradiant cloud growth; for an ultralight axion, this probes a new region of its decay constant (see Fig.\,\ref{fig:selfInteract}). The two most important parameters in this analysis are the BH mass and spin. Though the mass of Sgr A$^*$ is known to considerable accuracy, its spin has yet to be precisely measured. Other works have used various stellar BHs and SMBHs to constrain ULBs using SR\,\cite{Davoudiasl:2019nlo, Unal:2020jiy, Mehta:2020kwu,Ng:2019jsx, Ng:2020ruv, Stott:2020gjj, Arvanitaki:2014wva, Baryakhtar:2020gao}. It is worth noting that near-future discoveries of more intermediate-mass BHs can constrain new, unexplored regions of parameter space for ULBs \cite{Greene2019IntermediateMassBH, Gais:2022xir, Payne_2022}. Besides, transitions within the superradiant `gravitational atom' cloud give rise to gravitational waves that can potentially be detected by various present and future detectors~\cite{Brito:2015oca, Ng:2019jsx, Ng:2020ruv, Arvanitaki:2014wva, Baryakhtar:2020gao}. With advances on both the theoretical and experimental frontiers, SR can become a very important astrophysical probe for new physics in the near future.
\paragraph*{Acknowledgements\,:} PP and TNM acknowledge the IOE-IISc fellowship program for financial assistance. RL acknowledges discussions with Prateek Sharma and Nirupam Roy. RL acknowledges financial support from the Infosys Foundation, Bangalore, and institute start-up funds.
The first three authors contributed equally to this work.
\bibliographystyle{JHEP}
\bibliography{ref.bib} |
Title:
Analytical Equation of Three-point Correlation Function of Galaxies: to Third Order of Density Perturbation
Abstract: Applying functional differentiation to the density field with Newtonian
gravity, we obtain the static, nonlinear equation of the three-point
correlation function $\zeta$ of galaxies, to the third order density
perturbations. We make the equation closed and perform renormalization of the
mass and the Jeans wavenumber. Using the boundary condition inferred from
observations, we obtain the third order solution $\zeta(r, u, \theta)$ at fixed
$u=2$, which is positive, exhibits a $U$-shape along the angle $\theta$, and
decreases monotonously along the radial $r$ up to the range $r \leq 30\,
h^{-1}$Mpc in our computation. The corresponding reduced $Q(r, u, \theta)$
deviates from 1 of the Gaussian case, has a deeper $U$-shape along $\theta$,
and varies non-monotonously along $r$. The third order solution agrees with the
SDSS data of galaxies, quite close to the previous second order solution,
especially at large scales. This indicates that the equations of correlation
functions with increasing orders of density perturbation provide a stable
description of the nonlinear galaxy system.
https://export.arxiv.org/pdf/2208.13988
Key words: gravitation - hydrodynamics - cosmology: large-scale structure of universe
\section{Introduction}
\label{Intro}
In the study of the distribution of galaxies,
the $n$-point correlation functions ($n$PCF) are important tools
that contain the dynamical and statistical information of
the system of galaxies
\cite{PeeblesGroth1975,GrothPeebles1977,FryPeebles,Peebles1980,Peebles1993,
Fry1983,Fry1984,Fry1994,Bernardeau2002}.
The analytical, closed equations of the 2PCF $\xi$
(also denoted as $G^{(2)}$)
up to the second order of density perturbation have been derived
for the static case
\cite{Zhang2007,zhang2009nonlinear,ZhangChen2015,ZhangChenWu2019},
as well as for the evolution case \cite{ZhangLi2021}.
The associated solutions have simultaneously provided
simple explanations
of several seemingly unrelated features of the observed correlation of galaxies,
such as the power law of the correlation
$\xi \simeq (r_0/r)^{1.7}$ in a range $r=(0.1\sim 10) h^{-1}$Mpc,
the correlation amplitude being proportional to the galaxy mass $\xi \propto m$,
the correlation function of clusters having
a similar form to that of galaxies $\xi_{cc} \simeq (10\sim 20)\, \xi_{gg}$
with a higher amplitude,
the scaling behavior of the cluster correlation amplitude,
the 100Mpc-periodic bumps of the observed $\xi_{gg}(r)$
on very large scales,
the small wiggles in the power spectrum caused by acoustic oscillating waves,
etc.
The statistics of the galaxy distribution
are non-Gaussian due to long-range gravity,
and $G^{(2)}$ alone is insufficient to reveal this non-Gaussianity.
It is necessary to study the 3PCF $G^{(3)}(\mathbf{r, r', r''})$
which statistically describes
the excess probability over random of finding three galaxies located at
the three vertices (${\bf r}$, ${\bf r}'$, ${\bf r}''$)
of a given triangle \cite{FryPeebles,Peebles1980,Fry1984}.
There are a few preliminary analytical studies of $G^{(3)}$.
Ref.\cite{Fry1984} did not give the equation of $G^{(3)}$,
but calculated $G^{(3)}$ to the lowest non-vanishing order in density perturbation,
assuming Gaussian initial conditions with a power-law spectrum.
Similarly, using the BBGKY hierarchy,
Ref.\cite{Inagaki1991} calculated the Fourier transformation
of $G^{(3)}$ perturbatively under Gaussian initial conditions.
For the system of galaxies, however,
the non-Gaussian distribution function is unknown,
so that in general one cannot compute $G^{(3)}$
even if the density perturbation is given as one realization.
Besides, the initial power spectrum of the system of galaxies
is not of a simple power-law form even at the early epoch
when galaxies are newly formed at some high redshifts.
Ref.\cite{Bharadwaj19941996} adopted the BBGKY hierarchy method
and formally wrote down an equation of $G^{(3)}$
for a Newtonian gravity fluid without pressure and vorticity.
But the formal equation contains no pressure or source terms,
and therefore cannot exhibit oscillation or clustering properties.
Moreover, the formal equation is not yet closed
and involves other unknown functions besides $G^{(3)}$.
This situation is similar to that in Ref.\cite{DaviesPeebles1977}
which gave an equation for $G^{(2)}$ involving other unknown functions.
These unclosed equations are hard to use
for the actual system of galaxies,
since appropriate initial conditions are difficult to specify
for several unknown functions.
In Ref.\cite{WuZhang2021} the static equation of $G^{(3)}$
was studied to the second order of density perturbation,
and the solution describes the overall profile
of the observed 3PCF \cite{marin2011}.
In this paper, we work to the third order of density perturbation,
give the renormalization of the mass $m$ and the Jeans wavenumber,
and compare the solution with observations.
Within a small redshift range,
the expansion effect is small,
and the correlations of galaxies can be well described by the static equation.
As demonstrated for the case of 2PCF \cite{ZhangLi2021},
the expansion term in the evolution equation
is about two orders smaller than the pressure and gravity terms,
and the 2PCF increases slowly,
$\xi \propto (1+z)^{-0.2}$ for $z= 0.5 \sim 0.0$.
\section{Equation of 3PCF to third order of density perturbation}
\label{sec:deri3PCF}
The equation of the density field with Newtonian gravity reads
\cite{Zhang2007,zhang2009nonlinear,ZhangChen2015,ZhangChenWu2019,WuZhang2021}
\be \label{psifieldequ}
\nabla^2 \psi-\frac{(\nabla \psi)^2}{\psi}+k_J^2 \psi^2+J \psi^2 =0 ,
\ee
where $\psi(\mathbf{r}) \equiv \rho(\mathbf{r}) / \rho_0$
is the rescaled mass density field with $\rho_0$ being the mean mass density,
and $k_J \equiv (4\pi G \rho_0/c_s^2)^{1/2}$ is the Jeans wavenumber,
$c_s $ is the sound speed,
and $J$ is the external source employed to carry out functional derivatives
conveniently.
The $n$-point correlation function is defined by
$G^{(n)}({\bf r}_1,\cdots,{\bf r}_n)
=\la \delta \psi({\bf r}_1) \cdots \delta \psi({\bf r}_n)\ra
=\frac{1}{\alpha^{n-1}}\frac{\delta^{n-1} \la \psi({\bf r}_1) \ra }
{\delta J({\bf r}_2)\cdots\delta J({\bf r}_n)}\vert_{J=0},
$
where $\delta \psi({\bf r}) = \psi({\bf r}) - \la \psi \ra$
is the fluctuation around the expectation value $\la \psi \ra $,
and $\alpha=c_s^2/4\pi G m$.
(See Refs.\cite{BinneyDowrickFisherNewman1992,Goldenfeld1992,Zustin1996,
Zhang2007,zhang2009nonlinear,ZhangChen2015,ZhangChenWu2019,ZhangLi2021}.)
To derive the equation of $G^{(3)}({\bf r}, {\bf r}', {\bf r}'')$,
we take the ensemble average of Eq.(\ref{psifieldequ}) in the presence of $J$,
and take the functional derivative of this equation
twice with respect to the source $J$, and set $J=0$.
The second term in Eq.(\ref{psifieldequ}) is expanded as
\bl \label{secondterm}
\la \frac{(\nabla \psi)^2}{\psi} \ra
= & \frac{(\nabla \la \psi \ra)^2}{\la \psi \ra}
+ \frac{\la (\nabla \delta \psi)^2 \ra}{\la \psi \ra}
-\frac{\nabla \la \psi \ra}{\la \psi \ra^2} \cdot \la \nabla (\delta \psi)^2 \ra
+\frac{(\nabla \la \psi \ra)^2}{\la \psi \ra^3} \la (\delta \psi)^2 \ra \nonumber \\
&
-\frac{1}{\la \psi \ra^2} \la \delta \psi (\nabla \delta \psi)^2 \ra
+ \frac{2}{3} \frac{1}{\la \psi \ra^3} \nabla \la \psi
\ra \cdot \la \nabla (\delta \psi)^3 \ra
-\frac{(\nabla \la \psi \ra)^2}{\la \psi \ra^4} \la (\delta \psi)^3 \ra
+\cdots \, .
\el
This expansion contains terms cubic in $\delta \psi$,
one order higher than in our previous work \cite{WuZhang2021}.
The calculation yields the following equation:
\ba \label{3PCF}
&& \Big( 1+ \frac{1}{\psi_0^2} G^{(2)}(0) \Big)
\nabla^2 G^{(3)}(\mathbf{r, r', r''})
+\Big( \frac{2}{\psi_0^2} \nabla G^{(2)}(0)
- \frac{2}{\psi_0^3} \nabla G^{(3)}(0) \Big) \cdot
\nabla G^{(3)}(\mathbf{r, r', r''}) \nonumber \\
& &+ \Big( 2 k_J^2 \psi_0
+ \frac{1}{2 \psi_0^2} \nabla^2 G^{(2)}(0)
-\frac{2}{3 \psi_0^3} \nabla^2 G^{(3)}(0)
-\frac{1}{\psi_0^2} k_J^2 G^{(3)}(0) \Big) G^{(3)}(\mathbf{r, r', r''})
\nonumber \\
& &+\frac{1}{2 \psi_0^2} G^{(2)}(\mathbf{r, r''})
\nabla ^2 G^{(3)}(\mathbf{r, r, r'})
+\frac{1}{2 \psi_0^2} G^{(2)}(\mathbf{r, r'})
\nabla^2 G^{(3)}(\mathbf{r, r, r''}) \nonumber \\
& &+\frac{2}{\psi_0^2} \nabla G^{(3)}(\mathbf{r, r, r'})
\cdot \nabla G^{(2)}(\mathbf{r, r''})
+\frac{2}{\psi_0^2}\nabla G^{(2)}(\mathbf{r, r'})
\cdot \nabla G^{(3)}(\mathbf{r, r, r''}) \nonumber \\
& &+\frac{1}{\psi_0^2} G^{(3)}(\mathbf{r, r,r'}) \nabla^2 G^{(2)}(\mathbf{r, r''})
+\frac{1}{\psi_0^2} G^{(3)}(\mathbf{r, r, r''})
\nabla^2 G^{(2)}(\mathbf{r, r'}) \nonumber \\
& & -\frac{1}{2 \psi_0} \nabla ^2 G^{(4)}(\mathbf{r, r, r', r''}) \nonumber \\
& & -\frac{2}{3 \psi_0^3} G^{(2)}(\mathbf{r, r''})\nabla^2 G^{(4)}(\mathbf{r, r, r, r'})
-\frac{2}{3 \psi_0^3} G^{(2)}(\mathbf{r, r'})
\nabla^2 G^{(4)}(\mathbf{r, r, r, r''}) \nonumber \\
& &- \frac{2}{\psi_0^3} \nabla G^{(2)}(\mathbf{r, r''})
\cdot \nabla G^{(4)}(\mathbf{r, r, r, r'})
- \frac{2}{\psi_0^3} \nabla G^{(2)}(\mathbf{r, r'})
\cdot \nabla G^{(4)}(\mathbf{r, r, r,r''})
\nonumber \\
& &-\frac{1}{\psi_0^2} k_J^2 G^{(2)}(\mathbf{r, r''}) G^{(4)}(\mathbf{r, r, r, r'})
-\frac{1}{\psi_0^2} k_J^2 G^{(2)}(\mathbf{r, r'}) G^{(4)}(\mathbf{r, r, r, r''})
\nonumber \\
&&-\frac{2}{\psi_0} \Big( 1 + \frac{3}{\psi_0^2} G^{(2)}(0)
-\frac{3}{\psi_0^3} G^{(3)}(0) \Big) \nabla G^{(2)}(\mathbf{r, r'})
\cdot \nabla G^{(2)}(\mathbf{r, r''}) \nonumber \\
& &+\Big( 2 k_J^2 -\frac{1 }{\psi_0^3} \nabla^2 G^{(2)}(0)
+\frac{2}{\psi_0^4} \nabla^2 G^{(3)}(0)
+\frac{2}{\psi_0^3} k_J^2 G^{(3)}(0) \Big) G^{(2)}(\mathbf{r, r'})
G^{(2)}(\mathbf{r, r''})
\nonumber \\
& &- \Big( \frac{4}{\psi_0^3} \nabla G^{(2)}(0) - \frac{6}{\psi_0^4}
\nabla G^{(3)}(0) \Big) \cdot \Big( \nabla G^{(2)}(\mathbf{r, r'})
G^{(2)}(\mathbf{r, r''})
+ \nabla G^{(2)}(\mathbf{r, r''}) G^{(2)}(\mathbf{r, r'}) \Big)
\nonumber \\
& & -\frac{2}{\psi_0^3} G^{(2)}(0) \Big( G^{(2)}(\mathbf{r, r'})
\nabla^2 G^{(2)}(\mathbf{r, r''})
+ G^{(2)}(\mathbf{r, r''}) \nabla^2 G^{(2)}(\mathbf{r, r'}) \Big)
\nonumber \\
&=&- \frac{\psi_0}{\alpha } \big[ \big( 2 - \frac{1}{\psi_0^3} G^{(3)}(0) \big)
G^{(2)}(\mathbf{r, r'})
+\frac{1}{ \psi_0^2} G^{(4)}(\mathbf{r, r, r, r'}) \big]
\delta^{(3)}(\mathbf{r-r''}) \nonumber \\
&&- \frac{\psi_0}{\alpha } \big[ \big( 2 - \frac{1}{\psi_0^3} G^{(3)}(0) \big)
G^{(2)}(\mathbf{r, r''})
+\frac{1}{ \psi_0^2} G^{(4)}(\mathbf{r, r, r, r''}) \big]
\delta^{(3)}(\mathbf{r-r'}) ,
\ea
where $ G^{(2)}(0) \equiv G^{(2)}(\mathbf{r, r})$,
$G^{(3)}(0) \equiv G^{(3)}(\mathbf{r, r, r})$,
$\psi_0 \equiv \la \psi \ra |_{J=0}=1$,
and $\nabla \equiv \nabla_{\mathbf{r}}$.
We have neglected $G^{(5)}$ as a cutoff of the hierarchy.
Compared with the second-order equation \cite{WuZhang2021},
eq.(\ref{3PCF}) also contains $G^{(2)} G^{(4)}$ terms,
and $ G^{(4)}$ in the delta source.
As expected, eq.(\ref{3PCF}) reduces to that of
the Gaussian approximation \cite{ZhangChenWu2019}
when all the higher order terms,
such as $ G^{(2)} G^{(2)}G^{(2)}$, $ G^{(2)} G^{(3)}$, and $G^{(4)}$, are dropped.
Since eq.(\ref{3PCF}) contains $G^{(4)}$, it is not closed for $G^{(3)}$.
To cut off the hierarchy,
we adopt the Fry-Peebles ansatz \cite{FryPeebles}
\ba \label{frypeeblesansatz}
G^{(4)}(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \mathbf{r}_4)
&=&R_a[ G^{(2)}(\mathbf{r}_1, \mathbf{r}_2)
G^{(2)}(\mathbf{r}_2, \mathbf{r}_3) G^{(2)}(\mathbf{r}_3, \mathbf{r}_4)
+\mathrm{sym. (12 \, \, terms)} ] \nonumber \\
& &+R_b[ G^{(2)}(\mathbf{r}_1, \mathbf{r}_2)
G^{(2)}(\mathbf{r}_1, \mathbf{r}_3) G^{(2)}(\mathbf{r}_1, \mathbf{r}_4)
+\mathrm{sym. (4 \, \, terms)} ],
\ea
where $R_a$ and $R_b$ are dimensionless constants,
and $(3R_a+R_b)/4 \simeq 2.5 \pm 0.5$ as constrained by observations
\cite{Fry1983,Fry1984,Szapudi1992, Meiksin1992,Peebles1993}.
The ansatz \eqref{frypeeblesansatz} leads to
\ba \label{g4rr}
G^{(4)}(\mathbf{r, r, r', r''})
&=&2 R_a G^{(2)}(0) G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r', r''})
+2 R_a G^{(2)}(0) G^{(2)}(\mathbf{r, r''}) G^{(2)}(\mathbf{r', r''}) \nonumber \\
& &+2 (R_a + R_b) G^{(2)}(0) G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r, r''})
+2 R_a G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r, r''})
G^{(2)}(\mathbf{r', r''}) \nonumber \\
& &+2 R_a G^{(2)}(\mathbf{r, r'})^2 G^{(2)}(\mathbf{r, r''})
+2 R_a G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r, r''})^2 \nonumber \\
& &+R_b G^{(2)}(\mathbf{r, r'})^2 G^{(2)}(\mathbf{r', r''})
+R_b G^{(2)}(\mathbf{r, r''})^2 G^{(2)}(\mathbf{r', r''}) ,
\ea
and
\be \label{g4rrr1}
G^{(4)}(\mathbf{r, r, r, r'})
=3 (2 R_a + R_b )G^{(2)}(0)^2 G^{(2)}(\mathbf{r, r'})
+6 R_a G^{(2)}(0) G^{(2)}(\mathbf{r, r'})^2
+R_b G^{(2)}(\mathbf{r, r'})^3.
\ee
Eq.\eqref{3PCF} also contains the squeezed
$G^{(3)}(\mathbf{r, r, r'})=\lim\limits_{{\bf r}''
\rightarrow {\bf r}}G^{(3)}(\mathbf{r, r', r''})$,
in which the three points reduce to two.
In observations and simulations, $G^{(3)}(\mathbf{r, r, r'})$ cannot be resolved
\cite{Gaztanaga2005, McBride2011a, McBride2011b, Yuan2017}.
To avoid the divergence,
we adopt the Groth-Peebles ansatz \cite{GrothPeebles1977,PeeblesGroth1975}
\be \label{Groth-Peebles-ansatz}
G^{(3)}(\mathbf{r, r', r''})=Q[ G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r', r''})
+G^{(2)}(\mathbf{r', r''}) G^{(2)}(\mathbf{r'', r})
+G^{(2)}(\mathbf{r'', r}) G^{(2)}(\mathbf{r, r'})] ,
\ee
where the constant $Q \sim 1$ as constrained by observations.
Then the squeezed 3PCF becomes
\be \label{g3rrr'}
G^{(3)}(\mathbf{r, r, r'})
=2 Q G^{(2)}(0) G^{(2)}(\mathbf{r, r'})+Q G^{(2)}(\mathbf{r, r'})^2,
\ee
which consists of the regular $ G^{(2)}(\mathbf{r, r'})$.
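The coincident-point reductions in Eqs.~\eqref{g4rr}, \eqref{g4rrr1}, and \eqref{g3rrr'} follow from direct enumeration of the symmetrized terms in the two ansatzes: 12 `snake' chains (permutations modulo reversal) for the $R_a$ part, 4 `star' terms (one hub point each) for the $R_b$ part. A small symbolic sketch of that enumeration:

```python
# Symbolic check of the ansatz reductions. Here g00 = G2(0),
# g01 = G2(r, r'), g02 = G2(r, r''), g12 = G2(r', r'').
import itertools
import sympy as sp

Ra, Rb, Q = sp.symbols('R_a R_b Q')
g00, g01, g02, g12 = sp.symbols('g00 g01 g02 g12')
TWO_PT = {frozenset({0}): g00, frozenset({0, 1}): g01,
          frozenset({0, 2}): g02, frozenset({1, 2}): g12}

def G2(x, y):
    return TWO_PT[frozenset({x, y})]

def G4(p):                  # Fry-Peebles ansatz, Eq. (frypeeblesansatz)
    chains = {min(s, s[::-1]) for s in itertools.permutations(range(4))}
    snake = sum(G2(p[i], p[j]) * G2(p[j], p[k]) * G2(p[k], p[l])
                for i, j, k, l in chains)        # 12 terms
    star = 0
    for h in range(4):                           # 4 terms
        o = [x for x in range(4) if x != h]
        star += G2(p[h], p[o[0]]) * G2(p[h], p[o[1]]) * G2(p[h], p[o[2]])
    return Ra * snake + Rb * star

def G3(p):                  # Groth-Peebles ansatz, Eq. (Groth-Peebles-ansatz)
    return Q * (G2(p[0], p[1]) * G2(p[1], p[2]) + G2(p[1], p[2]) * G2(p[2], p[0])
                + G2(p[2], p[0]) * G2(p[0], p[1]))

assert sp.expand(G4((0, 0, 0, 1))) == sp.expand(          # Eq. (g4rrr1)
    3 * (2 * Ra + Rb) * g00**2 * g01 + 6 * Ra * g00 * g01**2 + Rb * g01**3)
assert sp.expand(G4((0, 0, 1, 2))) == sp.expand(          # Eq. (g4rr)
    2 * Ra * g00 * g01 * g12 + 2 * Ra * g00 * g02 * g12
    + 2 * (Ra + Rb) * g00 * g01 * g02 + 2 * Ra * g01 * g02 * g12
    + 2 * Ra * g01**2 * g02 + 2 * Ra * g01 * g02**2
    + Rb * g01**2 * g12 + Rb * g02**2 * g12)
assert sp.expand(G3((0, 0, 1))) == sp.expand(             # Eq. (g3rrr')
    2 * Q * g00 * g01 + Q * g01**2)
print("ansatz reductions verified")
```

All three reductions check out term by term against the quoted expressions.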
Substituting \eqref{g4rr}, \eqref{g4rrr1}, and \eqref{g3rrr'} into
eq.(\ref{3PCF}),
we obtain the closed equation of the 3PCF
\begin{eqnarray}
&& \nabla^2 G^{(3)}(\mathbf{r, r', r''})
+{\bf a}^{(3)}\cdot \nabla G^{(3)}(\mathbf{r, r', r''})
+ 2 g^{(3)}k_J^2 G^{(3)}(\mathbf{r, r', r''})
-\mathcal{A}^{(3)}(\mathbf{r, r', r''}) \nn \\
&=&-\frac{1}{\alpha} \bigg(2
- \big( 1+ b \big) e + 3 (2 R_a + R_b )b^2
+6 R_a bG^{(2)}(\mathbf{r, r''})
+R_b G^{(2)}(\mathbf{r, r''})^2 \bigg)
\delta^{(3)}(\mathbf{r-r'}) G^{(2)}(\mathbf{r, r''})\nonumber \\
&&-\frac{1}{\alpha} \bigg(2
- \big( 1+ b \big) e + 3 (2 R_a + R_b )b^2
+6 R_a b G^{(2)}(\mathbf{r, r'})
+R_b G^{(2)}(\mathbf{r, r'})^2 \bigg)
\delta^{(3)}(\mathbf{r-r''}) G^{(2)}(\mathbf{r, r'}), \nonumber \\
\label{3PCF_02}
\end{eqnarray}
where
\begin{eqnarray}
&& \mathcal{A}^{(3)}(\mathbf{r, r', r''}) \nn \\
& = & \bigg[ \bigg( 2 + 2 \big(3
+ R_a + R_b - 4 Q \big) b + 12 (2 R_a + R_b ) b^2 \bigg)\big( 1+ b \big)^{-1}
-6 e\bigg] \nabla G^{(2)}(\mathbf{r, r'})
\cdot \nabla G^{(2)}(\mathbf{r, r''}) \nonumber \\
& &-\bigg[2 k_J^2 \big( 1+ b \big)^{-1}
-6 k_J^2 (2 R_a + R_b ) b^2 \big( 1+ b \big)^{-1}
+ 2 k_J^2 e
- \bigg( 1 + R_a + R_b- 2 Q + 8 (2 R_a + R_b ) b \bigg) c
\nonumber \\
&& - 2 (2 R_a + R_b ) \big( 1+ b \big) \lvert \mathbf{a}^{(2)} \rvert^2
+ 2 f \bigg] G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r, r''})
+ R_a c G^{(2)}(\mathbf{r', r''}) \big( G^{(2)}(\mathbf{r, r'})
+ G^{(2)}(\mathbf{r, r''}) \big) \nonumber \\
& &+ \bigg[ \bigg( R_a + R_b - 3 Q -1 + 10 (2 R_a + R_b ) b \bigg) \mathbf{a}^{(2)}
+3 \mathbf{a}^{(3)} \bigg] \cdot \bigg( \nabla G^{(2)}(\mathbf{r, r'})
G^{(2)}(\mathbf{r, r''})
+ \nabla G^{(2)}(\mathbf{r, r''}) G^{(2)}(\mathbf{r, r'}) \bigg) \nonumber \\
& & +\bigg( 2 - 3 Q + R_a + R_b
+ 2 (2 R_a + R_b ) b \bigg) \frac{b}{1+ b}
\bigg( \nabla^2 G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r, r''})
+ \nabla^2 G^{(2)}(\mathbf{r, r''}) G^{(2)}(\mathbf{r, r'}) \bigg) \nonumber \\
&&+ R_a G^{(2)}(\mathbf{r', r''})
\bigg[\frac{b}{1+ b} \big( \nabla^2 G^{(2)}(\mathbf{r, r'})
+ \nabla^2 G^{(2)}(\mathbf{r, r''}) \big)
+ \mathbf{a}^{(2)} \cdot
\big( \nabla G^{(2)}(\mathbf{r, r'})
+ \nabla G^{(2)}(\mathbf{r, r''}) \big) \bigg] \nonumber \\
& &+\big( 2 R_a + 8 R_a b - Q \big) \big( 1+ b \big)^{-1}
\bigg[ \bigg( \lvert \nabla G^{(2)}(\mathbf{r, r'}) \rvert^2
+ G^{(2)}(\mathbf{r, r'}) \nabla^2 G^{(2)}(\mathbf{r, r'}) \bigg)
G^{(2)}(\mathbf{r, r''}) \nonumber \\
&&+ \bigg( \lvert \nabla G^{(2)}(\mathbf{r, r''}) \rvert^2
+ G^{(2)}(\mathbf{r, r''}) \nabla^2 G^{(2)}(\mathbf{r, r''}) \bigg)
G^{(2)}(\mathbf{r, r'}) \bigg] \nonumber \\
&&+ \big(R_a - Q\big) \big(1+ b\big)^{-1} \bigg[4 \big( G^{(2)}(\mathbf{r, r'})
+ G^{(2)}(\mathbf{r, r''}) \big) \nabla G^{(2)}(\mathbf{r, r'})
\cdot \nabla G^{(2)}(\mathbf{r, r''}) \nonumber \\
&&+ G^{(2)}(\mathbf{r, r'})^2 \nabla^2 G^{(2)}(\mathbf{r, r''})
+ \nabla^2 G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r, r''})^2 \bigg]
\nonumber \\
&& + 24 R_a b \big(1+ b\big)^{-1} \big( G^{(2)}(\mathbf{r, r'})
+ G^{(2)}(\mathbf{r, r''}) \big) \nabla G^{(2)}(\mathbf{r, r'}) \cdot
\nabla G^{(2)}(\mathbf{r, r''}) \nonumber \\
& &+\bigg( 4 R_a c + 6 k_J^2 R_a b \big( 1+ b \big)^{-1} \bigg)
\big( G^{(2)}(\mathbf{r, r'})
+ G^{(2)}(\mathbf{r, r''}) \big) G^{(2)}(\mathbf{r, r'})
G^{(2)}(\mathbf{r, r''}) \nonumber \\
&&+ R_a \mathbf{a}^{(2)} \cdot \bigg[8 \bigg( \nabla G^{(2)}(\mathbf{r, r'})
+ \nabla G^{(2)}(\mathbf{r, r''}) \bigg) G^{(2)}(\mathbf{r, r'})
G^{(2)}(\mathbf{r, r''}) \nonumber \\
&&+ 6 \bigg( \nabla G^{(2)}(\mathbf{r, r''}) G^{(2)}(\mathbf{r, r'})^2
+ \nabla G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r, r''})^2 \bigg)\bigg] \nonumber \\
& &+ R_a \big(1+b\big)^{-1} G^{(2)}(\mathbf{r', r''})
\bigg( \nabla^2 G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r, r''})
+ G^{(2)}(\mathbf{r, r'}) \nabla^2 G^{(2)}(\mathbf{r, r''}) \nonumber \\
&&+ 2 \nabla G^{(2)}(\mathbf{r, r'}) \cdot \nabla G^{(2)}(\mathbf{r, r''}) \bigg)
+ R_b \big(1+ b\big)^{-1} G^{(2)}(\mathbf{r', r''})
\bigg( \lvert \nabla G^{(2)}(\mathbf{r, r'}) \rvert^2 \nonumber \\
&&+G^{(2)}(\mathbf{r, r'}) \nabla^2 G^{(2)}(\mathbf{r, r'})
+ \lvert \nabla G^{(2)}(\mathbf{r, r''}) \rvert^2
+G^{(2)}(\mathbf{r, r''}) \nabla^2 G^{(2)}(\mathbf{r, r''}) \bigg) \nonumber \\
&&+2 R_b \big(1+ b\big)^{-1} \bigg[ G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r, r''})
\big(G^{(2)}(\mathbf{r, r'}) \nabla^2 G^{(2)}(\mathbf{r, r'})
+ G^{(2)}(\mathbf{r, r''}) \nabla^2 G^{(2)}(\mathbf{r, r''}) \nonumber \\
& & + 2 \lvert \nabla G^{(2)}(\mathbf{r, r'}) \rvert^2
+2 \lvert \nabla G^{(2)}(\mathbf{r, r''}) \rvert^2 \big)
+ 3 \big( G^{(2)}(\mathbf{r, r'})^2+G^{(2)}(\mathbf{r, r''})^2 \big)
\nabla G^{(2)}(\mathbf{r, r'}) \cdot \nabla G^{(2)}(\mathbf{r, r''}) \bigg]
\nonumber \\
&&+ R_b k_J^2 \big(1+ b\big)^{-1} \big( G^{(2)}(\mathbf{r, r'})^2
+ G^{(2)}(\mathbf{r, r''})^2 \big) G^{(2)}(\mathbf{r, r'})
G^{(2)}(\mathbf{r, r''}) .
\label{Adef}
\end{eqnarray}
In eqs.\eqref{3PCF_02} and \eqref{Adef},
${\bf a}^{(3)}\equiv (1+b)^{-1} (\frac{2}{\psi_0^2}
\nabla G^{(2)}(0)- \frac{2}{\psi_0^3} \nabla G^{(3)}(0) )$,
$\mathbf{a}^{(2)} \equiv (1+b)^{-1} \frac{2}{\psi_0^2} \nabla G^{(2)}(0)$,
$b \equiv \frac{1}{\psi_0^2} G^{(2)}(0)$,
$c \equiv \nabla^2 G^{(2)}(0) / [(1+b) \psi_0^2]$,
$e \equiv G^{(3)}(0) / [(1+b) \psi_0^3]$,
$f \equiv \nabla^2 G^{(3)}(0) / [(1+b) \psi_0^3]$,
$g^{(3)} =\frac{1}{1+b} + \frac{c}{4 k_J^2 }
-\frac{f}{3 k_J^2} -\frac{e}{2}$,
and $\alpha$ has absorbed a factor $(1+b)$.
The six constants,
${\bf a}^{(3)}, {\bf a}^{(2)}, b,c,e,f,$
are combinations of six unknowns:
$G^{(2)}(0)$, $G^{(3)}(0)$,
$\nabla G^{(2)}(0)$, $\nabla G^{(3)}(0)$,
$\nabla^2 G^{(2)}(0)$ and $\nabla^2 G^{(3)}(0)$,
which can be formally divergent, and are not directly measurable.
These constants are inevitable in the perturbation approach to
any field theory with interactions,
and are often treated by some renormalization.
In our case,
we shall set $g^{(3)} =1$ as the renormalization
of the Jeans wavenumber $k_J$,
and take $(1+b)m$ as the renormalized mass.
Eq.(\ref{3PCF_02}) is a generalized Poisson equation \cite{Hackbusch2017}
with the two delta sources located at $\bf r'$ and $\bf r''$ respectively,
and the inhomogeneous term $\mathcal{A}^{(3)}$.
Its structure is similar to
the second order equation \cite{WuZhang2021},
but $\mathcal{A}^{(3)}$ has more terms.
It also contains
a convection term $\mathbf{a}^{(3)} \cdot \nabla G^{(3)}(\mathbf{r, r', r''})$
and a gravitating term $g^{(3)} k_J^2 G^{(3)}(\mathbf{r, r', r''})$.
The Jeans wavenumber $k_J$ determines the 3-point correlation length
of the system of galaxies.
$\alpha^{-1} \propto m$
determines the correlation amplitude at small scales,
so that massive galaxies will have a higher amplitude of $G^{(3)}$.
These two properties are analogous to those of 2PCF \cite{Zhang2007,ZhangLi2021}.
When all nine nonlinear parameters are neglected,
eq.(\ref{3PCF_02}) reduces to the Gaussian approximation
as the next order to the mean field theory \cite{Zhang2007,ZhangChenWu2019},
$G^{(2)}( {\mathbf r}_1, { \mathbf r}_2)
\propto \frac{\cos(\sqrt2 k_J \, r_{12})}{ r_{12}}$
with $r_{12} =|{\mathbf r}_1 - { \mathbf r}_2|$, and
$G^{(3)}(\mathbf{r, r', r''})$
given by the Groth-Peebles ansatz \eqref{Groth-Peebles-ansatz} with $Q=1$.
We plot the Gaussian $G^{(3)}$ in Fig.\ref{zetagauss}.
Here the Gaussian approximation of the self-gravity density field
is conceptually not the same as
the Gaussian random process in statistics.
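As a quick numerical check, the Gaussian-approximation $G^{(2)}$ and the corresponding Groth-Peebles $G^{(3)}$ with $Q=1$ can be evaluated directly. The sketch below is ours and uses an arbitrary normalization for the 2PCF, since the proportionality above does not fix it:

```python
import math

K_J = 0.038  # Jeans wavenumber [h/Mpc], the value fitted later in the text

def xi_gauss(r, amp=1.0, k_j=K_J):
    """Gaussian-approximation 2PCF: xi(r) proportional to cos(sqrt(2) k_J r)/r."""
    return amp * math.cos(math.sqrt(2.0) * k_j * r) / r

def zeta_gauss(r12, r23, r31, q=1.0):
    """Groth-Peebles ansatz 3PCF built from the Gaussian 2PCF (Q = 1)."""
    x12, x23, x31 = xi_gauss(r12), xi_gauss(r23), xi_gauss(r31)
    return q * (x12 * x23 + x23 * x31 + x31 * x12)
```

By construction $\zeta$ is symmetric in the three sides, and for an equilateral configuration it reduces to $3\,\xi(r)^2$.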
A ``reduced" 3PCF is often introduced as the following
\cite{Jing2004, Wang2004, Gaztanaga2005,
Nichol2006, McBride2011a, McBride2011b, Guo2013, Guo2016}
\be \label{grothpeeblesansatz}
Q(\mathbf{r, r', r''})
\equiv \frac{G^{(3)}(\mathbf{r, r', r''})}
{G^{(2)}(\mathbf{r, r'}) G^{(2)}(\mathbf{r', r''})
+G^{(2)}(\mathbf{r', r''}) G^{(2)}(\mathbf{r'', r})
+G^{(2)}(\mathbf{r'', r}) G^{(2)}(\mathbf{r, r'})} ,
\ee
which is an extension of the Groth-Peebles ansatz \eqref{Groth-Peebles-ansatz}.
A value $Q(\mathbf{r, r', r''})\ne 1$ is a criterion of non-Gaussianity.
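In practice the reduced 3PCF is estimated by combining measured two- and three-point values; a minimal sketch of this definition (the function name is ours):

```python
def reduced_q(zeta, xi12, xi23, xi31):
    """Reduced 3PCF: Q = zeta / (xi12*xi23 + xi23*xi31 + xi31*xi12)."""
    return zeta / (xi12 * xi23 + xi23 * xi31 + xi31 * xi12)
```

A hierarchical, Gaussian-like configuration with equal $\xi$'s and $\zeta = 3\xi^2$ returns $Q=1$.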
\section{Solution and comparison with observations }
\label{sec:sol3PCF}
In a homogeneous and isotropic Universe,
$G^{(2)}(\mathbf{r, r'})= G^{(2)}(\mathbf{|r- r'|})$.
The 3PCF is parametrized by
$G^{(3)}(\mathbf{r, r', r''}) \equiv \zeta(r, u, \theta )$,
where
$ r_{12} \equiv r$,
$u=\frac{r_{1 3}}{r_{12}}$,
$\theta =\cos^{-1}(\hat{{\bf r}}_{12}\cdot\hat{{\bf r}}_{13})$ \cite{marin2011}.
For convenience,
we take ${ \mathbf r}''=\mathbf{0}$
and put the vector $\mathbf{r}'-\mathbf{r}''={ \mathbf r}'$
along the polar axis (see Fig.1 in Ref.\cite{WuZhang2021}),
and write
$G^{(2)}(\mathbf{r, r''}) =\xi(r),$
$G^{(2)}(\mathbf{r, r'}) =\xi(l)$,
$G^{(2)}(\mathbf{r', r''}) =\xi(r')=\xi(u r)$,
where
$l \equiv \lvert \mathbf{r}- \mathbf{r'} \rvert =\beta r$,
$\beta \equiv \sqrt{1+u^2-2u \cos \theta}$.
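The triangle bookkeeping above is easy to get wrong in code; a small helper (names are ours, for illustration only) that returns the third side $l=\beta r$ from $(r, u, \theta)$:

```python
import math

def triangle_geometry(r, u, theta):
    """Given r = r_12, u = r_13/r_12 and opening angle theta, return
    (r_13, beta, l), where l = |r - r'| = beta * r by the law of cosines."""
    beta = math.sqrt(1.0 + u * u - 2.0 * u * math.cos(theta))
    return u * r, beta, beta * r
```

For $u=1$, $\theta=\pi/3$ the triangle is equilateral ($\beta=1$), while $\theta=\pi$ gives the degenerate collinear case $\beta=1+u$.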
Then eq. (\ref{3PCF_02}) is written in spherical coordinates as
\bl \label{3PCF_02sp}
&\frac{1}{r^2} \frac{\partial}{\partial r}
\big(r^2 \frac{\partial}{\partial r}\zeta(r, u, \theta) \big)
+\frac{1}{r^2 \sin \theta}\frac{\partial}{\partial \theta}
\big(\sin \theta \frac{\partial \zeta(r, u, \theta)}{\partial \theta} \big)
\nn \\
& +a^{(3)}_r \frac{\partial \zeta(r, u, \theta)}{\partial r}
+2 k_J^2 \zeta(r, u, \theta)
- \mathcal{A}^{(3)}(r, u, \theta) \nonumber \\
& = -\frac{b}{\alpha} \Big(2- ( 1+ b ) e
+ 4 (3 R_a + R_b )b^2 \Big)
\Big( \frac{2}{\lvert 1-u \rvert } \frac{\delta(\theta)}{\sin \theta} + 1 \Big)
\frac{\delta(r)}{4 \pi r^2},
\el
where $a^{(3)}_r$ is the $r-$component of $\mathbf{a}^{(3)}$, and
\begin{eqnarray} \nonumber
&&\mathcal{A}^{(3)}(r,u,\theta) \nonumber \\
&=&\bigg[ \bigg( 2 + 2 \big(3
+ R_a + R_b - 4 Q \big) b + 12 (2 R_a + R_b ) b^2 \bigg) \big( 1+ b \big)^{-1}
-6 e\bigg] \beta \xi'(l) \xi'(r) \nonumber \\
& &-\bigg[2 k_J^2 \big( 1+ b \big)^{-1}
- \bigg( 1 + R_a + R_b- 2 Q + 8 (2 R_a + R_b ) b \bigg) c
- 2 (2 R_a + R_b ) \big( 1+ b \big) (a^{(2)}_r)^2 \nonumber \\
&& -6 k_J^2 (2 R_a + R_b ) b^2 \big( 1+ b \big)^{-1}
+ 2 f + 2 k_J^2 e \bigg] \xi(l) \xi(r)
\nonumber \\
& &+ \bigg[ \bigg( R_a + R_b - 3 Q -1 + 10 (2 R_a + R_b ) b \bigg) a^{(2)}_r
+3 a^{(3)}_r \bigg] \bigg(\beta \xi'(l) \xi(r) + \xi(l) \xi'(r)\bigg)\nonumber \\
& & +\bigg( 2 - 3 Q + R_a + R_b
+ 2 (2 R_a + R_b ) b \bigg) \frac{b}{1+ b}
\bigg[ \big( \frac{2}{r} \xi'(r) + \xi''(r) \big)\xi(l) \nonumber \\
& & +\bigg( \big( \frac{2}{r} \beta +\frac{2 u}{\beta r}\cos \theta
-\frac{u^2 \sin^2 \theta }{\beta^3 r} \big) \xi'(l)
+\big( \beta^2 + \frac{u^2}{\beta^2} \sin^2 \theta \big) \xi''(l) \bigg) \xi(r)
\bigg] \nonumber \\
& &+\big( 2 R_a + 8 R_a b - Q \big) \big( 1+ b \big)^{-1}
\bigg\{ \big(\beta^2 + \frac{u^2 \sin^2 \theta}{\beta^2}\big)\xi'(l)^2 \xi(r)
+ \xi(l) \xi'(r)^2 \nonumber \\
&&+\bigg[ \big( \frac{2}{r} \beta +\frac{2 u}{\beta r}\cos \theta
-\frac{u^2 \sin^2 \theta }{\beta^3 r} \big) \xi'(l)
+\big( \beta^2 + \frac{u^2}{\beta^2} \sin^2 \theta \big) \xi''(l)
+ \frac{2}{r} \xi'(r) + \xi''(r) \bigg]
\xi(l) \xi(r) \bigg\} \nonumber \\
&&+\big( R_a - Q \big) \big( 1+ b \big)^{-1}
\bigg[ \bigg(\big( \frac{2}{r} \beta +\frac{2 u}{\beta r}\cos \theta
-\frac{u^2 \sin^2 \theta }{\beta^3 r} \big) \xi'(l)
+\big( \beta^2 + \frac{u^2}{\beta^2} \sin^2 \theta \big)
\xi''(l)\bigg) \xi(r)^2 \nonumber \\
&&+\xi(l)^2 \big( \frac{2}{r} \xi'(r) + \xi''(r) \big)
+4 \big( \xi(l) + \xi(r) \big) \beta \xi'(l) \xi'(r) \bigg]
\nonumber \\
&& + R_a b \big( 1+ b \big)^{-1} \bigg[ 24\big( \xi(l)
+ \xi(r) \big) \beta \xi'(l) \xi'(r)
+\xi(u \, r) \bigg(\big( \frac{2}{r} \beta +\frac{2 u}{\beta r}\cos \theta
-\frac{u^2 \sin^2 \theta }{\beta^3 r} \big) \xi'(l) \nonumber \\
&& +\big( \beta^2 + \frac{u^2}{\beta^2} \sin^2 \theta \big) \xi''(l)
+ \frac{2}{r} \xi'(r) + \xi''(r) \bigg) \bigg]
+\bigg( 4 R_a c + 6 k_J^2 R_a b / \big( 1+ b \big) \bigg) \big( \xi(l)
+ \xi(r) \big) \xi(l) \xi(r) \nonumber \\
&&+ R_a a^{(2)}_r \bigg[8 \big( \beta \xi'(l) + \xi'(r) \big) \xi(l) \xi(r)
+ 6 \big( \xi'(r) \xi(l)^2
+ \beta \xi'(l) \xi(r)^2 \big)
+ \xi(u \, r) \big( \beta \xi'(l) + \xi'(r) \big)\bigg] \nonumber \\
&&+ \frac{R_b}{ 1+ b } \bigg\{ \xi(u \, r)
\bigg[ \big(\beta^2 + \frac{u^2 \sin^2 \theta}{\beta^2}\big)\xi'(l)^2
+ \xi'(r)^2
+ \big( \frac{2}{r} \xi'(r) + \xi''(r) \big) \xi(r) \nonumber \\
& & + \bigg( \big( \frac{2}{r} \beta +\frac{2 u}{\beta r}\cos \theta
-\frac{u^2 \sin^2 \theta }{\beta^3 r} \big) \xi'(l)
+\big( \beta^2 + \frac{u^2}{\beta^2} \sin^2 \theta \big)
\xi''(l) \bigg) \xi(l) \bigg]\nonumber \\
&&+6\big( \xi(l)^2+\xi(r)^2 \big) \beta \xi'(l) \xi'(r)
+2 \xi(l) \xi(r) \bigg[ 2\bigg( \big(\beta^2
+ \frac{u^2 \sin^2 \theta}{\beta^2}\big)\xi'(l)^2
+ \xi'(r)^2 \bigg) \nonumber \\
&& +\bigg( \big( \frac{2}{r} \beta +\frac{2 u}{\beta r}\cos \theta
-\frac{u^2 \sin^2 \theta }{\beta^3 r} \big) \xi'(l)
+\big( \beta^2 + \frac{u^2}{\beta^2} \sin^2 \theta \big)
\xi''(l) \bigg) \xi(l) + \big(\frac{2}{r} \xi'(r) +
\xi''(r) \big) \xi(r) \bigg] \bigg\}\nonumber \\
&&+\frac{R_b k_J^2}{ 1+ b } \big( \xi(l)^2 + \xi(r)^2 \big) \xi(l) \xi(r)
+ R_a c \xi(u \, r) \big( \xi(l) + \xi(r) \big) \nonumber \\
& &+\frac{R_a}{ 1+ b } \xi(u \, r)
\bigg[ \bigg(\big( \frac{2}{r} \beta +\frac{2 u}{\beta r}\cos \theta
-\frac{u^2 \sin^2 \theta }{\beta^3 r} \big) \xi'(l)
+\big( \beta^2 + \frac{u^2}{\beta^2} \sin^2 \theta \big)
\xi''(l)\bigg) \xi(r) \nonumber \\
&&+ \xi(l) \big( \frac{2}{r} \xi'(r) + \xi''(r) \big)
+ 2\beta \xi'(l) \xi'(r) \bigg] .
\label{Ainsphericalcoord}
\end{eqnarray}
In observations and simulations,
the ratio $u=2$ is often taken,
so that $\zeta(r, u, \theta)$ will have only two variables.
The 2PCF $\xi(r)$ is involved in eq.\eqref{3PCF_02sp}.
Although $\xi(r)$ has been solved to various nonlinear orders
\cite{zhang2009nonlinear,ZhangChen2015,ZhangChenWu2019},
we shall use the observed $\xi(r)$ \cite{marin2011}
for a coherent comparison with observation.
An appropriate boundary condition is needed to solve eq.\eqref{3PCF_02sp}.
Ref.\cite{marin2011} has observed the redshift-space $Q(s, u, \theta)$
of ``DR7-Dim" (61,899 galaxies in the range $0.16 \leq z \leq 0.36$)
from SDSS in the domain
$s \in [7.0 , 30.0]\, h^{-1} {\rm Mpc}$,
$\theta \in [0.1 , 3.04]$
at five respective values $s=7,10,15,20,30\, h^{-1}$Mpc at a fixed $u=2$,
where $s$ is the redshift distance.
(See Fig. 6 and Fig. 7 of Ref.\cite{marin2011}.)
$s$ may differ from the real distance $r$ due to peculiar velocities.
We shall neglect this error and take $r=s$ in our computation.
From this data,
we get the fitted $Q(r, u, \theta)$,
as well as $\zeta(r, u, \theta)$ via the relation \eqref{grothpeeblesansatz},
on the boundary of the domain,
which is taken as the boundary condition of eq.\eqref{3PCF_02sp}.
The effect of the delta source is absorbed
by the boundary condition \cite{ZhangLi2021,Hackbusch2017}.
We solve eq.(\ref{3PCF_02sp}) numerically by the finite element method,
and obtain the solution $\zeta(r, u, \theta)$
and the reduced $Q(r, u, \theta)$ defined by (\ref{grothpeeblesansatz}).
To match the observational data \cite{marin2011},
using the $\chi^2$ test,
the parameters are chosen as the following:
$a^{(3)}_r \simeq -4.4\, h$Mpc$^{-1}$,
$a^{(2)}_r \simeq 0.35\, h$Mpc$^{-1}$,
$b \simeq 0.73$,
$c \simeq 0.03\, h^2$Mpc$^{-2}$,
$e \simeq -6.9$,
$Q \simeq 1.7$,
$R_a \simeq 4.1$,
$R_b \simeq -0.47$,
$k_J \simeq 0.038 \, h$Mpc$^{-1}$.
In particular,
the values of $Q$, $R_a$ and $R_b$ of the ansatz
are consistent with those inferred from other surveys \cite{Peebles1993}.
Besides, the chosen $k_J$ is also consistent with
the value used in our previous work on
the 2PCF \cite{Zhang2007,zhang2009nonlinear,ZhangChen2015,ZhangChenWu2019}.
The parameter $\alpha$ has not been accurately fixed,
because the delta source has been absorbed into the boundary condition
in our numerical solution \cite{ZhangLi2021,Hackbusch2017}.
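The actual solution is obtained by the finite element method in the two variables $(r,\theta)$. As a much-simplified illustration (ours, not the code used for the results here), the purely radial part of the operator, $\zeta'' + \tfrac{2}{r}\zeta' + 2k_J^2\zeta = S(r)$, can be solved with second-order finite differences and the Thomas algorithm:

```python
import math

def solve_radial(r0, r1, n, k_j, source, bc0, bc1):
    """Second-order finite-difference solve of
    zeta'' + (2/r) zeta' + 2 k_J^2 zeta = S(r)
    on [r0, r1] with Dirichlet values bc0, bc1, via the Thomas algorithm."""
    h = (r1 - r0) / n
    r = [r0 + i * h for i in range(n + 1)]
    # tridiagonal rows for interior nodes i = 1 .. n-1
    a = [1.0 / h**2 - 1.0 / (r[i] * h) for i in range(1, n)]  # sub-diagonal
    b = [-2.0 / h**2 + 2.0 * k_j**2 for _ in range(1, n)]     # diagonal
    c = [1.0 / h**2 + 1.0 / (r[i] * h) for i in range(1, n)]  # super-diagonal
    d = [source(r[i]) for i in range(1, n)]
    d[0] -= a[0] * bc0
    d[-1] -= c[-1] * bc1
    for i in range(1, n - 1):          # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    zeta = [0.0] * (n - 1)             # back substitution
    zeta[-1] = d[-1] / b[-1]
    for i in range(n - 3, -1, -1):
        zeta[i] = (d[i] - c[i] * zeta[i + 1]) / b[i]
    return r, [bc0] + zeta + [bc1]
```

With $S=0$ and Dirichlet values taken from the exact spherical solution $\cos(\sqrt{2}\,k_J r)/r$, the scheme reproduces that solution to better than $10^{-3}$ on a few-hundred-point grid.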
Fig.\ref{fig3ab} (a) shows the solution $\zeta(r, u, \theta)$
at fixed $u=2$ as a function of ($r,\theta$).
It is seen that $\zeta(r, u, \theta)>0$ in the range of computation,
and exhibits a shallow $U$-shape along the $\theta-$direction.
This feature is consistent with observations \cite{Guo2013,Guo2016}.
Along the $r-$direction, $\zeta(r, u, \theta)$ decreases monotonically
up to $30 h^{-1}$Mpc in the range.
The highest values of $\zeta(r, u, \theta)$ occur at small $r$,
just as $\xi(r)$ does.
This is also expected since the correlations are stronger
at small distance due to gravity.
Fig.\ref{fig3ab} (b) shows the nonlinear reduced $Q(r, u, \theta)\ne 1$,
deviating from the Gaussianity $Q=1$.
$Q(r, u, \theta)$ exhibits a deeper $U$-shape along $\theta$,
and varies non-monotonically along $r$.
The variation along $r$ is comparatively weaker than the variation along $\theta$.
These features are consistent
with what have been observed \cite{marin2011,McBride2011a,McBride2011b}.
To compare with the observational data \cite{marin2011},
Fig.\ref{qcompare} plots the solution $Q(r, u, \theta)$
as a function of $\theta$
at $r=10, 15 ,20 \, h^{-1} {\rm Mpc}$, respectively.
It is seen that
$Q(r, u, \theta)$ has a $U$-shape along $\theta=[0,3]$,
agreeing with the data.
Overall, the equation of the 3PCF gives a reasonable account
of the data of galaxies with redshifts
$0.16 \leq z \leq 0.36$.
For comparison, in Fig.\ref{qcompare} we also plot the second order solution
(dashed lines).
Note that we have renormalized the parameters of
the third order solution in this paper,
so the number of parameters also differs from that
of the second order solution.
It is clear that the third order solution fits
the data ($\chi^2=470.9$)
better than the second order one ($\chi^2=777.08$),
especially at small scales, and the two
solutions are close at large scales.
\section{Conclusions and discussions}
Based on the density field equation \eqref{psifieldequ},
we have derived the equation \eqref{3PCF}
of the 3-point correlation function $G^{(3)}$ of galaxies,
up to the third order density fluctuation.
This work is a continuation of
the previous Gaussian approximation \cite{ZhangChenWu2019},
and the second order work \cite{WuZhang2021}.
By neglecting the 5PCF,
adopting the Fry-Peebles ansatz to deal with the 4PCF,
and the Groth-Peebles ansatz to deal with the squeezed 3PCF, respectively,
we have made eq.\eqref{3PCF} into
the closed equation \eqref{3PCF_02}.
Aside from the three parameters of the ansatz,
there are six nonlinearity parameters that occur inevitably
in the perturbation treatment of a gravitating system.
We carry out renormalization of the Jeans wavenumber and the mass.
Although the terms $(\delta\psi)^3$ are included,
nonlinear terms such as $(G^{(3)})^2$ do not appear
in eq.\eqref{3PCF_02} of $G^{(3)}$,
and higher order terms than $(\delta\psi)^3$ are needed
for $(G^{(3)})^2$ to appear.
We apply the equation to the system of galaxies,
using the boundary condition
inferred from SDSS DR7 \cite{marin2011} for a consistent comparison.
The solution $\zeta(r, u, \theta)$ exhibits a shallow $U$-shape along $\theta$,
and decreases monotonically along $r$.
The reduced $Q(r, u, \theta)$
deviates from $1$ of the Gaussian case,
and exhibits a $U$-shape along $\theta$.
Along $r$, however, $Q(r, u, \theta)$ varies non-monotonically,
scattering around $1$.
It is interesting that the third order solution in this paper
is quite close to the second order solution \cite{WuZhang2021},
especially at large scales.
This indicates that
the density field theory
with increasing orders of perturbation
provides a rather stable description of the nonlinear galaxy system.
Besides,
from the study on 3PCF and the previous work on 2PCF,
it is seen that the static equations of correlation functions
present a reasonable analytical account of
the galaxy distribution at small redshifts.
Future work will include application to new observational data,
and extension to the case of an expanding Universe.
\section*{Acknowledgements}
Y. Zhang is supported by NSFC Grants No. 11675165, 11633001, 11961131007,
and in part by the National Key R\&D Program of China (2021YFC2203100).
|
Title:
RELICS: Strong Lens Model of SMACSJ0723.3-7327 |
Abstract: We present the details of a strong lens model of SMACS J0723.3-7327, which
was made public as part of the data and high level science products (HLSP)
release of the RELICS HST treasury program (Reionization Lensing Cluster
Survey; GO-14096, PI: Coe). The model products were made available on the
Mikulski Archive for Space Telescopes (MAST) via 10.17909/T9SP45 in 2017. Here,
we provide the list of constraints that were used in the HST-based RELICS lens
model, as well as other information related to our modeling choices, which were
not published with the data and HLSP release. This model was computed with
Lenstool, used multiple images of 8 sources, with no spectroscopic redshifts.
The image plane RMS was 0".58.
| https://export.arxiv.org/pdf/2208.08483 |
\title{RELICS: Strong Lens Model of SMACSJ0723.3-7327 \footnote{Based on observations made with the NASA/ESA {\it Hubble Space Telescope}, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs GO-12166, GO-12884, GO-14096}}
\correspondingauthor{Keren Sharon}
\email{[email protected]}
\author[0000-0002-7559-0864]{Keren Sharon}
\affiliation{Department of Astronomy, University of Michigan, 1085 S. University Ave, Ann Arbor, MI 48109, USA}
\author[0000-0002-8739-3163]{Mandy C. Chen}
\affiliation{Department of Astronomy and Astrophysics, The University of Chicago, Chicago, IL 60637, USA}
\author[0000-0003-3266-2001]{Guillaume Mahler}
\affiliation{Department of Astronomy, University of Michigan, 1085 S. University Ave, Ann Arbor, MI 48109, USA}
\author[0000-0001-7410-7669]{Dan Coe}
\affiliation{Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA}
\collaboration{20}{RELICS: Reionization Lensing Cluster Survey}
\section{Introduction}
The strong lensing cluster \clustername\ was observed as one of the \JWST\ Early Release Observations (ERO) targets, and was revealed to the public in July 2022 \citep{Pontoppidan2022}.
Following the ERO release, the field of \clustername\ was the subject of a number of publications taking advantage of JWST's capabilities, as well as the depth added by the strong lensing magnification boost, to study the background Universe, identify high-$z$ galaxies, and analyze the foreground cluster.
In particular, the ERO prompted computation of strong lens models based on the new data \citep{mahler2022,Pascale2022,Caminha2022}.
Preceding \JWST, \clustername\ was observed by the \HSTlong\ (\hst), and its lensing signal analyzed as part of the \relicslong\ Treasury program (\RELICS; PI: Coe).
The RELICS collaboration made all high-level data products available to the community, including reduced images, catalogs, and lens models, via the Mikulski Archive for Space Telescopes (MAST) Portal at
\dataset[10.17909/T9SP45]{\doi{10.17909/T9SP45}}\footnote{\url{https://archive.stsci.edu/prepds/relics/}}. Product-specific README files were provided, and a full description of the catalogs was given in \citet{coe2019}. However, for a large fraction of the 41 RELICS clusters, the details of the lensing analysis (multiple images, spectroscopic redshifts, and modeling choices) were not published.
Here, we provide the community with the details of the HST-based lens model of \clustername, which was released as part of the RELICS program in 2017, in order to provide context for the public model outputs and facilitate comparisons to the new \JWST-based models.
This modeling analysis assumed a flat cosmology with $\Omega_{\Lambda} = 0.7$, $\Omega_{m}=0.3$, and $H_0 = 70$ \kms\ Mpc$^{-1}$. In this cosmology, $1''=5.27$ kpc at the cluster redshift, $z=$\zcluster, which we rounded to $z=0.39$.
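The quoted angular scale follows from the assumed cosmology; the short sketch below (ours, not RELICS pipeline code) reproduces it with a trapezoidal integration of the comoving distance:

```python
import math

C_KMS = 299792.458  # speed of light [km/s]

def kpc_per_arcsec(z, h0=70.0, om=0.3, ol=0.7, steps=1000):
    """Proper transverse scale [kpc/arcsec] at redshift z in flat LCDM."""
    e = lambda zz: math.sqrt(om * (1.0 + zz) ** 3 + ol)
    dz = z / steps
    integral = sum((1.0 / e(i * dz) + 1.0 / e((i + 1) * dz)) * 0.5 * dz
                   for i in range(steps))
    d_c = (C_KMS / h0) * integral       # comoving distance [Mpc]
    d_a = d_c / (1.0 + z)               # angular diameter distance [Mpc]
    return d_a * 1000.0 * math.pi / (180.0 * 3600.0)
```

For $z=0.39$ this returns $\approx 5.29$ kpc per arcsec, close to the quoted $1''=5.27$ kpc at the exact cluster redshift.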
\section{Data}
This work used \hst\ mosaics that were produced by the RELICS collaboration, combining archival imaging obtained with ACS/F606W (GO-12166) and ACS/F814W (GO-12884), with the new RELICS imaging taken with ACS/F435W, F606W, F814W, and WFC3IR/F105W, F125W, F140W, F160W (GO-14096). The depth in each filter varies between 0.5-2 orbits. For details of the observations, data reduction, photometric and photo-$z$ catalogs we refer the reader to \citet{coe2019}.
\section{Lensing Analysis}
\subsection{Multiple Images}
Composite color images of the field were visually inspected to identify instances of multiple images of the same background lensed sources, to be used as lensing constraints. As is common procedure \citep[e.g.,][]{Sharon2020} we proceeded to build the lens model iteratively, starting with the most obvious and secure identification, and using preliminary iterations of the lens model to identify new constraints.
We identified 8 sets of multiply-imaged sources (\autoref{tab:arcstable},\autoref{fig:model}), belonging to 7 unique sources (the sources labeled 3 and 5 may be associated with the same galaxy).
At the time of the computation of the model, none of the lensed sources had spectroscopic redshifts. We obtained photometric redshifts based on the extensive \hst\ imaging, using the BPZ \citep{bpz2000} algorithm \citep[see][]{Cerny2018, coe2019, Salmon2020}.
\begin{deluxetable}{lll}
\tablecolumns{3}
\tablecaption{List of lensing constraints \label{tab:arcstable}}
\tablehead{\colhead{ID} &
\colhead{R.A. [deg]} &
\colhead{Decl. [deg]} \\[-8pt]
\colhead{} &
\colhead{J2000} &
\colhead{J2000} }
\startdata
\hline
Source 1: & &\\
1.1 & 110.80504 & $-$73.454536 \\
1.2 & 110.80681 & $-$73.458358 \\
1.3 & 110.81306 & $-$73.448694 \\
Source 2: & &\\
2.1 & 110.84047 & $-$73.450975\\
2.2 & 110.84270 & $-$73.454747\\
2.3 & 110.83876 & $-$73.458692\\
Source 3: & &\\
3.1 & 110.82491 & $-$73.459581\\
3.2 & 110.83152 & $-$73.455153\\
3.3 & 110.83019 & $-$73.448453\\
3.4 & 110.82297 & $-$73.454786\\
Source 4: & &\\
4.1 & 110.82364 & $-$73.451800\\
4.2 & 110.82210 & $-$73.452686\\
4.3 & 110.82067 & $-$73.460128\\
Source 5: & &\\
5.1 & 110.82522 & $-$73.459686\\
5.2 & 110.83176 & $-$73.455108\\
5.3 & 110.83027 & $-$73.448542\\
5.4 & 110.82316 & $-$73.454736\\
Source 6: & &\\
6.1 & 110.83212 & $-$73.454375\\
6.2 & 110.82170 & $-$73.454108\\
6.3 & 110.82948 & $-$73.448914\\
6.4 & 110.82289 & $-$73.461631\\
Source 7: & &\\
7.1 & 110.83560 & $-$73.451735\\
7.2 & 110.83652 & $-$73.453005\\
Source 8: & &\\
8.1 & 110.83842 & $-$73.451014\\
8.3 & 110.83610 & $-$73.458784\\
\enddata
\tablecomments{The coordinates match the WCS solution of the RELICS data reduction version \texttt{v1}, which is available on MAST.
}
\end{deluxetable}
\subsection{Lens model components}
We modeled the cluster using the public software \lenstool\ \citep{jullo07}. This algorithm uses MCMC formalism to explore the parameter space and identify the best-fit set of parameters, which minimize the scatter between the observed and predicted lensing evidence. The cluster component was represented by a parametric pseudo-isothermal mass distribution halo (PIEMD, a.k.a dPIE; \citealt{eliasdottir07}), with parameters $x$, $y$, $e$, $\theta$, $r_{core}$, $r_{cut}$, and $\sigma$. All the parameters were allowed to vary within broad priors, except for $r_{cut}=1000$ kpc. Another PIEMD halo represented the brightest cluster galaxy (BCG), with $x$, $y$, $e$ and $\theta$ fixed to the observed values as measured with Source Extractor \citep{Bertin1996} in the F814W image, and the others allowed to vary. We selected cluster-member galaxies based on their color in a F606W-F814W vs. F814W diagram using the red sequence technique \citep{gladdersyee2000}. To measure magnitudes and colors, we used Source Extractor \citep{Bertin1996} in dual-image mode with the F814W band used for reference and photometry. Stars were identified and removed from the catalog based on their location in a \texttt{MU\_MAX} vs \texttt{MAG\_AUTO} diagram. Galaxies that were selected as cluster members were included as PIEMD halos in the model. The positional parameters of the galaxies were fixed to their catalog values, whereas $r_{core}$, $r_{cut}$ and $\sigma$ were determined by \lenstool\ based on the scaling relations that are described in \citet{limousin05}, with pivot parameters \texttt{mag0}=19.12 mag and \texttt{corekpc}=0.15 kpc; the scaling relation parameters \texttt{sigma} and \texttt{cutkpc} were optimized by the model. The model used a total of 145 halos, of which two were individually optimized.
We fixed the redshift of Source~1 to its photometric redshift, $\zphot=2.2$. The redshifts of all the other lensed sources were entered as free parameters with broad flat priors ($0.5<z<5$).
The broad priors were used in order to not be affected by possible catastrophic photo-$z$ outliers. Some photo-$z$ measurements exhibited large uncertainties, or gave inconsistent results for multiple images of the same source. This was in part due to contamination from other sources (e.g., images of source 3/5 are projected near a bright star).
Other than fixing one redshift to the most secure photo-$z$, we only used the photo-$z$ information statistically, to check that overall the lens model predictions are consistent with the photometric redshifts (see \citealt{Cerny2018} for a description of this approach).
The diagnostic plot of model-$z$ vs. photo-$z$ is shown in the bottom-right panel of \autoref{fig:model}.
Including the free redshifts, this model has a total of 18 free parameters (6 for the cluster halo, 3 for the BCG halo, 2 for cluster member galaxies scaling, and 7 free redshifts), and 34 constraints (25 images from 8 sources).
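The constraint count quoted here follows the standard strong-lensing bookkeeping: each image supplies two observed coordinates, while each source carries two unknown source-plane coordinates. A sketch of that arithmetic (the image multiplicities are read off \autoref{tab:arcstable}; the function name is ours):

```python
def lensing_budget(images_per_source, n_free_params):
    """Constraints = 2 * (N_images - N_sources); also return the
    number of degrees of freedom left after the free parameters."""
    n_img = sum(images_per_source)
    n_src = len(images_per_source)
    constraints = 2 * (n_img - n_src)
    return constraints, constraints - n_free_params

# image multiplicities of Sources 1-8 as listed in the table
images = [3, 3, 4, 3, 4, 4, 2, 2]
```

With 25 images of 8 sources and 18 free parameters this gives 34 constraints, matching the text.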
\section{Results}
The observed images of lensed sources were well-reproduced by the lens model, with a typical image plane scatter per system of $<0\farcs5$. A somewhat higher image plane scatter for System 2 drove the overall image-plane RMS to $0\farcs58$.
\autoref{fig:model} shows the lensing constraints and the best-fit critical curve for a source at $z=3$, overplotted on the \hst\ imaging.
The bottom-right panel of \autoref{fig:model} shows a comparison of the model-predicted redshift and the photometric redshifts measured by RELICS. As noted above, this plot was used as a diagnostic tool to assess the model, in lieu of spectroscopic redshifts. We found that overall the model did not appear to systematically over-predict or under-predict the photometric redshifts.
Spectroscopic redshifts were not available at the time this model was computed, but were recently published by several authors \citep{mahler2022,Golubchik2022}.
We compare the spectroscopic redshifts from \citet{mahler2022} to the posterior distributions of the free redshifts of systems 2--8 (the redshift of System 1 was fixed to its photo-$z$, $\zphot=2.2$, which does not have a spectroscopic redshift). The best-fit redshift of each system is indicated in orange, and the spectroscopic redshift in blue. We find that the model correctly predicted the redshift(s) of system(s) 3/5, but underpredicted the redshifts of systems 2, 4, and 8 by $\Delta z / (1+z) = 0.065, 0.027, 0.062$, respectively.
RELICS made the following lensing products publicly available through MAST: shear ($\gamma$), convergence ($\kappa$), lensing potential ($\psi$), deflection in the $x$ and $y$ direction ($\alpha_x$,$\alpha_y$), and magnification maps ($\mu(z)$) for several redshifts. With the exception of magnification, the files are scaled to effectively $D_{ls}/D_{s}=1$, where $D_{ls}$, $D_{s}$ are the angular diameter distances from the lens to the source and from the observer to the source, respectively. They can be re-scaled to any source redshift by multiplying by the relevant $D_{ls}/D_{s}$. We also made available a set of 100 files of each of the above, sampled from the MCMC chain, which can be used to estimate the statistical uncertainties related to the lens modeling process.
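The $D_{ls}/D_s$ rescaling can be sketched as follows (our illustration, assuming the flat cosmology of this model; in a flat universe the $(1+z_s)$ factors cancel and the ratio reduces to $1 - D_C(z_l)/D_C(z_s)$):

```python
import math

C_KMS = 299792.458  # speed of light [km/s]

def comoving_mpc(z, h0=70.0, om=0.3, ol=0.7, steps=1000):
    """Comoving distance [Mpc] in flat LCDM via trapezoidal integration."""
    e = lambda zz: math.sqrt(om * (1.0 + zz) ** 3 + ol)
    dz = z / steps
    return (C_KMS / h0) * sum(
        (1.0 / e(i * dz) + 1.0 / e((i + 1) * dz)) * 0.5 * dz
        for i in range(steps))

def dls_over_ds(z_lens, z_src):
    """Factor by which the released D_ls/D_s = 1 maps are multiplied
    to rescale them to a source at z_src (flat universe)."""
    return 1.0 - comoving_mpc(z_lens) / comoving_mpc(z_src)
```

The factor grows monotonically with source redshift and stays below unity, so deflection and shear predictions saturate for very distant sources.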
This paper complements the public models and provides context for current works that aim to take full advantage of the new \JWST\ data, and have either already used the public models, or wish to
compare a \JWST-based lensing analysis to the pre-\JWST\ models.
\acknowledgements
Based on observations with the NASA/ESA \HST, obtained at STScI, which is operated by AURA under NASA contract NAS5-26555, associated with the RELICS Treasury Program GO-14096, and with programs GO-12166 and GO-12884.
The data were obtained from Mikulski Archive for Space Telescopes (MAST).
Support for GO-14096 was provided through a grant from the STScI under NASA contract NAS5-26555.
\vspace{5mm}
\facilities{HST(ACS), HST(WFC3), HST(MAST)}
\software{\lenstool\ \citep{jullo07},
Source Extractor \citep{Bertin1996},
MATLAB Astronomy and Astrophysics Toolbox \citep[MAAT;][]{Ofek2014}
}
\bibliography{smacs}{}
\bibliographystyle{aasjournal}
|
Title:
Stability of Hairy Black Holes in Shift-Symmetric Scalar-Tensor Theories via the Effective Field Theory Approach |
Abstract: Shift-symmetric Horndeski theories admit an interesting class of
Schwarzschild-de Sitter black hole solutions exhibiting time-dependent scalar
hair. The properties of these solutions may be studied via a bottom-up
effective field theory (EFT) based on the background symmetries. This is in
part possible by making use of a convenient coordinate choice --
Lema\^itre-type coordinates -- in which the profile of the Horndeski scalar
field is linear in the relevant time coordinate. We construct this EFT, and use
it to understand the stability of hairy black holes in shift-symmetric
Horndeski theories, providing a set of constraints that the otherwise-free
functions appearing in the Horndeski Lagrangian must satisfy in order to admit
stable black hole solutions. The EFT is analyzed in the decoupling limit to
understand potential sources of instability. We also perform a complete
analysis of the EFT with odd-parity linear perturbations around general
spherically symmetric space-time.
| https://export.arxiv.org/pdf/2208.02823 |
\setcounter{page}{1}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
\rightline{KOBE-COSMO-22-10}
~
\vspace{.80truecm}
\begin{center}
{\fontsize{21}{18} \bf Stability of Hairy Black Holes in Shift-Symmetric Scalar-Tensor Theories via the Effective Field Theory Approach}
\end{center}
\vspace{1cm}
\begin{center}
{\fontsize{13}{18}\selectfont
Justin Khoury,${}^{\rm a}$\footnote{\href{mailto:[email protected]}{\texttt{[email protected]}}}
Toshifumi Noumi,${}^{\rm b}$\footnote{\href{mailto:[email protected]}{\texttt{[email protected]}}}
Mark Trodden,${}^{\rm a}$\footnote{\href{mailto:[email protected]}{\texttt{[email protected]}}}
and
Sam S. C. Wong${}^{\rm a}$\footnote{\href{mailto:[email protected]}{\texttt{[email protected]}}}
}
\end{center}
\vspace{0.8cm}
\centerline{{\it ${}^{\rm a}$Center for Particle Cosmology, Department of Physics and Astronomy,}}
\centerline{{\it University of Pennsylvania 209 S. 33rd St., Philadelphia, PA 19104, USA}}
\vspace{.3cm}
\centerline{{\it ${}^{\rm b}$Department of Physics, Kobe University, Kobe 657-8501, Japan}}
\vspace{.25cm}
\vspace{1cm}
\newpage
\setcounter{tocdepth}{2}
\tableofcontents
\renewcommand*{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\section{Introduction}
In the more than a hundred years since the discovery of black hole solutions to the theory of General Relativity (GR), enormous theoretical efforts have been devoted to understanding these remarkable space-time objects. Nevertheless, it has only been in the last decade that two stunning observational results --- the measurement of gravitational waves from binary mergers by the LIGO collaboration~\cite{TheLIGOScientific:2016pea}, and the imaging of supermassive black hole shadows by the Event Horizon Telescope~\cite{Akiyama:2019cqa} --- have provided direct observational evidence for astrophysical black holes. These observations have lifted the curtain on a new era of black hole science, and these and other upcoming precision measurements will provide powerful tools for scrutinizing theoretical ideas beyond our currently established theories. Among these new tools, of particular importance for this paper are the measurements of black hole quasinormal mode frequencies.
A binary black hole merger proceeds through three somewhat distinct phases: inspiral, merger, and ringdown. During the ringdown phase, deviations from the black hole space-time gradually decay, and therefore can be treated as small perturbations around the black hole solution~\cite{Regge:1957td,Zerilli:1970se}. The study of the linear system of these small perturbations around a black hole is sometimes called black hole spectroscopy, since the decaying oscillations exhibit a set of characteristic wave forms called quasinormal modes, with corresponding frequencies known as {\it quasinormal} frequencies \cite{Berti:2009kk}. The quasinormal frequency spectrum of a neutral, spinning black hole in GR is entirely determined by two parameters, the mass and spin of the black hole. Thus, it is theoretically possible to probe proposed modifications to pure GR black holes by studying even just the lowest two ringdown tones. Quasinormal mode analysis therefore provides a particularly promising way to constrain, rule out or even provide hints for theoretical constructions beyond GR, and we expect that the increasingly accurate measurements provided by forthcoming missions, such as LISA, will yield important new insights through this method.
A particularly well-motivated and well-studied class of modifications to GR is scalar-tensor theories, which arise in many different settings. While a general effective field theory (EFT) construction of such theories would include all operators up to a certain order, if one only allows a restricted set of operators involving a scalar and the metric, such that they give rise to second-order equations of motion, then such theories are free of a particularly pernicious instability known as the ``Ostrogradsky ghost''~\cite{Woodard:2015zca}. (See~\cite{Solomon:2017nlh} for a systematic discussion of the extent to which it is necessary to include higher-derivative operators in the EFT of general scalar-tensor theories, and the circumstances under which it is correct to restrict to only second-order operators.) The resulting theory is known as the Horndeski theory~\cite{Horndeski:1974wa}, which can be thought of as a generalization of Galileon theories to curved space-time~\cite{Deffayet:2009mn,Deffayet:2011gz,Kobayashi:2011nu}. It is also possible to study generalizations of this theory to GLPV~\cite{Gleyzes:2014dya} or DHOST~\cite{Langlois:2015cwa,Crisostomi:2016czh,BenAchour:2016fzp,Takahashi:2017pje,Langlois:2018jdg}, in which the equations of motion are nominally higher order, but which can be shown to be reducible to second-order ones.
It is well-known that scalar-tensor theories admit numerous {\it hairy} black hole solutions, in which one or more fields take on a nontrivial profile around the black hole, and for which, as a result, quantum numbers beyond the charge, spin, and mass are required to characterize them. One class of hairy solutions admits a radially-dependent scalar profile~$\phi(r)$~\cite{Sotiriou:2013qea,Sotiriou:2014pfa,Babichev:2016rlq,Benkel:2016rlz,Babichev:2017guv,Lehebel:2017fag,Minamitsuji:2018vuw,BenAchour:2019fdf,Minamitsuji:2019tet}. The general quasinormal mode analysis for this type of black hole has been carried out~\cite{Kobayashi:2012kh,Kobayashi:2014wsa} and, remarkably, it has also been shown that the properties of perturbations around black holes with such a radial hair profile can be captured by an EFT description~\cite{Franciolini:2018uyq}. Perhaps more relevant to a cosmological setting, another class of hairy black hole solutions features a time-dependent profile~$\phi(t,r)$, with a time-like gradient~\cite{Babichev:2013cya,Kobayashi:2014eva,Babichev:2016kdt,Babichev:2017lmw,BenAchour:2018dap,Motohashi:2019sen,Takahashi:2019oxz,Minamitsuji:2019shy,Minamitsuji:2019tet,Khoury:2020aya}. These black holes exist in a wide class of scalar-tensor theories satisfying a shift symmetry $\phi\to\phi +c$. They are akin to the time-dependent scalar profile that drives cosmological expansion in ghost condensation~\cite{ArkaniHamed:2003uz,Mukohyama:2005rw}. These time-dependent hairy black hole solutions require careful study, since it has been shown in certain circumstances that they may suffer from a strong coupling problem \cite{Ogawa:2015pea,Babichev:2017lmw,Babichev:2018uiw,deRham:2019gha} and/or a gradient instability~\cite{Khoury:2020aya,Takahashi:2021bml,Minamitsuji:2022mlv,Minamitsuji:2022vbi}.
In this work we analyze hairy black holes with time-like hair using an EFT approach. The construction of the EFT follows the logic of the EFT of inflation~\cite{Cheung:2007st} and the EFT of the purely radial hairy solution~\cite{Finelli:2018upr}. The first example of such a construction was presented in~\cite{Mukohyama:2022enj}. Here, we further constrain the EFT coefficient functions using the isometries of the background space-time and the shift symmetry of the scalar field, with the primary goal of diagnosing the existence of gradient instabilities. While a complete analysis of tensor fluctuations in the EFT remains highly technical, it can be shown that gradient instabilities that persist in the decoupling limit cannot be cured through mixing with tensor fluctuations away from that limit. Thus, it is possible to obtain useful results by studying the theory in the decoupling limit. We analyze effective operators in the EFT individually to provide constraints and insights about potential instabilities from the choices of coefficient functions. We also carry out a comprehensive analysis of odd sector perturbations. We further obtain useful constraints on the EFT by considering the stealth black hole limit, in which the background geometry reduces to Schwarzschild-de Sitter, while the scalar profile remains non-trivial.
\section{Background geometry and Ingredients for the EFT} \label{sec:setup}
We focus on black hole (BH) solutions in general scalar-tensor theories such that the geometry is a static, spherically symmetric solution,
\begin{equation}
\label{eqn:gtr}
\rd s^2 = - f(r) \rd t^2 + \frac{\rd r^2}{g(r)} + r^2 \rd \Omega^2\,,
\end{equation}
with~$\rd\Omega^2=\rd\theta^2+\sin^2\theta \rd\varphi^2$. We adopt Lema\^itre-type coordinates for this type of geometry,
\begin{align} \label{eqn:glemaitre}
\rd s^2 = -\rd \tau^2 +\big(1-f(r)\big) \rd \rho^2 + r^2 \rd \Omega^2\,,
\end{align}
where \{$\tau,\rho$\} are related to \{$t,r$\} through
\begin{align}
\rd t &= \frac{1}{f} \rd \tau + \left(1 - \frac{1}{f} \right) \rd \rho\,; \nonumber \\
\rd r &=\sqrt{\frac{(1-f)g}{f} } \big( - \rd \tau + \rd \rho \big)\,.
\end{align}
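As a consistency check (not part of the original derivation), one can verify symbolically that substituting these differentials into the static metric reproduces the Lema\^itre-type form. A minimal sympy sketch, treating $f$ and $g$ as free symbols at a point:

```python
import sympy as sp

# Treat f, g and the coordinate differentials as formal symbols at a point.
f, g, dtau, drho = sp.symbols('f g dtau drho', positive=True)

# dt and dr in terms of dtau, drho (the coordinate change quoted above)
dt = dtau/f + (1 - 1/f)*drho
dr = sp.sqrt((1 - f)*g/f)*(-dtau + drho)

# Radial part of the static metric, -f dt^2 + dr^2/g, should reduce to
# the Lemaitre-type form -dtau^2 + (1 - f) drho^2.
ds2 = sp.expand(-f*dt**2 + dr**2/g)
assert sp.simplify(ds2 - (-dtau**2 + (1 - f)*drho**2)) == 0
```

The cross terms in $\rd\tau\,\rd\rho$ cancel identically, confirming that the Lema\^itre-type slicing is synchronous.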
Throughout this paper we use the notation $(\;)\dot{} \equiv \frac{\partial}{\partial \tau}(\;)$ and $(\;)' \equiv \frac{\partial}{\partial \rho}(\;)$. It is clear from the metric that this coordinate system is synchronous, and curves of constant $(\rho ,\, \Omega)$ are time-like free-falling geodesics. When the theory enjoys a shift symmetry for the scalar, $\phi$, it supports a class of interesting hairy black holes with time-dependent hair of the form~\cite{ArkaniHamed:2003uy},\footnote{In general, the gradient of a time-dependent scalar hair does not necessarily align with~$\partial_\tau$. In such a case, it is more convenient to use a time coordinate that aligns with $\nabla_\mu \phi$.}
\begin{align} \label{eqn:phi}
\bar{\phi} (x) = m^2 \tau\,.
\end{align}
Furthermore, for the subclass of shift-symmetric theories we are interested in, the action of fluctuations exhibits the set of symmetries inherited from the background isometries, as we will see in Sec.~\ref{Sec:EFT_perturbations}.
The first step in constructing the EFT of fluctuations about this black hole is to note that the gradient of the scalar field,
\begin{equation}
n^{\mu} = -\frac{\nabla^{\mu}\phi}{\sqrt{-\nabla_{\nu}\phi\nabla^{\nu}\phi }}\,; \quad n^{\mu} n_{\mu}=-1\,,
\end{equation}
defines a foliation of space-time. This gradient is the vector normal to surfaces of constant-$\phi$. From here until Sec.~\ref{Sec:EFT_perturbations}, we work in unitary gauge defined by $\phi=\bar{\phi}$, in which these surfaces reduce to constant-$\tau$ surfaces, denoted by $\Sigma_{\tau}$. From this one can define an induced metric on such hypersurfaces,
\begin{equation}
h_{\mu\nu}=g_{\mu\nu}+n_{\mu}n_{\nu}\, ,
\end{equation}
which also serves as a projector onto these surfaces. For example, the covariant derivative on the surface $D_{\mu}$ is defined by
\begin{align}
D_{\mu} V_{\nu} = h^{\alpha}_{\;\mu} h^{\beta}_{\;\nu} \nabla_{\alpha} V_{\beta}\, ,
\end{align}
and the extrinsic curvature of surfaces is given by
\begin{align}
K_{\mu\nu} = h_{\;\mu}^{\alpha}h_{\;\nu}^{\beta}\nabla_{\alpha}n_{\beta}= h_{\;\mu}^{\alpha}\nabla_{\alpha} n_{\nu} = \nabla_{\mu}n_{\nu} + n_{\mu}n^{\alpha}\nabla_{\alpha} n_{\nu} \, .
\end{align}
Using the ADM form of the metric,
\begin{align}
\rd s^2 = -N^2 \rd \tau^2 + h_{ij}\left( \rd x^i+N^i \rd \tau\right)\left( \rd x^j +N^j\rd \tau\right) \, ,
\end{align}
this can be written more explicitly in component form,
\begin{align}
K^{\tau\nu} =0\,; \quad K_{ij} = \frac{1}{2N}\left( \dot{h}_{ij} - D_i N_j - D_j N_i \right)\,,
\end{align}
where we have written $n^{\mu}$ in terms of the lapse function $N$ and the shift vector $N_i$,
\begin{equation}
n^{\mu} = \left( \frac{1}{N} , -\frac{N^i}{N} \right), \quad n_{\mu} = \left( -N ,\vec{0}\right)\,.
\end{equation}
Another important geometric quantity is the intrinsic curvature of the hypersurface,~$\hat{R}_{\mu\nu\rho \sigma}$, which is defined by
\begin{align}
\hat{R}_{\mu\nu\rho}^{\quad\;\; \sigma} V_{\sigma} = [D_{\mu},D_{\nu}] V_{\rho}\, .
\end{align}
Together with $K_{\mu\nu}$, this is related to the Riemann tensor of the full space-time via the Gauss-Codazzi relations
\begin{align}
\hat{R}_{\mu\nu\rho}^{\quad\;\; \sigma} + K_{\mu \rho} K_{\nu}^{\;\;\sigma} - K_{\nu\rho}K_{\mu}^{\;\; \sigma} & = h_{\mu}^{\;\;\alpha}h_{\nu}^{\;\;\beta}h_{\rho}^{\;\;\gamma}h^{\sigma}_{\;\;\delta}R_{\alpha \beta \gamma}^{\quad \;\; \delta} \,; \\
D_{\mu}K^{\mu}_{\;\;\nu} - D_{\nu} K &= R_{\alpha \beta} n^{\alpha} h^{\beta}_{\;\; \nu} \,.
\end{align}
Evaluated on the background~\eqref{eqn:glemaitre}, the above geometric quantities are
\begin{align} \label{eqn:bgK}
\bar{K}^{\rho}_{\;\; \rho} &= \frac{f_r}{2(1-f)}\sqrt{\frac{(1-f)g}{f} }\,; \nonumber \\
\bar{K}^{\theta}_{\;\; \theta}& = \bar{K}^{\varphi}_{\;\; \varphi} = -\frac{1}{r} \sqrt{\frac{(1-f)g}{f}}\,; \\
\bar{\hat{R}}^{\rho}_{\;\; \rho} &= \frac{g f_r - f g_r}{r f^2 } \,;\nonumber \\
\bar{\hat{R}}^{\theta}_{\;\; \theta}& = \bar{\hat{R}}^{\varphi}_{\;\; \varphi} = \frac{2 f^2 -2fg + r gf_r - r f g_r }{2 r^2 f^2 }\,,
\end{align}
where $(\;)_r = \frac{\partial}{\partial r}(\;)$. One can then see that the traceless part of $\bar{K}_{ij}$ and $\bar{\hat{R}}_{ij}$,
\begin{align}
\bar{K}^{\rm T}_{ij} &= \bar{K}_{ij} - \frac{1}{3} \bar{h}_{ij} \bar{K}, \nonumber \\
\bar{\hat{R}}^{\rm T}_{ij} &= \bar{\hat{R}}_{ij} - \frac{1}{3} \bar{h}_{ij} \bar{\hat{R}},
\end{align}
are proportional to each other, $\bar{\hat{R}}^{\rm T}_{ij} \propto \bar{K}^{\rm T}_{ij}$. Note that when $f(r) = g(r)$, all components of $\bar{\hat{R}}_{\mu\nu\rho}^{\quad\;\; \sigma}$ on $\Sigma_{\tau}$ vanish. With all of these three-dimensional covariant quantities at hand, we are now ready to build the EFT.
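For the Schwarzschild case ($f=g=1-2M/r$), these background quantities can be cross-checked against the exact Lema\^itre free-fall solution for $r(\tau,\rho)$. A small sympy sketch (illustrative only; the closed-form radius is the standard Lema\^itre result, not derived in this paper):

```python
import sympy as sp

M, u = sp.symbols('M u', positive=True)  # u = rho - tau (> 0 outside r = 0)

# Exact Lemaitre radius for Schwarzschild, f = g = 1 - 2M/r
r = (sp.Rational(3, 2)*u)**sp.Rational(2, 3) * (2*M)**sp.Rational(1, 3)

# dr/dtau = -dr/du; the coordinate change predicts
# dr/dtau = -sqrt((1-f)g/f), which here equals -sqrt(2M/r).
drdtau = -sp.diff(r, u)
assert sp.simplify(drdtau + sp.sqrt(2*M/r)) == 0

# K^theta_theta = h^{theta theta} (dh_{theta theta}/dtau) / 2 with N = 1,
# i.e. (dr/dtau)/r, matching the quoted -(1/r) sqrt((1-f)g/f).
assert sp.simplify(drdtau/r + sp.sqrt(2*M/r)/r) == 0
```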
\section{The effective action for perturbations}
\label{Sec:EFT_perturbations}
The most general action for perturbations, including all terms up to second order in fields and second derivatives that describe scalar tensor theories of our interest, is given by
\begin{align} \label{eqn:EFT}
S_{(2)} = \int \rd ^4x \sqrt{-g}\bigg[& \frac{M_{1}(x)}{2}R - \Lambda(x) + \alpha(x) g^{\tau\tau} + \beta(x) \bar{K}_{\mu\nu}K^{\mu\nu} \nonumber \\
&+ M_2(x) (\delta g^{\tau\tau})^2 + M_3(x) \delta g^{\tau\tau} \delta K + M_4(x)\bar{K}_{\mu\nu} \delta g^{\tau\tau} \delta K^{\mu\nu} \nonumber \\
&+ M_5(x) (\partial_{\tau}\delta g^{\tau\tau})^2 + M_6(x)(\partial_{\tau}\delta g^{\tau\tau}) \delta K + M_7(x) \bar{K}_{\mu\nu}(\partial_{\tau}\delta g^{\tau\tau}) \delta K^{\mu\nu}+ M_8(x) (D_i \delta g^{\tau\tau })^2 \nonumber \\
& + M_9(x) \delta K^2 + M_{10}(x) \delta K_{\mu\nu}\delta K^{\mu\nu} + M_{11}(x) \bar{K}_{\mu\nu}\delta K \delta K^{\mu\nu} + M_{12}(x)\bar{K}_{\mu\nu} \delta K^{\rho \mu}\delta K^{\nu}_{\;\;\rho} \nonumber \\
& + M_{13}(x)\delta g^{\tau\tau} \delta \hat{R} + M_{14}(x) \bar{K}_{\mu\nu}\delta g^{\tau\tau} \delta \hat{R}^{\mu\nu} \bigg] \, ,
\end{align}
where $x\equiv \{\tau,\,\rho\}$. As we will soon show, the coefficient functions $\{\alpha(x), \,\beta(x) ,\,\dots,\, M_i(x),\,\dots\} $ will reduce to functions of $r(\tau,\rho)$ when describing perturbations around \eqref{eqn:gtr}. However, unless otherwise specified, in the following sections we will use the slightly more general action above.
We are of course free to perform the analysis in any frame, such as Jordan frame or Einstein frame, by performing a suitable conformal transformation. Such a transformation would only affect the coupling of~$\phi$ to matter fields, which we ignore in our analysis. Without loss of generality, therefore, we choose to work in Einstein frame, setting~$M_1(x)$ to be constant and equal to the reduced Planck mass:~$M_1(r) = M_{\rm Pl}^2 \equiv \frac{1}{8\pi G}$.
The above action can in principle be derived from the expansion in perturbations of a general function in terms of the building blocks of the EFT,
\begin{align} \label{eqn:masteraction}
S = \int\rd^4x \sqrt{-g}\, {\cal L}(g^{\mu\nu}, R_{\mu\nu\rho \sigma}, g^{\tau\tau}, K_{\mu\nu}, \nabla_{\mu}; \tau ) \,.
\end{align}
In the following we will explain the construction of tadpoles and quadratic terms, and their relation to the most general action~\eqref{eqn:masteraction}.
\subsection{Tadpole terms}
The tadpole terms of the action are constructed from the building blocks $\delta g^{\tau\tau}$, $K_{\mu\nu}$, $R_{\mu\nu\rho \sigma}$ and scalar functions of the background coordinates. Their most general form is
\begin{align}
S_{\rm tad.} = \int \rd^4x \sqrt{-g}\,\Big[ \Lambda(x) +\alpha(x) g^{\tau\tau} + \beta_{\mu\nu}(x)K^{\mu\nu} + \zeta_{\mu\nu\rho\sigma}(x) R^{\mu\nu\rho\sigma} \Big]\,.
\end{align}
By construction, only the components $K_{ij}$ are non-zero, and hence the third term reduces to~$\beta_{ij}K^{ij}$, where $\beta_{ij}$ is constructed from the set of tensor functions that obey the background isometries, namely $\bar{g}_{ij}$ and $\bar{K}_{ij}$.\footnote{Since the 2-sphere is maximally-symmetric, all other tensors compatible with the background isometries should be linear combinations of~$\bar{g}_{ij}$ and~$\bar{K}_{ij}$. We have seen in \eqref{eqn:bgK} that $\bar{\hat{R}}_{ij}$ is an example.} Note also that, obviously,~$K = \nabla_\mu n^{\mu}$ is a total derivative term. Thus the two index tensor $\beta_{\mu\nu}$ can be chosen as $\beta(x) \bar{K}_{\mu\nu}$. We refer the reader to~\cite{Cheung:2007st,Finelli:2018upr} for the detailed construction of the term~$\zeta_{\mu\nu\rho\sigma}(x) R^{\mu\nu\rho\sigma} $. The tadpole action gives rise to background equations of motion for $g^{\mu\nu}$ as shown in appendix \ref{app:bgEOM}. When applied to the ansatz \eqref{eqn:glemaitre}, it generates a set of consistency relations between the coefficient functions $M_1$, $\Lambda$, $\alpha$ and $\beta$ and the background metric.
\subsection{Quadratic terms}
It is useful to start the analysis by thinking of possible terms in the action \eqref{eqn:masteraction}. There are infinitely many contractions of~$K_{\mu\nu}$ that obey diffeomorphism invariance on $\Sigma$, for instance
\begin{align}
&a_1(g^{\tau\tau}, \partial g^{\tau\tau})(K)^{n_1} \,, \quad a_2(g^{\tau\tau}, \partial g^{\tau\tau})(K)^{n_2} K_{\mu\nu}K^{\mu\nu}\,, \quad a_3(g^{\tau\tau}, \partial g^{\tau\tau})K_{\mu\nu} K^{\mu}_{\;\;\rho}K^{\rho\nu}\,, \nonumber \\
& \quad a_4(g^{\tau\tau}, \partial g^{\tau\tau}) \hat{R}\,, \quad a_5(g^{\tau\tau}, \partial g^{\tau\tau}) K \hat{R}\,, \quad a_6(g^{\tau\tau}, \partial g^{\tau\tau}) K_{\mu\nu} \hat{R}^{\mu\nu} \,,~\ldots
\end{align}
where the $a_i$ are arbitrary functions of~$g^{\tau\tau}$, and where $n_1= 0, 1,\, 2,\, 3$ and $ n_2 =0, 1$.
Since we have in mind that the EFT describes perturbations of Horndeski or DHOST theories, the maximal number of derivatives in each operator is three~\cite{Gleyzes:2013ooa}. For example, the operators $K^3$, $K_{\mu\nu}K^{\mu\nu}$ and $K_{\mu\nu}K^{\nu}_{\;\;\rho} K^{\rho\mu}$ originate from the Horndeski~$G_5$ or cubic DHOST term. Therefore, the above set comprises all possible operators involving $K_{\mu\nu}$ and $\hat{R}_{\mu\nu\rho\sigma}$. Expanding these up to quadratic order in perturbations~$\delta g^{\tau\tau}= g^{\tau\tau} - \bar{g}^{\tau\tau} $, $\delta K = K - \bar{K}$, $\delta K_{\mu\nu} = K_{\mu\nu} - \bar{K}_{\mu\nu}$, $\delta \hat{R} = \hat{R} - \bar{\hat{R} }$ and $\delta \hat{R}_{\mu\nu} = \hat{R}_{\mu\nu} - \bar{ \hat{R}}_{\mu\nu}$, and up to second-order in derivatives, we arrive at the quadratic terms in \eqref{eqn:EFT}.
Note that when one specializes to the EFT of perturbations around the background~\eqref{eqn:glemaitre} with $n^{\nu}$ defined by~\eqref{eqn:glemaitre}, the quadratic terms in \eqref{eqn:EFT} can actually describe perturbations of additional operators such as $K_{\mu_1 \nu_1}K^{\nu_1}_{\;\; \nu_2} \dots K^{\nu_n \mu_1}$, since $\bar{K}_{\mu_1 \nu_1}\bar{K}^{\nu_1}_{\;\; \nu_2} \dots \bar{K}^{\nu_{n-1}}_{\;\; \nu_n}$ can be written as a linear combination of~$\bar{h}_{\mu_1\nu_n }$ and~$\bar{K}_{\mu_1\nu_n}$. In principle, if one seeks to describe an arbitrary modified gravity theory, more independent operators (up to second-order in derivatives) can be added to the action, such as $(\bar{K}^{\mu\nu}\delta K_{\mu\nu})^2$ and $\bar{K}^{\mu\nu} \bar{K}^{\alpha \beta} \delta K_{\mu\alpha}\delta K_{\nu \beta}$.
\subsection{Consistency with diffeomorphisms on the surface $\Sigma_{\tau}$}
Since the operators~$\delta K^{\mu\nu}$ and~$\delta \hat{R}^{\mu\nu\rho\sigma}$ do not transform covariantly under diffeomorphisms on~$\Sigma_{\tau}$, the above action for perturbations, with general coefficients, explicitly breaks diffeomorphism invariance on $\Sigma_{\tau}$. However, as mentioned in~\cite{Franciolini:2018uyq, Mukohyama:2022enj}, the EFT~\eqref{eqn:EFT} for perturbations derives from the parent action~\eqref{eqn:masteraction}, which is constructed to be invariant under {\it all} symmetries. As such, the action for perturbations should respect diffeomorphisms on $\Sigma_{\tau}$. More explicitly, it has been shown in~\cite{Mukohyama:2022enj} that these two observations are reconciled: diffeomorphism invariance of the parent action implies a set of non-trivial constraints on the EFT coefficient functions in~\eqref{eqn:EFT}.
\subsection{Background isometry and shift symmetry}
\label{sec:isometry}
The background~\eqref{eqn:glemaitre} has four Killing vector fields,
\begin{align} \label{eqn:killvec}
v_1 &= \partial_t = \partial_{\tau}+ \partial_{\rho}\,; \nonumber \\
J_1 &= - \sin \varphi \partial_{\theta} - \cot \theta \cos \varphi \partial_{\varphi} \,; \nonumber \\
J_2 &= \cos \varphi \partial_{\theta} - \cot \theta \sin \varphi \partial_{\varphi} \,; \nonumber \\
J_3 &= \partial_{\varphi} \,,
\end{align}
associated with the time translation and rotation invariances of the black hole background.
The coefficient functions in the perturbation Lagrangian are obtained by expanding~\eqref{eqn:masteraction}. For example:
\begin{align}
\left. \frac{\partial^2 {\cal L} }{(\partial g^{\tau\tau})^2 }\right|_{\bar{g}_{\mu\nu} ,\bar{n}^{\mu}} (\delta g^{\tau \tau })^2 + \left. \frac{\partial^2 {\cal L} }{\partial g^{\tau\tau} \partial K }\right|_{\bar{g}_{\mu\nu} ,\bar{n}^{\mu}} \delta g^{\tau \tau }\delta K + \left. \frac{\partial^2 {\cal L} }{\partial g^{\tau\tau} \partial K^{\mu\nu} }\right|_{\bar{g}_{\mu\nu} ,\bar{n}^{\mu}} \delta g^{\tau \tau }\delta K^{\mu\nu} +\ldots
\end{align}
Since all the coefficients~$ \left.\frac{\partial^2 {\cal L} }{(\partial \dots )^2 }\right|_{\bar{g}_{\mu\nu}, \bar{n}^{\mu}}$ are evaluated on the background, and since the action~${\cal L}$ is itself constructed to respect the symmetries, the coefficients must respect all the isometries generated by~\eqref{eqn:killvec}. Therefore they should obey
\begin{align}
{\cal L}_{v_1} \left.\frac{\partial^2 {\cal L} }{(\partial \dots )^2 }\right|_{\bar{g}_{\mu\nu}, \bar{n}^{\mu}} = {\cal L}_{J_1} \left.\frac{\partial^2 {\cal L} }{(\partial \dots )^2 }\right|_{\bar{g}_{\mu\nu}, \bar{n}^{\mu}} = {\cal L}_{J_2} \left.\frac{\partial^2 {\cal L} }{(\partial \dots )^2 }\right|_{\bar{g}_{\mu\nu}, \bar{n}^{\mu}} = {\cal L}_{J_3} \left.\frac{\partial^2 {\cal L} }{(\partial \dots )^2 }\right|_{\bar{g}_{\mu\nu}, \bar{n}^{\mu}} =0 \,,
\end{align}
where ${\cal L}_{X}$ denotes the Lie derivative. As a result, as we mentioned earlier, we must demand that all the scalar coefficients $\{\alpha(x), \,\beta(x) ,\,\dots,\, M_i(x),\,\dots\} $ are {\it functions of $r$ only}, and that any background tensor of the form $\bar{T}_{AB}$ {\it must be proportional to $\gamma_{AB}$}, where~$\gamma_{AB}$ is the metric on the 2-sphere, and~$A,B$ are coordinate indices on the sphere.
However, we must take care here, since the time-dependent scalar profile $\bar{\phi} = m^2\tau$ spontaneously breaks the isometry generated by $v_1$. Nevertheless, in addition to the isometries, there is an additional shift symmetry, $\phi \to \phi +c$, in the class of scalar-tensor theories of interest. Because of this, the isometry generated by $v_1$ remains a good symmetry in the action for perturbations. In other words, in the particular case we are interested in, due to the existence of the time isometry, {\it the shift symmetry is equivalent to an isometry.} To see this, we write the background scalar profile as
\begin{align}
\bar{\phi} = m^2\tau = m^2\big(t + \psi(r) \big)\,.
\end{align}
A shift in $\bar{\phi} $, due to transforming $\phi \to \phi + c$, can therefore be absorbed into~$t$ by making use of an isometry, $t \to t-\frac{c}{m^2}$. In other words, the spontaneously broken time translation due to $\bar{\phi}$ is absorbed by a shift in $\phi$, while the ``{\it diagonal}" part of the time translation and $\phi$-shift symmetry remains unbroken. Only when the scalar profile is more complicated, in such a way that the diffeomorphism performed to absorb the shift in $\phi$ cannot be an isometry, will this lead to more non-trivial constraints on the EFT coefficients~\cite{Finelli:2018upr}.\footnote{An explicit example is the ultra slow-roll regime, where~$\bar{\phi} \approx \frac{c}{\tau ^6}$ in conformal time $\tau$.}
\section{Second order action in the decoupling limit}
Previous studies of a class of black holes with time-dependent hair in scalar-tensor theories have shown that gradient instabilities are inevitable, since the radial and angular sound speeds squared of perturbations have opposite signs~\cite{Khoury:2020aya,Takahashi:2021bml}. We have also learned that it is the scalar modes that are responsible for this instability~\cite{Khoury:2020aya}. While establishing the stability of a theory against perturbations requires a complete analysis, proving the existence of an instability requires much less work.
For this purpose, it is practically useful to analyze stability in the decoupling limit. In general, our EFT has two characteristic scales. One is the decoupling scale $\Lambda_{\rm dec}$, at which gravity decouples from the Nambu-Goldstone mode $\pi$. The other is the cutoff scale $\Lambda_{\rm EFT}$, beyond which the EFT is not trustworthy, either due to strong coupling or a breakdown of the derivative expansion. One may study the dispersion relation of $\pi$ up to second-order derivatives. If a gradient instability appears at some scale $\Lambda_{\rm grad}$ in the regime $\Lambda_{\rm dec}<\Lambda_{\rm grad}<\Lambda_{\rm EFT}$, it signals a genuine instability of the background. Recall that $\Lambda_{\rm EFT}$ depends on the details of higher-order terms, which are beyond the scope of our present work. Thus, we study (in)stability in the decoupling limit based on the dispersion relation up to second-order derivatives.
One way to study the decoupling limit in the EFT language is to introduce a St\"uckelberg field $\pi(x^{\mu})$ to explicitly restore time-translational invariance in~$\tau$~\cite{Cheung:2007st}. Under $\tau \to \tilde{\tau} = \tau + \pi(x^{\mu})$, the important elements are
\begin{align}
\left[ \frac{\partial \tilde{x}^{\mu}(x) }{\partial x^{\nu}}\right] &= \begin{pmatrix}
1+ \dot{\pi}(\tau,\vec x) & \partial_k \pi(\tau,\vec{x}) \\
\mathbf{0} & \delta^{i}_{\;j}
\end{pmatrix}\,; \nonumber \\
\left[ \frac{\partial x^{\mu}(\tilde{x}) }{\partial \tilde{x}^{\nu}} \right] &= \begin{pmatrix}
\frac{1}{1+ \dot{\pi}(\tau,\vec x)} & -\frac{\partial_k \pi(\tau,\vec{x})}{1+ \dot{\pi}(\tau,\vec x)}\\
\mathbf{0} & \delta^{i}_{\;j}
\end{pmatrix} \,,
\end{align}
where~$\mu$ and~$\nu$ are row and column indices, respectively. For instance, the transformation~$\tilde{g}_{\mu\nu}(\tilde{x}) = \frac{\partial x^{\alpha}(\tilde{x})}{ \partial \tilde{x}^{\mu}} \frac{\partial x^{\beta}(\tilde{x})}{ \partial \tilde{x}^{\nu}}g_{\alpha\beta}(x)$ leads to
\begin{align}
g_{\tau\tau} &\to g_{\tau\tau}\frac{1}{(1 +\dot{\pi})^2} \,; \nonumber\\
g_{ \tau i} & \to g_{\tau i}\frac{1}{1 +\dot{\pi}} - g_{\tau \tau} \frac{\partial_i \pi}{(1 +\dot{\pi})^2} \,; \nonumber \\
g_{ij} &\to g_{ij}- g_{\tau i} \frac{\partial_j \pi}{1+ \dot{\pi}}- g_{\tau j} \frac{\partial_i \pi}{1+ \dot{\pi}} + g_{\tau\tau} \frac{\partial_i \pi\partial_j \pi}{(1 +\dot{\pi})^2} \,.
\end{align}
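These transformation rules follow mechanically from contracting the metric with the inverse Jacobian. Keeping a single spatial direction for brevity, a sympy sketch (illustrative only, with generic symbols for the metric components) confirms them:

```python
import sympy as sp

gtt, gtx, gxx, pidot, pix = sp.symbols('gtt gtx gxx pidot pix')

# Inverse Jacobian dx^mu/dx~^nu for tau -> tau + pi, one spatial direction.
Jinv = sp.Matrix([[1/(1 + pidot), -pix/(1 + pidot)],
                  [0, 1]])
gmat = sp.Matrix([[gtt, gtx], [gtx, gxx]])

# g~_{mu nu} = (dx^a/dx~^mu)(dx^b/dx~^nu) g_{ab}
gnew = sp.simplify(Jinv.T * gmat * Jinv)

assert sp.simplify(gnew[0, 0] - gtt/(1 + pidot)**2) == 0
assert sp.simplify(gnew[0, 1] - (gtx/(1 + pidot) - gtt*pix/(1 + pidot)**2)) == 0
assert sp.simplify(gnew[1, 1] - (gxx - 2*gtx*pix/(1 + pidot)
                                 + gtt*pix**2/(1 + pidot)**2)) == 0
```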
The transformation rule for the corresponding components in the ADM decomposition is then
\begin{align}
h_{ij} \to h_{ij} -(1-\dot{\pi})\Big(N_i\partial_j \pi + N_j\partial_i \pi \Big) + \left(-N^2 +N^kN_k\right) \partial_i\pi \partial_j\pi + {\cal O}(\pi^3) \,,
\end{align}
while the spatial Christoffel connection~$\hat{\Gamma}^{l}_{ij} = \frac{1}{2} h^{lk} \left( \partial_{i} h_{jk} + \partial_{j} h_{ik} - \partial_{k} h_{ij} \right)$ used to define $D_{i}$ transforms as
\begin{equation}
\hat{\Gamma}^{l}_{ij} \to \hat{\Gamma}^{l}_{ij} - \frac{1}{2}h^{lk}\left(\dot{h}_{ik} \partial_j \pi + \dot{h}_{jk}\partial_i \pi - \dot{h}_{ij} \partial_k \pi \right) + {\cal O}(\pi^2)\,.
\end{equation}
Here we have used that~$\bar{N}_i=0$ on the background geometry. Therefore the relevant transformations for~$\delta K_{ij}$,~$\delta K$,~$\delta \hat{R}_{ij}$ and~$\delta \hat{R}$, up to first order in~$\pi$, become
\begin{align} \label{eqn:curvpi}
\delta K_{ij} & \to \delta K_{ij} - \dot{\bar{K}}_{ij} \pi - N D_i D_j \pi\,; \nonumber \\
\delta K & \to \delta K - \dot{\bar{K}}\pi - N h^{ij} D_i D_j \pi\,; \nonumber \\
\delta \hat{R}_{ij} & \to \delta \hat{R}_{ij} -\dot{\bar{\hat{R}}}_{ij} \pi -\partial_k\pi \dot{\hat{\Gamma}}^k_{ij} -\frac{1}{2}h^{kl} \dot{\hat{\Gamma}}^m_{kl} \left(\partial_i \pi h_{mj} +\partial_j\pi h_{mi} \right) \nonumber \\
&\quad\, +\frac{1}{2}\left(\partial_i\pi\dot{\hat{\Gamma}}^m_{jm} +\partial_j\pi\dot{\hat{\Gamma}}^m_{im} + \partial^k\pi \left(\dot{\hat{\Gamma}}^m_{ik}h_{mj} + \dot{\hat{\Gamma}}^m_{jk}h_{mi}\right) \right ) \nonumber \\
&\quad\, - \frac{1}{2} \left( \dot{h}_{li} D^l D_j \pi+\dot{h}_{lj} D^l D_i \pi - h^{kl}\dot{h}_{kl} D_i D_j \pi -\dot{h}_{ij} \hat{\square} \pi \right) ; \nonumber \\
\delta\hat{R} & \to \delta \hat{R}- \dot{\bar{\hat{R}}}\pi -2 \partial_k\pi h^{ij}\dot{\hat{\Gamma}}^k_{ij} + 2 \partial^i \pi \dot{\hat{\Gamma}}^m_{im} - h^{ij} \dot{h}_{li}D^lD_j \pi + h^{kl}\dot{h}_{kl} \hat{\square} \pi \,.
\end{align}
\subsection{Gradient (in-)stability for $\pi$} \label{sec:pistability}
With the scalar mode restored through the St\"uckelberg trick, we are ready to study the stability of the theory in the decoupling limit. Given the symmetries of the background, there are in general two different sound speeds in the problem: the radial sound speed squared, $c_{\rho}^2$, and the angular sound speed squared, $c_{\theta,\varphi}^2$. In the following we individually analyze quadratic operators in the effective action~\eqref{eqn:masteraction}. In each case, we neglect terms that are cubic and higher-order in~$\pi$, as well as terms that involve mixing with~$\delta h_{ij}$. We also neglect terms that are irrelevant to the dispersion relation up to second-order in derivatives, such as the $M_{5,6,7,8}$ operators, since they contribute only higher-order derivative terms for $\pi$. Note that we treat the coefficients as general functions of~$(\tau, \rho)$, except when it becomes convenient to use the specific dependence of $r(\tau,\rho)$.
\begin{itemize}
\item \underline{$M_2(\tau,\rho)\left(\delta g^{\tau\tau}\right)^2$}: Using the background quantities $\bar{g}^{\tau\tau}=-1$ and $\bar{g}^{a\tau} =0$, this term only contributes to the time-derivative part of $\pi$,
\begin{align}
\int \rd^4x \sqrt{-g} \,M_2\left(\delta g^{\tau\tau}\right)^2 \to \int \rd^4x \sqrt{-g} \,4 M_2 \dot{\pi}^2 \,.
\end{align}
In the Schwarzschild-de Sitter limit, $\alpha = \beta = 0$, so the above term is crucial for generating a $\dot{\pi}^2$ term, and $M_2$ must be positive.
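One can check the quoted coefficient directly: under the St\"uckelberg shift, $\delta g^{\tau\tau} = -2\dot{\pi} - \dot{\pi}^2 + (\partial\pi)^2$ on this background, so the piece of $(\delta g^{\tau\tau})^2$ quadratic in the fluctuation is $4\dot{\pi}^2$. A sympy sketch (one spatial direction kept, illustrative only):

```python
import sympy as sp

pidot, pix, hxx, eps = sp.symbols('pidot pix hxx eps', positive=True)

# On the background gbar^{tautau} = -1, gbar^{tau i} = 0, the Stueckelberg
# shift gives g^{tautau} -> -(1 + pidot)^2 + h^{xx} (d_x pi)^2, hence
delta_gtt = -(1 + pidot)**2 + hxx*pix**2 + 1

# Scale the fluctuation by eps and keep the O(eps^2) piece of (delta g^tt)^2.
scaled = sp.expand((delta_gtt**2).subs({pidot: eps*pidot, pix: eps*pix}))
leading = sp.limit(scaled / eps**2, eps, 0)
assert sp.simplify(leading - 4*pidot**2) == 0
```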
\item \underline{$M_3(\tau,\rho)\, \delta g^{\tau\tau} \delta K$}: The contribution to the~$\pi$ action coming from this term is
\begin{align}
\int \rd^4x \sqrt{-g}\, M_3\delta g^{\tau\tau} \delta K \to \int \rd^4x \sqrt{-g}\Big[ -(M_3 h^{ij})\dot{} \, D_i \pi D_j \pi + 2 (D^iM_3) \dot{\pi} D_i \pi - m_3(\tau,\rho) \pi^2 \Big] \,.
\end{align}
Here,~$m_3(\tau,\rho)$ is the mass term, whose explicit form is not necessary for our purposes. (Henceforth,~$m_i(\tau,\rho)$ will denote the mass contribution from the operator with corresponding coefficient~$M_i(\tau,\rho)$.) In general, due to the existence of the mixing term~$2(D^iM_3) \dot{\pi} D_i \pi$, it is necessary to find a coordinate system that diagonalizes the kinetic matrix. In the specific case in which $M_3$ is a function of $r(\tau,\rho)$ we have $\partial_\rho M_3(r) = - \dot{M_3}(r) $, so that the kinetic terms become
\begin{align}
- 2 \dot{M_3}g^{\rho\rho} \dot{\pi} \pi' -(M_3 g^{\rho\rho})\dot{} \,\pi'^2 - \partial_{\tau}\left( \frac{M_3}{r^2}\right) \gamma^{AB} \partial_A \pi \partial_B \pi \,.
\end{align}
Stability along the angular directions therefore requires
\begin{align}
\partial_{\tau}\left( \frac{M_3}{r^2}\right) >0\,.
\end{align}
Given that the~$\dot{\pi}^2$ term (from other operators) is of the form~$A |g^{\tau\tau}| \dot{\pi}^2$, with~$A>0$, a straightforward Hamiltonian analysis\footnote{For a Lagrangian given by
\begin{align}
L = a \dot{\pi}^2 + 2 b \dot{\pi} \pi' - c\pi'^2\, ,
\end{align}
the corresponding Hamiltonian is
\begin{align}
H = \frac{(P_{\pi} - 2 b \pi' )^2}{4 a } + c \pi'^2 ,\quad P_{\pi} = \frac{\partial L}{\partial \dot{\pi}} = 2a \dot{\pi} +2 b \pi' .
\end{align}
Therefore, the stability conditions are
\begin{align}
a >0, \quad c > -\frac{b^2}{a}.
\end{align}
} gives the following stability constraint:
\begin{align}
(M_3 g^{\rho\rho})\dot{} \, > - \frac{\big(\dot{M_3}g^{\rho\rho} \big)^2}{A|g^{\tau\tau}|}.
\end{align}
There also exist suitable coordinates~$(T, \,R)$ that diagonalize the kinetic matrix, with the corresponding kinetic terms
\begin{align}
\frac{1}{2}\Big( A|g^{\tau\tau}|(2+\delta_3) + \left( M_3g^{\rho\rho}\right)\dot{} \, \delta_3 \Big) (\partial_T \pi)^2 & - \frac{1}{2}\Big( A|g^{\tau\tau}|\delta_3 + \left( M_3g^{\rho\rho} \right) \dot{} \, (2+\delta_3) \Big)(\partial_R \pi)^2 \nonumber \\
&-\partial_{\tau}\left(\frac{M_3}{r^2} \right) \gamma^{AB}\partial_A \pi \partial_B \pi \,,
\end{align}
where
\begin{equation*}
\delta_3 \equiv \left[1 + \frac{4 \left( \dot{M_3}g^{\rho\rho}\right)^2}{\left(A|g^{\tau\tau}| + \partial_{\tau}\left( M_3g^{\rho\rho} \right) \right)^2}\right]^{\frac{1}{2}} -1 \ge 0 \, .
\end{equation*}
\item \underline{$M_4(\tau,\rho) \bar{K}_{\mu\nu}\delta g^{\tau\tau} \delta K^{\mu\nu}$}: This operator generates the following contribution:
\begin{align}
\int \rd^4x \sqrt{-g}\, M_4 \bar{K}_{\mu\nu}\delta g^{\tau\tau} \delta K^{\mu\nu} \to \int \rd^4x \sqrt{-g}\Big[- \partial_{\tau}\big( M_4\bar{K}^{ij} \big) D_i \pi D_j\pi + 2 D_i \big(M_4\bar{K}^{ij}\big) \dot{\pi} D_j \pi - m_4(\tau,\rho) \pi^2 \Big]\,.
\end{align}
Since~$M_4$ is a function of~$(\tau, \rho)$, and using~\eqref{eqn:bgK}, there is no~$\dot{\pi} \partial_A \pi$ mixing. Therefore, without further specifying the geometry, stability along the angular direction requires
\begin{align}
\partial_{\tau}\left( M_4\frac{\partial_{\tau}(r^2)}{r^4} \right)>0\,.
\end{align}
As mentioned earlier, other contributions to the~$\dot{\pi}^2$ term are of the form~$A |g^{\tau\tau}| \dot{\pi}^2$, with~$A>0$. The Hamiltonian analysis then requires that
\begin{align}
\partial_{\tau}\big( M_4\bar{K}^{\rho\rho} \big) > - \frac{\Big[ D_i \big(M_4\bar{K}^{i \rho}\big) \Big]^2 }{A|g^{\tau\tau}|}\,.
\end{align}
It follows that the two eigenvalues of the kinetic matrix in the~$(\tau,\rho)$ directions are given by
\begin{align}
\lambda_1 = \frac{1}{2} \Big( A |g^{\tau\tau}| ( 2+ \sigma) + \partial_{\tau} \big(M_4 \bar{K}^{\rho\rho}\big) \sigma \Big) \,; \quad \lambda_2 = -\frac{1}{2} \Big( \partial_{\tau} \big(M_4 \bar{K}^{\rho\rho}\big) ( 2+ \sigma) + A |g^{\tau\tau}| \sigma \Big) \,,
\end{align}
where
\begin{equation*}
\sigma \equiv \left[1+ \frac{4 \left( D_i \big(M_4\bar{K}^{i \rho }\big) \right)^2}{\left(A |g^{\tau\tau}| + \partial_{\tau}(M_4 \bar{K}^{\rho\rho} )\right)^2}\right]^{\frac{1}{2}} -1 \ge 0 \, .
\end{equation*}
\item \underline{$M_9(\tau, \rho)\delta K^2 + M_{10}(\tau,\rho) \delta K_{\mu\nu} \delta K^{\mu\nu}$}: For these operators, it is sufficient to restrict our analysis to the combination
\begin{align}
M_9 \left(\delta K^2 - \delta K_{\mu\nu} \delta K^{\mu\nu}\right) \,,
\end{align}
since any other choice of relative coefficients differs from this choice only by higher-derivative terms in~$\pi$. We then have
\begin{align}
&\int \rd^4x \sqrt{-g}\, M_9\left(\delta K^2 - \delta K_{i j} \delta K^{ij} \right) \nonumber \\
&~~~\rightarrow \int \rd^4x \sqrt{-g}\bigg[ \left(M_9 \hat{R}^{ij} + D^iD^jM_9- \left(\hat{\square}M_9+ 2 M_9 \dot{\bar{K}} \right) h^{ij} + 2M_9 \dot{\bar{K}}^{ij} \right) D_i \pi D_j \pi - m_9(\tau,\rho) \pi^2\bigg] \,.
\end{align}
In the absence of any $\dot{\pi}\pi'$ mixing, stability of this operator alone simply requires that the coefficient matrix of $D_i \pi D_j \pi$ be negative definite,
\begin{align}
\left(M_9 \hat{R}^{ij} + D^iD^jM_9- \left(\hat{\square}M_9+ 2 M_9 \dot{\bar{K}} \right) h^{ij} + 2M_9 \dot{\bar{K}}^{ij} \right) < 0 \,.
\end{align}
\item \underline{$M_{11}(\tau,\rho)\bar{K}_{\mu\nu}\delta K \delta K^{\mu\nu} +M_{12}(\tau,\rho) \bar{K}_{\mu\nu}\delta K^{\rho \mu} \delta K^{\nu}_{\;\; \rho} $}: A similar pattern emerges for these operators. Only the combination
\begin{align}
M_{11} \left( \bar{K}_{\mu\nu}\delta K \delta K^{\mu\nu} -\bar{K}_{\mu\nu}\delta K^{\rho \mu} \delta K^{\nu}_{\;\; \rho} \right)
\end{align}
generates second-order equations for $\pi$. The associated contribution to the action is then
\begin{align}
&\int \rd^4x \sqrt{-g} M_{11}(r) \left( \bar{K}_{\mu\nu}\delta K \delta K^{\mu\nu} -\bar{K}_{\mu\nu}\delta K^{\rho \mu} \delta K^{\nu}_{\;\; \rho} \right) \nonumber \\
& ~~~\rightarrow \int \rd^4x \sqrt{-g}\bigg[ \bigg( D_kD^i \left(M_{11}\bar{K}^{k j} \right)- \frac{1}{2}D_k D_l \left(M_{11} \bar{K}^{kl} \right)h^{ij} -\frac{1}{2} D_k D^k \left(M_{11}\bar{K}^{ij} \right)+ M_{11}\bar{K}^{k\ell}\hat{R}^{\; \,i \;\, j}_{k\;\ell} \nonumber \\
&\quad ~~~~~~~~~~~~~~~~~~~~~ + M_{11} \dot{\bar{K}} \bar{K}^{ij} + M_{11} \bar{K}^{kl}\dot{\bar{K}}_{kl} h^{ij} - 2 M_{11}\bar{K}^{k i} \dot{\bar{K}}_{k}^{\;\; j} \bigg)D_i \pi D_j \pi - m_{11}(\tau,\rho) \pi^2 \bigg] \,.
\end{align}
The stability condition is likewise straightforward, since there is no $\dot{\pi}\pi'$ mixing.
\item \underline{$M_{13}(\tau,\rho) \delta g^{\tau\tau} \delta \hat{R}$}: This operator gives rise to the following terms in the~$\pi$ action:
\begin{align}
&\int \rd^4x \sqrt{-g} M_{13} \delta g^{\tau\tau} \delta \hat{R} \nonumber \\
& ~~~\rightarrow \int \rd^4x \sqrt{-g}\bigg[ 4 \left( M_{13}\left(h^{k\ell} \dot{\hat{\Gamma}}^m_{\ell m}-h^{\ell m} \dot{\hat{\Gamma}}^k_{\ell m} \right) + D_i\left(M_{13}\bar{K}^{ik}\right) - D^k\left(M_{13}\bar{K}\right) \right)\dot{\pi} D_k\pi \nonumber \\
&\quad ~~~~~~~~~~~~~~~~~~~~~ + 2 \partial_{\tau}\left( M_{13}\bar{K} h^{ij}- M_{13} \bar{K}^{ij} \right) D_i\pi D_j\pi - m_{13}(\tau,\rho) \pi^2 \bigg] \,,
\end{align}
where we have used the fact that~$\dot{\bar{h}}_{ij} = 2 \bar{K}_{ij}$. Again, it is not hard to see that there is no mixing of the form~$\dot{\pi} \partial_A \pi$. The kinetic matrix can be diagonalized in the usual way, but the resulting expression is lengthy and not particularly informative, so we do not display it here. In Appendix~\ref{app:SdSlimit} we restrict to the Schwarzschild-de Sitter limit in order to extract explicit constraints.
\item \underline{$M_{14}(\tau,\rho) \bar{K}_{\mu\nu} \delta g^{\tau\tau} \delta \hat{R}^{\mu\nu}$}: The quadratic $\pi$ action generated from this operator is
\begin{align}
& \int \rd^4x \sqrt{-g} M_{14}\bar{K}_{\mu\nu} \delta g^{\tau\tau} \delta \hat{R}^{\mu\nu} \nonumber \\
& ~~~\rightarrow \int \rd^4x \sqrt{-g} \bigg[ 2 \bigg( M_{14}\left( \dot{\hat{\Gamma}}^m_{km}\bar{K}^{ik} + \dot{\hat{\Gamma}}^m_{\ell k}\bar{K}^{\ell}_{\;\,m} h^{ki} - \dot{\hat{\Gamma}}^m_{k\ell} h^{k\ell} \bar{K}_{m}^{\;\; i} - \dot{\hat{\Gamma}}^i_{k\ell }\bar{K}^{k\ell} \right) \nonumber \\
&\quad~~~~~~~~~~~~~~~~~~~~~ + D_k \left(2M_{14} \bar{K}^{i\ell} \bar{K}_\ell^{\;\,k} -M_{14}\bar{K}K^{ik}\right) - D^i \left(M_{14} \bar{K}^{k\ell}\bar{K}_{k\ell} \right) \bigg) \dot{\pi} D_i\pi \nonumber \\
& \quad~~~~~~~~~~~~~~~~~~~~~ + \partial_{\tau}\left( M_{14} \bar{K}^{k\ell}\bar{K}_{k\ell}h^{ij} + M_{14}\bar{K} \bar{K}^{ij} - 2 M_{14}\bar{K}^{i\ell} \bar{K}_{\ell}^{\;\, j}\right) D_i\pi D_j\pi - m_{14}(\tau,\rho)\pi^2 \bigg] \,.
\end{align}
After evaluating the explicit form of the~$\hat{\Gamma}$'s, one finds that the only kinetic mixing is of the form $\dot{\pi} \pi'$.
\end{itemize}
In principle, one should consider the action with all of these terms together in order to derive the full set of constraints. However, the resulting expression, which involves many free functions, is not particularly illuminating. We stress again that an instability in any individual term signals a potential instability of the entire theory. In Appendix~\ref{app:SdSlimit}, we collect the above constraints in the Schwarzschild-de Sitter limit.
\section{Odd-parity perturbations}
In this Section we analyze the effective action~\eqref{eqn:EFT} for odd-parity perturbations for~$\ell \ge 2$. The perturbed metric~$\delta g^{\rm odd}_{\mu\nu}$ is expanded in terms of the standard odd-parity vector and tensor spherical harmonics as
\begin{align}
\delta g^{\rm odd}_{\mu\nu} = \sum_{\ell m}\begin{pmatrix}
0 & 0 & h_0^{\ell m} \epsilon_{A}^{\;\;C} \nabla_C \\
0 & 0 & h_1^{\ell m} \epsilon_{A}^{\;\;C} \nabla_C \\
h_0^{\ell m} \epsilon_{B}^{\;\;C} \nabla_C &h_1^{\ell m} \epsilon_{B}^{\;\;C} \nabla_C & h_2^{\ell m} \epsilon_{(B}^{\quad C} \nabla_{A)}\nabla_C
\end{pmatrix} Y_{\ell}^m(\theta ,\varphi) \,,
\end{align}
where~$\nabla_A$ is the covariant derivative on the 2-sphere associated with~$\gamma_{AB}$. As usual, we work in the Regge-Wheeler gauge
\begin{align}
h_2^{\ell m}=0\, ,
\end{align}
which is justified since the EFT is invariant under diffeomorphisms on the 2-sphere. Note that $h_2$ is exactly zero for $\ell =1$.
Odd perturbations on the 2-sphere are only affected by a restricted set of operators, since odd-parity contributions to~$\delta g^{\tau \tau}$ and~$\delta K$ are non-zero only starting at second order. For the same reason, odd sector perturbations correspond only to tensor perturbations. Therefore, at linear level the relevant EFT is
\begin{align} \label{eqn:oddEFT}
S_{(2)} = \int \rd ^4x \sqrt{-g}\bigg[& \frac{M_{\rm Pl}^2}{2}R - \Lambda(\tau,\rho) + \alpha(\tau,\rho) g^{\tau\tau} + \beta(\tau,\rho) \bar{K}_{\mu\nu}K^{\mu\nu} \nonumber \\
& + M_{10}(\tau,\rho) \delta K_{\mu\nu}\delta K^{\mu\nu} + M_{12}(\tau,\rho)\bar{K}_{\mu\nu} \delta K^{\rho \mu}\delta K^{\nu}_{\;\,\rho} \bigg]\,,
\end{align}
where once again we have chosen to work in the Einstein frame, in which~$M_1(\tau, \rho) = M_{\rm Pl}^2$. Note once more that we leave the coefficients as general functions of~$(\tau,\rho)$, specializing to functions of~$r(\tau,\rho)$ only when the analysis is restricted to the Schwarzschild-de Sitter space-time.
In general, the action for~$h_0$ and~$h_1$ for a specific value of~$(\ell,m)$ takes the form
\begin{align}
S = \int \rd \tau \rd \rho \left[ c \left( h'_0 - \dot{h}_1 + a h_0 + b h_1 \right)^2 + \frac{1}{2}k_{00} h_0^2 + k_{01}h_0 h_1 + \frac{1}{2} k_{11} h_1^2 \right]\,,
\end{align}
where all the coefficient functions $c$, $a$, $b$, $k_{00}$, $k_{01}$ and $k_{11}$ are functions of $(\tau, \,\rho)$. Restricting to the EFT~\eqref{eqn:oddEFT}, these are then explicitly given by
\begin{align} \label{eqn:oddcoeff}
c &= \frac{M_{\rm Pl}^2}{2 \sqrt{F}}+\frac{M_{10}}{2 \sqrt{F}}+\frac{M_{12} \left(r^2 F\right)\dot{} }{8 F^{3/2} r^2} \,;\nonumber \\
a &= -2 \frac{r'}{r}\,;\nonumber \\
b&=\frac{2 F \dot{r} \beta +(2 F \dot{r}-r \dot{F}) M_{\rm Pl}^2}{4 c F^{3/2} r}+\frac{\dot{F}}{2 F}+\frac{\dot{r}}{r}\,;\nonumber \\
k_{00} &=\sqrt{F} (\alpha -\Lambda )+\frac{M_{\rm Pl}^2 \left(F^2 \left(j^2+2 \dot{r}^2\right)-2 F \left(r \left(2 r''-\dot{F} \dot{r}\right)+r'^2\right)+2 r F' r'\right)}{F^{3/2} r^2} \nonumber \\
&\quad\, + M_{10} \frac{\sqrt{F} \left(j^2-2\right)}{r^2} +M_{12}\frac{\sqrt{F} \dot{r} \left(j^2-2\right)}{r^3} \,;\nonumber \\
k_{01}& = \frac{M_{\rm Pl}^2 \left(4 F \dot{r}'-2 \dot{F} r'\right)}{F^{3/2} r}+\frac{r^2 \left(\dot{F}\left(\beta F'-F \beta'\right)-\beta F \dot{F}'\right)+4 \beta F^2 r' \dot{r}-2 \beta F r \dot{F}r'}{2 F^{5/2} r^2}\,;\nonumber \\
k_{11} &=\frac{ 3 r^2 \dot{F}^2-8 F r \dot{F} \dot{r}+12F r'^2-6 F^2 \left(j^2+4 r \ddot{r}\right) }{6 F^{5/2} r^2}M_{\rm Pl}^2 \nonumber \\
&\quad \, + \frac{ 6 F r (\alpha +\Lambda )-8 M_{10} \dot{F} \dot{r}+\beta \left(6 \dot{F} \dot{r}+3 r \ddot{F}\right)+3 r \dot{\beta} \dot{F} }{6 F^{3/2}r} \nonumber \\
&\quad \,
-\frac{\left(2 F \dot{r} \beta +(2 F \dot{r}-r \dot{F}) M_{\rm Pl}^2\right)^2}{8 c F^3 r^2}+c \left(\frac{2 \dot{F} \dot{r}}{3 F r}-\frac{\dot{F}^2}{2 F^2}-\frac{2 \dot{r}^2}{r^2}\right)\,,
\end{align}
where $F(\tau,\rho) = 1- f(r)$ and $j^2 = \ell^2+\ell$. The above expressions apply to a general case where $\Lambda,\, \alpha,\, \beta, \, M_{10},\, M_{12} $ are general functions of $(\tau,\, \rho)$.
One can identify the relevant degrees of freedom by integrating in an auxiliary field~$\Psi$ as follows:
\begin{align}
S= \int \rd \tau \rd \rho & \bigg[ c \left( h'_0 - \dot{h}_1 + a h_0 + b h_1 \right)^2
- c\left( \frac{1}{c}\Psi - (h'_0 - \dot{h}_1 + a h_0 + b h_1) \right)^2 \nonumber \\
& ~~~ + \frac{1}{2}k_{00} h_0^2 + k_{01}h_0 h_1 + \frac{1}{2} k_{11} h_1^2 \bigg] \,.
\end{align}
The equations of motion for $h_0$ and $h_1$ become,
\begin{align}
\begin{pmatrix}
k_{00} & k_{01} \\
k_{01} & k_{11}
\end{pmatrix} \begin{pmatrix}
h_0 \\ h_1
\end{pmatrix} = \begin{pmatrix}
2 \Psi' - 2a \Psi \\
-2 \dot{\Psi} - 2b \Psi
\end{pmatrix},
\end{align}
which can then be expressed purely in terms of $\Psi$ in the action,
\begin{align}
S & = \int \rd \tau \rd \rho \Bigg[ -\frac{1}{2} \begin{pmatrix}
2 \Psi' - 2a \Psi \\ -2 \dot{\Psi} - 2b \Psi
\end{pmatrix}^{\rm T} \begin{pmatrix}
k_{00} & k_{01} \\
k_{01} & k_{11}
\end{pmatrix}^{-1} \begin{pmatrix}
2 \Psi' - 2a \Psi \\ -2 \dot{\Psi} - 2b \Psi
\end{pmatrix} - \frac{1}{c} \Psi^2 \Bigg] \nonumber \\
& = \int \rd \tau \rd \rho \Bigg[ \frac{2}{k_{01}^2 -k_{00} k_{11}} \left(k_{00} \dot{\Psi}^2 + 2k_{01} \dot{\Psi} \Psi' + k_{11} \Psi'^2 \right) - \frac{1}{2} m_{\Psi} \Psi^2 \Bigg] \,.
\end{align}
Stability of the Hamiltonian requires that
\begin{align}
k_{00} > 0, \quad k_{11} < \frac{k_{01}^2 }{k_{00}}\,.
\end{align}
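These conditions follow from the same Hamiltonian criterion used in the scalar sector: for a Lagrangian $L = a \dot{\Psi}^2 + 2 b \dot{\Psi} \Psi' - c \Psi'^2$, stability requires $a>0$ and $c > -b^2/a$. Reading off
\begin{align}
a = \frac{2 k_{00}}{k_{01}^2 - k_{00}k_{11}} \,,\qquad b = \frac{2 k_{01}}{k_{01}^2 - k_{00}k_{11}} \,,\qquad c = -\frac{2 k_{11}}{k_{01}^2 - k_{00}k_{11}} \,,
\end{align}
and taking the branch with $k_{00}>0$ (which forces $k_{01}^2 - k_{00} k_{11} > 0$), the condition $c > -b^2/a$ reduces to $k_{11} < k_{01}^2/k_{00}$.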
The kinetic terms can be diagonalized as usual, with the corresponding eigenvalues given by
\begin{align}
\frac{1}{2}\left( k_{00}+ k_{11} \pm (k_{00} -k_{11}) \left( 1+ \frac{4 k_{01}^2}{(k_{00} -k_{11})^2}\right)^{\frac{1}{2}}\right).
\end{align}
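As a consistency check, these eigenvalues satisfy
\begin{align}
\lambda_+ \lambda_- = k_{00} k_{11} - k_{01}^2 \,, \qquad \lambda_+ + \lambda_- = k_{00} + k_{11} \,,
\end{align}
as required for the determinant and trace of the kinetic matrix.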
However, the resulting stability constraints are not transparent in general, owing to the complicated forms of the coefficient functions~\eqref{eqn:oddcoeff}.
\subsection{Schwarzschild-de Sitter limit}
We now restrict the analysis to the Schwarzschild-de Sitter case, in order to obtain useful constraints. As mentioned in Sec.~\ref{sec:isometry}, the background isometries constrain the EFT coefficients to be functions of~$r(\tau,\rho)$,
\begin{align}
\Lambda(\tau,\rho) & \to \Lambda(r)\,; \nonumber \\
\alpha(\tau,\rho) &\to \alpha(r)\,; \nonumber \\
\beta(\tau,\rho) &\to \beta(r)\,; \nonumber \\
M_{10}(\tau,\rho) &\to M_{10}(r)\,; \nonumber \\
M_{12}(\tau,\rho) &\to M_{12}(r)\,.
\end{align}
One can show that, under the condition $g(r) = f(r)$, together with the background equations of motion~\eqref{eqn:background},
\begin{align}
k_{01} =0\,.
\end{align}
This holds even without using the explicit form of~$f(r)$. Therefore, stability requires that
\begin{align} \label{eqn:kconstraints}
k_{00} > 0\,;\qquad
k_{11} < 0 \,,
\end{align}
with
\begin{align}
k_{00} & = \frac{\sqrt{1-f} \left(j^2-2\right) \left(r M_{\rm Pl}^2+rM_{10} -\sqrt{1-f} M_{12}\right)}{r^3}\,; \nonumber \\
k_{11} & = \frac{\left[12 (1-f)^2+ r f_r \left( 3r f_r+4 (1-f)\right)\right] \left[M_{12} \left(2(1-f)-rf_r\right) -4 r\sqrt{1-f}\left(M_{10}+ M_{\rm Pl}^2\right)\right]}{48 (1-f)^2 r^3} \nonumber \\
&\quad -\frac{\left(2 \beta (1-f)+\left(2 (1-f)+r f_r\right) M_{\rm Pl}^2\right)^2}{(1-f) r \left(4 r\sqrt{1-f}\left(M_{10}+ M_{\rm Pl}^2\right)-M_{12} \left(2(1-f)-rf_r\right)\right)} \nonumber \\
&\quad +\frac{M_{\rm Pl}^2 \left[6 (1-f) (2 (1-f)-j^2+2)+ r f_r \left( 3 rf_r-8 (1-f)\right)\right]+4 (1-f) \left(3 \beta (1-f)+ 2 r f_r M_{10} \right)}{6 (1-f)^{3/2} r^2} \,,
\end{align}
where we have defined~$j^2 \equiv \ell (\ell + 1)$. In the above we have used the background equations to eliminate~$\Lambda$,~$\alpha$ and~$\beta_r$
in terms of~$f$ and~$\beta$. Furthermore, one finds that $\alpha=\beta=0$ and $\Lambda= 2 M_{\rm Pl}^2 \lambda$ in the Schwarzschild-de Sitter limit, with $f(r) =g(r) = 1-\frac{r_s}{r} - \frac{\lambda}{3} r^2$. Therefore, the radial sound speed for perturbations in the Schwarzschild-de Sitter limit is
\begin{align} \label{eqn:cssq}
c_{\rho}^2 &= -\frac{g^{\tau\tau}k_{11} }{ g^{\rho\rho} k_{00}} \nonumber \\
&= \frac{1}{48 (1-f)^{3/2} (j-2) \left(4 r \sqrt{1-f} \left( M_{\rm Pl}^2+M_{10} \right)-M_{12} \left(2 (1-f) -r f_r\right)\right) \left(r \left(M_{\rm Pl}^2+M_{10}\right)-\sqrt{1-f} M_{12}\right) }\nonumber\\
& \qquad \times \bigg[ -M_{12}^2 \left(2 (1-f)- r f_r\right)^2 \left(12 (1-f)^2+3 r^2f_r^2 +4 r (1-f) f_r \right)-48M_{10}^2 (1-f) r^2 \left( 2 (1-f) - r f_r\right)^2 \nonumber \\
&\quad \qquad -16 (1-f)^{3/2} M_{12} r \left(2 (1-f) -r f_r\right) \left(6 \beta (1-f)+M_{\rm Pl}^2 \left(2 r f_r-3 j+6\right)\right) \nonumber \\
&\quad \qquad -192 (1-f)^2 r^2 \left(\beta ^2 (1-f) + \beta r f_r M_{\rm Pl}^2+(j-2) M_{\rm Pl}^4\right) \nonumber \\
&\quad \qquad +8 M_{10} M_{12} r \sqrt{1-f} \left(24 (1-f)^3-20 r f_r (1-f)^2 -3r^3 f_r^3 +10r^2 (1-f) f_r^2\right) \nonumber \\
& \quad \qquad +192 (1-f)^2 M_{10} r^2 \left(2 \beta (1-f)-M_{\rm Pl}^2 \left(j-2 - 2 r f_r\right)\right) \bigg]\, .
\end{align}
The dependence on~$j^2=\ell(\ell+1)$ of the sound speed is due to the unknown coefficients~$M_{10}$ and~$M_{12}$. In a Horndeski theory with~${\cal L}_2$ and~${\cal L}_4$ only,\footnote{We are using the convention that
\begin{align}
{\cal L}_2 &= P(X), \nonumber \\
{\cal L}_4 &=G_4(X) R + G_{4,X}(X)\big( (\square \phi)^2 - \phi_{\mu\nu}\phi^{\mu\nu} \big),
\end{align}
where $X= - \frac{1}{2} \partial_{\mu}\phi \partial^{\mu} \phi$.
}
$P(X)$, $G_4(X)$ and their derivatives evaluated on the background have to satisfy the following relations on the Schwarzschild-de Sitter geometry,
\begin{align}
\bar{P} +2 \lambda \big(\bar{G}_4 - m^4\bar{G}_{4,X}\big) =0 \,; \nonumber \\
\bar{P}_{,X} + 2 \lambda \big(\bar{G}_{4,X} + m^4\bar{G}_{4,XX}\big) =0 \,.
\end{align}
After using these relations, one finds that~$c_s^2 $ coming from Horndeski ${\cal L}_2$ and ${\cal L}_4$ is simply
\begin{align}
c_s^2 = \frac{\bar{G}_4}{\bar{G}_4 - m^4\bar{G}_{4,X}}.
\end{align}
Note that a quadratic DHOST theory gives the same sound speed in the odd sector~\cite{Takahashi:2021bml}. The $\ell$ dependence in \eqref{eqn:cssq} cancels out and yields exactly the Horndeski expression if one chooses $M_{12}=0$ and takes $M_{10}$ to be the corresponding coefficient from Horndeski~${\cal L}_4$. Phenomenologically, apart from the stability constraints \eqref{eqn:kconstraints}, there are also stringent observational restrictions on the propagation speed of gravitational waves from neutron star mergers~\cite{TheLIGOScientific:2017qsa,GBM:2017lvd,Monitor:2017mdv}. In such cases, the relevant gravitational-wave perturbation is the lowest $\ell =2$ mode, given the current experimental signal-to-noise ratio.
\section{Discussion}
Effective field theory is a powerful tool for the study of perturbative systems or of the macroscopic behavior of theories at low energies, which is a typical situation that arises in many different areas of physics. The robustness of EFT allows us to investigate a large class of theories, in our case modified gravity theories, using a unified framework. Furthermore, it has a wide range of applicability in extreme limits of gravitational systems, such as in cosmological inflation and in black holes. One particularly interesting application has been to scalar-tensor theories which, over the past several decades, have drawn significant attention due to their rich phenomenological applications in both early and late-time cosmology. While one well-studied application has been to the physics of inflation, the technique can also be used to study other interesting approaches to the early universe, including those in which a time-dependent scalar profile drives cosmological expansion through a conformal coupling with matter. Scalar profiles of this type provide an interesting connection between cosmological evolution and astrophysical black holes, since black holes with non-trivial time-like hair emerge generically as allowed solutions in scalar-tensor theories.
In this work we have investigated the EFT of perturbations around such black holes with time-like scalar hair. Building on the construction in \cite{Mukohyama:2005rw}, we argue that the EFT coefficients must satisfy certain constraints imposed by the background isometries. When the underlying scalar-tensor theory enjoys a shift symmetry in~$\phi$, the combination of this symmetry and time-translation invariance (in Schwarzschild-type coordinates) remains unbroken and constrains the EFT coefficient functions, even though the time-like scalar profile spontaneously breaks each symmetry individually. We have constructed a general set of operators, up to second order in derivatives, which are compatible with the aforementioned symmetries. Motivated by the fact that gradient instabilities appear in the scalar sector in both the Horndeski and DHOST theories~\cite{Khoury:2020aya,Takahashi:2021bml}, we have performed a stability analysis in the decoupling limit, which we argue should capture the essential properties of scalar perturbations. In addition, we have shown that odd-sector perturbations can be comprehensively analyzed despite the appearance of many unknown functions, since only a few terms in the action contribute to odd-parity perturbations. This analysis also provides hints as to how we might construct potentially stable black hole/wormhole solutions.
In order to complete the story, a full analysis of even-parity perturbations is required. Such a task promises to be technically challenging, despite the fact that a subset of the EFT operators (essentially those corresponding to the quadratic DHOST terms, for instance $\delta K^2$) can be analyzed using the technique of~\cite{Takahashi:2021bml}. However, inclusion of operators of the type $\bar{K}_{\mu\nu}\delta K^{\rho \mu} \delta K^{\nu}_{\;\; \rho}$ would require different techniques in order to identify the correct propagating degrees of freedom and to perform the stability analysis.
It is also worth commenting on higher-derivative terms in the EFT. The inclusion of higher-derivative operators does not necessarily mean higher-order equations of motion for perturbations. In fact, we expect that a similar story to the DHOST analysis should play out for perturbations, so that a degeneracy condition between coefficient functions of higher-derivative operators can be imposed, reducing the equation of motion to second order. Furthermore, it is possible for higher derivative operators to serve as ``Scordatura terms" \cite{Motohashi:2019ymr,Gorji:2020bfl,DeFelice:2022xvq}, which contribute to a non-vanishing dispersion relation in such a way as to solve the strong coupling problem for some solutions.
In future work, we intend to extend this analysis to the case of rotating black holes, which comprise the majority of astrophysical black holes observed through current and future instruments. The EFT for slowly rotating black holes with time-independent scalar hair has already been constructed~\cite{Hui:2021cpm}, while interesting background solutions for rotating black holes with time-like hair have been found in DHOST models~\cite{Charmousis:2019vnf,BenAchour:2020fgy}. The EFT description of perturbations around these objects remains to be constructed, and may be complicated by the fact that the background construction relies on the use of disformal transformations. It is an interesting future exercise to investigate whether techniques similar to those used in this paper can be applied to perturbations and to the construction of the EFT in these rotating backgrounds.
\bigskip
\goodbreak
\centerline{\bf Acknowledgements}
While this paper was in preparation, we became aware of work by Mukohyama, Takahashi and Yingcharoenrat on a similar subject. Their paper appeared concurrently on arXiv, and we thank them for their correspondence. We thank Hayato Motohashi, Luca Santoni, Kazufumi Takahashi and Enrico Trincherini for useful discussions. This work is supported in part by the US Department of Energy (HEP) Award DE-SC0013528, NASA ATP grant 80NSSC18K0694, and by the Simons Foundation Origins of the Universe Initiative. T.N. is supported in part by JSPS KAKENHI Grant No.~20H01902 and No.~22H01220, and MEXT KAKENHI Grant No.~21H00075, No.~21H05184 and No.~21H05462.
\appendix
\section{Background equations of motion} \label{app:bgEOM}
The tadpole terms of~\eqref{eqn:EFT},
\begin{align}
S_{\rm tadpole} = \int\rd^4x \sqrt{-g}\Big[M_1 R - \Lambda + \alpha g^{\tau\tau} + \beta \bar{K}_{\mu\nu} K^{\mu\nu} \Big]\,,
\end{align}
generate the background equation of motion,
\begin{align}
\left( \bar{R}_{\mu\nu} - \frac{1}{2} \bar{g}_{\mu\nu} \bar{R} - \bar{\nabla}_{\mu}\bar{\nabla}_{\nu} + \bar{g}_{\mu\nu} \bar{\square} \right)M_1 = \bar{T}_{\mu\nu}\,,
\end{align}
where
\begin{align}
\bar{T}_{\mu\nu} &= \Big( \alpha g^{\tau\tau} - \Lambda + \beta \bar{K}_{\rho}^{\; \sigma}\bar{K}_{\sigma}^{\; \rho} \Big) \bar{g}_{\mu\nu} - 2 \alpha \delta_{\mu}^{\tau} \delta_{\nu}^{\tau} - 2 \beta \bar{K}_{\mu}^{\;\rho} \bar{K}_{\rho \nu} + \beta \bar{K}_{\rho}^{\; \sigma}\bar{K}_{\sigma}^{\; \rho} n_{\mu} n_{\nu} \nonumber \\
&\qquad + \bar{\nabla}_{\lambda} \Big( \beta \bar{K}_{\mu} ^{\; \lambda} n_{\nu} + \beta \bar{K}_{\nu} ^{\; \lambda} n_{\mu} - \beta \bar{K}_{\mu\nu} n^{\lambda} \Big) \,.
\end{align}
Here we collect the useful equations of motion evaluated on the ansatz~\eqref{eqn:glemaitre}, with $r$-dependent coefficient functions and constant $M_1 = M_{\rm Pl}^2$,
\begin{align}\label{eqn:background}
\frac{\delta}{\delta g^{\tau\tau} } : & \quad 0=r^2 (\alpha -\Lambda )+2 M_1\frac{ r (1-f) g f_r +f^2 (1-g)+ r f g_r }{f^2} \,; \nonumber \\
\frac{\delta}{\delta g^{\tau\rho} } : & \quad 0= 4 M_1 r\frac{ (f-1) \left(g f_r-f g_r\right)}{f}+g r^2 f_r \beta _r \nonumber \\
&\qquad \quad +\beta \left(\frac{1}{2} r^2 \left(\frac{(2 f-1) g f_r^2}{(1-f) f}+f_r g_r+2 g f_{rr}\right)+2 g r f_r+4 (1-f) g\right)\,;\nonumber \\
\frac{\delta}{\delta g^{\rho\rho} } : & \quad 0= M_1 \left(\frac{4 r \left((1-f) f g_r-g f_r \right)}{f}+ 4 f (1-g)\right)+r^2 \left(g f_r \beta _r-2 f (\alpha +\Lambda )\right)\nonumber \\
&\qquad \quad +\beta \left(\frac{1}{2} r^2 \left(\frac{g f_r^2}{(f-1) f}+f_r g_r+2 g f_{rr}\right)+2r g f_r + 4 (1-f) g\right)\,; \nonumber \\
\frac{\delta}{\delta g^{\theta\theta} } \mbox{ or } \frac{\delta}{\delta g^{\phi\phi} } : & \quad 0= M_1 \left( 2 r^2\frac{ f f_r g_r- g f_r^2 +2 f g f_{rr} }{f}+4 r (f_r g+ fg_r)\right) +4 r^2f (\alpha +\Lambda ) \nonumber \\
&\qquad \quad +\beta \left(2 r \frac{(1-f) f g_r-(1+f) g f_r }{f}+\frac{ r^2 g f_r^2}{(f-1) }+4 g(1-f)\right)+4 r \beta_r (1-f) g \,.
\end{align}
These are used in our analysis to re-express~$\Lambda$,~$\alpha$,~$\beta$ and~$\beta_r$ in terms of $f(r)$, $g(r)$ and their derivatives.
\section{The St\"uckelberg procedure}
This Appendix collects the complete expressions for the St\"uckelberg transformations that introduce the field~$\pi$.
\begin{align}
g^{\tau\tau} &\to g^{\tau\tau}(1 + 2\dot{\pi} + \dot{\pi}^2) + 2 g^{a \tau} (\partial_{a}\pi + \dot{\pi} \partial_a \pi) + g^{ab} \partial_a \pi \partial_b \pi, \nonumber\\
g^{ a \tau} & \to g^{a \tau } (1+ \dot{\pi}) + g^{ab}\partial_b\pi, \nonumber \\
g^{ab} &\to g^{ab}.
\end{align}
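These rules follow from treating~$\pi$ as the St\"uckelberg field of the broken time diffeomorphism, $\tau \to \tau + \pi(x)$. Since the inverse metric transforms as a tensor,
\begin{align}
g^{\tau\tau} \to \partial_{\mu}(\tau + \pi)\, \partial_{\nu}(\tau+\pi)\, g^{\mu\nu} = g^{\tau\tau}\left(1+\dot{\pi}\right)^2 + 2 g^{a\tau}\left(1+\dot{\pi}\right)\partial_a \pi + g^{ab}\partial_a \pi \partial_b \pi \,,
\end{align}
which reproduces the first line above upon expanding $\left(1+\dot{\pi}\right)^2$; the transformation of~$g^{a\tau}$ follows in the same way.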
The derivatives $\partial_{\mu}$ transform as
\begin{align}
\partial_{\tau} &\to (1- \dot{\pi} + \dot{\pi}^2 ) \partial_{\tau} + {\cal O}(\pi^3)\,; \nonumber \\
\partial_i &\to \partial_i - (1-\dot{\pi})\partial_i \pi \partial_{\tau}+ {\cal O}(\pi^3)\,.
\end{align}
To calculate the transformation of~$\hat{R}_{ij}$ in~\eqref{eqn:curvpi}, we have used
\begin{align}
\hat{R}_{ij} &\to \hat{R}_{ij} - \partial_k \pi \dot{\hat{\Gamma}}^k_{ij} +\partial_i \pi \dot{\hat{\Gamma}}^k_{jk} + D_k \left( \delta \hat{\Gamma}^k_{ij} \right) - D_i \left( \delta \hat{\Gamma}^k_{kj} \right) \nonumber \\
D_k \dot{h}_{li} & =\dot{\hat{\Gamma}}^m_{kl} h_{mi} + \dot{\hat{\Gamma}}^m_{ki} h_{lm} \,.
\end{align}
\section{Decoupling limit in Schwarzschild-de Sitter} \label{app:SdSlimit}
It is convenient to collect the results of Sec.~\ref{sec:pistability} restricted to the Schwarzschild-de Sitter geometry. In this case the metric is given by~\eqref{eqn:gtr} with~$f(r) =g(r) = 1-\frac{r_s}{r} - \frac{\lambda}{3} r^2$, where~$0<\lambda < \frac{4}{9 r_s^2}$ is required for the two horizons~$r_-,r_+$ to exist, and the inner horizon~$r_-$ satisfies~$1<\frac{r_-}{r_s}<\frac{3}{2}$. The individual kinetic terms for $\pi$ are:
\begin{itemize}
\item \underline{$M_3(\tau,\rho)\, \delta g^{\tau\tau} \delta K$}:
\begin{align}
\frac{2M_{3,r}}{\sqrt{F}}\dot{\pi}\pi'+ \frac{Fr^2 M_{3,r} +M_3 (3 r_s-2 rF )}{r^2F^{3/2} }\pi'^2 + \frac{\sqrt{F} \left(r M_{3,r} -2 M_3\right)}{r^3}\gamma^{AB}\partial_A\pi \partial_B\pi
\end{align}
where $F(r)= \frac{r_s}{r}+\frac{\lambda}{3}r^2$, and $\left(\frac{3}{2}\right)^{2/3} r_s^{2/3}\lambda^{1/3} <F<1$ between the two horizons.
\item \underline{$M_4(\tau,\rho) \bar{K}_{\mu\nu}\delta g^{\tau\tau} \delta K^{\mu\nu}$}:
\begin{align}
&\quad \frac{2 F r^2 (3 r_s-2 F r) M_{4,r}+9 r_s^2 M_4}{2 F^2 r^4}\dot{\pi}\pi'+\frac{M_4\left(8 F^2 r^2-36 F r r_s+27 r_s^2\right)+2 F r^2 (3 r_s-2 F r) M_{4,r}}{4 F^2 r^4}\pi'^2\nonumber \\
&+ \frac{(4 F r +3 r_s )M_4-2 F r^2 M_{4,r}}{2 r^5}\gamma^{AB}\partial_A\pi \partial_B\pi
\end{align}
\item \underline{$M_9 \left(\delta K^2 - \delta K_{\mu\nu} \delta K^{\mu\nu}\right) $}:
\begin{align}
&\quad \frac{M_{9} \left(-4 F^2 r^2+18 F r r_s-9 r_s^2\right)-2 Fr^3 M_{9,r} }{F^2 r^4}\pi'^2 \nonumber\\
&-\left( \frac{M_{9} (4 F r-3 r_s) (2 F r+3 r_s)}{2 F r^6}+\frac{r M_{9,rr} +M_{9,r}}{r^3} \right)\gamma^{AB}\partial_A\pi \partial_B\pi
\end{align}
\item \underline{$ M_{11} \left( \bar{K}_{\mu\nu}\delta K \delta K^{\mu\nu} -\bar{K}_{\mu\nu}\delta K^{\rho \mu} \delta K^{\nu}_{\;\; \rho} \right) $}:
\begin{align}
&\quad \frac{2 F r^3 (4 F r-3 r_s) M_{11,r} -3 M_{11} \left(8 F^3 r^3-20 F^2 r^2 r_s+3 (8 F+1) r r_s^2-9 r_s^3\right)}{4 F^{5/2} r^6}\pi'^2\nonumber \\
&+\frac{1}{16 F^{5/2} r^8} \Big[4 F r^2 \left( \left(4 F^2 r^2-9 r_s^2\right)M_{11,r} +F r^2 (4 F r-3 r_s) M_{11,rr} \right) \nonumber \\ &+M_{11} \left(-96 F^4 r^3+96 F^3 r^2 r_s+36 (3-2 F) F r r_s^2+27 (2 F-3) r_s^3\right) \Big] \gamma^{AB}\partial_A\pi \partial_B\pi
\end{align}
\item \underline{$M_{13}(\tau,\rho) \delta g^{\tau\tau} \delta \hat{R}$}:
\begin{align}
&\quad \frac{8M_{13,r}}{r}\dot{\pi}\pi'+\left[\frac{M_{13} (6 r_s-8 F r)}{F r^3}+\frac{4 M_{13,r}}{r} \right]\pi'^2\nonumber \\
&+ \left[\frac{(4 F r-3 r_s)}{r^4}M_{13,r} +M_{13} \left(-\frac{9 r_s^2}{2 F r^6}-\frac{8 F}{r^4}+\frac{9 r_s}{r^5}\right)\right]\gamma^{AB}\partial_A\pi \partial_B\pi
\end{align}
\item \underline{$M_{14}(\tau,\rho) \bar{K}_{\mu\nu} \delta g^{\tau\tau} \delta \hat{R}^{\mu\nu}$}:
\begin{align}
&\quad \frac{9 r_s^2 \left(2 M_{14}+r M_{14,r} \right)-4 F^2 r^3 M_{14,r}}{2 F^{3/2} r^5} \dot{\pi} \pi' \nonumber \\
&+\frac{M_{14} \left(8 F^2 r^2-15 F r r_s+9 r_s^2\right)+F r^2 (3 r_s-4 F r)M_{14,r} }{F^{3/2} r^5}\pi'^2 \nonumber \\
&+ \frac{F r^2 \left(-16 F^2 r^2+18 F r r_s-9 r_s^2\right) M_{14,r} +M_{14} \left(32 F^3 r^3-42 F^2 r^2 r_s+72 F r r_s^2-27 r_s^3\right)}{4 F^{3/2} r^8}\gamma^{AB}\partial_A\pi \partial_B\pi
\end{align}
\end{itemize}
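For completeness, the horizon structure quoted at the beginning of this Appendix can be verified directly. The horizon condition $f(r)=0$ is equivalent to the cubic equation
\begin{align}
\frac{\lambda}{3} r^3 - r + r_s = 0 \,,
\end{align}
whose left-hand side has a local minimum at $r_c = 1/\sqrt{\lambda}$, where it takes the value $r_s - \frac{2}{3\sqrt{\lambda}}$. Two positive roots $r_- < r_c < r_+$ therefore exist precisely when $r_s < \frac{2}{3\sqrt{\lambda}}$, i.e.~$0 < \lambda < \frac{4}{9 r_s^2}$. In this range the inner horizon satisfies $1 < \frac{r_-}{r_s} < \frac{3}{2}$, with $r_- \to \frac{3}{2} r_s$ in the extremal limit $\lambda \to \frac{4}{9 r_s^2}$ and $r_- \to r_s$ as $\lambda \to 0$.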
\renewcommand{\em}{}
\bibliographystyle{utphys}
\addcontentsline{toc}{section}{References}
\bibliography{BH_hair}
|
Title:
Interpreting time-integrated polarization data of gamma-ray burst prompt emission |
Abstract: Aims. With the accumulation of polarization data in the gamma-ray burst (GRB)
prompt phase, polarization models can be tested. Methods. We predicted the
time-integrated polarizations of 37 GRBs with polarization observation. We used
their observed spectral parameters to do this. In the model, the emission
mechanism is synchrotron radiation, and the magnetic field configuration in the
emission region was assumed to be large-scale ordered. Therefore, the predicted
polarization degrees (PDs) are upper limits. Results. For most GRBs detected by
the Gamma-ray Burst Polarimeter (GAP), POLAR, and AstroSat, the predicted PD
can match the corresponding observed PD. Hence the synchrotron-emission model
in a large-scale ordered magnetic field can interpret both the moderately low
PDs ($\sim10\%$) detected by POLAR and relatively high PDs ($\sim45\%$)
observed by GAP and AstroSat well. Therefore, the magnetic fields in these GRB
prompt phases or at least during the peak times are dominated by the ordered
component. However, the predicted PDs of GRB 110721A observed by GAP and GRB
180427A observed by AstroSat are both lower than the observed values. Because
the synchrotron emission in an ordered magnetic field predicts the upper-limit
of the PD for the synchrotron-emission models, PD observations of the two
bursts challenge the synchrotron-emission model. Then we predict the PDs of the
High-energy Polarimetry Detector (HPD) and Low-energy Polarimetry Detector
(LPD) on board the upcoming POLAR-2. In the synchrotron-emission models, the
concentrated PD values of the GRBs detected by HPD will be higher than the LPD,
which might be different from the predictions of the dissipative photosphere
model. Therefore, more accurate multiband polarization observations are highly
desired to test models of the GRB prompt phase.
| https://export.arxiv.org/pdf/2208.03668 |
\title{Interpreting time-integrated polarization data of gamma-ray burst prompt emission}
\author{R.Y. Guan
\inst{1}
\and
M.X. Lan\inst{1}
}
\institute{Center for Theoretical Physics and College of Physics, Jilin University,
Changchun 130012, China\\
\email{[email protected]}
}
\date{}
\abstract
{}
{With the accumulation of polarization data in the gamma-ray burst (GRB) prompt phase, polarization models can be tested.} %
{We predicted the time-integrated polarizations of 37 GRBs with polarization observation. We used their observed spectral parameters to do this. In the model, the emission mechanism is synchrotron radiation, and the magnetic field configuration in the emission region was assumed to be large-scale ordered. Therefore, the predicted polarization degrees (PDs) are upper limits.} %
{For most GRBs detected by the Gamma-ray Burst Polarimeter (GAP), POLAR, and AstroSat, the predicted PD can match the corresponding observed PD. Hence the synchrotron-emission model in a large-scale ordered magnetic field can interpret both the moderately low PDs ($\sim10\%$) detected by POLAR and relatively high PDs ($\sim45\%$) observed by GAP and AstroSat well. Therefore, the magnetic fields in these GRB prompt phases or at least during the peak times are dominated by the ordered component. However, the predicted PDs of GRB 110721A observed by GAP and GRB 180427A observed by AstroSat are both lower than the observed values. Because the synchrotron emission in an ordered magnetic field predicts the upper-limit of the PD for the synchrotron-emission models, PD observations of the two bursts challenge the synchrotron-emission model. Then we predict the PDs of the High-energy Polarimetry Detector (HPD) and Low-energy Polarimetry Detector (LPD) on board the upcoming POLAR-2. In the synchrotron-emission models, the concentrated PD values of the GRBs detected by HPD will be higher than the LPD, which might be different from the predictions of the dissipative photosphere model. Therefore, more accurate multiband polarization observations are highly desired to test models of the GRB prompt phase.} %
{}
\keywords{polarization -- gamma-ray burst: general -- radiation mechanisms: non-thermal -- methods: numerical -- magnetic fields
}
\section{Introduction}
Gamma-ray bursts (GRBs) are the most violent high-energy explosions in the Universe. GRBs are divided into two categories, long and short, based on a rough duration separation of about 2 seconds. GRB spectra are nonthermal and can be described by a broken power law with a smooth joint, known as the Band function \citep{Band1993}. The spectrum integrated over the GRB duration can empirically be described by a function with a peak in the $\nu f_{\nu}$ spectrum, and the peak energy is defined as $E_{p,obs}$. For the low-energy spectral index $\alpha$, the typical value for long GRBs is $\alpha\sim-0.92$, while short GRBs have a harder low-energy spectral index of $\alpha\sim-0.50$ \citep{Nava2011}.
Gamma-ray polarization measurements of the prompt emission of GRBs have profound implications for our understanding of the unknown magnetic field configuration and emission mechanism of the GRB prompt phase. With the development of polarimetry, more and more GRBs have been measured and can be used for statistical analyses. Therefore, constraints on the underlying models can be provided \citep{Toma2009}. For the GRB prompt phase, there are two possible emission mechanisms, synchrotron radiation and inverse Compton scattering \citep{Chand2018, Lazzati2004}. Although several thousands of GRBs have been observed to date, few of these have reported polarization detections. The polarization degrees (PDs) of GRB prompt emission measured so far vary strongly.
The Gamma-ray Burst Polarimeter (GAP) has observed PD values of GRBs 100826A, 110301A, and 110721A, which suggest that GRB prompt emissions are highly polarized \citep{Yonetoku2011, Yonetoku2012}. Subsequently, an increasing number of polarimeters became operational. \citet{Chattopadhyay2022} recently published updated polarization results for 20 GRBs, which are the brightest GRBs detected by the Cadmium Zinc Telluride Imager (CZTI) on board AstroSat. The renewed AstroSat data show that most of the bright GRBs are relatively highly polarized (with a typical PD value of $\sim45\%$) in the energy range of 100 keV$-$600 keV, in contrast to their earlier results, which found high polarizations (a typical PD of around $60\%$) in the energy range of 100 keV$-$350 keV \citep{Chattopadhyay2019, Chand2019, Gupta2022}.
POLAR is a polarimeter with an energy range comparable to that of CZTI; it was launched as part of the China Tiangong-2 space laboratory in September 2016. The detection energy range of POLAR is 50 keV$-$500 keV. During its approximately six months of operation, a total of 55 GRBs were detected \citep{Xiong2017}. Polarization measurements of 5 of these 55 GRBs were reported first, and the results show that they are less polarized than predicted by some popular models \citep{Zhang2019}. Moderate levels of linear polarization were also found in subsequent reports, and the polarization measurements of 9 further GRBs were published next \citep{Kole2020}. Despite the great efforts that have been made in gamma-ray polarimetry, there are still large errors in the current data, which allow us to present only preliminary constraints on the various models of the GRB prompt phase. It is encouraging that more detailed polarization measurements will become available from forthcoming missions such as POLAR-2 \citep{POLAR2}, which will help us to understand the magnetic field configuration and emission mechanism of GRBs.
In this paper, we have numerically calculated \citep{Toma2009} the ranges of theoretical PDs of 37 GRBs detected by GAP, AstroSat, and POLAR based on the values of the observed spectral parameters. The paper is arranged as follows. In Section 2 we present our data. The model and numerical results are described in Section 3. Finally, we give our conclusions and discussion in Section 4.
\section{Data list}
\begin{table*}[!htbp]
\caption{Spectral parameters and polarization properties of the three GRBs observed with GAP}
\label{tab1: GAP}
\begin{center}
\centering
\begin{tabular}{cccccccc}
\hline\hline\noalign{\smallskip}
GRB & $PD_{obs}$(\%)&$\alpha_{s}$&$\beta_{s}$&$E_{p,obs}$(keV)&Instrument(Spectrum)&$z$&\\
\hline\noalign{\smallskip}
100826A&$27_{-11}^{+11}$&$-0.19_{-0.01}^{+0.01}$&$0.92_{-0.02}^{+0.02}$&$263.25_{-7.84}^{+7.84}$&Fermi-GBM&-&\\[5pt]
110301A&$70_{-22}^{+22}$&$-0.10_{-0.02}^{+0.02}$&$1.67_{-0.05}^{+0.05}$&$102.28_{-1.82}^{+1.82}$&Fermi-GBM&-&\\[5pt]
110721A&$84_{-28}^{+16}$&$0.03_{-0.02}^{+0.02}$&$0.78_{-0.03}^{+0.03}$&$465.19_{-38.66}^{+38.66}$&Fermi-GBM&$0.382$&\\\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[htbp]
\caption{Spectral parameters and polarization properties of the 14 GRBs observed with POLAR}
\label{tab2: POLAR}
\begin{center}
\begin{tabular}{c c c c c c c c c c}
\hline\hline\noalign{\smallskip}
GRB & $PD_{obs}$(\%)&$\alpha_{s}$&$\beta_{s}$&$E_{p,obs}$(keV)&Instrument(Spectrum)&$z$&\\
\hline\noalign{\smallskip}
161203A&$16_{-15}^{+29}$&$-1.13_{-0.27}^{+0.25}$&$2.41_{-0.39}^{+0.46}$&$344_{-12}^{+19}$&$^{*}$&-&\\[5pt]
161217C&$21_{-16}^{+30}$&$0.08_{-0.43}^{+0.25}$&$1.76_{-0.36}^{+0.61}$&$143_{-34}^{+32}$&$^{*}$&-&\\[5pt]
161218A&$7.0_{-7.0}^{+10.7}$&$-0.72_{-0.25}^{+0.21}$&$2.40_{-0.43}^{+1.17}$&$128_{-8}^{+8}$&Konus-Wind&-&\\[5pt]
161218B&$13_{-13}^{+28}$&$-0.52_{-0.01}^{+0.01}$&$1.93_{-0.10}^{+0.10}$&$209.67_{-3.00}^{+3.00}$&Fermi-GBM&-&\\[5pt]
161229A&$17_{-13}^{+24}$&$-0.36_{-0.03}^{+0.03}$&$2.07_{-0.72}^{+1.49}$&$339_{-14}^{+12}$&$^{*}$&-&\\[5pt]
170101A&$6.3_{-6.3}^{+10.8}$&$0.44_{-0.17}^{+0.13}$&$1.49_{-0.23}^{+0.65}$&$123_{-21}^{+23}$&Konus-Wind&-&\\[5pt]
170101B&$60_{-36}^{+24}$&$-0.43_{-0.06}^{+0.06}$&$1.23_{-0.12}^{+0.12}$&$206.52_{-12.75}^{+12.75}$&Fermi-GBM&-&\\[5pt]
170114A&$10.1_{-7.4}^{+10.5}$&$-0.17_{-0.05}^{+0.05}$&$1.04_{-0.09}^{+0.09}$&$230.15_{-21.03}^{+21.03}$&Fermi-GBM&-&\\[5pt]
170127C&$9.9_{-8.4}^{+19.3}$&$0.14_{-0.22}^{+0.21}$&$2.1_{-0.6}^{+0.6}$&$1500_{-900}^{+800}$&$^{*}$&-&\\[5pt]
170206A&$13.5_{-8.6}^{+7.4}$&$-0.72_{-0.04}^{+0.04}$&$1.55_{-0.12}^{+0.12}$&$341_{-13}^{+13}$&Fermi-GBM&-&\\[5pt]
170207A&$5.9_{-5.9}^{+9.6}$&$-0.14_{-0.06}^{+0.06}$&$1.63_{-0.26}^{+0.84}$&$394_{-33}^{+42}$&Konus-Wind&-&\\[5pt]
170210A&$11.4_{-9.7}^{+35.7}$&$-0.10_{-0.02}^{+0.02}$&$1.28_{-0.08}^{+0.08}$&$361.5_{-14.1}^{+14.1}$&Fermi-GBM&-&\\[5pt]
170305A&$40_{-25}^{+25}$&$-0.58_{-0.13}^{+0.13}$&$1.06_{-0.13}^{+0.13}$&$233_{-35}^{+35}$&Fermi-GBM&-&\\[5pt]
170320A&$18_{-18}^{+32}$&$-0.76_{-0.13}^{+0.17}$&$1.32_{-0.16}^{+0.21}$&$228_{-15}^{+13}$&$^{*}$&-&\\\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
$^{*}$: The spectral parameters are obtained from \citet{Kole2020}, who performed a joint fit using an external spectrum with POLAR data based on the Multi-Mission Maximum Likelihood (3ML) framework \citep{Vianello2015}.
\end{table*}
\begin{table*}[htbp]
\caption{Spectral parameters and polarization properties of the 20 GRBs observed with AstroSat}
\label{tab3: AstroSat}
\begin{center}
\begin{tabular}{c c c c c c c c c c}
\hline\hline\noalign{\smallskip}
GRB & $PD_{obs}$(\%)&$\alpha_{s}$&$\beta_{s}$&$E_{p,obs}$(keV)&Instrument(Spectrum)&$z$&\\
\hline\noalign{\smallskip}
160325A&$<45.02$&$0.25_{-0.08}^{+0.07}$&$0.97_{-0.10}^{+0.14}$&$223.57_{-25}^{+29}$&Fermi-GBM, BAT&$-$&\\[5pt]
160623A&$<56.51$&$-0.06_{-0.02}^{+0.02}$&$1.83_{-0.09}^{+0.10}$&$662_{-18}^{+19}$&Fermi-GBM, Konus-Wind&0.367&\\[5pt]
160703A&$<62.64$&$-0.22_{-0.12}^{+0.09}$&$1.48^{a}$&$351_{-46}^{+40}$&BAT, Konus-Wind&$-$&\\[5pt]
160802A&$<51.89$&$-0.36_{-0.04}^{+0.03}$&$1.53_{-0.14}^{+0.20}$&$207_{-1}^{+1}$&Fermi-GBM&$-$&\\[5pt]
160821A&$<33.87$&$-0.04_{-0.00}^{+0.00}$&$1.29_{-0.02}^{+0.02}$&$977_{-12}^{+12}$&Fermi-GBM, BAT&$-$&\\[5pt]
170527A&$<36.46$&$-0.01_{-0.01}^{+0.01}$&$2.14_{-0.29}^{+0.29}$&$974_{-47}^{+51}$&Fermi-GBM&$-$&\\[5pt]
171010A&$<30.02$&$0.12_{-0.01}^{+0.00}$&$1.39_{-0.02}^{+0.02}$&$180_{-3}^{+3}$&Fermi-GBM&$0.3285$&\\[5pt]
171227A&$<55.62$&$-0.20_{-0.01}^{+0.01}$&$1.49_{-0.05}^{+0.05}$&$899_{-32}^{+32}$&Fermi-GBM&$-$&\\[5pt]
180103A&$71.43_{-26.84}^{+26.84}$&$0.31_{-0.06}^{+0.06}$&$1.24_{-0.90}^{+0.13}$&$273_{-23}^{+26}$&BAT, Konus-Wind&$-$&\\[5pt]
180120A&$62.37_{-29.79}^{+29.79}$&$0.01_{-0.01}^{+0.01}$&$1.40_{-0.09}^{+0.09}$&$140.91_{-3}^{+3}$&Fermi-GBM&$-$&\\[5pt]
180427A&$60.01_{-22.32}^{+22.32}$&$-0.71_{-0.08}^{+0.08}$&$1.80_{-0.16}^{+0.16}$&$147_{-2}^{+2}$&Fermi-GBM&$-$&\\[5pt]
180806A&$<95.80$&$-0.08_{-0.04}^{+0.04}$&$1.46_{-0.23}^{+0.44}$&$453_{-44}^{+46}$&Fermi-GBM&$-$&\\[5pt]
180809B&$<24.63$&$-0.31_{-0.08}^{+0.07}$&$1.29_{-0.07}^{+0.08}$&$251_{-15}^{+16}$&BAT, Konus-Wind&$-$&\\[5pt]
180914A&$<33.55$&$-0.27_{-0.03}^{+0.03}$&$1.30_{-0.11}^{+0.15}$&$330_{-19}^{+20}$&Fermi-GBM&$-$&\\[5pt]
180914B&$48.48_{-19.69}^{+19.69}$&$-0.25_{-0.04}^{+0.04}$&$1.10_{-0.08}^{+0.70}$&$453_{-24}^{+26}$&BAT, Konus-Wind&$1.096$&\\[5pt]
190530A&$46.85_{-18.53}^{+18.53}$&$-0.01_{-0.02}^{+0.00}$&$2.50_{-0.25}^{+0.25}$&$888_{-8}^{+8}$&Fermi-GBM&0.9386&\\[5pt]
190928A&$<33.10$&$0.00_{-0.06}^{+0.06}$&$0.97_{-0.07}^{+0.13}$&$658_{-88}^{+111}$&Konus-Wind&$-$&\\[5pt]
200311A&$<45.41$&$-0.05_{-0.02}^{+0.02}$&$1.57_{-0.19}^{+0.19}$&$1218_{-110}^{+110}$&Fermi-GBM&$-$&\\[5pt]
200412A&$<53.84$&$-0.30_{-0.05}^{+0.05}$&$1.50_{-0.21}^{+0.21}$&$256_{-7}^{+8}$&Fermi-GBM&$-$&\\[5pt]
200806A&$<54.73$&$-0.47$&$1.96$&$109.12$&BAT&$-$&\\\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
$^{a}$: Fitting this spectrum with the Band function only presents a lower limit on $\beta_s$ of 1.48.
\end{table*}
\citet{Yonetoku2011, Yonetoku2012} reported polarization observations of the prompt emission of GRB 100826A, GRB 110301A, and GRB 110721A with GAP. For GRB 100826A, an averaged polarization of $27\pm11\%$ with a confidence level of $99.4\%$ (2.9$\sigma$) was reported, with systematic errors being considered for the first time in their analysis. For GRB 110301A and GRB 110721A, the observed linear polarizations are $70\pm22\%$ and $84_{-28}^{+16}\%$ with confidence levels of $3.7\sigma$ and $3.3\sigma$, respectively. \citet{Berger2011} reported a redshift value of $0.382$ for GRB 110721A. The spectral parameters used in our calculations for all three GRBs are from the Fermi-GBM catalog in the energy range of $50$ keV$-300$ keV\footnote{https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html} \citep{Kienlin2020ApJ, Gruber2014, Kienlin2014ApJ, Bhat2016}, and are presented in Table \ref{tab1: GAP}.
Recently, \citet{Kole2020} published a detailed polarization catalog reporting the polarization properties of 14 GRBs observed with POLAR. We searched the parameters of the Band function for all of these GRBs and list them in Table \ref{tab2: POLAR} along with the instruments that provide them. Among these spectral parameters, all those from Konus were measured in the energy range of 20 keV$-$15 MeV \citep{Frederiks2016, Tsvetkova2017, Svinkin2017}, while those from Fermi-GBM were obtained in the energy range of 50 keV$-$300 keV \citep{Kienlin2017GCN, Roberts2017GCN, Stanbro2017GCN}.
The updated polarization measurements and the corresponding spectral parameters of 20 GRBs observed by the CZTI on board AstroSat were also reported by \citet{Chattopadhyay2022} recently. In Table \ref{tab3: AstroSat} we list the detailed polarization information and spectral properties for them \citep{Chattopadhyay2022}. In addition, the redshift values have been reported for 4 of 20 GRBs \citep{Malesani2016, Postigo2017, Gupta2022, GCN23246}, which provides more precise parameters for our calculations.
\section{Model and numerical results}
An ultrarelativistic jet is assumed to be a shell that is optically thin to $\gamma$-rays, with an emitting region of radius $r$, located at redshift $z$, and a source at luminosity distance $d_L$. Its fluence can be expressed as follows \citep{Toma2009, Granot1999, Woods1999, Ioka2001}:
\begin{equation}
F=\frac{1+z}{d_L^2}r^2\int_{\nu_1}^{\nu_2}d\nu_{obs}\int_0^{\theta_j+\theta_V}\frac{f(\nu')d(\cos\theta)}{\gamma^2(1-\beta_0\cos\theta)^2}
\int_{-\Delta\phi}^{\Delta\phi} A_0d\phi.
\end{equation}
In the above equation, $\theta_V$ is the viewing angle of the observer, $\theta_j$ is the half-opening angle of the jet, and $\theta$ is the angle between the line of sight and the local direction of the fluid velocity. Primed and unprimed physical quantities are in the comoving and observer frames, respectively. For example, $\nu'=\nu_{obs}(1+z)\gamma(1-\beta_0\cos\theta)$ is the frequency in the comoving frame, with the bulk Lorentz factor $\gamma$ and the jet velocity $\beta_0$ in units of the speed of light, while $\nu_{obs}$ is the observational frequency in the observer frame. $\nu_1$ and $\nu_2$ are the energy bounds of the corresponding detectors (e.g., $\nu_1=50$ keV and $\nu_2=500$ keV for POLAR). $\phi$ is the angle between the projection of the jet axis and the projection of the local fluid velocity direction on the sky plane. More information about $\Delta\phi$ can be obtained from \citet{Toma2009}. $E_{p,obs}$ can be converted into the comoving frame by $E'_p=E_{p,obs}(1+z)/2/\gamma$. We adopted the following form for the spectrum of the GRB prompt emission, described by the Band function \citep{Band1993}:\begin{equation}
f(\nu')=\begin{cases}
\vspace{1ex}
{(\frac{\nu'}{\nu'_0})}^{-\alpha_s}e^{-{\frac{\nu'}{\nu'_0}}}, & \text{$\nu'<\nu'_0(\beta_s-\alpha_s)$}, \\ {(\frac{\nu'}{\nu'_0})}^{-\beta_s}(\beta_s-\alpha_s)^{\beta_s-\alpha_s}e^{\alpha_s-\beta_s}, & \text{$\nu'\geq\nu'_0(\beta_s-\alpha_s)$}.
\end{cases}
\end{equation}
$\alpha_s$ and $\beta_{s}$ are the low-energy and high-energy spectral indices, respectively. $\nu'_0=E'_p/h$ is the comoving break frequency of the Band spectrum, where $h$ is the Planck constant. In this paper, $\alpha_s$ and $\beta_s$ are the spectral indices of the flux density $F_{\nu}$. In our calculation, the source was assumed to be at a redshift of 1 unless its redshift value has been reported. We assumed an aligned large-scale ordered magnetic field in the emission region with an orientation of $\delta=\pi/6$ \citep{Lan2016}. Other fixed parameters are $\theta_j=0.1$ rad, $\theta_V=0$ rad, and $\gamma=100$.
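As an illustration, the Band spectrum of Eq. (2) and the comoving-frame conversion $E'_p=E_{p,obs}(1+z)/2/\gamma$ can be evaluated numerically. The sketch below uses the parameters of GRB 110721A from Table \ref{tab1: GAP}; the function name and the choice to work directly in energy units (so that $E'_p$ plays the role of $h\nu'_0$) are ours, not part of the paper's code:

```python
import math

def band_spectrum(nu, nu0, alpha_s, beta_s):
    """Band function f(nu') of Eq. (2): power laws in the flux density
    F_nu with an exponential cutoff, joining smoothly at the break."""
    if nu < nu0 * (beta_s - alpha_s):
        return (nu / nu0) ** (-alpha_s) * math.exp(-nu / nu0)
    return ((nu / nu0) ** (-beta_s)
            * (beta_s - alpha_s) ** (beta_s - alpha_s)
            * math.exp(alpha_s - beta_s))

# Comoving peak energy E'_p = E_p,obs (1+z) / (2 gamma) from Sect. 3.
E_p_obs, z, gamma = 465.19, 0.382, 100.0  # GRB 110721A, Table 1
alpha_s, beta_s = 0.03, 0.78
E_p_com = E_p_obs * (1.0 + z) / (2.0 * gamma)  # keV; approx. 3.214
nub = E_p_com * (beta_s - alpha_s)             # break between the branches
```

The two branches of the Band function are continuous at the break $\nu'=\nu'_0(\beta_s-\alpha_s)$ by construction, which can be verified by evaluating `band_spectrum` on either side of `nub`.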
We then calculated the PDs of the GRBs with polarization observations, using the observed spectral parameters (including $\alpha_s$, $\beta_s$, and $E_{p,obs}$) and redshift values as well as the energy range of the polarimeters. In general, the calculated PD ($PD_{cal}$) of a GRB consists of a typical value and its upper and lower limits. In our calculations, we used the redshift value, the detector energy range, and the typical values of $E_{p,obs}$, $\alpha_s$, and $\beta_s$ to calculate a typical value of $PD_{cal}$. For the same GRB (i.e., with the redshift value and the detector energy range fixed), the upper limit of $PD_{cal}$ was obtained when $\alpha_s$ and $\beta_s$ took their maximum values and $E_{p,obs}$ its minimum value; conversely, the minimum values of $\alpha_s$ and $\beta_s$ and the maximum value of $E_{p,obs}$ determine the lower limit of $PD_{cal}$. We compare the calculated PDs ($PD_{cal}$) and the observed PDs ($PD_{obs}$) in Figs. 1-3.
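The bracketing recipe above can be sketched as follows. The full PD computation requires the numerical integration of \citet{Toma2009}; here `toy_pd` is only a hypothetical stand-in with the monotonic behavior assumed in the text (PD increasing with $\alpha_s$ and $\beta_s$, decreasing with $E_{p,obs}$):

```python
def pd_bounds(pd_model, alpha, beta, ep):
    """Bracket PD_cal following the recipe in the text.

    alpha, beta, ep are (value, minus_error, plus_error) tuples.
    The upper limit of PD_cal uses the maximum alpha_s and beta_s with
    the minimum E_p,obs; the lower limit uses the opposite extremes.
    """
    a, a_m, a_p = alpha
    b, b_m, b_p = beta
    e, e_m, e_p = ep
    typical = pd_model(a, b, e)
    upper = pd_model(a + a_p, b + b_p, e - e_m)
    lower = pd_model(a - a_m, b - b_m, e + e_p)
    return lower, typical, upper

# Hypothetical stand-in for the full Toma et al. (2009) integration:
# any model monotonically increasing in the indices and decreasing in
# E_p,obs reproduces the ordering of the limits.
def toy_pd(a, b, e):
    return max(0.0, min(1.0, 0.5 + 0.2 * a + 0.1 * b - 1e-4 * e))

# GRB 110721A (Table 1): alpha_s = 0.03 +/- 0.02, beta_s = 0.78 +/- 0.03,
# E_p,obs = 465.19 +/- 38.66 keV
lo, typ, up = pd_bounds(toy_pd, (0.03, 0.02, 0.02),
                        (0.78, 0.03, 0.03), (465.19, 38.66, 38.66))
print(lo <= typ <= up)  # the extremes bracket the typical value
```

Swapping `toy_pd` for the actual model integration yields the $PD_{cal}$ ranges plotted in Figs. 1-3.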
Fig. \ref{fig1: GAP} shows a comparison of $PD_{cal}$ and $PD_{obs}$ observed with GAP. The energy ranges of GAP (for polarization observations) and of Fermi-GBM (for spectral observations) overlap exactly ($50$ keV$-300$ keV). For GRB 100826A detected with GAP, the polarization evolution of this burst was simulated with the collision-induced magnetic reconnection model \citep{Deng2016}, and the results can reproduce the time-resolved polarizations, especially the 90-degree polarization angle (PA) change between the two pulses. The observed PD of GRB 110721A is larger than the predicted one.
We also calculated the $PD_{cal}$ ranges and compared them with the $PD_{obs}$ observed by POLAR, as shown in Fig. \ref{fig2: POLAR}. For 10 of the 14 GRBs, the $PD_{cal}$ range in the light blue region overlaps with the corresponding $PD_{obs}$. The $PD_{cal}$ of the remaining 4 GRBs are significantly higher than the $PD_{obs}$ ranges; that is, in all the discrepant cases the model predicts a higher polarization degree than observed.
In Fig. \ref{fig3: AstroSat} we numerically calculated the $PD_{cal}$ ranges of the GRBs observed by AstroSat and found that the results match most of the observations, with a distribution around $\sim40\%$. The only burst with an observed PD larger than the predicted value is GRB 180427A. Our integrated energy range of the Stokes parameters for AstroSat is $100$ keV$-600$ keV. The range of $\Pi_0$ is $[0,1]$. This requires that the spectral index ($\alpha_s$ or $\beta_s$) be higher than $-1$ according to the local polarization shown below \citep{Toma2009}.
\begin{equation}
\Pi_0\equiv
\begin{cases}
\vspace{1ex}
{\displaystyle\frac{\alpha_s+1}{\alpha_s+\frac{5}{3}}}, & \text{$\nu'<\nu'_0(\beta_s-\alpha_s)$}, \\
{\displaystyle\frac{\beta_s+1}{\beta_s+\frac{5}{3}}}, & \text{$\nu'\geq\nu'_0(\beta_s-\alpha_s)$}.
\end{cases}
\end{equation}
The PDs of the POLAR bursts are concentrated around $10\%$, while they are around $40\%-50\%$ for AstroSat bursts. To interpret this discrepancy, we plot the spectral indices against peak energy in Fig. 4. The typical values of the high-energy spectral indices are similar for POLAR bursts and AstroSat bursts. However, the typical value of the low-energy spectral index is higher for AstroSat bursts (typically $\alpha_s\sim0.0$) than for POLAR bursts (typically $\alpha_s\sim-0.5$), resulting in a higher $PD_{cal}$ for AstroSat bursts. In addition, the integrated energy range ($100$ keV$-600$ keV) of AstroSat bursts, compared with that ($50$ keV$-500$ keV) of POLAR bursts, is shifted toward higher energies. For bursts with similar spectral parameters, the contribution from high-energy photons (which have a larger local PD) will therefore be larger for AstroSat bursts, leading to a higher energy-integrated $PD_{cal}$. These might be the main reasons for the discrepancy.
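The effect of the low-energy spectral index can be checked directly from the local PD of Eq. (3). A minimal numerical check with the typical indices quoted above (the function name is ours):

```python
def local_pd(index):
    """Local polarization degree Pi_0 of Eq. (3) for a given spectral
    index (alpha_s below the break, beta_s above); requires index > -1
    so that Pi_0 stays in [0, 1]."""
    assert index > -1.0
    return (index + 1.0) / (index + 5.0 / 3.0)

# Typical low-energy indices quoted in the text (assumed representative):
pd_astrosat = local_pd(0.0)   # AstroSat bursts, alpha_s ~ 0.0
pd_polar = local_pd(-0.5)     # POLAR bursts, alpha_s ~ -0.5
print(round(pd_astrosat, 3), round(pd_polar, 3))  # 0.6 0.429
```

The harder typical AstroSat index alone raises the local PD below the break from $\sim43\%$ to $60\%$, consistent with the higher energy-integrated $PD_{cal}$ found for the AstroSat bursts.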
Because the energy range of the polarimeter also affects the observed polarization properties, we numerically predict the PDs of the long and short GRBs measured by two detectors, the Low-energy Polarimetry Detector (LPD) and the High-energy Polarimetry Detector (HPD) (whose energy ranges are $2$ keV$-30$ keV and $30$ keV$-800$ keV, respectively) on board POLAR-2 \citep{POLAR2}, based on the typical values and distributions of their spectral parameters \citep{Nava2011, Preece2000}. We present the results in Fig. \ref{fig4: HPD and LPD}, where the gray area and the light blue area denote the energy ranges of the LPD and HPD, respectively. The typical PD values of GRBs detected by the LPD and HPD are shown as black diamonds and red points. Because the typical PD value for each detector is calculated with the typical spectral parameters, and most GRBs have spectral parameters near these typical values, the observed PDs for each detector should be concentrated around the typical PD values predicted by the model.
\section{Discussion and conclusion}
Polarization properties of GRBs are essential for diagnosing the magnetic field configuration, the geometry of the emission region, and the observational geometry. We used the observed energy spectrum to calculate the corresponding GRB polarization properties within the synchrotron-emission model and compared them with the observed time-integrated PDs. In our model, we used a large-scale aligned magnetic field in the emission region. Therefore, the predicted PDs give upper limits for the synchrotron-emission models.
For GAP, POLAR, and AstroSat the predicted PDs of the model can match most of the corresponding observed PDs, indicating that in the GRB prompt phase, or at least during the peak time of the burst, the magnetic field configuration is approximately large-scale ordered with $\xi_B>1$ \citep{Lan2019, Lan2021}. The large-scale ordered magnetic field in the GRB emission region may originate from its central engine. In the scenario of the internal shock in a fireball \citep{PX1994, RM1994}, the magnetic field may be mixed with a low $\xi_B$ value ($\xi_B<1$), so that this model is not favored by the current PD observations. For the internal shock with an ordered magnetic field \citep{Fan2004}, the magnetization parameter $\sigma$ is required to be smaller than 1. However, the observed PD values require that it not be too small; otherwise, turbulence will develop and destroy the ordered magnetic field \citep{Deng2017}. For the ICMART model \citep{Zhang2011}, the magnetic field becomes less ordered with the magnetic reconnection during the burst (i.e., a decrease in $\xi_B$). The observed data indicate that at the peak time of these bursts, the $\xi_B$ values of the magnetic fields are still higher than 1 (i.e., the magnetic fields are dominated by the ordered component at the peak times of the bursts).
For POLAR, the observed PDs of $\sim10\%$ can also be interpreted as synchrotron emission in an ordered magnetic field with a small low-energy spectral index. However, the $PD_{cal}$ of four GRBs is still higher than their $PD_{obs}$. For these four GRBs, the magnetic field configurations in the emission regions may be mixed \citep{Lan2019}, or the PAs may rotate or change abruptly during the bursts. Future time-resolved polarization observations will enable us to distinguish the two scenarios. For AstroSat, the predicted PDs are concentrated around $40\%$ and can interpret the measurements of all GRBs except one (GRB 180427A). There is a discrepancy between the moderately low PDs ($\sim10\%$) detected with POLAR and the relatively high PDs (about $40\%-50\%$) observed with AstroSat. This difference may originate from both the higher low-energy spectral indices and the higher integrated energy range of the AstroSat bursts.
The PD data of GRB 180427A detected by AstroSat and of GRB 110721A detected by GAP are both higher than the predicted values. Therefore, these two PD observations challenge the models invoking synchrotron radiation in an ordered magnetic field. Because synchrotron radiation in an ordered magnetic field gives the upper limit of the PD among synchrotron-emission models with a mixed magnetic field for on-axis observations \citep{Lan2020}, and the GRBs selected for polarization analysis are usually bright (indicating on-axis observations), the PD data of GRB 180427A and GRB 110721A thus challenge the synchrotron-emission models for the GRB prompt phase.
With co-observations of the HPD and LPD on board POLAR-2 \citep{POLAR2}, the polarization spectrum will be obtained in the near future. We predict that in the synchrotron-emission model, the concentrated PD values of the GRBs detected by the HPD will be higher than those of the LPD. The dissipative photosphere model, however, predicts a reversed polarization spectrum: the concentrated PD values detected by the HPD will be lower than those of the LPD. The two models can therefore be tested with the polarization observations of POLAR-2. The emission mechanism in the high-energy $\gamma$-ray band is multiple inverse-Compton scattering for the dissipative photosphere model \citep{Lundman2018}, which is different from the synchrotron-emission model. With observations of the polarization spectrum by POLAR-2, the emission mechanism in the high-energy $\gamma$-ray band can therefore be determined.
\begin{acknowledgements}
We thank the anonymous referee for his/her useful comments. We also thank Yan-Zhi Meng for useful discussions and Tanmoy Chattopadhyay for useful comment. This paper is dedicated to the 70th anniversary of the physics of Jilin University. This work is supported by the National Natural Science Foundation of China (grant No. 11903014).
\end{acknowledgements}
\vspace{-2mm}
\bibliography{ref}
|
Title:
Exoplanet Radio Transits as a Probe for Exoplanetary Magnetic Fields -- Time-dependent MHD Simulations |
Abstract: We perform a series of time-dependent magnetohydrodynamic simulations of the
HD 189733 star-planet system in order to predict radio transit modulations due
to the interaction between the stellar wind and planetary magnetic field. The
simulation combines a model for the stellar corona and wind with an exoplanet
that is orbiting the star in a fully dynamic, time-dependent manner. Our
simulations generate synthetic radio images that enable us to obtain synthetic
radio lightcurves at different frequencies. We find clear evidence for
planetary motion in the radio light curves. Moreover, we find specific repeated
features in the light curves that are attributed to the passage of the
planetary magnetosphere in front of the star during transit. More importantly,
we find a clear dependence of the magnitude and phase of these lightcurve
features on the strength of the planetary magnetic field. Our work demonstrates
that if radio transits could be observed, they could indeed provide information
about the magnetic field strength of the transiting exoplanet. Future work to
parameterize these lightcurve features and their dependence on the planetary
field strength would provide tools to search for these features in radio
observations datasets. As we only consider the thermal radio emission from the
host star for our study, very sensitive radio interferometers are necessary to
detect these kinds of planetary transit in radio.
| https://export.arxiv.org/pdf/2208.06006 |
\title{Exoplanet Radio Transits as a Probe for Exoplanetary Magnetic Fields - Time-dependent MHD Simulations}
\author[0000-0002-7069-1711]{Soumitra Hazra}
\email{soumitra\[email protected], [email protected]}
\affiliation{Lowell Center for Space Science and Technology, University of Massachusetts Lowell, 600 Suffolk Street, Lowell, MA 01854, USA}
\author[0000-0003-3721-0215]{Ofer Cohen}
\affiliation{Lowell Center for Space Science and Technology, University of Massachusetts Lowell, 600 Suffolk Street, Lowell, MA 01854, USA}
\email{ofer\[email protected]}
\author[0000-0002-6118-0469]{Igor V. Sokolov}
\affiliation{Department of Climate and Space Sciences and Engineering, University of Michigan, 2455 Hayward St., Ann Arbor, MI 48109, USA}
\email{[email protected]}
\keywords{Magnetohydrodynamical simulations (1966) --- Exoplanets (498) --- Magnetic Fields (994) --- Radio astronomy (1338) --- Stellar coronae (305)}
\section{Introduction}
Since the discovery of the first planet outside the solar system, thousands of exoplanets have been confirmed \citep{mayo95a, schi95a}. Following the dedicated {\it Kepler} \citep{hans10a} and {\it TESS} \citep{RickerTESS} missions, we now have significant statistical information regarding the masses, sizes, and orbital separations of these transiting exoplanets. Specifically, many of these exoplanets are found in short-period orbits, with a semi-major axis of less than 0.1 AU (sometimes even less than 10 stellar radii) \citep{schi95a}. Most of these close-in exoplanets are hot gas giants known as hot Jupiters, which are expected to produce a strong star-planet interaction due to their close-in orbit \citep[sometimes located within the Alfv\'en radius, see, e.g.,][]{shko03,Ip2004,Lanza2008,cohe11,cohe18}. Indeed, many observations of close-in star-planet systems have been reported using modern space-based telescopes and ground-based instruments \citep[see summary in][]{stru18a}.
Exoplanet observations are now strongly supplemented by consistently growing observational efforts to detect spectral emission from the atmospheres of these exoplanets. These include observations of the Lyman-$\alpha$ signature of atmospheric evaporation \citep{vida03, trip15, bour16, spak18, vido21} and the chromospheric signature of the star-planet interaction \citep{shko05, shko08, fare10, shko18}. However, consistent observational techniques to detect exoplanetary magnetic fields are still missing. Magnetic fields may play a crucial role in planetary evolution, they may (or may not) protect exoplanet atmospheres, and they may provide insight into exoplanet internal structure \citep[see, e.g.,][]{ExoplanetHandbook}. Thus, observations of exoplanetary magnetic fields are crucial for exoplanet characterization. In close-in exoplanets, observations of the star-planet interaction in the radio, EUV, and X-ray bands may provide insight into the planetary magnetic field \citep{zark07a, bent17a, grie18, zark18, stru22}.
In our solar system, radio signals from Jupiter have long been observed \citep{burk55}, which raises the question of whether exoplanets can be detected through their radio signals. Since the first discovery of radio emission from Jupiter, similar radio emission has also been detected from other planets in the solar system \citep{galo89}. It is now well established that planetary magnetospheres extract energy from the solar/stellar wind of the host star, and that part of this energy can be radiated via the electron cyclotron maser instability, likely at radio frequencies \citep{Gurnett1974,Desch1984,grie07, lazi07, lazi18, lync18, zark18}. Depending on the magnetic nature of the host star and the planet, four types of interaction between the stellar wind and the planet are possible. It has been shown that in three of these four cases intense radio emission is possible \citep{zark07a}; only when both the star and the planet are non-magnetic is intense radio emission ruled out.
As for solar system planets, intense cyclotron maser emission at radio wavelengths has been predicted for hot Jupiter exoplanets. Detection of this radio emission with ground-based instruments is limited by the ionospheric cutoff frequency, approximately 10 MHz \citep{davi69, yeh82}. All solar system planets except Jupiter emit at very low radio frequencies, below this cutoff, making ground-based detection with present instruments very difficult. This is due to their very weak planetary magnetic fields, as the frequency of the radio emission is directly proportional to the magnetic field strength close to the planetary surface \citep{grie18}. Low-frequency ($\leq 200$~MHz) radio emission is also one of the known tools for probing the outer stellar corona and the space weather conditions around a star \citep{schw06, vedan20a}. In summary, the signature of the star-planet interaction can be detected either by observing the planetary auroral emission or by observing modulations during the planetary radio transit. This realization makes space-based or ground-based very low frequency radio observatories promising facilities for the detection of exoplanets in the radio band \citep{burkh17, grie18, zark18, pope19}.
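The proportionality between emission frequency and surface field strength follows from the electron cyclotron frequency, $f_{ce} \simeq 2.8\,\mathrm{MHz} \times B\,[\mathrm{G}]$. A minimal sketch (the field values are rough, illustrative surface-field magnitudes, not precise measurements) shows why Jupiter's decametric emission clears the ionospheric cutoff while an Earth-like field does not:

```python
# Electron cyclotron frequency f_ce = e*B / (2*pi*m_e*c) in Gaussian units,
# which works out to roughly 2.8 MHz per gauss of surface field strength.

def cyclotron_freq_mhz(b_gauss):
    """Fundamental electron cyclotron frequency in MHz for a field in gauss."""
    return 2.8 * b_gauss

IONO_CUTOFF_MHZ = 10.0  # approximate terrestrial ionospheric cutoff

# Illustrative surface-field values (order of magnitude only).
for name, b in [("Jupiter (polar)", 14.0), ("Earth (equatorial)", 0.3)]:
    f = cyclotron_freq_mhz(b)
    side = "above" if f > IONO_CUTOFF_MHZ else "below"
    print(f"{name}: ~{f:.1f} MHz ({side} the ~10 MHz cutoff)")
```

A hot Jupiter with an Earth-like field would therefore emit below the cutoff, motivating the space-based and low-frequency facilities discussed above.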
Until recently, neither exoplanets nor their host stars had been detected in the low-frequency radio band despite several attempts \citep{lync18}. Thanks to the higher sensitivity of the new-generation radio interferometer LOFAR \citep[the LOw-Frequency ARray;][]{vanh13}, coherent low-frequency radio emission from M dwarf stars has recently been detected \citep{vedan20, calli21a, calli21}. Radio emission has also been detected from some other stellar systems using LOFAR \citep{turn21}. It has been suggested that this radio emission is similar to planetary auroral emission, indicating a signature of star-planet interaction via the electron cyclotron maser instability \citep{vedan20a, calli21}. In the case of the M dwarf star GJ 1151, although coherent radio emission was detected, no conclusive evidence of a massive planet around the star was found \citep{pope20a, maha21,perg21, pope21}. Several theoretical studies have been published regarding exoplanet detection in the radio band \citep{zark97, zark07a, selho13, vido15, cohe18, kavan19, kavan20, selho20}. \cite{selho13, selho20} studied the possibility of detecting exoplanets at high radio frequencies (17 GHz and above) via planetary transits and suggested that such transits should be observable with the Atacama Large Millimeter/Submillimeter Array (ALMA) radio interferometer. Observational facilities like LOFAR, the upgraded Giant Meter Radiowave Telescope \citep[GMRT;][]{gupt17}, and the Murchison Widefield Array \citep[MWA;][]{ting13} open up a unique opportunity for more sensitive searches in the low-frequency range \citep[see][]{pope19, shioh20}. The upcoming Square Kilometer Array \citep[SKA;][]{dewd09} is also expected to conduct very sensitive searches at low frequencies, which will help to identify signatures of star-planet interaction.
\cite{pope19} calculated the radiometric sensitivity of the upcoming SKA and suggested that it should be possible to detect the transits of close-orbit exoplanets around their host stars with the SKA. Motivated by these observational successes as well as by theoretical studies, here we aim to study the possibility of exoplanet detection and characterization in the radio band from the MHD modeling point of view.
In this study, we follow the approach suggested by \cite{cohe18} for the detection and characterization of exoplanetary magnetic fields via the planet-induced modulation of the background coronal radio emission, instead of detecting the planet as a radio source. \cite{cohe18} mimicked the orbital phase variation of the exoplanet around the host star by viewing a static, three-dimensional solution from different angles. However, this method missed the variation of the plasma properties along the planetary orbit as the planet actually moves. Here we perform a time-dependent simulation of the star-planet interaction to study a similar, but dynamical, system. In essence, we aim to use the planetary transit to characterize the radio emission from the host star, which in turn can help to detect and characterize exoplanets. Note that in this study we focus only on the thermal radio emission from the host star, not the coherent radio emission; coherent radio emission generally comes from small regions of the star and can be very strongly lensed and time variable.
We describe the details of our model in Section~\ref{Model}. We present our results in Section~\ref{Results} and a detailed discussion in Section~\ref{Discussion}. Our method for measuring exoplanetary magnetic fields from radio transits is described in Section~\ref{exo-mag}. Section~\ref{uv-xray} describes our results regarding exoplanet detection in the UV and X-ray bands. Finally, we present a summary of this study and our conclusions in the last section.
\section{Time-dependent model of the star-planet interaction}
\label{Model}
We developed the time-dependent model of the star-planet interaction using the BATS-R-US global MHD model \citep{powe99, toth12} and its version for the stellar corona and wind, the Alfv\'en Wave Solar Atmosphere Model \citep[AWSoM;][]{vand14}\footnote{The BATS-R-US and AWSoM codes are part of the open-source Space Weather Modeling Framework (SWMF), which is available at \url{https://github.com/MSTEM-QUDA}. The input parameter file and the results data files are available upon request. The code version and the input parameter file make it possible to fully reproduce the results presented here.}. AWSoM has been used extensively to study different properties of the solar corona and the solar wind. The model solves the set of non-ideal MHD equations (mass continuity, momentum, magnetic induction, and energy equations) in conservative form, while taking thermodynamic processes into account. Our time-dependent star-planet interaction setup consists of two parts: first, we simulate the ambient solar/stellar wind, and second, we superimpose the planet onto this background solution.
\subsection{Modeling Stellar Corona}
In the AWSoM setup, the propagation, reflection, and dissipation of the Alfv\'en wave energy are modeled by solving two additional equations: one for waves propagating parallel to the magnetic field and one for waves propagating antiparallel to it. We refer the reader to \cite{vand14} for a complete, detailed description of the model. In the AWSoM formalism, the Alfv\'en wave pressure gradient accelerates the solar wind plasma \citep{alaz71}. Non-linear interaction between the outward- and counter-propagating Alfv\'en waves generates a turbulent cascade, which is the source of coronal heating \citep{tu93, tu95,chan09}. Detailed thermodynamic effects, such as radiative cooling and thermal conduction, are also included in the AWSoM setup. We use the Threaded Field Line Model (TFLM) to model the transition region and lower corona, as prescribed by \cite{soko21}. This saves the computational resources that would otherwise be needed to resolve the fine structure of the transition region with a highly refined grid. We refer the reader to \cite{sach19,sach21} for validation studies of the AWSoM model against observations.
One can develop a model for the solar wind by initializing AWSoM with the solution of a Potential Field Source Surface (PFSS) magnetic field extrapolation \citep{scha69} obtained from synoptic maps of the solar photospheric radial magnetic field. To model a stellar wind, one simply uses the photospheric radial field maps of the specific star instead of the Sun. Thanks to Zeeman-Doppler imaging techniques, such observations are available for stars other than the Sun \citep[e.g.,][]{dona97, dona99}. For this study, we use the HD 189733 stellar system. The stellar parameters of HD 189733 (stellar mass M$_*$, stellar radius R$_*$, and stellar rotation period P$_*$) are listed in Table \ref{tab:my_label}, and the model is driven by magnetogram data obtained from \cite{fare10}. Using AWSoM, we obtain a self-consistent, steady-state solution for the stellar corona and stellar wind.
\begin{table}[]
\caption{Stellar Parameters of HD 189733}
\centering
\begin{tabular}{c c}
\hline \hline
Stellar Parameter ~~~~~ ~~~~& Value\\
\hline
R$_*$ & 0.76 R$_\odot$\\
M$_*$ & 0.82 M$_\odot$\\
P$_*$ & 11.95 days \\
\hline
\end{tabular}
\label{tab:my_label}
\end{table}
\subsection{Modeling the Planets}
In our star-planet simulation, the planet is modeled through an additional boundary condition for a second body that is imposed in the simulation domain. In our setup, the second body is the planet and the first body is the star. Next, we include the orbital motion of the second body in our model by updating the coordinates of the second body along a circular orbit with a radius equal to the planet's semi-major axis. In principle, any kind of orbit is possible, but for the sake of simplicity we assume here a circular orbit in the equatorial plane. In future work, we plan to generalize the planetary orbit to include additional orbital parameters, such as inclination and eccentricity.
To develop the time-dependent model of the star-planet interaction, we first determine the cells that are inside the second body. We define the cells inside the second body as ``body cells'' and the cells outside both the second body and the first body (the star) as ``true cells''. Next, we update the coordinates of the second body according to the orbital motion of the planet. When we update the second body's coordinates, some cells that were previously body cells become true cells, as they are now outside the second body; conversely, some cells that were previously true cells become body cells. The new true cells, which were inside the second body before, need to be filled. We fill these cells with the average of the values in the nearest surrounding true cells. As BATS-R-US uses block-adaptive techniques, one must also update ghost cells if the new true cells are near a block boundary. Cells that were true cells and are now body cells are filled with the boundary conditions of the second body. In essence, this procedure dynamically moves the boundary conditions of the second body along the planetary orbit.
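The reclassification step above can be sketched on a simple Cartesian grid. This is a minimal illustration, not the BATS-R-US implementation: it moves a spherical body, flags which cells change category, and refills re-emerged true cells with an average of valid cell values (here the global mean of true cells stands in for the nearest-neighbor average).

```python
import numpy as np

# Toy 2-D grid carrying a density field; cells inside the body hold no
# physical state (marked NaN here purely for illustration).
n = 32
ax = np.linspace(-4.0, 4.0, n)
x, y = np.meshgrid(ax, ax, indexing="ij")
rho = np.ones((n, n))          # ambient density on the grid
r_body = 1.0

def body_mask(cx, cy):
    """Boolean mask of cells inside a circular body centered at (cx, cy)."""
    return (x - cx) ** 2 + (y - cy) ** 2 <= r_body ** 2

old = body_mask(-1.0, 0.0)
rho[old] = np.nan               # body cells carry no physical state
new = body_mask(-0.5, 0.0)      # body advanced one step along its orbit

# Cells that were body cells and are now outside: refill from true cells
# (a global mean of valid cells stands in for the local-neighbor average).
reemerged = old & ~new
rho[reemerged] = np.nanmean(rho)
rho[new] = np.nan               # apply the body boundary at the new position

print("re-emerged cells refilled:", np.count_nonzero(reemerged))
```

In the actual code the refill uses the nearest surrounding true cells, and ghost cells of adjacent blocks are updated when the new true cells touch a block boundary.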
Note that the frequency of the second-body coordinate update is constrained by the numerical stability condition and the timestep. It is also necessary to resolve the second body well; for that purpose we need very fine grid resolution (at least 10 grid cells across the planetary body). We follow the prescription of \cite{cohe11, cohe18} for this purpose. We set the planet size to $0.2~R_\odot$, which is almost twice the size of HD 189733 b. \cite{cohe18} have shown that the model results are independent of the planet size up to $0.15~R_\odot$. However, very fine resolution is needed to resolve a smaller planet, making the computation very expensive. We follow a grid-refinement strategy along the planetary trajectory to resolve the planet well at every point of its orbit. See \cite{cohe11, cohe18} for further details.
As the magnetic field strength of the planet is not known, we consider two specific cases in our simulations. Because of tidal locking and the resulting longer rotation period, one can expect a weaker magnetic field for hot Jupiters than for Jupiter \citep{sanc04}. On the other hand, a stronger planetary magnetic field is necessary to protect the planetary atmosphere from erosion by the stellar wind. With this in mind, we formulate our cases: in one, we set a higher planetary magnetic field (3 G), and in the other a weaker, Earth-like planetary field (0.3 G). We also consider a non-magnetized case. For our simulations, we use a planetary boundary number density of $10^7$ cm$^{-3}$ and a boundary temperature of $10^4$ K. These values are sufficient to produce a significant modulation in the background coronal density, although they produce a weaker thermal outflow from the planet than expected for a hot Jupiter \citep{cohe18}; increasing these values would only intensify the modulations. For this study, we also consider two different orbital separations: in one case, we place the planet at a distance of $10~R_*$ from the star (a short-orbit case), and in the other at $20~R_*$ (a longer-orbit case), where $R_*$ is the stellar radius.
\subsection{Synthetic Images of the Stellar Corona at Radio, UV, and X-ray Wavelengths}
\subsubsection{Synthetic Radio Images}
We use the utility presented in \cite{mosc18} to generate synthetic radio images of the stellar corona from our MHD wind solutions. This algorithm captures the bremsstrahlung radio emission from the stellar corona and its propagation through a circumstellar medium of non-uniform density \citep{benk10, benk12, mosc18}. During propagation, radio waves are refracted. Although all electromagnetic waves are subject to refraction, radio waves experience the strongest refraction because their refractive index varies strongly between media of different densities \citep{kund65, ober11,moha17}.
Our radio emission calculation tool uses the ray-tracing algorithm developed by \cite{benk10} to calculate the actual curved paths of radio rays at different frequencies. Refraction controls the curved trajectory of a ray of a given frequency $\nu$ (and angular frequency $\omega$) from one grid cell to another inside the computational domain. The refractive index $n$ is related to the dielectric permittivity $\epsilon$ via the dispersion relation:
\begin{equation}
n^2= \epsilon= 1- \frac{\omega_p^2}{\omega^2}
\end{equation}
where $\omega_p=\sqrt{4 \pi e^2 n_e/m_e}$ is the plasma frequency, with the electron number density, $n_e$, the electron mass, $m_e$, and the electron charge, $e$. The dispersion relation indicates that if the plasma frequency is greater than the ray frequency, the refractive index becomes imaginary. It also shows that, in a region where the plasma frequency is less than the ray frequency, low radio frequencies suffer increased refraction compared to higher-frequency radiation (e.g., optical, EUV, and X-rays). Assuming quasi-neutrality of the plasma, one can write the hydrogen plasma density as $\rho=m_p n_e$ and rewrite the dispersion relation as:
\begin{equation}
n^2=\epsilon=1- \frac{\rho}{\rho_{cr}}
\end{equation}
where $\rho_{cr}$ is the critical plasma density at which the refractive index vanishes and radio waves cannot propagate.
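Numerically, the plasma frequency is $f_p\,[\mathrm{Hz}] \approx 8980\,\sqrt{n_e\,[\mathrm{cm^{-3}}]}$, so each ray frequency has a critical electron density above which it is reflected. A short sketch of this cutoff condition:

```python
# A ray of frequency nu cannot propagate where n_e exceeds the critical
# density n_cr = (nu / 8980)^2, at which the refractive index vanishes.

def critical_density(nu_hz):
    """Electron density [cm^-3] at which the refractive index goes to zero."""
    return (nu_hz / 8980.0) ** 2

def refractive_index_sq(n_e, nu_hz):
    """n^2 = 1 - n_e / n_cr; a negative value means the wave is evanescent."""
    return 1.0 - n_e / critical_density(nu_hz)

for nu in (10e6, 100e6, 1e9):
    print(f"{nu/1e6:7.0f} MHz -> n_cr ~ {critical_density(nu):.2e} cm^-3")
```

A 10 MHz ray is thus reflected by densities above roughly $10^6$ cm$^{-3}$, typical of the low corona, which is why the lowest frequencies probe (and are strongly refracted by) the outer coronal layers.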
Finally, the tool calculates the radio intensity by integrating the emissivity along each ray trajectory, providing a radio image at the chosen frequency. We calculate the intensity of each pixel, $I_\nu$, by integrating the emissivity along the ray path for a particular frequency $\nu$:
\begin{equation}
I_\nu=\int B_\nu(T) k_\nu ds
\end{equation}
where the bremsstrahlung source function is:
\begin{equation}
B_\nu (T)=\frac{2 k_B T_e \nu^2}{c^2}
\end{equation}
and the absorption coefficient is:
\begin{equation}
k_\nu= \frac{n_e^2 e^6}{\nu^2 (k_B T_e)^{3/2}m_e^{3/2}c} \langle g_{eff}\rangle
\end{equation}
Here, $k_B$ is the Boltzmann constant, $e$ is the electron charge, $n_e$ is the electron number density, $T_e$ is the electron temperature, $m_e$ is the electron mass, $c$ is the speed of light, and $\langle g_{eff}\rangle$ is the Gaunt factor, assumed equal to 10 in this study \citep{karz61}.
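The integral above can be discretized cell by cell along each ray. The sketch below evaluates the optically thin sum $\sum_i B_\nu(T_i)\,k_{\nu,i}\,\Delta s_i$ in cgs units for a toy uniform ray; the coronal values are illustrative placeholders, and the full tool of course follows the refracted ray path rather than a straight line.

```python
import numpy as np

# cgs constants
k_B, c = 1.380649e-16, 2.99792458e10
e_ch, m_e = 4.8032e-10, 9.1094e-28
g_eff = 10.0  # Gaunt factor adopted in the text

def radio_intensity(n_e, T_e, ds, nu):
    """Optically thin discretization of I_nu = sum B_nu(T) * k_nu * ds (cgs).
    n_e, T_e: arrays along the ray [cm^-3, K]; ds: cell length [cm]."""
    B_nu = 2.0 * k_B * T_e * nu ** 2 / c ** 2
    k_nu = (n_e ** 2 * e_ch ** 6 * g_eff
            / (nu ** 2 * (k_B * T_e) ** 1.5 * m_e ** 1.5 * c))
    return np.sum(B_nu * k_nu * ds)

# Toy ray: uniform 2 MK, 1e8 cm^-3 plasma over roughly one solar radius.
n_e = np.full(100, 1e8)
T_e = np.full(100, 2e6)
I = radio_intensity(n_e, T_e, 6.96e10 / 100, 1e9)
print(f"I_nu ~ {I:.3e} erg s^-1 cm^-2 Hz^-1 sr^-1")
```

Note the expected scaling: doubling the density quadruples the intensity, since the free-free emissivity goes as $n_e^2$.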
\subsubsection{Synthetic UV and X-Ray Images}
We generate synthetic UV and X-ray images by performing a line of sight integration:
\begin{equation}
I_{Pix}=\int n_e^2 \Lambda (T) ds,
\end{equation}
where $I_{Pix}$ is the flux in each pixel, $n_e$ is the total electron density, $\Lambda (T)$ is the temperature response function obtained from the CHIANTI database \citep{Landi2012}, and $ds$ is the differential path length along the line of sight. Finally, we generate synthetic light curves from the synthetic UV and X-ray images at different planetary phases.
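A minimal sketch of this pixel integral and of turning a set of images into a light curve is shown below. The response function here is a hypothetical stand-in for a CHIANTI band response $\Lambda(T)$ (not actual CHIANTI data), and the "transit" simply occults one pixel per phase.

```python
import numpy as np

def toy_response(T):
    """Hypothetical temperature response peaking near 2 MK [erg cm^3 s^-1].
    A stand-in for a CHIANTI-derived band response, not real atomic data."""
    return 1e-24 * np.exp(-((np.log10(T) - np.log10(2e6)) / 0.3) ** 2)

def pixel_intensity(n_e, T, ds):
    """Discretized line-of-sight integral I_pix = sum n_e^2 * Lambda(T) * ds."""
    return np.sum(n_e ** 2 * toy_response(T) * ds)

# 1-D "image": pixels of increasing column density, then a toy light curve
# in which the transiting planet blocks one pixel per orbital phase.
pixels = [pixel_intensity(np.full(50, 1e8 * (1 + 0.5 * j)),
                          np.full(50, 2e6), 1e9) for j in range(5)]
total = sum(pixels)
lightcurve = [total - p for p in pixels]   # flux with pixel j occulted
print([f"{f:.3e}" for f in lightcurve])
```

Because the emission measure scales as $n_e^2$, occulting the denser pixels produces the deeper dips, which is the basis of the UV and X-ray transit signatures discussed in Section~\ref{uv-xray}.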
\section{Results}
\label{Results}
In this section, we aim to characterize the radio transit signals. In each case, we first perform a simulation keeping the planet fixed at a certain distance and obtain a steady-state solution for the stellar corona and wind, including the planet; this way, the wind and coronal solution evolves undisturbed by the planetary motion. Next, we use that steady-state solution as an initial condition and perform the time-dependent simulation in which the planet moves along its orbit, capturing the disturbance of the coronal solution by the moving exoplanet. In this study, a hot Jupiter is moving around the star HD 189733. We chose an actual star over an idealized stellar dipole field in order to capture a more realistic (non-uniform) background stellar radio emission. Our simulation captures the variation of the plasma density along the planetary orbit as the planet moves. We generate synthetic radio images of the stellar corona at different phases of the planetary orbit and calculate the modulation of the coronal radio emission by the hot Jupiter exoplanet. Here, we consider the semi-major axis and the planetary magnetic field strength as our two free parameters; other parameters, such as the planetary field orientation and planetary rotation, are left for a future study.
Our synthetic observations assume a single observing point connecting the observer and the star, while the exoplanet orbits the star, modulating the medium along the line of sight (LOS) as a function of the orbital phase. The moving exoplanet can modulate the radio emission in two ways: it can block or deflect some of the emission away from the observer, leading to an overall flux reduction, or it can focus the emission towards the observer, leading to an overall increase in the observed flux. These two effects and their impact on the observed radio light curves are discussed for each case.
We emphasize that we intentionally do not scale the flux to a certain distance from the Earth; we only show the modulations seen by a ``local'' observer. The assumption we make in calculating the synthetic modulations of the ambient radio flux is that the stellar flux is actually observable from the Earth. By definition, this work is relevant only to stars with a flux observable from the Earth, so we do not provide the actual flux magnitude in units of Jy. \cite{cohe18} have listed a number of stars with realistically observable radio flux, arguing that the methodology presented here is useful.
\subsection{Short-orbit Case}
\label{shortO}
We first consider the case in which the planet is placed at a distance of 10 stellar radii ($a=0.037$~AU). In this case, the planetary magnetosphere sweeps through a relatively high background coronal density and magnetic field, so strong radio modulations are expected. Indeed, the radio intensity in different frequency bands is strongly modulated by the star-planet interaction because the planet is very close to the host star. When radio waves at these frequencies propagate through the ambient medium between the star and the planet, they suffer strong refraction due to the strong density contrast between the ambient corona and the planetary magnetosphere. The magnetopause region, where the plasma is highly compressed, also plays a role in modulating the radio path. Figure \ref{fig:RadioImage} shows synthetic radio images at frequencies of 10 MHz and 1 GHz, respectively. The modulation of the ambient coronal plasma by the planetary motion is clearly reflected in these images.
The interaction described above is shown in Figure \ref{fig:evol-short}. The figure shows a top-down view of the equatorial plane, colored with density contours. It shows the density modulation of the ambient medium by the moving planet at different planetary phases, with the planetary magnetosphere clearly visible. The telescope position and the LOS are marked by the thick black line (along the Y=0 line in the negative x direction), so the central transit point (phase 0.5) is defined by this observing point. When the planet is at phase 0.25 (Fig. \ref{fig:evol-short}a), the coronal density along the LOS is only slightly affected by the planetary motion. We also note that the planetary magnetosphere is slightly tilted from the radial line at the planetary phase (thin black line) due to the planetary orbital motion, leading to a comet-like magnetotail. When the planet moves close to the mid-transit line, the density modulations start to affect the collection of radio waves by the telescope. This is essentially due to refraction of the radio waves, which is controlled by the density modulation of the ambient medium. Even when the planet moves away from the mid-transit line, the tail of the planetary magnetosphere still has some impact on the density of the ambient medium along the LOS (see Fig. \ref{fig:evol-short}d-f). We note again that previous studies mimicked the orbital phase variation of the exoplanet around the host star by viewing a static solution from different angles \citep[a moving observer and a static solution; e.g.,][]{cohe18,stru18a}. Note that there is some initial density perturbation at the start of the simulation (phase zero), but this does not affect the results, as that phase is on the side of the star opposite to the line of sight.
Figure \ref{fig:exo_short_transit1} shows synthetic light curves of the radio intensity at different frequencies as a function of the orbital phase for two different planetary magnetic field strengths. Radio intensity values are presented as relative fluxes normalized to the flux at phase zero (when the planet is eclipsed by the star).
Our virtual observing telescope is located at the middle of each plot (at a phase of $0.5$), designated the mid-transit point, at a distance of $40~R_*$ along the mid-transit line. The solid and dashed synthetic light curves in Figure \ref{fig:exo_short_transit1} correspond to planetary magnetic field strengths of 0.33 G and 3 G, respectively. The two cases show similar trends, but they differ in magnitude and phase (see Figure \ref{fig:exo_short_transit1}).
Figure~\ref{fig:Magnetosphere} shows a zoomed-in view of the planetary magnetosphere for the three cases (strongly, weakly, and non-magnetized planet). The day-side magnetosphere is very small in the non- and weakly magnetized cases (about 0.5 planetary radii), and larger in the strongly magnetized case (about 2.5 planetary radii). The strongly magnetized magnetotail is also wider and shows more structure, possibly due to magnetic interaction with the ambient coronal plasma and magnetic field.
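The trend of day-side magnetosphere size with field strength can be estimated from pressure balance: equating the dipole magnetic pressure $B_0^2 (R_p/R)^6 / 8\pi$ with the wind ram pressure $\rho v^2$ gives $R_{mp}/R_p = \left(B_0^2 / 8\pi \rho v^2\right)^{1/6}$. The sketch below uses illustrative close-in wind values (not taken from our simulation) to show the expected scaling:

```python
import numpy as np

# Rough pressure-balance standoff estimate for a dipolar planetary field.
# The wind density and speed below are illustrative placeholders only.

def standoff(b0_gauss, n_cm3, v_kms):
    """Day-side magnetopause distance in planetary radii (cgs pressure balance)."""
    rho = 1.6726e-24 * n_cm3          # g cm^-3 (proton mass * number density)
    v = v_kms * 1e5                   # cm s^-1
    return (b0_gauss ** 2 / (8.0 * np.pi * rho * v ** 2)) ** (1.0 / 6.0)

# Illustrative close-in stellar wind: n ~ 1e5 cm^-3, v ~ 300 km/s.
for b0 in (0.3, 3.0):
    print(f"B0 = {b0:3} G -> R_mp ~ {standoff(b0, 1e5, 300.0):.1f} Rp")
```

Because of the $1/6$ power, a tenfold increase in field strength only roughly doubles the standoff distance, consistent with the modest size difference between the weak- and strong-field magnetospheres seen in Figure~\ref{fig:Magnetosphere}.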
Figure \ref{fig:exo_short_transit1} shows that high-frequency radio emission (100~MHz and above) is blocked by the hot Jupiter near the mid-transit phase, causing a drop of around 5-10~\% in intensity; the higher the frequency, the smaller the drop. This high-frequency blocking follows from the fact that the high-frequency emission comes from the hot, dense regions of the low corona where the magnetic field is stronger \citep[e.g.,][]{cohe18,mosc18}, and these regions are simply shaded by the planet. We notice a slight increase in the intensity when the planet moves into or out of transit. As the planet moves out of transit, with the magnetotail located at the transit point, there is an overall focusing of the radio wave paths at high frequencies, leading to a flux increase; when the planet enters the transit phase, some refracted waves likewise reach the telescope, producing an increase in flux just before transit. We find that the drop in radio intensity is larger (around 15-20~\%) at 100~MHz. The larger mid-transit intensity drop at 100~MHz-1~GHz for the higher field strength indicates larger flux blocking. This makes sense if one assumes a slightly larger magnetosphere for a higher field strength (Figure~\ref{fig:Magnetosphere}), which blocks a larger area of the background emission.
The behaviour of the radio transit is quite different in the low-frequency bands (below 100 MHz). The lower-frequency stellar radio emission is associated with the cooler, less dense regions of the higher corona, which are also the regions the planet passes through. The transit behaviour at 30~MHz (Figure~\ref{fig:exo_short_transit1}) shows an increase in the radio intensity starting around phase 0.2, followed by a drop at the mid-transit phase. The drop at mid-transit is expected due to flux blocking by the planetary magnetosphere. We also note that the modulation of the radio transit is stronger for the higher planetary magnetic field.
However, the transit behaviour at 10 MHz (Figure~\ref{fig:exo_short_transit1}) is very different. The radio intensity starts to increase around phase 0.2, reaches a high value (a 10~\% increase) around the mid-transit phase, and then starts dropping. Looking at Figure~\ref{fig:exo_short_transit1}, these phases appear to coincide with the locations of the dense stellar helmet streamers. Thus, the increase of the radio flux at very low frequency (10~MHz) is associated with the crossing of the denser regions of the stellar corona, possibly leading to a compression of the planetary magnetosphere and an 8-10~\% increase in the radio flux collected by the observing telescope (see Figure \ref{fig:exo_short_transit1}). The increase at mid-transit seems to be a combination of the helmet-streamer crossing (the regions around the largest closed stellar magnetic field loops) and the magnetotail focusing effect also seen at low frequencies. Here, the flux increase is greater for the strong-field case than for the weak-field case; the magnetotail size appears to contribute more to the enhancement of the low-frequency radio flux as it crosses the observing LOS. Frequencies of 100~MHz or below are thus probably the best suited for observing planetary transits in radio. Finally, no significant phase shift is visible between the two field-strength cases at low frequencies.
The total radio intensity at the higher frequencies, e.g., 1~GHz, is on the order of $10^{-14}~[W~m^{-2}~Hz^{-1}]$, while at 10 MHz it is on the order of $10^{-17}~[W~m^{-2}~Hz^{-1}]$. Although the low-frequency radio intensity is modulated significantly by the star-planet interaction, the total low-frequency radio emission is much weaker than the high-frequency emission. This makes low-frequency radio observations more challenging than high-frequency ones.
In addition to the strong/weak field cases, we also consider a non-magnetized planet. The interaction between the magnetized stellar wind and a non-magnetized obstacle (planet) has recently been defined as a ``unipolar'' interaction \citep{stru15, stru18a}. In our solar system, the interaction between the solar wind and the Venus atmosphere is an ideal example of such a unipolar interaction. It has been shown that the unipolar interaction leads to the creation of an induced magnetosphere with a global structure similar to that of a self-generated planetary magnetosphere \citep[e.g.,][]{Luhmann1981,Kivelsonrussell1995,Russell2006, basa21}. However, it has also been shown \citep[e.g.,][]{ma13} that an induced magnetosphere is much less spatially extended than a self-generated planetary magnetosphere.
When we perform the simulation with a non-magnetized planet, we find an induced magnetosphere around it (see Figure \ref{fig:evol-unipolar}), reaffirming previous findings \citep{stev03, ma13, basa21} that a non-magnetized planet possesses an induced magnetosphere. Figure \ref{fig:evol-unipolar} shows the density modulations of the surrounding medium due to the induced magnetosphere of the non-magnetized planet, which is also placed at a distance of $10~R_\star$. Figure \ref{fig:exo_uni_transit} shows synthetic light curves of the radio intensity modulations at different frequencies for the unipolar interaction scenario. The radio modulations show trends overall similar to the weakly magnetized case at higher frequencies (100~MHz and above), but the modulations are weaker (about 10\% or less). In this scenario, we also notice a larger drop in the radio intensity at 100~MHz. At lower frequencies, we notice an increase in the radio intensity starting around phase 0.2 and a drop around mid-transit (Figure~\ref{fig:exo_uni_transit}). It seems that the particular structure of the induced magnetosphere leads to a significant focusing of the radio waves at these frequencies.
Our results show that the radio waves are blocked or disrupted not only by the planet itself but also by the planetary magnetosphere. The flanks of the planetary magnetosphere start disrupting the propagation of radio waves well before the beginning of the actual transit. Thus, radio emission is affected more strongly by a transiting exoplanet than visible light is during an optical transit.
\subsection{Longer-orbit Case}
\label{LongO}
In this scenario, we place the planet at a distance of $a=20~R_\star$ ($0.080$~AU). The density of the stellar corona decreases significantly with distance, so one can think of the planetary magnetosphere as a relatively high-density bubble crossing the lower-density regions of the outer corona. This is clearly seen in Figure~\ref{fig:Magnetosphere}: in the close-orbit case, the magnetosphere density is comparable to that of the ambient corona, while in the longer-orbit case, the magnetosphere is a bubble of higher density than the ambient corona. As a result, in the short-orbit case, the modulation of the radio flux is mostly due to the magnetosphere-corona interaction, since the magnetosphere replaces the ambient corona with an overall similar density. In contrast, in the longer-orbit case, the modulations are due to the density structure of the magnetosphere itself, since the magnetosphere replaces the corona with a larger density along the path of the radio waves.
Figure \ref{fig:evol-long} shows the density modulations of the ambient medium due to the planetary motion in the longer-orbit scenario. As in the short-orbit case (Figure \ref{fig:evol-short}), the telescope observing point and the LOS are marked by the thick black (Y=0) line. Figure \ref{fig:evol-long}b shows that the planetary magnetosphere modulates the density near the line of sight (thick black line) well before the mid-transit point, and Figure \ref{fig:evol-long}d-f shows that the density modulation along the line of sight persists even after the transit is complete.
Figure \ref{fig:exo_long_transit1} shows the synthetic light curves of the radio flux at different frequencies as a function of the orbital phase when the planet orbits farther from the star. The figure shows the modulations for the two different planetary magnetic field strengths.
Figure \ref{fig:exo_long_transit1} shows a significant drop in the radio intensity at the mid-transit phase. The drop is very significant (almost 45~\%) at 100 MHz, and much smaller (around 5-10~\%) at the other frequencies (250 MHz-1 GHz). We also notice a slight increase in the radio intensity just before and after the transit, similar to the short-orbit scenario, because the magnetosphere refracts the radio waves towards the observer at these phases. The 30~MHz light curve likewise shows a drop in the radio intensity at mid-transit and an increase just before and after the transit.
We find quite different radio transit behaviour at very low frequency (see Figure \ref{fig:exo_long_transit1}). Between phase 0.4 and mid-transit, the radio flux increases by 10~\%, then drops slightly around mid-transit. Between phases 0.52 and 0.6, we again find an increase, followed by a drop around phase 0.6. This is a clear and very extended focusing effect of the stellar radio emission by the moving planetary ``bubble''. We also notice some modulation in the radio intensity around phases 0.15 and 0.85 at all radio frequencies. Looking at Figure~\ref{fig:exo_long_transit1}, these phases coincide with the locations of the dense stellar helmet streamers, so this modulation is probably the effect of the planet crossing the streamers in the outer corona.
In all scenarios, we notice a very clear difference in the magnitude of the modulations between the strong- and weak-field cases, and a slight difference in phase. Variations between the compressed day-side magnetopause and the stretched magnetotail do not seem to be visible in the radio light curves when the planet orbits the star at a greater distance and interacts with the outer corona.
\section{Discussion}
\label{Discussion}
\subsection{Exoplanet Magnetic Field from the Radio Transit}
\label{exo-mag}
Previous studies assumed that the exoplanetary magnetosphere is a source of intense non-thermal radio emission. The interaction between the stellar wind and the planetary magnetosphere causes long-term variations in the magnetospheric radio emission \citep{desc83}. The magnetospheric radio emission is directly proportional to the stellar wind energy input at the planetary magnetosphere, which has been shown to depend on the wind-magnetosphere standoff distance \citep{desc84}. Since the standoff distance in turn depends on the planetary magnetic field strength, one can estimate the planetary magnetic field from the scaling relationship between the standoff distance and the magnetospheric radio emission \citep{desc84, mill88, farr99, zark01a, lazi04}. However, auroral radio emission from exoplanets seems to be below the observable threshold \citep[e.g.,][]{Burkhart2017,lync18}. Here, we propose an alternative approach for estimating the exoplanetary magnetic field from the radio transit.
Instead of treating the planetary magnetosphere as a source of radio emission, here we focus on the modulation it imposes on the radio intensity from the host star in order to characterize the exoplanetary magnetic field. Specifically, we aim to derive a scaling relationship between the radio intensity modulation during the transit and the planetary magnetic field. For this purpose, we consider six different scenarios with three planetary magnetic field strengths, namely 0.33 G, 1 G, and 3 G, and two orbital distances, namely 10 R$_\star$ and 20 R$_\star$ (the intermediate case of 1 G planetary field strength is not shown in detail in the paper).
In order to obtain the simplest scaling of the radio flux modulation with the planetary field strength, we define the ``Extreme Modulation'' as the absolute difference between the maximum and minimum radio intensity during the transit. We calculate the Extreme Modulation using the normalized radio transit dataset for each frequency. Figure~\ref{fig:exo-mag-field} shows the relationship between the Extreme Modulation and the planetary magnetic field strength. In the long-orbit scenario, we find a clear increase of the Extreme Modulation with the planetary magnetic field strength for all radio frequencies (right panel of Figure~\ref{fig:exo-mag-field}). In the short-orbit scenario, we also find a moderate increase with the planetary magnetic field, but for some frequencies it drops first and then increases again (left panel of Figure~\ref{fig:exo-mag-field}). We note that our data sets consist of only three points each, which is insufficient to derive a scaling law between the Extreme Modulation and the planetary magnetic field strength; for that, many simulations covering different planetary magnetic field strengths, star-planet distances, and stellar magnetic maps are needed. We also note that the size of the planetary magnetosphere is larger for a higher planetary magnetic field (see Figure~\ref{fig:Magnetosphere}). Our initial results are very promising, indicating the possibility of determining exoplanetary magnetic fields from radio transits.
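As a concrete illustration of this definition, the Extreme Modulation can be computed directly from a normalized light curve. The toy light curve below (with an assumed 45\% dip and a median-based normalization) is purely illustrative and not taken from the simulation output:

```python
import numpy as np

def extreme_modulation(flux):
    """Extreme Modulation: absolute difference between the maximum and
    minimum of the transit light curve, computed on the normalized flux."""
    flux = np.asarray(flux, dtype=float)
    norm = flux / np.median(flux)  # normalization choice is an assumption
    return float(np.max(norm) - np.min(norm))

# toy light curve: flat out-of-transit flux with a 45% mid-transit dip
phase = np.linspace(0.0, 1.0, 201)
flux = np.ones_like(phase)
flux[(phase > 0.45) & (phase < 0.55)] = 0.55
print(extreme_modulation(flux))  # ~0.45 for this toy curve
```

Applied per frequency to the normalized transit datasets, this single number is what is plotted against the planetary field strength.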
\subsection{Exoplanets in EUV and X-ray:}
\label{uv-xray}
Most magnetically active stars are known to produce well-observed emission in the EUV and X-ray bands. It is possible that planetary motions modulate the EUV and X-ray emission from the host star as well. Since our model can also produce synthetic X-ray and EUV images \citep[see full description in, e.g.,][]{vand14,sach19}, we compare the synthetic modulations produced in these bands to those obtained in the radio bands.
The left panel of Figure~\ref{fig:euv-xray} shows the synthetic light curves for different EUV frequencies as a function of the orbital phase; the right panel shows the same for the X-ray frequencies. In both cases, we do not find any significant modulation. There is some 3-4~\% modulation (see the zoomed-in portion), but it is not caused by the exoplanet. Our star-planet interaction model is time-dependent and its solutions are not always perfectly steady; we believe these modulations are due to the non-steadiness of the stellar solution, not the planetary motion.
One could expect a dip at the mid-transit phase, as the planet should block part of the incoming X-ray and EUV emission. However, we do not find any such characteristic dip in the X-ray and EUV transit spectra. First, the low-resolution magnetogram we use for HD 189733 produces a very small amount of X-ray and EUV emission, since no large active regions are included. Additionally, the orbital period of the planet is only two days in our simulation, and the planet remains in the transit phase for a very short time (only four to five hours). We have not considered the presence of large starspots and flares in our simulation setup. Large starspots and flares are known to produce large amounts of EUV and X-ray emission; their presence may therefore raise the total X-ray and EUV flux into the observable range, making it possible to observe exoplanet transits in EUV and X-ray \citep[see][]{popp13}.
\subsection{The Potential of Exoplanets Radio Transits Observations and Simulations}
\label{FutureofRadioTransits}
When an exoplanet transits its host star, stellar emission is expected to be absorbed or scattered by the planetary atmosphere or magnetosphere. One can use the resulting transit spectrum to characterize the atmosphere of that exoplanet. This technique has been used successfully to characterize the atmospheres of many hot-Jupiter, mini-Neptune, and super-Earth exoplanets \citep{seag20, vida03, krei14, knut14}. The host star emits in different bands, such as visible, radio, and EUV. During their propagation through the stellar or planetary atmosphere, the emitted waves are refracted (bent) in response to the gradient of the atmospheric index of refraction. This process is important, as it modifies the atmospheric path traversed by the emitted waves, eventually impacting the collection of those waves by the observing telescope and thus the transit. Previous studies have explored the effect of refraction on exoplanet light curves and transit spectra \citep{hui02, sidi10, garc12, misr14}. A significant impact of refraction has been observed in our solar system during lunar eclipses and the 2004 Venus transit \citep{pasa11, garc12}.
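The bending is set by the refractive index of the plasma the waves cross. The sketch below uses the standard cold, unmagnetized-plasma refractive index (a simplification of the full simulation; the electron densities are illustrative values, not model output) to show why the lowest radio frequencies are refracted, or even blocked, most strongly:

```python
import math

def plasma_frequency_hz(n_e_cm3):
    # cold-plasma frequency: f_p [Hz] ~ 8980 * sqrt(n_e [cm^-3])
    return 8980.0 * math.sqrt(n_e_cm3)

def refractive_index(freq_hz, n_e_cm3):
    # cold, unmagnetized plasma: n = sqrt(1 - (f_p/f)^2); waves with
    # f < f_p cannot propagate (returned here as n = 0, i.e. blocked)
    ratio2 = (plasma_frequency_hz(n_e_cm3) / freq_hz) ** 2
    return math.sqrt(1.0 - ratio2) if ratio2 < 1.0 else 0.0

# illustrative (assumed) coronal densities: n deviates strongly from 1
# at low frequency, so those rays bend most in a density gradient
for n_e in (1e6, 1e8):
    for f in (30e6, 100e6, 1e9):
        print(f"n_e={n_e:.0e} cm^-3, f={f/1e6:6.0f} MHz -> "
              f"n={refractive_index(f, n_e):.4f}")
```

For example, at an assumed density of $10^8$~cm$^{-3}$ the plasma frequency is near 90 MHz, so a 30 MHz wave is blocked outright while a 1 GHz wave passes almost unrefracted.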
In principle, it is very difficult to determine where the refracted rays reaching the observing telescope originate. However, we can consider some simple tentative scenarios. First, when the planet is far away from the mid-transit LOS, the moving planet modulates the density surrounding its position, but refracted rays are unlikely to reach the telescope; rays coming from the star are refracted only by the non-modulated density variations of the stellar corona, so one can expect little or no modulation in the transit spectrum. When the planet gets closer to the LOS, just before the beginning of the transit, one can expect a significant impact of refraction on the transit spectrum: more refracted rays are likely to reach the telescope due to the strong density variations between the coronal and magnetospheric plasma, especially at the compressed region near the magnetopause. Thus, one can expect an increase in flux just before the transit. Next, at mid-transit, the planet and its magnetosphere should block a significant amount of the incoming rays while some refracted rays still reach the telescope, so one can expect a dip in the observed flux. Finally, just after the planet completes the transit, more refracted rays reach the telescope in addition to the direct incoming rays, owing to the crossing of the LOS by the other side of the magnetosphere/magnetopause, so one can again expect an increase in the flux. Such a general increase in the observed flux just before and after the transit, together with a dip during the transit, is mainly due to atmospheric lensing and is described in \cite{sidi10}.
Our results for radio wave refraction are overall consistent with these scenarios, while being more detailed due to the non-idealized setup we chose to use here.
The structure of the stellar corona generally consists of closed magnetic loops (helmet streamers) and regions where the magnetic field lines are open to space. The so-called Alfv\'en surface is the surface where the wind speed equals the local Alfv\'en speed. We generally find a slower, denser wind near the tops of the helmet streamers, and a faster, less dense wind away from these regions. Thus, the topology of the stellar magnetic field influences the coronal structure significantly \citep[see, e.g.,][]{McComas2007,reve15, stru15, perr21, hazr21}. The dense, hot closed-field regions are the major source regions for the background stellar coronal radio emission; in particular, the strong magnetic field near active regions is the source of the highest-frequency radio emission \citep{mosc18}. Therefore, the controlling factor for the stellar radio emission (excluding transient radio bursts) is the stellar surface magnetic field structure \citep{vare18}. In our model, the low-resolution ZDI map of HD 189733 represents a relatively simple, almost dipolar magnetic structure, with helmet streamer regions located mainly near the equator. Since this low-resolution magnetogram does not include active regions, the actual radio flux from HD 189733, especially at higher frequencies, is likely larger than the one modeled here.
Our simulations show clear trends in the modulations of the radio flux for different frequencies and planetary orbital distances from the star. Moreover, they show notable differences in the modulations as a function of the magnetic field strength. Interestingly, even the non-magnetized case shows some modulations due to its induced magnetosphere. Our simulations demonstrate that both magnitude and phase differences exist in the radio flux modulation patterns, and that these differences could potentially be related to the planetary field strength.
Despite the fact that the radio flux from most planet-hosting stars is weak \cite[there are some potential selected targets, see][]{cohe18} and that radio signals are very noisy, our simulations show that the planetary modulations of the radio signal could cover a significant part of the phase curve due to the extended impact of the planetary magnetosphere. A combination of radio observations at both low and high radio frequencies and detailed modeling of the background stellar corona (driven by magnetic observations of the star) could provide a promising way to characterize and constrain the planetary magnetic field. Such a characterization could significantly improve our understanding of the internal structure of exoplanets, as well as of their atmospheric evolution \citep{Gronoff2020}.
Here, we chose to simulate a real system in order to capture asymmetries in our solutions. We show that even in a non-uniform, more realistic case, radio modulations are clear and visible. However, idealized cases may help to better characterize the shape, magnitude, and phase location of different modulations as a function of magnetic field strength, orbital phase, and perhaps spectral type or stellar field strength. Specifically, if the shape, magnitude, and phase location could be generalized, Machine Learning (ML) techniques could be adopted to search for planetary radio modulations in the large available data sets. Such an approach has been adopted to detect exoplanets in other types of exoplanet data sets \cite[e.g.,][]{Malik2022}.
Previous studies have already indicated the possibility of detecting the signature of an exoplanet transit using the upcoming SKA in the low-frequency range \citep{pope19} and ALMA in the high-frequency range \citep{selho13, selho20}. However, the radio flux from the planet-hosting star must be sufficiently strong to be observable; a few known stars with observable radio flux may be used for that purpose \citep[see][]{wend95, slee03, etangs11, villa14, fich17, moha21}. Many of these previous studies placed upper limits on the flux density and on the mass-loss rates of the host-star winds. \cite{villa14} detected all four stars selected in their study in the Ka band (centre frequency 34.5 GHz) using the Very Large Array (VLA), and were only able to place upper limits on the flux density at other frequencies. The upgraded version of the existing VLA \citep[ngVLA;][]{oste18} will have very high sensitivity and will be able to detect a few observable stars in the radio band. However, we note that most of these stars are detected in non-thermal radio emission, especially in flares. In our study, we only consider the thermal radio emission, which is difficult to detect with present radio instruments. More sensitive radio interferometers may be needed to detect thermal radio emission from planet-hosting stars.
Our next studies will be dedicated to performing a grid of idealized models of the star and the planet using dipole fields for both. In addition to the planetary field strength and orbital separation, we plan to investigate the radio modulations as a function of the planetary field polarity (with respect to the stellar field) and the planetary inclination. Moreover, we plan to investigate the effect of small-scale active regions on the radio modulations and on the overall radio flux. We will either impose artificial active regions or, alternatively, use solar magnetograms that include these small-scale features.
\section{Conclusion}
\label{Conclusion}
Observing and characterizing the magnetic fields of exoplanets is important for understanding their internal structure and atmospheric evolution. In this study, we perform time-dependent star-planet interaction (SPI) simulations to study this possibility, using HD 189733 as the central star with a hot Jupiter orbiting it. Our set of simulations aims to demonstrate the feasibility of observing exoplanetary magnetic fields using radio transit observations.
Our simulations show some clear repeated trends at all radio frequencies, as well as some differences in these trends between the low- and high-frequency ranges. Moreover, they demonstrate a clear dependence of the modulations on the magnetic field strength, both in the magnitude of the modulations and in the phase of some modulation features. Thus, our simulations suggest that the magnitude of the exoplanetary field could potentially be determined from radio transit observations.
Our initial study provides a solid background for an extended parametrization of the transit modulations of the stellar radio emission. Future work should combine simulations of the stellar corona and the exoplanet with radio observations of stars having a feasibly observable radio flux. The former would provide specific features in the data that indicate the planetary field strength, while the latter would provide the datasets in which those features may appear. ML tools would be ideal for this task. Future, more sensitive radio interferometers may help to detect these kinds of radio modulations during planetary transits.
Additionally, we note that our study does not consider any kind of stellar magnetic variability, and our model only accounts for the thermal radio emission, not the coherent (non-thermal) radio emission. Observations indicate a significant increase in stellar X-ray and radio emission during flares and coronal mass ejections (CMEs). Effective cleaning of stellar magnetic variability (flare and CME effects) from the radio transit spectrum is necessary to isolate the impact of the exoplanet.
\begin{acknowledgments}
This work is supported by NASA grant 80NSSC20K0840. Simulation results were obtained using the (open source) Space Weather Modeling Framework, developed by the Center for Space Environment Modeling, at the University of Michigan with funding support from NASA ESS, NASA ESTO-CT, NSF KDI, and DoD MURI. We also thank Dibyendu Nandi for the discussion and suggestions. The simulations were performed on NASA's Pleiades cluster under SMD-20-52848317.
\end{acknowledgments}
\input{reference_exo.bbl}
|
Title:
Using ultra-high energy cosmic rays and air showers to test Lorentz invariance within modified Maxwell theory |
Abstract: Cosmic rays and air showers at ultra-high energy are unique tools to test the
validity of Lorentz invariance. A brief overview is given on such tests
focusing on isotropic, non-birefringent Lorentz violation (LV) in the photon
sector. Based on the apparent absence of vacuum Cherenkov radiation and photon
decay, the LV parameter $\kappa$ is bound to $-0.6 \cdot 10^{-20} < \kappa < 6
\cdot 10^{-20}$ (98\% CL). We report an updated limit from cosmic-ray photon
observations and preliminary results on testing vacuum Cherenkov radiation in
air showers.
| https://export.arxiv.org/pdf/2208.08747 |
\newcommand{\refeq}[1]{(\ref{#1})}
\def\etal {{\it et al.}}
\title{Using ultra-high energy cosmic rays and air showers
to test Lorentz invariance
within modified Maxwell theory}
\author{Markus Risse}
\address{Department of Physics, University of Siegen,\\
57072 Siegen, Germany}
\bodymatter
\section{Introduction}
As a pillar of physics, Lorentz invariance (LI) deserves to be tested thoroughly.
The search for violation of Lorentz invariance (LV) is also motivated by efforts towards a fundamental theory, where LV effects are allowed and may appear in the low-energy theory.\cite{liberati09a}
Thus, theory needs experimental guidance. Data interpretation, in turn, needs a theoretical framework.
In the analyses summarized here, the framework of modified Maxwell theory is adopted (Sec.~\ref{sec:modmax}).
We make use of the highest-energy particles in the universe: ultra-high energy (UHE) cosmic rays (Sec.~\ref{sec:cr}) and the air showers they initiate (Sec.~\ref{sec:eas}).
By checking the presence of the non-standard processes of vacuum Cherenkov radiation and photon decay, LI is probed.
The apparent absence of these processes strongly constrains LV.
\section{Modified Maxwell theory}\label{sec:modmax}
The Lagrange density of standard QED is extended by adding a term which breaks Lorentz invariance while preserving CPT and gauge invariance\cite{modmax}:
\begin{equation}
\mathcal{L} = -\frac{1}{4}F^{\mu\nu}F_{\mu\nu} +
\overline{\psi}\left[\gamma^\mu(i\partial_\mu-eA_\mu)-m\right]\psi
-\frac{1}{4}(k_F)_{\mu\nu\rho\sigma}F^{\mu\nu}F^{\rho\sigma}
\label{eq:lv_lagrangian}
\end{equation}
For a discussion, see Ref.~\refcite{lv2021} and references therein. We focus on the case of isotropic, nonbirefringent LV in the photon sector,
controlled by a single dimensionless parameter $\kappa \in (-1,1]$.
For $\kappa \neq 0$, certain processes forbidden in case of LI become allowed.
For $\kappa > 0$, vacuum Cherenkov radiation of charged fermions of mass $M$ occurs above an energy threshold
\begin{equation}
E^\text{th}_f(\kappa) = M\,\sqrt{\frac{1+\kappa}{2\kappa}} \simeq \frac{M}{\sqrt{2\kappa}}~.
\label{eq:particlethreshold}
\end{equation}
For $\kappa < 0$, photons decay into electron-positron pairs above the threshold
\begin{equation}
E^\text{th}_\gamma(\kappa) = 2\,m_e\,\sqrt{\frac{1-\kappa}{-2\kappa}} \simeq \frac{2\,m_e}{\sqrt{-2\kappa}}~,
\label{eq:photonthreshold}
\end{equation}
with the electron mass $m_e$.
Both processes turn out to be very efficient\cite{klinkhamer08c,diaz1516}: with radiation lengths $\ll 1$~m, the energy loss is quasi-instantaneous.
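To make the two thresholds above concrete, the following sketch evaluates them numerically (with rounded particle masses). The simple inversion of the photon-decay threshold reproduces the order of magnitude of the quoted bounds; the published limits additionally fold in the 98\% CL energy uncertainties, which this sketch omits:

```python
import math

M_PROTON_EV = 938.272e6  # proton mass [eV]
M_E_EV = 0.511e6         # electron mass [eV]

def cherenkov_threshold_ev(kappa, mass_ev):
    # E_th = M * sqrt((1 + kappa) / (2 kappa)), valid for kappa > 0
    return mass_ev * math.sqrt((1.0 + kappa) / (2.0 * kappa))

def photon_decay_threshold_ev(kappa):
    # E_th = 2 m_e * sqrt((1 - kappa) / (-2 kappa)), valid for kappa < 0
    return 2.0 * M_E_EV * math.sqrt((1.0 - kappa) / (-2.0 * kappa))

def kappa_bound_from_photon(e_gamma_ev):
    # inverting the photon-decay threshold for |kappa| << 1:
    # a photon of energy E can only survive if kappa > -2 (m_e / E)^2
    return -2.0 * (M_E_EV / e_gamma_ev) ** 2

# example from Sec. 3: kappa = 0.5e-10 gives a proton threshold near 1e14 eV
print(f"{cherenkov_threshold_ev(0.5e-10, M_PROTON_EV):.2e} eV")

# a 1.42 PeV photon alone bounds kappa at the few 1e-19 level
print(f"{kappa_bound_from_photon(1.42e15):.1e}")
```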
\section{Constraining Lorentz violation using cosmic rays}\label{sec:cr}
$\boldsymbol{\kappa > 0}$: A first constraint can be obtained by the mere existence of cosmic rays.
Assume $\kappa = 0.5 \cdot 10^{-10} \Rightarrow E^\text{th}_p \simeq 10^{14}$~eV for protons:
then no protons would be expected to reach Earth above $E^\text{th}_p$, in contradiction with the data.
Thus, $\kappa$ can be bounded. Assuming iron primaries for cosmic rays observed above $10^{20}$~eV
led to the constraint $\kappa < 6 \cdot 10^{-20}$ (98\% CL).\cite{klinkhamer08ab,klinkhamer08c}
\\ \\
$\boldsymbol{\kappa < 0}$: Similarly, the existence of cosmic-ray photons is used to constrain $\kappa < 0$.
Based on $\sim$30~TeV photons observed by atmospheric Cherenkov telescopes, a limit of
$\kappa > -9 \cdot 10^{-16}$ (98\% CL) was obtained.\cite{klinkhamer08c}
In view of the recent observation of PeV photons, this limit can be updated. Using a photon of (1.42$\pm$0.13) PeV energy\cite{lhaaso2021}
improves the limit to $\kappa > -4 \cdot 10^{-19}$ (98\% CL).
Cosmic-ray photons at even higher energies are searched for\cite{uhephoton1,uhephoton2} but have not yet been identified.
An observation of an EeV photon would improve the limit by about six orders of magnitude.
\section{Constraining Lorentz violation using air showers}\label{sec:eas}
Extensive air showers (EAS) initiated by cosmic rays offer an alternative approach to test LI:
in the cascading process, various UHE particles are expected to be produced as secondaries,
possibly with energies above $E^\text{th}_f$ or $E^\text{th}_\gamma$.
When the primary cosmic ray -- typically a proton or a nucleus up to iron -- interacts with the air,
pions constitute a large fraction of the particles produced. The neutral pions usually decay quickly
into secondary photons, giving rise to the well-known electromagnetic cascade of photons, electrons and positrons
based on pair production and bremsstrahlung. Dominated by these electromagnetic particles, the number of particles
increases, reaches a maximum, and decreases again due to ionization energy loss.
Air shower experiments such as the Pierre Auger Observatory\cite{auger} detect secondary particles reaching the ground with a huge surface detector array
(3000~km$^2$). In addition, the flash of fluorescence light emitted by the air from the
passage of the particle cascade can be registered by appropriate telescopes.
This allows the observation of longitudinal shower profiles.
The energy of the primary particle is given by integrating the profile (modulo small corrections).
The atmospheric depth of shower maximum, $X_{\textrm{max}}$, contains information about the primary type:
due to the smaller cross-section,
primary proton showers reach the maximum about 100~g/cm$^2$ deeper in the
atmosphere compared to primary iron of the same total energy.
Correspondingly, the shower-to-shower fluctuations $\sigma(X_{\textrm{max}})$
are larger for primary protons.
Both quantities, $X_{\textrm{max}}$ and $\sigma(X_{\textrm{max}})$, can be used to study possible LV effects.
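The proton-iron separation of about 100~g/cm$^2$ quoted above can be estimated in the superposition picture, in which a nucleus of mass number $A$ behaves like $A$ independent protons of energy $E/A$; the elongation rate used below is an assumed typical value, not a fitted one:

```python
import math

D10 = 58.0  # elongation rate [g/cm^2 per decade of energy]; assumed typical value

def xmax_shift_vs_proton(mass_number):
    # superposition model: <Xmax>(A, E) ~ <Xmax>(p, E/A), so the shift
    # relative to a proton of the same total energy is D10 * log10(A)
    return D10 * math.log10(mass_number)

print(f"proton-iron Xmax separation: {xmax_shift_vs_proton(56):.0f} g/cm^2")
```

With this assumed elongation rate, iron ($A=56$) reaches its maximum roughly 100~g/cm$^2$ higher in the atmosphere than a proton shower of the same energy, consistent with the separation quoted in the text.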
\\ \\
$\boldsymbol{\kappa < 0}$: While awaiting the observation of primary UHE photons,
secondary photons with energies up to about 10\% of the primary proton energy
are expected in EAS. This permits some access to photons well above PeV energies.
In case of quasi-instantaneous photon decay, showers become shorter\cite{lv2017}:
as shown in Fig.~\ref{fig1}, the average depth of maximum $\left<X_{\textrm{max}}\right>$ is reduced by $\sim$100~g/cm$^2$
at $10^{19}$~eV for $\kappa = -9 \cdot 10^{-16}$.
If the observed $\left<X_{\textrm{max}}\right>$ exceeds the one from LV simulations for any primary composition,
the corresponding $\kappa$ value is excluded.
This has been performed for $\left<X_{\textrm{max}}\right>$ only\cite{lv2017} and
combining\cite{lv2021} $\left<X_{\textrm{max}}\right>$ and $\sigma(X_{\textrm{max}})$.
The combination provides additional restrictions on the primary composition and led to
a limit of $\kappa > -0.6 \cdot 10^{-20}$ (98\% CL). The constraints are summarized in Tab.~\ref{tbl1}.
\begin{table}[t]
\tbl{Summary of constraints on $\kappa$ (limits at 98\% CL).}
{\begin{tabular}{@{}l|rc@{}}%
& Using cosmic rays & Using air showers \\
\colrule
$\kappa > 0$: vacuum Cherenkov & $< 6 \cdot 10^{-20}$ & (work in progress) \\
\colrule
$\kappa < 0$: photon decay & $> -40 \cdot 10^{-20}$ & $> -0.6\cdot 10^{-20}$ \\
\end{tabular}
}
\label{tbl1}
\end{table}
\\ \\
$\boldsymbol{\kappa > 0}$: Checking Tab.~\ref{tbl1}, the question arises whether a limit on $\kappa > 0$ can also be placed using EAS.
Secondary electrons and positrons with energies
up to about 5\% of the primary proton energy are expected. In case of instantaneous vacuum Cherenkov radiation, again shorter showers
could occur compared to the standard case of bremsstrahlung.
Preliminary results\cite{lv2022} after implementing this LV effect in shower simulations are displayed in Fig.~\ref{fig2}.
Indeed, again a reduction of $\left<X_{\textrm{max}}\right>$ can be noticed. The effect appears to be smaller
than the one for $\kappa < 0$ for the same $|\kappa|$. Still, a limit can be placed that is
based on a different method (showers instead of cosmic rays) and different particle species (electrons/positrons instead of protons/nuclei).
It should also be noted that for a given $\kappa$, only primaries with $E/M$ below a certain
threshold arrive at Earth at all. This restriction on the composition will strengthen the LV constraints.
\section{Conclusion and outlook}
UHE cosmic rays and their air showers are well suited to test LI.
The apparent absence of the non-standard processes of vacuum Cherenkov radiation and photon decay puts strong constraints on LV.
Currently, LV constraints on $\kappa > 0$ are derived using air showers.\cite{lv2022}
Further improved tests are possible in case of observing UHE photons and for further data (higher energy, smaller uncertainties, restrictions on composition).
With limits on $|\kappa|$ at the $10^{-20}$ level, the tested LI apparently holds to a high extent.
In view of theoretical approaches that a priori allow for LV at much larger levels, the question arises whether mechanisms
should be considered to ``protect LI'' (e.g., Ref.~\refcite{bjorken}).
Still, even if getting used to constraining LV, experimentalists should remain open-minded for LV showing up.
\section*{Acknowledgments}
It is a pleasure to thank the organizers for a stimulating workshop.
The work summarized here was performed in collaboration with F.R.\ Klinkhamer,
J.S.\ D\'iaz, M. Niechciol and F. Duenkel: thank you!
Support by the German Research Foundation (DFG) is gratefully acknowledged.
|
Title:
Connecting the astronomical testbed community -- the CAOTIC project: Optimized teaching methods for software version control concepts |
Abstract: Laboratory testbeds are an integral part of conducting research and
developing technology for high-contrast imaging and extreme adaptive optics.
There are a number of laboratory groups around the world that use and develop
resources that are imminently required for their operations, such as software
and hardware controls. The CAOTIC (Community of Adaptive OpTics and hIgh
Contrast testbeds) project is aimed to be a platform for this community to
connect, share information, and exchange resources in order to conduct more
efficient research in astronomical instrumentation, while also encouraging best
practices and strengthening cross-team connections. In these proceedings, we
present the goals of the CAOTIC project, our new website, and we focus in
particular on a new approach to teaching version control to scientists, which
is a cornerstone of successful collaborations in astronomical instrumentation.
| https://export.arxiv.org/pdf/2208.02263 |
\keywords{high-contrast imaging, adaptive optics, testbeds, wavefront sensing and control, control software, laboratory experiments, version control}
\section{Introduction}
\label{sec:introduction}
Laboratory testbeds for high-contrast imaging (HCI) and extreme adaptive optics (AO) systems are indispensable venues for technology development in a field that relies on the largest-aperture ground and space-based telescopes. Ever more ambitious requirements for astronomical observations, such as higher sensitivity, finer resolution, and imaging at deeper contrasts are driving the development of new and improved techniques in the field of astronomical instrumentation\cite{astro2020}. Specifically, the research in HCI and AO is very technically oriented and laboratory testbeds are a crucial component in every project, providing facilities from proof-of-concept realizations over testing grounds, to full system demonstrations\cite{Mazoyer2019HighContrastTestbedsFuture}. The operation of such testbeds requires expertise in all scientific and technical areas, including hardware controls, software engineering, data storage and management, as well as processes to thread all of these components into an overall project that runs smoothly and robustly.
To date, there are more than a dozen such testbeds at various institutions around the world, each focusing on one or several of the following topics: exoplanet imaging, coronagraphy, wavefront sensing and control, adaptive optics, image processing, data analysis, and component development (e.g., detectors, mirrors). Every research group is recording remarkable results in its respective projects; however, the communication and exchange between the groups is inherently limited to published papers and proceedings, conference talks, and sparse email contact. We have identified the potential of easily accessible testbed information and easier communication across the community to eliminate the need for each team to ``reinvent the wheel'' when implementing hardware and software solutions, as well as to facilitate cross-testbed learning. Since all of these facilities use a finite number of well-known hardware components and extend well-established optical algorithms, our aim is to provide a platform for exchange. In this way, we hope to standardize certain approaches taken in the implementation and maintenance of the testbeds and to accelerate the research findings coming out of this community.
The CAOTIC (Community of Adaptive OpTics and hIgh Contrast testbeds) project provides a platform to leverage this potential; it is currently represented by a website. Through submissions from the community, we have collected technical specifications and operational information from a little over a dozen testbeds, with the goal of providing a top-level overview of the field. By promoting the sharing of resources, the development and use of open-source software, and higher visibility for junior people in the field (students, postdocs, and young professionals), we aim to strengthen the ways in which we build networks, spread knowledge and give access to information.
In particular, the first actionable item we identified in the context of the CAOTIC project is to rethink which standards we as a community want to rely on for software management. As a core component of many testbed projects, software development is a technical topic that is too often left in the hands of a purely self-taught workforce: astronomers creating full software environments and infrastructures. While the researchers involved in creating and running these optical testbeds are undoubtedly the experts who decide on the project goals and their execution, there is a stark lack of software development skills within this demographic. Efficiently addressing certain tasks, like creating a new control architecture, rewriting code in a different language, introducing a level of abstraction or setting up a continuous integration framework, can sometimes only be done by hiring one or more software engineers by training. Other needs, though, like the implementation of new algorithms, the integration of individual drivers and the overall maintenance of an already well-designed project, can easily be met by the scientific staff. However, to keep the interactions between the different software needs and implementations smooth, it is necessary to find a way to consolidate the software management process between the individual contributors.
In particular, efficient collaboration on code, together with its versioning and safeguarding through backups, is one of the most important aspects of this process. In most software projects outside of academia, this need is addressed by using version control systems (VCS). While this concept is not unknown to the scientific community, it is often ignored when setting up a testbed project, or more generally any research project involving software development. One of the reasons for this is a lack of understanding of the goals and workings of VCS in the wider astronomy community, and the inherent lack of training opportunities on this particular topic. This is why we decided to address this issue as the first main goal of the CAOTIC project.
In these proceedings, we start in Sec.~\ref{sec:testbeds-and-labs} by giving a broad overview of the general work of astronomical testbeds and laboratories before motivating the need to adopt VCS as a standard tool. In Sec.~\ref{sec:caotic-project}, we present the overall goals of the CAOTIC project, its current status, its impact thus far, and plans for the second half of 2022 and beyond. In Sec.~\ref{sec:git-github-teaching}, we highlight the goals and methods of a new approach to teaching version control for software development in research projects and present the impact of the version control course series tailored to scientists that took place in the first halves of 2021 and 2022. Finally, in Sec.~\ref{sec:summary}, we conclude our work and give an outlook on the future of these activities.
\section{Astronomical testbeds and laboratories}
\label{sec:testbeds-and-labs}
Astronomical instrumentation is a wide field of research with many scientific applications. Since most objects of astronomical research cannot be captured and brought to, or replicated in, a laboratory, most instrumental applications focus on developing the technology to conduct observations of faraway objects with the goal of making new discoveries and confirming theoretical models. The building and testing of these instruments is mostly left to engineering teams, with some input from the scientists involved in the particular project, but the role of the latter usually becomes dominant only once an instrument starts its on-sky operations.
There are certain applications, though, where the development and improvement of optical instruments represents the concrete scientific work itself. This is in particular true for the field of direct imaging, where a project can consist of designing and building a testbed which is subsequently used to test new instrumental methods or algorithms. In this case, the experimental results themselves are the end goal, setting the path for consecutive testbeds or future on-sky instruments. The technologies that shape the field of direct imaging are coronagraphy; wavefront sensing and control (WFS\&C) including hardware and algorithms for both wavefront sensing (WFS) and wavefront control (WFC); focal-plane WFS; adaptive optics and predictive control; as well as post-processing methods.
Astronomical testbeds are an integral part of developing these technologies and can serve various purposes on the way to developing fully mature direct imaging instruments:
\begin{itemize}
\item Component-level development and testing, e.g. new coronagraph masks or WFC algorithms.
\item Systems development, e.g. the architecture of AO systems and interplay between different starlight suppression components, sensors and controllers.
\item Laboratory and on-sky demonstrations of fully integrated systems, and related trade-off studies.
\end{itemize}
While each project is pursuing its own goals, the tools and methods for doing so have become more and more common. This can be the same hardware equipment, for example the same camera models or laser sources, but it is especially true for critical components like deformable mirrors (DMs): there exists only a finite number of both continuous face-sheet and segmented DMs, from a limited number of manufacturers, so different projects are bound to be confronted with the same hurdles when integrating them onto a testbed. This often encompasses general hardware work and organization, for example cable management or how to establish a remote connection to laboratory computers. But it can also manifest itself on the software side of a project, where the same task (e.g., writing a controller for a DM or camera) keeps seeing repeated reimplementation by different teams. While there is certainly some need for customization in these solutions, there is no need to redo all parts of the infrastructure from scratch.
Sharing software through open-science approaches is not a new concept, and especially the last decade has seen a significant increase in scientific software packages being distributed freely to peers. In particular, the use of GitHub\cite{github}, a cloud-based software hosting service with a plethora of tools for software development based on the open-source VCS git\cite{git}, has become the go-to solution for shared resources within the scientific community\cite{Perkel2016}. This includes astronomy, where also leading space agencies like NASA, ESA and CSA (US, European and Canadian space agencies, respectively) have embraced the open-source approach for collaboration\cite{Numrich2022}. There has been significant work put into various initiatives supporting this strategy, like the OpenAstronomy project\cite{openastronomy} and, more recently, NASA's Transform to Open Science (TOPS)\cite{nasa_tops}. The need to support this path forward has been identified as critical in order to fully exploit the opportunities from shared resources in the future\cite{Tollerud2019Sustaining}. In the case of astronomical instrumentation and optics, some open-source packages have established themselves as a viable resource for optical propagations and simulations, like Poppy\cite{Perrin2016POPPYPhysicalOptics}, PROPER\cite{Krist2007Proper} or HCIPy\cite{Por2018HighContrastImaging}. Equally, some projects dealing with software infrastructures for hardware control have been made available to the community, for example CACAO\cite{cacao}, catkit\cite{Noss2022catkit} and milk\cite{milk}.
Independently of the tools and implementation of open-source projects within astronomy, it is clear that the workforce that is anticipated to create and use them needs to have the appropriate skills\cite{Norman2019Growing}. This includes people working in instrumentation and in particular on astronomical testbeds, where the need for efficient software management is immediately apparent. The problems faced by such teams include the fact that team members come and go: postdocs and students make up a large fraction of the workforce, but their time on a team is usually limited to 2--4 years, while the project itself is usually designed to run for longer. At the beginning of their appointment, they need to learn the specific tools used by their particular team, and towards the end they need to perform a transfer of knowledge before moving on. A team that minimizes the on-boarding time and integrates new ideas into the overall project as they arise, instead of just before the departure of a team member, is able to shift the focus from these procedural tasks to the scientific results themselves.
One of the most involved processes to get acquainted with on a new team is software management. How to integrate one's own software contributions into the overall laboratory infrastructure, and how to deploy them on a testbed in a reproducible and robust manner, often relies on case-by-case examples that are not uniform across a project, let alone across different laboratories. The big asset for teams here is the use of VCS. Version control is a concept well known to the more tech-savvy individuals in astronomy, but the opportunities it offers are still widely ignored by the broader community. Since VCS are conceptually completely independent from any chosen software implementation or its distribution, they harbor great potential for standardization without impeding the individual character of each project. The use of version control is thus one of the main tenets of the CAOTIC project. We believe that it can significantly contribute to the success of a testbed project and its scientific results, and that it can improve the exchange of knowledge and skills between different projects, thus advancing the field of astronomical instrumentation as a whole.
\section{The CAOTIC project}
\label{sec:caotic-project}
The project is currently centered around a website that aims to be a low-maintenance platform where interested members of the community can contribute to and organize relevant resources. Initially, this was realized with a Google website which went online in 2017. Ultimately, this did not fall in line with one of the main goals of the project -- to provide an easy and fast exchange of information between different instrumentation groups -- since only the page admins were able to change its content. Thus, the website was migrated to GitHub in October of 2018\footnote{Website URL: \url{https://highconaotools.github.io/}\\GitHub repository: \url{https://github.com/highconaotools/highconaotools.github.io}}. Hosting such a project on GitHub has the advantage that anyone can draft additions and changes to the website and then request their integration. This solution combines the goal of providing a community platform with the goal of promoting software best practices in research.
The core of this website contains a table listing participating testbeds and their basic information like location, science goals, key hardware components, as well as involved team members and their contact info. This data is complemented by a list of software resources and, in the future, will be extended with relevant talks, literature, courses and events. Since the main reason for hosting the CAOTIC website in a GitHub repository is, as stated above, to make contributing new content or changing existing content as easy as possible for anyone within this community, the project provides pre-made templates for new contributions, which can be published by means of a pull request on GitHub after review by the project owners. This requires a basic understanding of version control with git, as well as of the GitHub platform. Since one of the dedicated goals of the CAOTIC project is to promote best practices in software development, which includes the use of version control, the first big action item within the scope of the project was the launch of a series of workshops about using version control with git for the purpose of academic research, which is described in detail in the next section.
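The contribution workflow described above can be sketched in a few git commands. This is a minimal sketch: the branch and file names are examples, and a local bare repository stands in for the remote so the steps can be run offline; in practice one would clone a fork of the GitHub repository linked in the footnote above and open the pull request through the GitHub web interface.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Stand-in for the remote repository normally hosted on GitHub
git init -q --bare upstream.git

# 1. Clone the repository (normally: your fork of the CAOTIC website repo)
git clone -q upstream.git website && cd website
git config user.email "[email protected]" && git config user.name "Example"  # local identity
git commit -q --allow-empty -m "initial commit"   # seed some history
git push -q origin HEAD

# 2. Create a feature branch for the new contribution
git switch -q -c add-my-testbed

# 3. Fill in a copy of the provided template and commit it
#    (file name and content are hypothetical)
echo "name: My Testbed" > my-testbed.md
git add my-testbed.md
git commit -q -m "Add entry for My Testbed"

# 4. Push the branch; on GitHub one would now open a pull request
#    for review by the project owners
git push -q origin add-my-testbed
```

After step 4, the branch with the proposed change lives on the remote, and the review and merge happen through the pull-request interface rather than on the command line.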
\section{Adapted teaching: Git and GitHub for scientists}
\label{sec:git-github-teaching}
In the following section, we present our take on why using version control tools is indispensable for laboratory teams, how to adapt them to the specific needs of astronomers, how we optimized the teaching of such tools and the feedback we obtained in the process.
\subsection{Establishing version control as a standard tool in research}
\label{subsec:version-control-standard}
There are certain tools that facilitate scientific work, and thus also the work that a lab creates, that we tend not to question anymore. One example is the use of the Smithsonian/NASA Astrophysics Data System (ADS), also known as the ADS Abstract Service\cite{Kurtz1993}, for bibliographic search. It has greatly developed since its launch in 1993, and rarely would anybody use a different search engine for bibliographic searches in the domains of physics and astronomy (although other tools exist and have their place, for example Google Scholar). Similarly, a vast majority of manuscripts in astronomy today are prepared using the markup language ``\LaTeX''\cite{Rowley2001}\footnote{\url{https://www.latex-project.org/}}, for many reasons. This includes but is not limited to\cite{Sinclair2018}: consistent typesetting and formatting across a document or several documents, the simplicity of writing mathematical expressions, bibliography management and easy sharing between collaborators with cloud-based tools like Overleaf\footnote{\url{https://www.overleaf.com/}}. Not everybody uses it, and it is not being used for every manuscript ever written; however, the crucial point is that almost every single astronomer has used it at least once in their life or participated in a project that required them to use it. While preparing manuscripts in \LaTeX~might not be the best option in all cases, it is acknowledged that it has a firm place in a researcher's tool box, so much so that many institutions and universities offer classes teaching their students and faculty how to use it.
Writing various types of papers, proposals and reports undeniably makes up a huge fraction of research, but as it turns out, so does writing software. Especially when working in an astronomical laboratory, software engineering represents a continuous thread through the many aspects of designing, building, and operating an optical testbed. A lot of effort is going into writing code to control the mechanical components of the testbed, synchronize them to perform experiments, perform high-fidelity optical simulations, implement a variety of algorithms and analyze the resulting data, a very slim subset of which we mention in Sec.~\ref{sec:testbeds-and-labs}.
Such software tools can be written in many different programming languages that have become more or less popular in the scientific community over the years: from Fortran, IDL, Mathematica, Matlab, C++ and Python to the more recently developed Julia, to name only a few. This paper does not intend to be a discussion of the different trade-offs between these languages, nor a promotion of any language in particular. Instead, we strongly adhere to the claim that \textbf{every single research project, no matter the programming language it uses, benefits from using version control}. Further, we insist that the currently most used way of teaching version control is outdated and ill-adapted to the needs of a researcher, which we come back to in Sec.~\ref{subsec:version-control-for-astronomers}. There are three main reasons why we believe that version control and its associated technologies should be a skill acquired by everybody in the broad astronomical workforce:
\begin{enumerate}
\item The version control aspect itself
\item The intrinsic benefits of backing up one's work
\item The ability to collaborate with other researchers more efficiently.
\end{enumerate}
The first point in the above list is comically, but also very realistically, depicted in the two illustrations in Fig.~\ref{fig:version-control}. In most lines of work, but especially in very explorative fields like science, there is often the desire or need to roll back to a previous version of a product, or in our example, code. The simple solution at first seems to be a straight ``copy-paste and rename'' as in the given illustrations, but it is very easy to lose track of the properties of each version when versioning is handled this way, especially when coming back to a particular project after weeks or months.
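The ``copy-paste and rename'' anti-pattern can be replaced by letting git keep the versions. The following minimal sketch (file names and contents are examples) shows how a previous state of a file is recovered from the history instead of from a renamed copy:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email "[email protected]" && git config user.name "Example"

# The history lives in git, not in analysis_v2_FINAL_really.py
echo "result = 1" > analysis.py
git add analysis.py && git commit -q -m "First working analysis"

echo "result = 2" > analysis.py
git commit -q -am "Try a new normalization"

# Inspect the history...
git log --oneline

# ...and bring back the file exactly as it was one commit earlier
git restore --source=HEAD~1 analysis.py
cat analysis.py            # prints: result = 1
```

Each commit message documents what that version was, which is exactly the information that gets lost in a pile of renamed copies.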
The second point in the above list comes almost for free when using a distributed version control tool like git\cite{git}. It means that there will always be at least one full copy of the project saved on a remote server, preventing any work from being lost in case of a failure of a researcher's work machine. In most cases, this feature requires no extra effort when working with version control and a remote repository, which is a dedicated location where a project is saved. Accidents happen, and having a solid backup system in place for one's work can be a life-saver (see also Fig.~\ref{fig:backing-up-work}).
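This ``free backup'' property can be demonstrated in a few lines. In this sketch, a local bare repository stands in for a server like GitHub, and the loss of the work machine is simulated by deleting the working copy (all names are examples):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare server.git          # the "remote" (normally on GitHub)

git init -q laptop && cd laptop
git config user.email "[email protected]" && git config user.name "Example"
echo "important results" > notes.txt
git add notes.txt && git commit -q -m "Save lab notes"

git remote add origin ../server.git
git push -q origin HEAD                # a full copy now lives on the server

# Simulate losing the work machine, then recover everything from the remote
cd .. && rm -rf laptop
git clone -q server.git recovered
cat recovered/notes.txt                # prints: important results
```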
The third point in the above list is enabled by the wide range of collaborative version control tools now available freely to anybody with internet access. By creating a remote repository on a shared-access service like GitHub\cite{github}, Bitbucket\cite{bitbucket} or a personalized GitLab\cite{gitlab} installation, a copy of the version-controlled code base can be made accessible to anybody with an email address who is familiar with git. By selecting a standardized way of doing this, it becomes very easy to bring new collaborators on board with a project that requires the development of code. This is true for larger teams who work together on a single project, like the controls for a testbed, but it is equally true if as few as two or three people need to use and change the same software project, be it a data reduction pipeline, a simulation tool or any other code-based project. Too often, snippets of code are still sent around by email, in which case all connection to the previous history of the code is lost. This is sometimes countered by encoding all metadata into the code itself in the form of comments; but rarely is this sufficient to keep up a cohesive and complete history, let alone to provide a way to synchronize diverging code bases that have been completely separated from each other.
There are many more motivations to use version control, but we would like to bring up three more:
\begin{enumerate}
\setcounter{enumi}{3}
\item The need to always have one working version of the code
\item Good research practices: repeatability, traceability, open science
\item The portability to non-academic jobs.
\end{enumerate}
One aspect of version-controlled projects that is obvious to any software engineer but often overlooked by scientists is that VCS allow you to keep one or more functional versions of the code available at all times, while new and potentially buggy features that are still under development are handled separately, which is reflected in point 4 of the above list. This is very useful in most use-cases, like developing a new simulator, where new features are supposed to enhance the functionality rather than first break it while you debug your way to the new version. This is also true for data analysis code: you might want to start coding up a new feature while still running the currently working analysis in the background. And obviously, this is very important when the project in question is the operation of a testbed. While upgrades and enhancements are being coded up and prepared for testing, the testbed can always be used to run experiments without down-times due to untested and buggy code in the main version. (This does not help with fighting the lab gremlins and their vicious, inexplicable and sometimes transient malfunctions of testbed hardware, but you do what you can.)
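The mechanism behind point 4 is branching. In the following minimal sketch (branch, file and function names are examples), the main branch always holds a working version of the testbed control code while a risky feature is developed on its own branch and only merged once it works:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q -b main
git config user.email "[email protected]" && git config user.name "Example"

echo "run_experiment()" > control.py
git add control.py && git commit -q -m "Working testbed control code"

# Develop the risky new feature on its own branch...
git switch -q -c new-calibration
echo "run_experiment(); new_calibration()" > control.py
git commit -q -am "Draft new calibration routine"

# ...while main stays untouched and usable for experiments
git switch -q main
cat control.py                 # prints: run_experiment()

# Once tested, fold the feature into the main version
git merge -q new-calibration
cat control.py                 # prints: run_experiment(); new_calibration()
```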
We specifically wanted to point out an ethical motivation to keep one's software history clean and reproducible in point 5, and that is scientific integrity. Not being able to revert to the version of the code that produced the data and figures for a paper some years ago can pose a significant breach of scientific integrity. Keeping one's work traceable and repeatable, possibly even archived by version in a public software archive, can solve this problem\cite{open-science-git}.
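A common way to make the code state behind a publication recoverable is to tag it. In this sketch (tag, file and commit names are examples), development continues after the paper, yet the published state remains one command away:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email "[email protected]" && git config user.name "Example"

echo "make_figure(3)" > figures.py
git add figures.py && git commit -q -m "Scripts for paper figures"
git tag -a v1.0-paper2022 -m "Code as used for the 2022 paper"

# Development continues afterwards...
echo "make_figure(4)" > figures.py
git commit -q -am "Rework figure 3 for the next project"

# ...but the tagged, published state can be restored at any time
git restore --source=v1.0-paper2022 figures.py
cat figures.py                 # prints: make_figure(3)
```

Tags like this are also what archiving services typically snapshot when a software version is deposited publicly.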
In point 6 of the above list we want to emphasize the utility of general technical skills to career paths outside of academia. Especially in the field of astronomical instrumentation, people often hold valuable skills for industry positions. This includes knowledge of optics, software development and project management, and adding the mastery of version control and software management tools can increase one's hireability.
In summary, version control is the only way to manage software that lets you keep a history of all changes, manage diverging aspects of a project and enable efficient collaboration. Every single research project, no matter how small, would benefit from these aspects, and we purposefully include single-person projects here: after all, working by yourself also means collaborating with your past and future self. Who has not come back to a project they have not touched in a couple of months and started scratching their head as to why it looked the way it did? Testbed projects are very much not like that and usually involve several people working on them at the same time, so collaboration management on the software side becomes key. With different strands of the code needing to be developed in parallel, it becomes almost impossible to work without version control, unless the team is willing to accept a proliferation of different code bases. This can make for a very messy consolidation later on or, in the worst case, the loss of work that is kept uniquely on one person's laptop, especially if they end up leaving the group - which is unavoidable in the case of a postdoc or a PhD student.
\subsection{Version control for astronomers}
\label{subsec:version-control-for-astronomers}
\subsubsection{Teaching goals}
\label{subsubsec:teaching-goals}
With the main motivations to use version control in astronomical research, and in particular within testbed teams, listed above, we identified the need to isolate a specific set of skills that a researcher requires to work with and contribute to a team project using version control. There exists a large number of online training courses and tutorials, paid and free, as well as in-person workshops and classes to learn scientific programming in various programming languages, catering specifically to the scientific community. There are even offerings directed especially at astronomers through dedicated summer schools and conference events\cite{CodeAstro,escape,lsst,TheCarpentriesOverall,TheSoftwareCarpentryOverall}. However, we did not identify a comparable offering for engaging with version control. Typically, exposure to VCS happens as a side note during workshops focusing on data analysis and scientific computing, and it is rarely given full attention beyond a general introduction of an hour or two. There is an abundance of general online tutorials on the topic, so much so that it is sometimes hard to identify where to start. And there are some git and GitHub teaching resources online aimed specifically at scientists, but most of them do not differ significantly from generic git tutorials, are geared very much towards data scientists, or are sold commercially. There are also some resources about lessons learned from integrating git and GitHub as learning objectives in courses for statistics and data science\cite{Beckman2020}, and some overview materials in the biology community\cite{Blischak2016,Perez-Riverol2016}.
This led to the initiative to design a teaching activity geared particularly toward astronomers. We decided to teach our classes by working jointly with git and GitHub (Fig.~\ref{fig:git-and-github}).
Git ``is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.''\cite{git} This means participants of the course did not need to pay for the tool we were teaching, they would find plenty of documentation and support online, and they would learn the version control system most widely used in the scientific community today. In particular, git is a distributed VCS, which makes it far better suited for remote collaborations than traditional centralized VCS\cite{DVCS}. Similar reasons prompted us to work with GitHub: while it is not open-source (it was acquired by Microsoft in 2018), it is free to use, very many projects are already hosted on it, and there is no affiliation requirement to sign up (as opposed to institute-internal GitLab access, for example). Both tools together facilitate wide-spread collaborations between researchers and their teams.
The scope and mode of the course were defined by drawing from the experience the co-authors gathered in their respective laboratory teams and optics research groups. In particular, the intent was to address some of the difficulties they witnessed in those groups when it came to on-boarding new members and teaching colleagues how to work with version control. The course goals were developed in such a way that the focus would be the \textit{usage of version control in research projects} and not the version control tool itself. This might seem like a subtle difference, and in many ways it is. However, this approach proved to be far more engaging than pushing through every single functionality git has to offer. We thus distilled the main course goals into:
\begin{itemize}
\item A general motivation to use version control in research projects
\item Learn how to be a user and contributor first
\item Then move on to learning how to manage and create new projects.
\end{itemize}
The order of the bulleted points above matters, for several reasons. One, the target group of our workshops were specifically people who have proven time and again that they are smart enough to learn complicated concepts. It would be an easy thing for them to pick up a manual or tutorial that taught them all they needed to know about version control. The crucial point here is to actually spark an interest in them to do so, rather than feeding them information they could find anywhere else. By structuring the workshops in a way that made the usage and application of version control tools more obvious in their concrete work, learning the actual concepts would be easy for them further down the line. This is why most of the lecture part of the courses covers an extended version of what is laid out in Sec.~\ref{subsec:version-control-standard}. Two, almost all of the tutorials found online start off by teaching you how to \textit{create} a repository, but let us face it: how often does any of us really type \verb git ~\verb init ~in a terminal? The bulk of time spent working with version control is not spent on creating new repositories but on maintaining them and contributing to them. Starting off with a skill that is used comparatively rarely shifts the focus to lower-priority things, while we want to keep it on high-priority things like managing branches. Especially considering a testbed project, people will be joining to contribute, not to start their own control code and repository if they have never used git before. That being said, creating new repositories is such a crucial part of using git that it was of course covered as well - but later on in the course.
\subsubsection{A new spin on the same content}
\label{subsubsec:new-spin-same-content}
Following the elaborations in the previous sections, it is clear that the technical content of our training was not much different from any entry-level version control tutorial. It covers the general idea behind version control and some motivation to use it, how git works, what the differences between GitHub and GitLab are, and how to perform basic tasks like cloning a repository, creating branches, committing, inspecting the git history, pushing, pulling, and engaging in pull requests. This would be taught initially from the perspective of a user and contributor, later shifting to the standpoint of a maintainer, curator and creator. The focus here would be on the various workflows one can follow when working alone, in small groups or in teams.
The crucial point we enforced, however, was a reprioritization of the methods used during the teaching activities, which we capture in Table~\ref{tab:priorities}. Here, we list commonly used approaches to teaching version control versus our adaptations, as well as the rationale behind these choices.
\begin{table}[h!]
\begin{tabular}{@{}lll@{}}
\toprule
& Common approach & Adapted approach \\ \midrule
1. & Teach git through the command line interface & Teach git through a self-standing interactive GUI \\
2. & Teach git on code examples & Teach git on text files \\
3. & First lesson: setting up a git user profile and git init & First lesson: create branch, commit, merge \\
4. & Use individual practice examples & Use collaborative practice examples \\
5. & Teach git first, then introduce GitHub & Introduce git and GitHub at the same time \\ \bottomrule
\end{tabular}
\caption[Table of priorities]
{\label{tab:priorities}
Major differences between most openly available version control tutorials found online, and the training concept we present in this paper.}
\end{table}
The first major point that stands out in our version control courses is the fact that we never once touch the git command line interface (CLI)\footnote{We do give a brief demo for the sake of completeness, but only \textit{after} the participants have already been using a GUI to perform tasks with git and followed their actions in the visual representation of the git tree.}. Make no mistake: we do not contest its utility or power, but we insist that the basic workings of git cannot be conveyed purposefully by just giving people commands to type into their terminal window (see also Fig.~\ref{fig:xkcd-shell-commands}, left) - it has very limited pedagogical value.
This is in fact an aspect of teaching git that is widely recognized, as exemplified by the following quote by software developer Marco Chiappetta, found on his blog where he talks about how to use git for working on a team:
\begin{quote}
``When I had to learn git I started reading lots of articles and asked for help to friends of mine who were more experienced in versioning. They all had the worst approach: they started teaching me how to use the git CLI.''\cite{Chiappetta-quote-2019}
\end{quote}
Clearly, this issue resonates with many people and it is often easier to digest new information if it is accompanied by a visual representation. And yet, even Marco Chiappetta proceeds to provide an introduction to git through the CLI in that very same article! A graphical user interface (GUI) for git overcomes this problem as it unpacks git commands into buttons, labels and dashboards, and most importantly, in most cases it shows a beautifully rendered representation of the git history tree. The GUI we chose for our teaching activities is GitKraken\cite{gitkraken}. We investigated some other GUIs that were candidates for our trainings, like SourceTree, TortoiseGit and SmartGit\footnote{A list of git GUIs can be found here: \url{https://git-scm.com/downloads/guis}}, but GitKraken was the only one we found to satisfy all of the below requirements:
\begin{itemize}
\item It runs on \textbf{all three major operating systems} (Windows, MacOS, Linux), which means we do not have to change our trainings as a function of the OS the participants use.
\item It is \textbf{self-standing}, meaning it is separated from an editor, IDE (integrated development environment) or file browser.
\item It includes a \textbf{well rendered graphical representation of the git tree}, which makes it easier to understand what is going on at any given time.
\item It requires \textbf{no usage of the command line interface at all}, but it is compatible with using it in parallel.
\item It is \textbf{free of cost} in its basic version, and the Pro version (required to work with private repositories) is free for students and people with an affiliation to an educational institution\footnote{standing as of July 2022}.
\item It has a good (if not excellent) \textbf{software production quality}, it is \textbf{actively maintained} and it has an extensive online documentation, including their own video tutorials.
\end{itemize}
We especially insisted on using a self-standing client over an integrated one like in the VS Code IDE because one major difficulty we found in teaching version control to newcomers is the natural entanglement of programming with versioning. Having to actively switch to an editor to edit files, and to purposefully open a git GUI to perform version control operations, reinforces the point that \textbf{version control has a priori nothing to do with writing code}. Git was designed to be a content tracker\cite{Stopak2020}, though of course it is inherently optimized for its originally intended use, tracking software changes. To bring this point home, none of our version control trainings involve the use of a programming language, which is noted in point 2 of Table \ref{tab:priorities}. By using simple text files in the interactive exercises and examples\footnote{This is an approach we learned and adapted from the excellent git intro tutorial by The Carpentry\cite{carpentry-git-novice}.} people are free to use whatever editor they like and we avoid the temptation to engage in discussions about preferred programming languages.
Point number 3 given in Table \ref{tab:priorities} has been elaborated in Sec.~\ref{subsubsec:teaching-goals} already: the fraction of time spent working on creating (local) repositories is highly over-represented and often highlighted first in many online tutorials, so we decided to flip around the sequence in which we show people the different parts of a git workflow.
No version control tutorial would be complete without providing examples and inviting the participants to work through exercises, as indicated by point 4 in Table \ref{tab:priorities}, and our trainings are no different. However, since one of our declared goals is to first teach people how to contribute to collaborative projects, instead of letting everybody create their own repository and practice there, we immediately walk through an exercise in which all course participants have to contribute to the same repository. In a first step, this is made easy by introducing changes only by creating new files, which avoids merge conflicts. In a later example though, participants are led to make changes that purposefully introduce merge conflicts when they open a pull request on GitHub. Here, they are taken through the step-by-step process of resolving them with the GitKraken conflict resolution tool and they review each other's pull requests. By carefully preparing the training repositories and examples, this leads to colorful-looking git trees in the training repository as shown in Fig.~\ref{fig:gitkraken-example}, with the result that everybody works through the same training exercises but in a highly collaborative fashion.
Adapting the acquired skills to a simpler single-person workflow like the example shown in Fig.~\ref{fig:xkcd-shell-commands}, right, is then just a matter of applying them in a new context.
The final point in Table \ref{tab:priorities}, point 5, touches upon the relationship between git and GitHub during training activities. We regard focusing only on git first, without introducing the concept of a remote repository, as a way of working that hardly any researcher would ever be confronted with. Even single-user private repositories would be hosted on a remote at some point, so we bring in the joint use of git and GitHub as early as the very first example of our trainings.
The above account of version control trainings stems from some problems we observed in our day-to-day work, like having to face the git CLI as a newcomer to git, struggling to interpret the history tree and mixing up concepts from programming and from version control. We have found a solution that has served us well in a series of trainings we offered in 2021 and 2022, which we talk about more in the following section. One remaining point to conclude is that while we optimized our trainings for pedagogical efficiency (see Table~\ref{tab:priorities}), the acquired skills and methods are easily portable to other tools of the user's liking. After being exposed to git and learning about it with a GUI, people can still choose to become a CLI-only user or perform coding and version control from within the same tool. What we argue is that this direction is much easier than the other way around (learn with the CLI if you actually prefer working with GUIs in the end). Likewise, most remote hosting services are based on the same principles, which means that learning how to use them on the most openly available platform (i.e., GitHub) is still very useful to people who then move on to working with something else (e.g., GitLab). The whole concept presented herein aims primarily to maintain a pedagogical narrative.
\subsection{Trainings held so far}
The initial idea for version control courses for astronomers arose in late 2020 and early 2021, in the middle of some of the more restrictive Covid-19 lockdown periods. As a result, the git trainings created following the principles from the previous sections were designed as a fully remote class held over a video conferencing tool with screen-share capability. To keep screen fatigue to a reasonable minimum for a remote work day, we split the git training into two separate sessions, lasting about four hours each, scheduled on two separate days within a few weeks of each other. The first one is titled ``Git for Astronomers Intro'' and the second one ``Advanced Git for Astronomers''. The attendance of both modules gives a participant exposure to the full training content and exercises.
Each tutorial is held in such a way that one main instructor is presenting the material and taking the participants through the exercises while a second instructor is available on the group chat to answer questions, bring questions to the attention of the presenting instructor and help out with smaller issues that arise during the training. The introduction class requires the instructors to prepare a training repository on GitHub while the advanced class requires the preparation of three such repositories. Creating them ahead of time with a specific branch and file structure allows for the right merge conflicts to be triggered at the right time of the course. Once set up, these training repositories can easily be used as templates for future trainings.
A first batch of trainings was held remotely in the spring of 2021 while another round of trainings was offered in hybrid mode (class held in-person with possible remote attendance over video and screen share) in early 2022, see Table \ref{tab:trainings-held} for a full list.
\begin{table}[h!]
\centering
\begin{tabular}{cl}
\hline
\textbf{Date (y/m/d)} & \textbf{Course} \\ \hline
2021 03 17 & Git for Astronomers Intro \\
2021 04 14 & Git for Astronomers Intro \\
2021 05 05 & Advanced Git for Astronomers \\
2021 05 19 & Advanced Git for Astronomers \\
2022 04 13 & Git for Astronomers Intro \\
2022 04 27 & Advanced Git for Astronomers \\ \hline
\end{tabular}
\caption[Table of courses held]
{\label{tab:trainings-held}
A list of version control trainings based on our methods held as of July 2022.}
\end{table}
The total number of individual participants was roughly 80 across all introductory sessions and about 55 across the advanced sessions, where most but not all attendants of the advanced course had also attended the intro class. The attendants included interns, graduate students, postdocs, permanent staff, faculty and engineers from all fields in astronomy, plus a few participants from other scientific fields (e.g., biophysics). They were affiliated with at least seven different institutions in four different countries (France, Netherlands, Spain, Italy). The feedback was highly positive throughout, especially regarding the alternation between theoretical explanations, practical demonstrations and hands-on exercises, and the exchange between the instructors and attendants.
After the series of trainings described above, the course materials have matured enough to provide a solid basis for an introduction to VCS while also being easily adapted to any specific needs of a group or institute. Further trainings are currently not planned but the authors intend to identify avenues to put the course materials and strategy to good use, for example through online materials, conference workshops or dedicated research group activities.
\section{Summary and conclusions}
\label{sec:summary}
We have presented the broad scope of the CAOTIC project which aims to provide a platform for the astronomical testbed community to connect and exchange beyond the classical pathways of academic publications. The core of the project is currently built by a website that assembles basic information about testbed teams around the world, and their work. The main goal of the project is to identify common aspects of working on HCI and AO testbeds that traditionally get less attention than the actual scientific results, such as hardware handling, project management and software best practices.
As part of this process, we identified the usage of version control systems as a crucial aspect of concrete laboratory work. While we claim that any research project would greatly benefit from engaging with such tools, testbed activities in particular can optimize their work by embracing them. Every single testbed requires the development of a software infrastructure, and this is usually done in a highly collaborative manner within a team, but also with its external collaborators. Nevertheless, the motivation for using version control solutions like git is not always recognized, or the hurdles to start using them are perceived as too high in complexity or time demands. This led us to conclude that the astronomical research community, and in particular researchers working in testbed teams, lack appropriate training opportunities to overcome these entry-level barriers.
To change this, we identified some key points that seem to constitute the main difficulties in moving a team to use version control for their projects. We designed a git and GitHub training activity that is built around these difficulties and presented its methodology in this paper. We conducted several such courses in 2021 and 2022, with very positive feedback from the roughly 80 distinct participants. We settled on the use of git and GitHub with the GitKraken GUI, a combination which met the teaching requirements we deduced from the observed difficulties we intended to overcome. While we consider this setup to be the most effective in a pedagogical sense, this does not mean we believe these tools to be the most effective VCS tools for every project. The teaching program we built with our choice of tools makes them easily substitutable with other tools that might be used preferentially by any given user, team or institute, or with new tools that gain relevance in the future.
We would like to note that one of the main observations the authors made in their respective research groups is that in spite of all well-intentioned presentations, provision of tools and demonstrations, there is a tendency in most teams to drop good practices, which includes the use of version control, unless there is at least some level of enforcement. This could be a top-down decision by the principal investigator (PI), but is usually more effective if the push comes from within the team, through encouragement and support between the team members. By consequence, it is really the continuous training of and exchange between junior-level researchers that will bring about the changes the CAOTIC project aims to support. With the findings presented in this paper, we hope to incentivize the community to engage in this effort.
\acknowledgments
I.L. and P.R. would like to thank Mehdi Kourdourli, Alexis Lau and Г‰lodie Choquet for valuable feedback in the early stages of the development of the workshops. I.L. and P.R. would also like to thank Laurent Mugnier for extensive discussions about the core needs from version control in research. I.L. acknowledges the support by a postdoctoral grant issued by the Centre National d'Г‰tudes Spatiales (CNES) in France.
\section*{CONFLICT OF INTEREST}
The authors declare no conflict of interest and no author holds a commercial or non-commercial affiliation with GitKraken, Resurgens Technology Partners, Axosoft, GitHub or Microsoft.
\bibliography{references}
\bibliographystyle{spiebib}
|
Title:
A pyramid-based adaptive optics for the high-resolution echelle spectrograph at SAO RAS 6-m telescope |
Abstract: We propose a design of an adaptive optics (AO) system for the high-resolution
fiber-fed echelle spectrograph installed at the Nasmyth focus of the 6-m BTA
telescope at the Special Astrophysical Observatory (SAO) of the Russian Academy
of Sciences (RAS). The system will be based on a pyramid wavefront sensor and
benefit from the experience of the Laboratoire d'Astrophysique de Marseille
team in the field of adaptive optics. The AO will operate in the visible domain
of 430-680 nm, in an f/30 input beam and provide correction for the on-axis
source only. The main challenges in this particular design are inserting
the AO into an existing optical system and maintaining the focal and
pupil planes configuration, fitting within the instrument's flux budget as well
as limitations on the total cost of the AO bench. According to the current
design, the AO bench will use an additional relay consisting of 2 spherical
mirrors to re-collimate the beam and project the pupil onto a small deformable
mirror. A dichroic splitter will be used to direct the longwave component to the pyramid
wavefront sensor branch based on refractive optics only. Using off-the-shelf
components only we can reach the instrumental wavefront error of 0.016 waves
PTV with a 20 nm bandpass filter at 700 nm. Using folding mirrors and
refocusing of the fiber's microlens we restore the nominal geometry of the beam
feeding the spectrograph. The final goal for the AO system is to increase the
energy concentration in the spot at the spectrograph's entrance, and our
preliminary modelling shows that we can gain by a factor of 69.5 under the typical
atmospheric conditions at SAO RAS.
| https://export.arxiv.org/pdf/2208.07618 |
\keywords{Pyramid wavefront sensor, Single conjugate adaptive optics, 6-m class telescope, Echelle spectrograph, High-resolution spectroscopy, Exoplanets}
\section{INTRODUCTION}
\label{sec:intro} %
The pyramid wavefront sensor, proposed for the first time by Ragazzoni \cite{Ragazzoni96}, is an optical device that performs wavefront sensing by optical Fourier filtering with a four-sided glass pyramid located at the focal plane. The purpose of this glass pyramid is to split the incoming beam into four beams, producing four different filtered images of the entrance pupil. This filtering operation converts phase information at the entrance pupil into amplitude at the pupil plane, where a quadratic sensor records the signal \cite{Verinaud04,Guyon05}. Today, pyramid wavefront sensors are in high demand in astronomical applications. Their main advantage is a high sensitivity, superior to that of the Shack-Hartmann wave-front sensor (WFS) \cite{Esposito01}, while their main downside is their non-linearity, which prevents a simple relation between the incoming phase and the measurements, leading to control issues in the adaptive optics (AO) loop. The latter issue can, however, be mitigated by the optical gain tracking technique \cite{Chambouleyron21}.
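The filtering operation described above is straightforward to reproduce numerically. The sketch below is our own illustration: the grid size, pupil diameter, 60-pixel facet tilt and quadrant-sum slope estimator are all arbitrary assumptions, not parameters of any instrument discussed here. It propagates a pupil field to the focal plane, applies the four-facet pyramid phase mask, forms the four pupil images, and estimates slope signals from quadrant sums.

```python
import numpy as np

N = 256
i = np.arange(N) - N // 2            # symmetric pixel grid
X, Y = np.meshgrid(i, i)
pupil = (X**2 + Y**2) <= 32**2       # circular pupil, 64 px across

def pyramid_image(phase, shift=60):
    """Pupil-plane intensity behind a non-modulated 4-facet pyramid."""
    field = pupil * np.exp(1j * phase)
    focal = np.fft.fftshift(np.fft.fft2(field))      # focal-plane field
    f = np.fft.fftshift(np.fft.fftfreq(N))           # cycles / pixel
    FX, FY = np.meshgrid(f, f)
    # each facet tilts its focal-plane quadrant, displacing the
    # corresponding pupil image by `shift` pixels on the detector
    pyramid = np.exp(2j * np.pi * shift * (np.abs(FX) + np.abs(FY)))
    out = np.fft.ifft2(np.fft.ifftshift(focal * pyramid))
    return np.abs(out)**2

def slopes(img):
    """Normalized quadrant differences = pyramid slope signals."""
    h = N // 2
    q = [img[:h, :h].sum(), img[:h, h:].sum(),
         img[h:, :h].sum(), img[h:, h:].sum()]
    tot = sum(q)
    sx = (q[1] + q[3] - q[0] - q[2]) / tot
    sy = (q[2] + q[3] - q[0] - q[1]) / tot
    return sx, sy
```

With a flat wavefront the four pupil images are equally bright and the slope signals vanish; a small tilt unbalances them, which is precisely the phase-to-amplitude conversion described above.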
Recently it was successfully demonstrated that a fully-functional AO bench based on a pyramid WFS can be developed and commissioned with moderate resources. The PAPYRUS project \cite{Muslimov21} was designed by a team of young researchers at Laboratoire d'Astrophysique de Marseille and installed at the 1.52-m telescope of Observatoire de Haute-Provence. The bench, which uses only existing and off-the-shelf components, was created in less than two years within a limited budget. Both the in-lab and the first on-sky performance metrics are in good agreement with the modelling predictions.
The success of PAPYRUS has urged us to look for potential applications where a similar AO system could be introduced within a moderate budget and in a relatively short time, but with a notable scientific outcome. One of the promising options consists of developing a similar system to feed an echelle spectrograph at a 6-m telescope. The spectrograph works with a fiber input, so the AO loop should correct only the on-axis point. Also, the spectral working range of the instrument is not very wide, so the requirements for the chromatic aberration correction in the AO system can be relaxed. In the meantime, using the AO could significantly increase the energy concentration at the spectrograph's input, thus increasing its sensitivity.
In the present paper we therefore consider the optical design of a pyramid-based AO system for a high-resolution echelle spectrograph at a 6-m telescope. Below we introduce the target spectral instrument, then discuss the adaptive optics system design and present the expected performance in the WFS and science branches and its impact on the instrument capacity.
\section{ECHELLE SPECTROGRAPH}
\label{sec:echelle} %
The target instrument is the fiber-fed high-resolution echelle spectrograph, recently built for the Special Astrophysical Observatory of the Russian Academy of Sciences (SAO RAS) \cite{Valyavin14}. The instrument is designed for the 6-m alt/az mounted telescope (BTA - Big Telescope Alt-azimuth) and has the following key features:
\begin{itemize}
\item Spectral resolving power up to R = 100 000, with a possibility to use lower-resolution modes via pixel binning;
\item The simultaneously detected waveband of 400-750 nm;
\item Auxiliary units to measure the Stokes parameters and perform photometric and spectral calibrations;
\item The light from the telescope focus is fed to the spectrograph through an optical fiber.
\end{itemize}
The spectrograph is intended for Doppler studies of exoplanets and multiple stellar systems. Its science goals also include studies of stellar atmospheres, asteroseismology, stellar magnetism, active nuclei of bright galaxies and the interstellar medium.
The spectrograph optical design is shown in Fig.~\ref{fig:echelle}. The light emitted from the entrance slit 1 is collimated by an F/11.6 mirror 2 and incident onto the echelle grating 3, which operates close to the auto-collimation mode together with the collimator. After the second reflection from 2, the dispersed beam is folded by the flat mirror 4 and is collimated again by the transfer collimator 6. In its focal plane 5 the white pupil is formed, and in this plane the cross-disperser grism 7 is mounted. Further, the beam is focused by the camera lens 8 onto the CCD detector 9.
The key inputs for the AO system design are:
\begin{itemize}
\item Installation at the Nasmyth focus with f/30;
\item Correction only for the on-axis point;
\item The working spectral range is at least 430-680 nm (400-750 nm goal);
\item The allowable fraction of flux used in the AO branch - 10\%;
\item The entire AO system should pick-up the beam and then return it to the nominal path with the same f/\# and the focus position;
\item The distance between the AO pick-off mirror and the focal plane is 400 mm.
\end{itemize}
On top of this, it is preferable to use off-the-shelf components and rely on reflective optics.
\section{ADAPTIVE OPTICS SYSTEM DESIGN}
\label{sec:AO} %
In the optical design of the AO bench we tried to re-use the PAPYRUS heritage and rely on the same or similar active components, namely:
\begin{enumerate}
\item Deformable mirror (DM) with at least 17x17 actuators and 37.5 mm clear aperture diameter;
\item WFS branch camera with $5.76 \times 5.7 mm^2$ sensing area of $240\times240$ pixels;
\item The glass pyramid with the following parameters: facet angle $8.9^{\circ}$, material LF5, leading to deflection angle of $\pm 5.44^{\circ}$;
\item Off-the-shelf modulation mirror with 12 mm diameter;
\end{enumerate}
The proposed optical design is shown in Fig.~\ref{fig:ao}. The incoming beam is sent to the AO system by the folding mirror 1 and collimated by mirror 2. Since the beam is relatively slow, it is sufficient to use a commercial 2-inch spherical mirror with f=1000 mm. The pupil is reconstructed on the DM 3. The reflected beam is focused again by the mirror 4, identical to 2. Note that the mirrors 2 and 4 are slightly shifted to provide the necessary pupil projection and facilitate their mounting. The science and the WFS branches are separated by the dichroic splitter 5, which allows us to minimize the flux losses for the scientific payload. It reflects the longer wavelengths right outside of the spectrograph's working range ($\lambda>680nm$) and steers them to the collimating doublet lens 6 (d=1 inch, f=300 mm), which reconstructs the pupil again on the modulating mirror 7. Further, we use a bandpass filter 8 ($\lambda=693-712 nm$) mounted in a collimated beam to moderate the chromatic aberrations of the WFS branch. The beam is focused by a similar doublet lens 9 to the intermediate focal plane, where the glass pyramid 10 is installed. The 4 beams created by the pyramid are collimated by a commercial short-focal-length lens 11 ($Canon^{TM}$ EF-S 24mm F/2.8 STM pancake photographic lens\cite{CANON}) and the pupil images are detected by the CCD 12. The shorter wavelengths are transmitted by the dichroic splitter to the spectrograph entrance, and the beam is folded again by mirror 13 to restore the nominal position of the focal plane 14.
\section{PERFORMANCE ANALYSIS}
\label{sec:performance} %
Below we analyze the static instrumental aberrations introduced by this simple optical design, separately for the science and wavefront-sensing branches. We assume that, since we have the DM in the optical train, it is possible to use a part of its stroke to compensate for these static aberrations, as was successfully demonstrated during the PAPYRUS integration. Finally, we provide a coarse estimate of the AO system performance with the typical atmospheric conditions for the SAO RAS site.
\subsection{Pyramid AO branch}
\label{sec:pyr}
The pyramid creates 4 images of the pupil, which are focused on the WFS CCD (see Fig.~\ref{fig:fillWFE}). Each of the pupil images covers 67 pixels in diameter, which gives us 3.9 or 2.8 pixels per actuator for the 17x17 and 24x24 actuator patterns, respectively. These actuator numbers correspond, for instance, to the commercial DM models DM241 and DM468 by $ALPAO^{TM}$\cite{ALPAO}. They have similar clear apertures of 37.5 and 33 mm, respectively, and in both cases the projection optics provides a sufficient sampling.
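The sampling figures above are simple ratios of the pupil-image diameter to the actuator count and can be checked directly; the 2 px/actuator floor in the sketch below is a generic rule-of-thumb assumption on our part, not a requirement stated here.

```python
# Pupil image diameter on the WFS detector, in pixels (from the text above)
pupil_px = 67

# Pixels per actuator for the two candidate DM geometries
for n_act in (17, 24):
    sampling = pupil_px / n_act
    print(f"{n_act}x{n_act}: {sampling:.1f} px/actuator")
    # assumed rule of thumb: >= 2 px per actuator to resolve the actuator grid
    assert sampling >= 2.0
```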
The simplifications in the optical design, such as the use of spherical mirrors, cause notable residual aberrations. The residual wavefront error (WFE) at 700 nm is shown in Fig.~\ref{fig:instrWFE},A. However, this WFE, defined mainly by the spherical aberration and primary astigmatism, can be relatively easily compensated by the DM shape. By optimizing the WFE in the pyramid branch we determine the DM shape necessary for the static aberrations compensation (see Fig.~\ref{fig:instrWFE},B). The peak-to-valley stroke used in this case is only $0.33 \mu m$, while the full stroke for the commercial DMs can reach $12$ or $40 \mu m$ depending on the model \cite{ALPAO}. By applying this small correction we can decrease the instrumental WFE from $0.659\lambda$ to just $0.016 \lambda$ PTV, as shown in Fig.~\ref{fig:instrWFE},C.
We assume that a protected silver coating is used on all of the auxiliary mirrors and a standard protected aluminium coating on the DM, use a typical shortpass dichroic splitter transmission/reflection curve \cite{Thorlabs}, and make a conservative assumption of 0.5\% loss per surface for the AR coatings. Taking into account the lenses' angles of incidence, we obtain a throughput of 70.6\% at 700 nm for the WFS branch.
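A throughput figure of this kind follows from a multiplicative budget over all surfaces in the branch. The sketch below illustrates the bookkeeping with placeholder efficiencies of our own choosing (silver at 0.98, aluminium at 0.90, a 0.97 dichroic, 0.5\% AR loss per surface, a 0.92 bulk transmission for the camera lens); these assumed values merely land in the same ballpark as the quoted number and are not the coating data used here.

```python
# Multiplicative throughput budget for the WFS branch at 700 nm.
# All efficiencies below are assumed placeholder values chosen for
# illustration -- they are not the coating data used in the text.
elements = {
    "protected-Ag mirrors (x3)": 0.98 ** 3,   # pick-off + two relay mirrors
    "protected-Al DM":           0.90,
    "dichroic (reflection)":     0.97,
    "AR-coated surfaces (x8)":   0.995 ** 8,  # two doublets, filter, pyramid
    "modulation mirror":         0.98,
    "camera lens (bulk T)":      0.92,
}
throughput = 1.0
for name, eff in elements.items():
    throughput *= eff
print(f"WFS branch throughput: {throughput:.1%}")
```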
\subsection{Scientific payload branch}
\label{sec:science}
After correction of the static aberrations we can analyze the expected performance in the science branch. If we apply the same DM profile to it, we can show the gain in image quality from a Strehl ratio of 44.4\% (Fig.~\ref{fig:sciencePSF},A) to 97.4\% (Fig.~\ref{fig:sciencePSF},B) in the polychromatic PSF. This demonstrates that the non-common-path aberrations between the two branches, like the chromatism in the modulation mirror relay lenses, are negligible. So with the simplest optical components we can get fairly close to the diffraction limit just by using a small fraction of the DM stroke.
In addition, we can provide a rough estimate of the science branch throughput. Under the same assumptions about the coatings we obtain 53.3\%, 81.5\% and 78.5\% at 400, 550 and 640 nm, respectively. The transmission curve of the commercial dichroic splitter has a sharp drop closer to the working range limit at 680 nm. In general, the throughput can be significantly improved with dielectric mirrors and/or a customized dichroic.
\subsection{Expected performance gain}
\label{sec:gain}
Finally, we provide an estimate for the expected performance of the AO bench with the atmospheric conditions typical for the SAO site \cite{Shikhovtsev2020} and the BTA parameters\cite{Kukushkin2016}:
\begin{itemize}
\item Fried parameter for the atmospheric turbulence $r_0=8 cm$;
\item Wind speed $V_0 =8 m/s$;
\item Telescope primary mirror diameter $D = 6 m$;
\item Central obscuration $Obs=0.33$;
\item Number of actuators in a single line \textit{17} or \textit{24};
\item Loop frequency $f=500 Hz$;
\item Loop delay $t_d=2 ms$;
\item Target - fiber core $100 \mu m$ in diameter, sampled by at least 10 elements of $10 \mu m$ each.
\end{itemize}
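The parameters above already fix an order-of-magnitude error budget before any full PSF modelling. The sketch below uses textbook scaling laws (fitting error $\sigma^2 \approx 0.3\,(d/r_0)^{5/3}$, servo-lag error $(\tau/\tau_0)^{5/3}$ with $\tau_0 \approx 0.314\,r_0/V$) and an extended Mar\'echal approximation; it is a simplified stand-in for the Fourier-based tools cited below, and the 500 nm reference wavelength for $r_0$ is our assumption, so it reproduces only the rough magnitude of the quoted Strehl ratios.

```python
import math

lam_ref, lam_sci = 500e-9, 540e-9   # assumed r0 reference and science wavelengths
r0_ref, V, D, t_lag = 0.08, 8.0, 6.0, 2e-3

# scale the Fried parameter to the science wavelength (r0 ~ lambda^(6/5))
r0 = r0_ref * (lam_sci / lam_ref) ** 1.2

def strehl(n_act):
    d = D / n_act                          # actuator pitch projected on the pupil
    var_fit = 0.3 * (d / r0) ** (5 / 3)    # DM fitting error [rad^2]
    tau0 = 0.314 * r0 / V                  # atmospheric coherence time
    var_lag = (t_lag / tau0) ** (5 / 3)    # servo-lag error [rad^2]
    return math.exp(-(var_fit + var_lag))  # extended Marechal approximation

uncorrected = (r0 / D) ** 2                # seeing-limited Strehl ~ (r0/D)^2
print(f"uncorrected ~{uncorrected:.2%}, "
      f"17x17 ~{strehl(17):.1%}, 24x24 ~{strehl(24):.1%}")
```

This crude budget gives an uncorrected Strehl of order $10^{-4}$ and corrected values of a few percent to roughly ten percent, i.e. the same hierarchy as the full simulation results quoted below, within a factor of a few.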
Using a general algorithm \cite{Fauvarque2016} and the corresponding modelling tools, as well as the approaches previously used by our colleagues \cite{Fetick2019, Beltramo2020} for modelling point spread functions (PSFs) in adaptive-optics corrected systems, we obtain the expected PSFs of the designed bench. Fig.~\ref{fig:3dPSF} shows the intensity distribution, normalized to the diffraction-limited case, for the uncorrected turbulence, the low spatial sampling with 17x17 actuators and the enhanced sampling with 24x24 actuators in sub-plots A, B and C, respectively. Note that the DMs with 24x24 and 17x17 actuators have different diameters, but the pupil projection can fit within either DM after a moderate change of the concave mirror positions.
One can see that by using the AO bench it becomes possible to compensate a significant fraction of the atmospheric turbulence and increase the Strehl ratio from 0.025\% to 1.6\% or 8\% at 540 nm depending on the number of actuators.
To better visualize the difference between the modes, we plot the PSF cross-sections on a log scale in Fig.~\ref{fig:crossPSF}. The image quality nevertheless remains too low to use the Strehl ratio as the main metric, so as the final estimate we use the relative energy concentration within the $100 \mu m$ diameter target corresponding to the spectrograph's fiber end. This simple analysis shows that use of the AO bench with the 17x17 actuator DM allows us to increase the energy concentration by a factor of 8.37, while for the 24x24 actuator case this figure equals 69.47. One can then expect the instrument sensitivity to increase accordingly.
\section{CONCLUSIONS}
\label{sec:concl} %
In the present paper we have proposed an optical design of an adaptive optics system for a high-resolution echelle spectrograph at a 6-m telescope. It uses a pyramid wavefront sensor and is based on relatively simple optical components and available commercial active components. The entire design fits around the spectrograph's input unit and restores the nominal beam parameters. The flux losses due to the introduction of the AO in the middle of the working waveband are estimated at the level of 18.5\%, but may be minimized by use of customized coatings. The static aberrations of the bench can be compensated by the deformable mirror using less than 2.7\% of its nominal stroke.
Under the typical atmospheric conditions, this AO system can increase the energy concentration in the spectrograph input fiber core by a factor of 69.47, thus significantly increasing its capacity to study faint objects.
\section*{ACKNOWLEDGMENTS}
We would like to warmly thank our colleague Romain Fetick from LAM for his help with the simulations of atmospheric turbulence and its correction.
GV thanks the grant of the Ministry of Science and Higher Education of the Russian Federation no. 075-15-2020-780 (no. 13.1902.21.0039).
\bibliography{main} %
\bibliographystyle{spiebib} %
|
Title:
X-ray Spectral Analysis of the Jet Termination Shock in Pictor A on Sub-Arcsecond Scales with Chandra |
Abstract: Hotspots observed at the edges of extended radio lobes in high-power radio
galaxies and quasars mark the position of the mildly-relativistic termination
shock, where the jet bulk kinetic energy is converted to the internal energy of
the jet particles. These are the only astrophysical systems where
mildly-relativistic shocks can be directly resolved at various wavelengths of
the electromagnetic spectrum. The western hotspot in the radio galaxy Pictor\,A
is an exceptionally good target in this respect, due to the combination of its
angular size and high surface brightness. In our previous work, after a careful
{\it Chandra} image deconvolution, we resolved this hotspot into a disk-like
feature perpendicular to the jet axis, and identified this as the front of the
jet termination shock. We argued for a synchrotron origin of the observed X-ray
photons, which implied maximum electron energies of the order of 10--100\,TeV.
Here we present a follow-up on that analysis, proposing in particular a novel
method for constraining the shape of the X-ray continuum emission with
sub-arcsec resolution. The method is based on a {\it Chandra} hardness map
analysis, using separately de-convolved maps in the soft and hard X-ray bands.
In this way, we have found there is a systematic, yet statistically significant
gradient in the hardness ratio across the shock, such that the implied electron
energy index ranges from $s\leq 2.2$ at the shock front to $s> 2.7$ in the near
downstream. We discuss the implications of the obtained results for a general
understanding of particle acceleration at mildly-relativistic shocks.
| https://export.arxiv.org/pdf/2208.10648 |
\nolinenumbers
\title{\textbf{X-ray Spectral Analysis of the Jet Termination Shock in Pictor\,A \\ on Sub-Arcsecond Scales with {\it Chandra}}}
\correspondingauthor{R.~Thimmappa}
\email{[email protected]}
\author[0000-0001-5122-8425]{R.~Thimmappa}
\affiliation{Villanova University, Department of Physics, Villanova, PA 19085, USA}
\author[0000-0001-8294-9479]{\L .~Stawarz}
\affiliation{Astronomical Observatory of the Jagiellonian University, ul. Orla 171, 30-244, Krak\'ow, Poland}
\author[0000-0002-8247-786X]{J.~Neilsen}
\affiliation{Villanova University, Department of Physics, Villanova, PA 19085, USA}
\author{M.~Ostrowski}
\affiliation{Astronomical Observatory of the Jagiellonian University, ul. Orla 171, 30-244, Krak\'ow, Poland}
\author[0000-0002-3778-1432]{B.~Reville}
\affiliation{Max-Planck-Institut f\"ur Kernphysik, Saupfercheckweg 1, Heidelberg 69117, Germany}
\keywords{radiation mechanisms: non--thermal --- galaxies: active --- galaxies: individual (Pictor A) -- galaxies: jets -- radio continuum: galaxies --- X-rays: galaxies}
\section{Introduction}
\label{sec:intro}
Relativistic jets launched from high-accretion rate Active Galactic Nuclei (AGN), such as quasars and high-excitation radio galaxies, terminate by forming powerful shock waves, observed as prominent hotspots at the edges of extended radio cocoons/lobes inflated by the jets in the ambient medium \citep{Blandford74,Scheuer74}. In more detail, a light but high-power relativistic jet, when interacting with much denser interstellar/intergalactic medium, forms a double-shock structure: the non-relativistic forward shock propagates within the surrounding gas, compressing and heating the thermal plasma \citep[see, e.g.,][]{Carilli96,OSullivan18}, while the relativistic reverse shock converts the bulk kinetic energy of the outflow to the internal energy of jet particles \citep[e.g.,][]{Meisenheimer89,Kino04}. Magnetic field amplification and acceleration of some fraction of the jet particles to high, and even ultra-high, energies are expected to take place at the front of the reverse shock as well, although the exact acceleration processes, or the efficiency of the magnetic amplification, are still under debate \citep[e.g.,][]{Stawarz07,Fan08,Araudo16,Araudo18,Matthews19}.
\begin{deluxetable*}{ccccccccc}[!th]
\tablecaption{Observational data and spectral fitting results for the soft ($0.5-2.0$\,keV) and hard ($2.5-7.0$\,keV) bands. \label{tab:PL_HR_map}}
\tablehead{\colhead{ObsID} & \colhead{Date} & \colhead{Exposure} & Band & \colhead{Count rate} & \colhead{Photon index} & \colhead{$\chi^2/$dof} & \colhead{Energy flux} & \colhead{Net counts}\\
\colhead{} & \colhead{} & \colhead{[ksec]} & \colhead{} & \colhead{[cts/s]} & \colhead{$\Gamma$} & \colhead{} & \colhead{[$10^{-13}$\,erg\,cm$^{-2}$\,s$^{-1}$]} & \colhead{}}
\startdata
3090 & 2002-09-17 & 46.4 & soft & 0.078 & $1.90\pm 0.05$ & 67.91/100 & $2.69 \pm 0.05$ & 3,649 \\
& & & hard & 0.013 & $2.36\pm0.24$ & 27.89/52 & $2.00 \pm 0.07$ & 906 \\
4369 & 2002-09-22 & 49.1 & soft & 0.079 & $1.96\pm 0.05$ & 84.65/100 & $2.73 \pm 0.03$ & 3,894 \\
& & & hard & 0.018 & $2.35\pm 0.21$ & 30.94/55 & $1.96 \pm 0.04$ & 924\\
\enddata
\end{deluxetable*}
Hotspots in cosmologically distant radio quasars and high-power FR\,II radio galaxies are typically of the size of a few/several kiloparsecs, and so in order to study them properly, one needs instruments with at least arcsecond resolution. A considerable effort was made to resolve such structures at radio and infrared/optical frequencies, where hotspots shine through the synchrotron emission downstream of the reverse shock \citep[e.g.,][]{Prieto02,Brunetti03,Mack09,Perlman10,Orienti12,Orienti17,Orienti20,Pyrzas15,Dabbech18,Migliori20,Sunada22a}. Hotspots are also the sources of non-thermal X-ray photons, as established by numerous {\it Chandra} observations \citep{Hardcastle04,Kataoka05,Tavecchio05,Harris06,Massaro11,Massaro15,Mingo17}. The origin of the X-ray hotspots' emission is, in many cases, unclear: while in some sources the X-ray spectrum seems to fall into the extrapolation of the radio-to-optical synchrotron continuum, in other sources the X-ray excess suggests an additional emission component, typically ascribed to inverse-Comptonization of Cosmic Microwave Background photons, or of the hotspot's own synchrotron photons, by lower-energy electrons.
Among the other targets, the western (W) hotspot in the radio galaxy Pictor\,A is exceptionally well suited for deep observational studies, due to the combination of its relatively large angular size, very large angular separation from the bright galactic nucleus, and its high surface brightness. As such, it was subjected to a number of multiwavelength programs, including the radio domain with the Very Large Array \citep[VLA;][]{Perley97}, the mid-infrared range with the IRAC camera onboard the {\it Spitzer} Space Telescope \citep{Werner12}, the Wide-field Infrared Survey Explorer \citep[WISE;][]{Isobe17}, and the SPIRE camera of the {\it Herschel} Space Observatory \citep{Isobe20}, at optical wavelengths with the Faint Object Camera on the {\it Hubble} Space Telescope \citep[HST;][]{Thomson95}, in X-rays with the Advanced CCD Imaging Spectrometer (ACIS) onboard the {\it Chandra} X-ray Observatory \citep{Wilson01,Hardcastle16,Thimmappa20}, as well as the EPIC MOS1 camera of the XMM-{\it Newton} \citep{Migliori07}, and lastly in hard X-rays with NuSTAR \citep{Sunada22b}. The hotspot was also the target of high-resolution radio imaging by the Very Long Baseline Array \citep[VLBA;][]{Tingay08}.
The radio structure of the W hotspot at GHz frequencies with the $1^{\prime\prime}.5$ VLA resolution is complex, including the main compact knot at the westernmost edge of the system, and the diffuse plateau region extending to the east/south-east \citep{Perley97}. With sub-arcsec VLA resolution (reaching $0^{\prime\prime}.17$), the main knot remains unresolved, while the plateau region reveals distinct filaments. The 74\,MHz---5\,GHz spectral index of the main knot is $\alpha \simeq 0.6 - 0.7$, and the degree of polarization reaches $70\%$; the upstream filaments seem to be characterized by a steeper spectrum ($\Delta \alpha \gtrsim 0.1$) and decreased polarization level (down to $10\%-30\%$). The projected magnetic field aligns with the levels of constant radio brightness, such that if the main compact knot denotes the position of the terminal reverse shock and its near downstream, the magnetic field configuration corresponds to that of a perpendicular shock. On the optical HST image with $\simeq 0^{\prime\prime}.1$ resolution \citep{Thomson95}, the main knot is decomposed into a system of highly polarized ($\gtrsim 50\%$) wisps elongated perpendicular to the jet axis.
Such complexity can hardly be followed at X-ray frequencies even with {\it Chandra}'s superb resolution. However, after a careful ACIS image deconvolution with sub-pixel resolution, presented in \citet{Thimmappa20}, the W hotspot could, in fact, be resolved into (i) a disk-like feature perpendicular to the jet axis, located $\simeq 1^{\prime\prime}.5$ to the south-east of the intensity peak of the main radio knot, but coinciding with the peak of the hotspot's optical emission, and (ii) an elongated feature aligned with the jet axis, and located even further upstream, i.e. within the region of the radio plateau. The disk-like feature could be traced for $\sim 4^{\prime\prime}$ in its longitudinal direction, but is resolved in its transverse direction only on sub-pixel scale.
The overall interpretation of the observed multiwavelength morphology of the W hotspot therefore emerges, in which the perpendicular disk-like structure at the position of the hotspot's optical and X-ray intensity peaks corresponds to the very front of the reverse shock, where the most efficient particle acceleration is expected to take place. The radio intensity peak located further away, on the other hand, marks the downstream of the reverse shock, where the radiative cooling of the plasma convected away from the shock front prevents production of high-energy optical and X-ray synchrotron photons. Finally, the nature of the X-ray jet-like feature upstream of the shock, as well as optical and radio filaments within the extended plateau region, remain unclear, although such structures may be related to a network of weaker oblique shocks formed around the head of the jet by the plasma back-flowing from the downstream of the reverse shock \citep[see, e.g.,][]{Saxton02,Mizuta10}.
The good agreement between the optical and X-ray maps, along with the general X-ray spectral properties of the hotspot, as well as hints of X-ray time variability of the target, all consistently imply a synchrotron origin of the observed X-ray photons \citep{Hardcastle16,Thimmappa20,Sunada22b}. For this reason, the X-ray spectral properties of the hotspot are crucial for a proper understanding of particle acceleration processes taking place at mildly-relativistic perpendicular shocks in general. And indeed, the very presence of X-ray synchrotron photons means that such shocks are able to accelerate electrons up to energies $E_e \sim 10^8 \, m_ec^2$, assuming a hotspot magnetic field of the order of $B \sim 0.1-1$\,mG \citep[see the discussion in][]{Thimmappa20,Sunada22b}.
However, for an X-ray spectral analysis with any of the available X-ray instruments, the source extraction region has to be relatively large, in order to maximize the photon statistics for a given Point Spread Function (PSF) and the source intrinsic extent. Such an extended region unavoidably includes therefore various sub-components of the system, and hence the resulting spectral constraints do not correspond to the reverse shock exclusively, but instead to a superposition of the reverse shock, its downstream region, and also of the upstream filaments/jet-like features. Here we propose a novel, alternative method for constraining the shape of the X-ray continuum emission at the very position of the reverse shock, with sub-arcsec resolution. The method is based on hardness map analysis, for \emph{separately de-convolved} soft and hard maps; this novelty resolves the problem of artefact features appearing on X-ray hardness maps due to the energy-dependent {\it Chandra} PSF.
Throughout the paper we assume $\Lambda$CDM cosmology with $H_{0} = 70$\,km\,s$^{-1}$, $\Omega_{\rm m} = 0.3$, and $\Omega_{\Lambda} = 0.7$. The Pictor\,A redshift $z=0.035$ \citep{Eracleous04}, corresponds therefore to the luminosity distance of 154\,Mpc, and the conversion angular scale 0.7\,kpc\,arcsec$^{-1}$. The photon index $\Gamma$ is defined here as $F_{\varepsilon} \propto \varepsilon^{-\Gamma}$ for the photon flux spectral density $F_{\varepsilon}$ and the photon energy $\varepsilon$; the spectral index is $\alpha = \Gamma - 1$.
\section{Chandra Data}
\label{sec:data}
The W hotspot of Pictor\,A was observed on-axis with the ACIS \citep{Garmire03} onboard the {\it Chandra} X-ray Observatory \citep{Weisskopf00} on 2002-09-17 (ID\,3090) and 2002-09-22 (ID\,4369). The combination of relatively long uninterrupted exposures for both pointings, totaling an observing time of 95.5\,ksec, and small off-axis angles $\theta \simeq 0^{\prime}.11$, makes them an ideal dataset for our high-resolution study.
The observational data were reprocessed using the {\ttfamily chandra\_repro} script as per the Chandra Interactive Analysis of Observations \citep[CIAO v4.14;][]{Fruscione06} analysis threads\footnote{\url{http://cxc.harvard.edu/ciao/threads/}}, using Chandra Calibration Database (CALDB)\,v4.9.7. Pixel randomization and readout streaks were removed from the data during processing. Point sources in the vicinity of the hotspot were detected with {\ttfamily wavdetect} tool using the minimum PSF method, and removed. For our analysis, we selected photons in the range $0.5-7.0$\,keV. Photon counts and spectra were extracted for the source and background regions from individual event files using the {\ttfamily specextract} script. Spectral fitting was done with the {\fontfamily{qcr}\selectfont Sherpa} package \citep{Freeman01}.
The total number of counts for the hotspot, $\sim 10,000$ for both exposures together (see Table\,\ref{tab:PL_HR_map}), places us in the regime where calibration uncertainties dominate over statistical uncertainties \citep{Drake06}. Methods to account for calibration uncertainties in the analysis of {\it Chandra} data have been discussed by \citet{Lee11} and \citet{Xu14}. The moderate count-rate of $\simeq 0.1$\,s$^{-1}$ for the hotspot located at the center of the S3 chip implies only a small chance of pile-up in the detector \citep{Davis01}; we verified this during the spectral analysis, and nonetheless included the pile-up model when performing the {\fontfamily{qcr}\selectfont MARX} simulations (see the following section).
\section{Data Analysis}
\label{sec:analysis}
\subsection{Spectral Modeling}
\label{sec:spectrum}
A composite hotspot spectrum was extracted for each ObsID from a circular region (position: RA\,=\,5:19:26.2993, DEC\,=\,--45:45:54.377) with a radius 20\,px ($\simeq 10^{\prime\prime}$, for the conversion scale $0.492^{\prime\prime}/{\rm px}$), and the background set as a concentric annulus of 30-60\,px radius \citep[see][Figure\,1 therein]{Thimmappa20}. The background-subtracted hotspot spectra were next fitted within the soft ($0.5-2.0$\,keV) and hard ($2.5-7.0$\,keV) bands \emph{separately}, assuming a power-law model modified by the Galactic column density $N_{\rm H,\,Gal} = 3.6 \times 10^{20}$\,cm$^{-2}$ \citep{HI4PI2016}. The results of spectral fitting are summarized in Table\,\ref{tab:PL_HR_map}, and the fitted spectra are shown in Figure\,\ref{fig:PL_HR_map}.
The power-law models with photon indices $\Gamma \simeq 1.9$ in the soft band, and significantly larger $\Gamma \simeq 2.4$ in the hard band, provide a reasonable description of the source composite spectra, sufficient in particular for the purpose of the PSF modeling. We note that analogous fits with the Galactic absorption set free returned similar results, only with slightly decreased values of $N_{\rm H,\,Gal}$ and $\Gamma$. Finally, including the {\ttfamily jdpileup} model in the fitting procedure does not affect the best-fit values of the model parameters, as the fraction of piled-up events that result in a good grade turns out to be very low.
\subsection{PSF Modeling}
\label{sec:PSF}
To model the {\it Chandra} PSF at the position of the W hotspot, we used the Chandra Ray Tracer ({\fontfamily{qcr}\selectfont ChaRT}) online tool \citep{Carter03}\footnote{\url{http://cxc.harvard.edu/ciao/PSFs/chart2/runchart.html}} and the {\fontfamily{qcr}\selectfont MARX} software \citep{Davis12} \footnote{\url{https://space.mit.edu/cxc/marx}}. For both ObsID\,3090 and 4369, the centroid coordinates of the selected source region were taken as the position of a point source. The source spectra for {\fontfamily{qcr}\selectfont ChaRT} were the respective power-law models in the 0.5--2.0\,keV and 2.5--7.0\,keV bands, as described in Section\,\ref{sec:spectrum}. Since each particular realization of the PSF is different due to random photon fluctuations, in each case a collection of 50 event files was made, with 50 iterations using {\fontfamily{qcr}\selectfont ChaRT} by tracing rays through the {\it Chandra} X-ray optics. The rays were projected onto the detector through {\fontfamily{qcr}\selectfont MARX} simulation, taking into account all the relevant detector effects, including pileup and energy-dependent sub-pixel event repositioning. The PSF images were created with the size of $32\times 32$\,pix$^2$, and binned with 0.5\,px resolution. An example of the simulated PSF images for ObsID\,3090 in the soft and hard bands is presented in Figure\,\ref{fig:psf}.
In order to illustrate the size of the PSFs in both bands, we calculated the enclosed count fraction (ECF) for all the simulated PSFs, i.e., the fraction of counts that would be detected within a certain circular aperture for a particular realization of the PSF. The resulting ECFs are presented in Figure\,\ref{fig:ecf} for the soft and hard bands (left and right panels, respectively), for ObsID\,3090. As shown, for the soft band, the $2 \sigma$ radius is typically $\simeq 6$\,px, while in the hard band the corresponding $2 \sigma$ radius has a wider spread, ranging from $\simeq 6$\,px for some realizations of the PSF, up to even $\simeq 15$\,px.
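For illustration, the ECF follows directly from the pixel grid of a simulated PSF image. The sketch below is a minimal Python illustration, not the actual analysis code; the synthetic Gaussian stand-in for a PSF realization is a hypothetical example.

```python
import numpy as np

def enclosed_count_fraction(psf, radii, center=None):
    """Fraction of PSF counts within circular apertures of the given radii (px)."""
    ny, nx = psf.shape
    if center is None:
        center = ((nx - 1) / 2.0, (ny - 1) / 2.0)
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - center[0], y - center[1])
    total = psf.sum()
    return np.array([psf[r <= rad].sum() / total for rad in radii])

# Demo with a synthetic Gaussian stand-in PSF (sigma = 2 px):
y, x = np.mgrid[0:64, 0:64]
fake_psf = np.exp(-((x - 31.5) ** 2 + (y - 31.5) ** 2) / (2 * 2.0 ** 2))
ecf = enclosed_count_fraction(fake_psf, radii=[2.0, 6.0])
```

For a Gaussian, the aperture at three times the width encloses essentially all counts, which is the sense in which the "$2\sigma$ radius" quoted above characterizes the PSF size.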
Note that, since the region encompassing the hotspot structure is relatively compact, one should not expect any significant change of the PSF across the field subjected to the image deconvolution, as described in the next section. The spectral information provided for the PSF modeling, on the other hand, corresponds to the composite radiative output of the entire structure, while below we argue for the presence of significant spectral changes on sub-pixel scale within the brightest segments of the hotspot. This inherent inconsistency does not however affect the main results of the analysis.
\subsection{Image Deconvolution}
\label{sec:deconvolve}
We used the Lucy-Richardson Deconvolution Algorithm (LRDA), which is implemented in the {\fontfamily{qcr}\selectfont CIAO} tool {\ttfamily arestore}, to remove the PSF blurring, and in this way to restore the intrinsic surface brightness distribution of the hotspot. This method does not affect the number of counts on the image, but only their distribution.
The algorithm requires an image form of the PSF, which is provided by our {\fontfamily{qcr}\selectfont ChaRT} and {\fontfamily{qcr}\selectfont MARX} simulations as described in Section\,\ref{sec:PSF} above, and exposure-corrected maps of the source \citep[for more details see][]{Marchenko17,Thimmappa20}. Here we perform the de-convolution separately for the soft and hard bands, in each case for 50 random realizations of the simulated PSF; those 50 deconvolved images were then averaged to a single image using {\ttfamily dmimgcalc} tool. The resulting images are shown in Figure\,\ref{fig:deconv}. The two main features of the hotspot observed by \citet{Thimmappa20}, namely the disk-like feature perpendicular to the jet axis, as well as a weaker jet-like feature extending to the south-east along the jet axis, are present on both soft and hard maps for both ObsIDs, although the jet-like feature is much less prominent on the hard maps.
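The core of the {\ttfamily arestore} step is the classical Lucy-Richardson iteration, which redistributes counts while conserving their total. Below is a minimal NumPy-only sketch of the algorithm (not the CIAO implementation), demonstrated on a synthetic point source blurred by a Gaussian stand-in PSF.

```python
import numpy as np

def _conv_same(a, b):
    """Linear 2-D convolution via zero-padded FFTs, cropped to a.shape."""
    s = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    c = np.fft.irfft2(np.fft.rfft2(a, s) * np.fft.rfft2(b, s), s)
    r0, c0 = (b.shape[0] - 1) // 2, (b.shape[1] - 1) // 2
    return c[r0:r0 + a.shape[0], c0:c0 + a.shape[1]]

def richardson_lucy(image, psf, n_iter=50):
    """Lucy-Richardson deconvolution: redistributes counts, conserving the total."""
    image = np.maximum(image, 0.0)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(n_iter):
        conv = _conv_same(estimate, psf)
        ratio = image / np.maximum(conv, 1e-12)
        estimate = estimate * _conv_same(ratio, psf_mirror)
    return estimate

# Demo: blur a point source with a Gaussian "PSF", then restore it.
y, x = np.mgrid[0:33, 0:33]
psf = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 8.0)
psf /= psf.sum()
truth = np.zeros((33, 33))
truth[16, 16] = 100.0
blurred = _conv_same(truth, psf)
restored = richardson_lucy(blurred, psf, n_iter=50)
```

The restored image has the same total counts as the input but a much sharper peak, which is the property exploited in the hotspot maps.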
\subsection{Hardness Ratios}
\label{sec:HR maps}
Based on the de-convolved soft and hard images of the W hotspot in Pictor\,A with 0.5\,px resolution, we perform spatially-resolved Hardness Ratio (HR) analysis, in order to investigate the spectral structure of the system on sub-arcsec scales, free, as much as possible, from the PSF blurring.
HR analysis of {\it Chandra} data has been widely applied to various classes of astrophysical sources before \citep[e.g.,][]{Balucinska05,Siemiginowska07,Nandra15,Haggard19}, being in particular considered as a useful tool that allows constraints on spectra of unresolved weak sources, for which the standard spectral modeling approach is not possible due to low numbers of counts. Spatially-resolved HR analysis for extended sources, however, remains largely unexplored, because of artefact features appearing on the HR maps, in relation to (i) the energy dependence of the {\it Chandra}'s PSF, and also (ii) random fluctuations of photons, relevant especially in the low surface brightness regime. Our approach resolves the aforementioned problems, since (i) the HR analysis is based on the separately de-convolved soft and hard maps, and (ii) we remove the effect of random fluctuations by averaging over 50 realizations of each modelled PSF. In particular, based on the soft (S) and hard (H) de-convolved images, we produce spectral maps defined as ${\rm HR}=({\rm H}-{\rm S})/({\rm H}+{\rm S})$. Next, we average the HR maps corresponding to ObsID\,3090 and 4369, obtaining at the end the final distribution of the X-ray HR for the W hotspot of Pictor\,A, shown in the left panel of Figure\,\ref{fig:HR}.
An HR variance map, i.e. the values of the variance at a given position $(x,y)$ on the map, was generated based on the same $N=100$ deconvolved HR images (50 for ObsID\,3090 and another 50 for ObsID\,4369), ${\rm HR}_i(x,y)$, simply as
\begin{equation}
V(x,y) = \frac{1}{N-1} \, \sum_{i=1}^{N} \left[{\rm HR}_i(x,y)-\overline{{\rm HR}(x,y)}\right]^2 \, ,
\end{equation}
where $\overline{{\rm HR}(x,y)}$ denotes the averaged HR image. The square-root of this variance, $\sigma = \sqrt{V(x,y)}$, corresponds therefore to the \emph{statistical} uncertainty in the derived values of the hardness ratio at a given position $(x,y)$ on the map. This uncertainty is shown in the right panel of Figure\,\ref{fig:HR}.
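In practice, the HR map, its average, and the variance map of Eq.~(1) reduce to a few array operations over the stack of realizations. The sketch below uses synthetic Poisson stacks as stand-ins for the 100 de-convolved soft/hard image pairs; it is an illustration of the procedure, not the analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the N = 100 de-convolved (soft, hard) image pairs
# (50 per ObsID); here: Poisson realizations around flat brightness maps.
N, shape = 100, (40, 40)
soft = rng.poisson(20.0, size=(N,) + shape).astype(float)
hard = rng.poisson(5.0, size=(N,) + shape).astype(float)

hr_stack = (hard - soft) / (hard + soft)   # HR_i(x, y) per realization
hr_mean = hr_stack.mean(axis=0)            # averaged HR map
# unbiased variance map of Eq. (1); its square root is the statistical error
variance = ((hr_stack - hr_mean) ** 2).sum(axis=0) / (N - 1)
sigma = np.sqrt(variance)
```

With these mock count rates the averaged HR settles near $(5-20)/25 = -0.6$, i.e. within the $-1<{\rm HR}<0$ range characterizing the hotspot structure.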
The main structure of the hotspot that is prominent on the total intensity map is characterized by the values $-1<$\,HR\,$<0$. This structure is surrounded by a soft halo with ${\rm HR}=-1$, meaning simply no hard photons; outside of the soft halo, where the X-ray flux also drops in the soft band, the HR values fluctuate around 0. This assures the reality of the spectral map produced, as no artifact features are present in the regions with background-level flux, and all the physically meaningful HR values are concentrated exclusively in the high-flux region. Moreover --- and this is the major finding of the analysis --- there is a clear systematic HR gradient across the main disk-like feature, ranging from approximately $-0.4$ down to $-0.9$ and below (see the left panel of Figure\,\ref{fig:HR}), i.e. from the upstream (south-east) to the downstream (north-west) of the shock. The HR uncertainty in that region is, on average, $\pm 0.2$ (see the right panel in Figure\,\ref{fig:HR}), so that the HR gradient is statistically significant.
In addition to the statistics, however, a careful investigation of the systematic uncertainty is also required. We have therefore produced hardness maps of other astrophysical sources appearing point-like for {\it Chandra}, in the analogous way as described above. For a fair comparison with the Pictor\,A hotspot, we selected sources that are unresolved and were observed by {\it Chandra} around 2002 (in order to avoid complications related to the CCD degradation), were not variable during the {\it Chandra} exposure, were free of pile-up, and had comparable photon statistics to those of the analyzed Pictor\,A pointings. The best targets fulfilling these criteria were the BL\,Lac object AO\,0235+16 ($z = 0.94$), and the quasar 4C+13.85 ($z=0.673$). In both cases, we found no evidence of any substructure on the hardness ratio maps. Thus we do not believe that the gradient seen above is a systematic effect of our method. The corresponding maps for the two targets are presented in Appendix~\ref{A:comparative}.
\section{Discussion and Conclusions}
\label{sec:results}
Assuming a single power-law emission model, a given value of the HR corresponds to a particular set of values for the photon index $\Gamma$ and Galactic column density $N_{\rm H,\,Gal}$ (assuming zero intrinsic absorption). In Figure\,\ref{fig:indices}, left panel, we plot this dependence, adopting $N_{\rm H,\,Gal} = 3.6 \times 10^{20}$\,cm$^{-2}$. With this value, HR\,$=-0.4$ gives the photon index $\Gamma \simeq 1.2$, while for example HR\,$=-0.9$ gives $\Gamma \simeq 2.8$. The resulting 0.5--7.0\,keV photon index map of the W hotspot in Pictor\,A, corresponding to the 0.5\,px resolution HR map discussed above, is given in the right panel of Figure\,\ref{fig:indices}. The uncertainties in the exact $N_{\rm H,\,Gal}$ value, even if at the level of $\sim 50\%$, are in this context much less relevant than the statistical HR mean uncertainty of $\simeq 0.2$, following from the square-root variance mapping of the disk feature. This statistical uncertainty translates into a wider range of allowed photon indices, roughly speaking $\Gamma \leq 1.6$ for the upstream edge, and $\Gamma \geq 1.9$ for the downstream region. In the case of a synchrotron origin of the detected X-ray photons, those values of photon indices correspond to an index of the electron energy distribution $s \equiv - \log N\!(E_e) / \log E_e = 2 \, \Gamma - 1$ ranging from $\leq 2.2$ up to $>2.8$.
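The mechanics of the HR-to-$\Gamma$ mapping can be sketched by integrating a power-law photon flux over the two bands and inverting the monotonic relation numerically. The toy version below deliberately neglects the Galactic absorption and the detector effective area, so its numbers differ from the calibrated curve in Figure\,\ref{fig:indices}; it only illustrates the inversion.

```python
import numpy as np

def band_flux(gamma, lo, hi):
    """Integral of eps**-gamma over [lo, hi] keV (photon flux, arbitrary norm)."""
    if abs(gamma - 1.0) < 1e-9:
        return np.log(hi / lo)
    return (lo ** (1 - gamma) - hi ** (1 - gamma)) / (gamma - 1)

def hr_of_gamma(gamma):
    """HR = (H - S)/(H + S) for the 0.5-2.0 and 2.5-7.0 keV bands."""
    s = band_flux(gamma, 0.5, 2.0)
    h = band_flux(gamma, 2.5, 7.0)
    return (h - s) / (h + s)

def gamma_of_hr(hr, lo=0.0, hi=6.0):
    """Invert the monotonically decreasing HR(Gamma) relation by bisection."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if hr_of_gamma(mid) > hr:   # spectrum still too hard -> larger Gamma
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Softer spectra (larger $\Gamma$) push HR toward $-1$, which is the sense of the gradient discussed above; the calibrated mapping additionally folds in $N_{\rm H,\,Gal}$ and the instrument response.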
It is however important to emphasize at this point, that the \emph{broad-band} spectrum of ultra-relativistic electrons accelerated at the shock front, may be much more complex than a single power-law. A single power-law model is used here rather for illustrative purposes, to give a basic insight into the slope of the high-energy segment of the electron distribution, and the amount of spectral steepening observed across the shock front in the Pictor\,A hotspot.
The gradient in the HR values across the terminal reverse shock we have found has several important implications for understanding particle acceleration at relativistic shocks in general. First, the fact that the hardest X-ray spectra are concentrated at the upstream edge of the X-ray intensity peak means that the efficient electron acceleration --- forming flat electron energy distributions with indices $s\leq 2.2$ (when approximated by a single power-law) and electron energies of the order of 10-100\,TeV --- takes place at the very front of the \emph{mildly-relativistic shock with perpendicular magnetic field configuration}, and not in the far downstream, for example, where compact radio knots have been found in VLBA observations \citep{Tingay08}. Second, the HR gradient suggests that high-energy electrons advected from the shock front cool radiatively (leading to a steepening of their energy distribution and the corresponding X-ray spectrum). This explains the offset between the X-ray hotspot and the VLA hotspot: the propagation length of the ultrarelativistic electrons that produce keV photons is of the order of a parsec for the expected hotspot magnetic field of 0.1--1\,mG, and at most a hundred parsecs for an unrealistically low magnetic field intensity of a few $\mu$G \citep{Thimmappa20}, while it is much longer for radio-emitting electrons. By the time the jet has traveled the $\simeq 1$\,kpc between the X-ray hotspot and the compact radio knots, there are essentially no X-ray emitting electrons left.
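The parsec-scale propagation length quoted above can be checked with a back-of-envelope estimate: take the Lorentz factor of electrons whose characteristic synchrotron energy is $\sim 1$\,keV, compute the synchrotron cooling time, and multiply by a downstream advection speed $\sim c/3$. The input values below (a mid-range field of $0.5$\,mG, $c/3$ advection) are illustrative assumptions, not fitted parameters.

```python
import numpy as np

# cgs constants
m_e, c, e = 9.109e-28, 2.998e10, 4.803e-10   # g, cm/s, esu
sigma_T = 6.652e-25                          # cm^2
h = 6.626e-27                                # erg s
keV = 1.602e-9                               # erg
pc = 3.086e18                                # cm

B = 5e-4          # assumed hotspot field: 0.5 mG (mid-range of 0.1-1 mG)
eps = 1.0 * keV   # observed synchrotron photon energy ~ 1 keV

# Lorentz factor from the characteristic synchrotron frequency
# nu_c = 3 gamma^2 e B / (4 pi m_e c):
gamma = np.sqrt((eps / h) / (3 * e * B / (4 * np.pi * m_e * c)))

# synchrotron cooling time and advection length for downstream speed ~ c/3
t_cool = 6 * np.pi * m_e * c / (sigma_T * gamma * B ** 2)   # s
L_cool = (c / 3) * t_cool / pc                              # pc
```

With these inputs the cooling length comes out below a parsec, consistent with the statement that no X-ray emitting electrons survive the $\simeq 1$\,kpc journey to the compact radio knots.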
Mildly-relativistic magnetized shocks in electron--ion plasmas --- meaning shock bulk Lorentz factors $\gamma_{\rm sh} \lesssim (m_p/m_e)^{1/3} \simeq 10$ and magnetization parameters, defined as a ratio of the upstream Poynting flux to the kinetic energy flux, $10^{-3} < \sigma \leq 0.1$, matching the conditions expected to hold in the western hotspot of Pictor\,A --- have been investigated with 2D kinetic particle-in-cell simulations by \citet{Sironi11}, and more recently by \citet{Ligorini21a,Ligorini21b}. These studies do show some energization of electrons, due to resonant interactions with large-amplitude longitudinal Langmuir waves, combined with shock-surfing acceleration \citep{Lyubarsky06,Hoshino08}, however with a rather low efficiency when compared to ultra-relativistic shocks (i.e., shocks with $\gamma_{\rm sh} \gg 10$). As a consequence, the downstream electron spectra observed in such simulations (i) are basically thermal with little or no non-thermal power-law components, (ii) have total energy density much below that of the ions, at the level of about $10\%$, and (iii) have limited maximum energies $E_e/m_e c^2 < (m_p/m_e) \gamma_{\rm sh} < 10^4$. This is in contrast to the observational findings presented here and elsewhere in the literature, regarding the ion--electron energy equipartition \citep[see][]{Stawarz07}, electron energies of the order of 10-100\,TeV \citep[see][]{Sunada22b}, and flat slopes of the electron energy distribution (this work). Together, these observational findings indicate that electron acceleration is both fast and efficient at the jet termination shocks of luminous radio galaxies and quasars.
\begin{acknowledgements}
This research has made use of data obtained from the Chandra Data Archive. This work was supported by the Polish NSC grant 2016/22/E/ST9/00061 (R.T., \L .S.) and NASA award 80NSSC20K0645 (R.T., J.N). The authors thank the anonymous referee and O.~Kobzar for valuable comments and suggestions on the manuscript.
\end{acknowledgements}
\appendix
\section{Hardness Ratio Analysis of the Comparative Sources}
\label{A:comparative}
The BL\,Lac object AO\,0235+16 ($z = 0.94$) was observed on 2000-08-20 by {\it Chandra} on the ACIS-S3 chip (ObsID\,884) with a 30.625\,ksec exposure time. The source spectrum was extracted from a circular region (position: RA\,=\,2:38:38.9560, DEC\,=\,16:36:59.440) with a radius of 3\,px ($\simeq 1.5^{\prime\prime}$), and a background annulus of 5-10\,px radii. The background-subtracted source spectra were fitted within the soft ($0.5-2.0$\,keV) and hard ($2.5-7.0$\,keV) bands, assuming single power-law models modified by the Galactic column density $N_{\rm H,\,Gal} = 6.79\times 10^{20}$\,cm$^{-2}$ \citep{HI4PI2016}.
Based on those spectra, we performed PSF simulations, and next produced deconvolved and hardness ratio images, all as presented in Figure\,\ref{fig:0235+16}. As shown, there is no sub-structure on the hardness ratio map: a point source at the position of the blazar, characterized by HR\,$\simeq -0.5$, is surrounded by the background with the HR values of either $-1$ or 0.
The radio-loud quasar B2251+134 (=4C+13.85, $z=0.673$) was observed (ObsID\,2146) on 2000-01-18 with a 25.8\,ksec exposure time. The source spectrum was extracted from a circular region (position: RA\,=\,22:54:20.9771, DEC\,=\,13:41:48.802) with a radius of 7\,px ($\simeq 3.5^{\prime\prime}$), and a background annulus of 10-15\,px radii.
We have modeled the {\it Chandra} spectra of B2251+134 in the soft ($0.5-2.0$\,keV) and hard ($2.5-7.0$\,keV) bands with single power-law models modified by the Galactic column density $N_{\rm H,\,Gal} = 4.67\times 10^{20}$\,cm$^{-2}$ \citep{HI4PI2016}, this time however allowing for the intrinsic absorption in addition to the Galactic one. The intrinsic column density was kept as a free parameter when fitting the soft spectrum; the resulting best-fit value was then frozen when fitting the hard segment of the source spectrum. The results of the following image deconvolution are presented in Figure\,\ref{fig:2251+134}. Again, what we see is a well-defined point source in the center with HR\,$\sim -0.8$, surrounded by the HR\,$=-1$ or $=0$ background.
|
Title:
Overcoming 1 part in $10^9$ of Earth angular rotation rate measurement with the G Wettzell data |
Abstract: The absolute measurement of the Earth angular rotation rate with ground-based
instruments becomes challenging if a precision of 1 part in $10^9$ has to be
obtained. This threshold is important for fundamental physics and for geodesy,
to investigate effects of General Relativity and Lorentz violation in the
gravity sector and to provide the fast variation of the Earth rotation rate.
High sensitivity Ring Laser Gyroscopes (RLG) are currently the only promising
technique to achieve this task in the near future, but their precision has been
so far limited by systematics related to the laser operation.
In this paper we analyze two different sets of observations, each of them
three days long. They were obtained from the G ring laser at the Geodetic
Observatory Wettzell. The applied method has been developed for the GINGERINO
ring laser in order to identify and extract the laser systematics. For the
available data sets the residuals show mostly white noise behavior and the
Allan deviation drops below 1 part in $10^9$ after an integration time of about
$10^4$~s.
| https://export.arxiv.org/pdf/2208.09134 |
\begin{document}
\section{Introduction}
At present large scale ring laser gyroscopes (RLGs) are the most sensitive instruments to measure absolute angular rotation rates \cite{uno, due, tre, EPJC21, ER1, HUST}. They are based on high-finesse square optical cavities in which an active medium is present and two counterpropagating laser beams are generated. The frequencies of the two beams depend on the effective time a photon takes to travel along the perimeter, which is different when the gyro is rotated. However, non-reciprocal effects in the cavity, arising from the laser excitation process, cause a systematic bias that has to be removed.
Furthermore, there can be small deviations from the expected rate of rotation, due to fundamental laws of physics. One example is the precession of the frame of reference, as predicted by General Relativity (GR), related to gravitoelectric and gravitomagnetic effects.
For a RLG rigidly connected with the Earth surface, the observed Sagnac frequency is the difference in angular frequency of the two laser beams $\omega_s$:
\begin{equation}
\omega_s =8 \pi \frac{A}{P\lambda}\Omega_\oplus\cos\theta\;,
\end{equation}
where $A$ is the area and $P$ the perimeter of the ring cavity, $\lambda$ the optical wavelength, and $\Omega_\oplus$ the Earth angular rotation rate. $\theta$ is the angle between the area vector of the optical cavity and the $\Omega_\oplus$ rotation axis.
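As an order-of-magnitude check, the relation above can be evaluated numerically. The following sketch assumes a $4$ m side length and the $632.8$ nm HeNe wavelength (both assumptions, chosen to be consistent with the perimeter and scale factor quoted in Table \ref{tab:Gparameters}):

```python
import math

# Numerical check of the Sagnac relation, expressed in Hz:
#   f_s = (4A / (P * lambda)) * Omega_Earth * cos(theta)
# The 4 m side length and the 632.8 nm HeNe wavelength are assumptions.
A = 16.0                        # ring area [m^2] (assumed 4 m x 4 m square)
P = 16.0                        # perimeter [m]
lam = 632.8e-9                  # optical wavelength [m] (assumed HeNe line)
omega_earth = 7.292115e-5       # Earth rotation rate [rad/s]
lat = math.radians(49.144802)   # latitude of G (from the text)

scale_factor = 4.0 * A / (P * lam)                     # [Hz s/rad]
f_sagnac = scale_factor * omega_earth * math.sin(lat)  # cos(theta) = sin(latitude)
print(f"scale factor = {scale_factor:.6e} Hz s/rad, f_s = {f_sagnac:.1f} Hz")
```

The resulting scale factor and Sagnac frequency reproduce the values of Table \ref{tab:Gparameters} at the $10^{-3}$ level; the small residual difference reflects the approximate orientation assumed here.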
G has demonstrated a sensitivity of 12 prad/s in 1 s integration time and is able to operate continuously and unattended for months.\cite{please_Ulli} The sensitivity is a function of the size of the ring cavity, and cavities with 77 and 121 m perimeter were explored several years ago\cite{NZ1}. More recently the four-component ring laser array ROMY has been installed in the geophysical observatory of Bavaria, Germany; it comprises 4 RLGs with triangular cavities, each with a perimeter of 36 m \cite{due, romy2}. Typically, the most sensitive existing devices capable of long-term continuous operation employ square optical cavities with several meters on a side, rigidly attached to the ground. The mechanical structure of the optical cavity plays a big role in the performance of a RLG, and monolithic optical cavity structures were the first RLGs to obtain relevant performance, getting close to the tens of prad/s level of sensitivity \cite{uno}. The G RLG, in operation at the Geodetic Observatory Wettzell, is the best known example; a monolithic cavity is the optimal choice for a very stable laser cavity, but it is very difficult to build and cannot be reoriented easily. Over more than 20 years, heterolithic (HL) cavities have been developed, basically consisting of a rigid mounting frame holding different mechanical elements, which support the mirrors contained inside metallic vacuum chambers.
GINGERINO has been the first HL RLG operating continuously with high sensitivity \cite{G90}. GINGERINO takes advantage of the quiet location inside the Gran Sasso underground laboratory, Italy. Despite that, the standard deviation of the reconstructed Sagnac frequency of GINGERINO is typically more than 50 times larger than that of G \cite{tre,EPJC21}, mostly due to small mechanical issues, since the composite structure is not yet rigid enough \cite{EPJP2020}. Work is currently in progress to improve the mechanical HL design in order to increase the sensor stability. Despite these mechanical shortcomings of GINGERINO, previous investigations \cite{tre,EPJC21} indicate that the intrinsic noise level is lower than the expected shot noise limit for a RLG \cite{chow}. This model does not take the presence of the active medium inside the cavity into account, and work is in progress by the Italian GINGER group to improve the theoretical model.
The main signal obtained from a RLG is the interferogram of the two counterpropagating beams transmitted at one corner of the square cavity. The Sagnac frequency $\omega_s$, which is caused by the rate of rotation, must not be confused with the actually observed beat note $\omega_m$ of the interferogram \cite{Lamb, Aronowitz,Beghi,Cuccato}, since the latter is biased by laser systematics, usually caused by backscatter coupling and a null shift contribution. An original analysis scheme to remove the laser systematics has been developed and successfully applied to analyse the data of GINGERINO \cite{DiVirgilio2019,DiVirgilio2020,tre,EPJC21}. This work shows that any change due to the laser dynamics can be approximated by the sum of several terms, analytically evaluated using the laser parameters of the Lamb theory, which in turn can be determined from the signals provided by the RLG. It is important to remark that these laser systematics are not a stochastic effect, since they are caused by all sorts of small changes affecting the active optical cavity. These effects are much smaller in monolithic RLGs.
\section{Analysis procedure}
The laser dynamics can be described by a set of differential equations containing several parameters, which are known as Lamb parameters \cite{Lamb,Aronowitz,DiVirgilio2019, DiVirgilio2020}. They can be calculated from the diagnostic signals, available from the ring laser intensities, in particular those providing the AC ($IS_{1,2}$) and the DC ($PH_{1,2}$) levels of the detectors, each looking at a different laser beam (hereafter denoted as monobeam signals) and their relative phase offset $\epsilon$.
The model refers to the intracavity power, while the signals are taken outside the cavity; it is therefore necessary to know the values of mirror transmission and detector gain.
The analysis proceeds in steps. The first approximation of the Sagnac frequency, denoted as $\omega_{s0}$, is analytically evaluated as follows:
\begin{equation}
\omega_{s0} = \frac{1}{2} \sqrt{\frac{ 2 I_{S1} I_{S2} \omega _m^2 \cos (2 \epsilon )}{ PH_{1} PH_{2}}+\omega _m^2}+\frac{\omega _m}{2}\;,
\label{approx}
\end{equation}
where $\omega_m$ is the experimental interferometric angular frequency, i.e., the beat note of the interference of the two counterpropagating beams coming out of the cavity. It must be noted that the monobeam intensities enter Eq.~\ref{approx} as the ratio of their AC and DC components, so that the mirror transmission and the differences in electronic gains play a smaller role. For many applications, $\omega_{s0}$ is indeed a good approximation of $\omega_s$.
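Eq.~\ref{approx} can be transcribed directly; the following sketch uses illustrative monobeam values, not measured G data:

```python
import math

# Direct transcription of the first approximation of the Sagnac frequency
# (the omega_s0 equation above).  The monobeam values below are illustrative.
def omega_s0(om_m, IS1, IS2, PH1, PH2, eps):
    """First approximation omega_s0 from the beat note and monobeam signals."""
    term = 2.0 * IS1 * IS2 * om_m**2 * math.cos(2.0 * eps) / (PH1 * PH2)
    return 0.5 * math.sqrt(term + om_m**2) + om_m / 2.0

om_m = 2.0 * math.pi * 348.516   # measured beat note [rad/s] (table value)
# Illustrative small AC/DC ratios -> small backscatter correction
print(omega_s0(om_m, IS1=0.02, IS2=0.02, PH1=1.0, PH2=1.0, eps=0.1))
```

Note that $\omega_{s0}$ reduces exactly to $\omega_m$ when the AC monobeam signals vanish, and that the size of the correction is set by the AC/DC ratios of the monobeams.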
Nevertheless, the different terms of the equation are certainly affected by errors, for example the dark current of the photodiodes or any non-linearity in the electronic gain. In order to improve the evaluation of $\omega_{s0}$, six correction terms, denoted $\omega_\xi$, have been derived by assuming small errors in the measurement data used in Eq.~\ref{approx} and expanding the equation to first order. The first term was already discussed in the first paper dedicated to the analysis method \cite{DiVirgilio2019}; the others have been added more recently.
We remark that the six $\omega_\xi$ terms do not depend on the laser dynamics and are meant to improve $\omega_{s0}$ by correcting errors associated with the measurements themselves.
The null shift depends on the Lamb parameters associated with the laser excitation and is obtained by the term $\omega_{ns1}$, the first order expansion of the theory. Higher order expansion terms can be calculated, but in the following analysis only the first one will be used. The null shift is strictly connected to the non-reciprocity of the optical path in the two directions, related in turn to dissipative processes. In terms of the Lamb parameters of the laser functions, $\omega_{ns1}$ is connected to the difference $\mu_c-\mu_{cc}$ between the cavity losses, where $\mu_{c,cc}$ represent losses in clockwise and counter-clockwise propagation directions. However, it has been demonstrated in \cite{DiVirgilio2019} that, assuming a quasi-stationary laser and the same value for the parameter $\beta$ for the two beams\footnote{$\beta_{1,2}$ is the self-saturation parameter of the laser transition for the two counter-propagating beams. In the model it is assumed $\beta_c = \beta_{cc}=\beta$.}, the loss difference is made explicit, so that only one of the two $\mu_{c,cc}$ is a free parameter. In the following, we will consider only the clockwise cavity loss, which will be indicated simply as $\mu$; $\omega_{ns1}$ and $\beta$ are proportional to $\mu$ and completely defined by the theory in terms of the available signals. The null shift correction has values in the region of mHz (several ppm), so the accuracy of $\mu$, usually measured by the ring-down time of the optical cavity with percent accuracy, can severely limit the $\omega_s$ reconstruction.
The analysis developed for GINGERINO assumes that plasma temperature and pressure are constant while $\mu$ changes with time; accordingly, any change is interpreted as a change of $\mu$, although it could be due to the other parameters or to the electronic circuit regulating the gain tube. In the future the analysis model will be further expanded in order to better identify the origin of the changes.
A suitable procedure is developed to take into account variations of $\mu$ with time and refine the identification of the null shift.
Changes of $\mu$ in time, however, are small and slow enough not to invalidate the assumption of a quasi-stationary regime. It is possible to write $\mu(t) = \mu(t_0) + \delta\mu(t)$, $t_0$ being the origin of the series expansion. It is convenient to describe $\delta\mu(t)$ using the available signals. To this aim, the laser gain monitor signal can be used, but in more recent versions of the analysis the Lamb parameter $\beta$ has been preferred, since it is rather constant in time and proportional to $\mu$. By using the relationship reported in the appendix of \cite{DiVirgilio2020} and the parameters of the RLG G, we have
\begin{equation}
\beta =\frac{2.98772 \mu }{2.98772 - PH_1}\;,
\label{eq:beta}
\end{equation}
where $PH_1$ is the DC value of monobeam 1, in Volts.
Equation \ref{eq:beta} shows that $\beta$ is proportional to $\mu$; accordingly, the quantity that can actually be evaluated from the signals does not contain $\mu$. Assuming $\mu(t) = \mu(t_0) + \delta\mu(t)$, and taking into account that $\beta$ is rather constant, since it is related to the laser transition, $\delta\mu(t)$ can be evaluated to first approximation as follows:
\begin{eqnarray}
\bar{\beta}&=&\frac{\beta(t)}{\mu} \nonumber\\
\delta\mu(t) &=&\mu(t_0)\times\left(\frac{\bar{\beta}(t_0)}{\bar{\beta}(t)}-1\right)\;.
\end{eqnarray}
In this way, $\mu(t_0)$, the loss at the time $t_0$, has to be determined by statistical means, with $t_0$ arbitrarily chosen: in the present analysis $t_0$ is the central point of the data set.
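This estimate can be sketched as follows; the $PH_1$ values are illustrative, and only the average loss is taken from Table \ref{tab:Gparameters}:

```python
# Sketch of the delta-mu estimate from the PH1 monobeam DC level.  From the
# beta(PH1) relation above, beta_bar = beta/mu = 2.98772/(2.98772 - PH1), so
#   delta_mu(t) = mu(t0) * (beta_bar(t0)/beta_bar(t) - 1).
# The PH1 values below are illustrative.
def beta_bar(PH1):
    return 2.98772 / (2.98772 - PH1)

def delta_mu(PH1_t, PH1_t0, mu_t0):
    return mu_t0 * (beta_bar(PH1_t0) / beta_bar(PH1_t) - 1.0)

mu_t0 = 6.51280e-5                   # average loss <mu> from the table
print(delta_mu(0.52, 0.50, mu_t0))   # loss change implied by a small PH1 drift
```

Since $\beta$ is tied to the laser transition and therefore nearly constant, an increase of $PH_1$ implies a decrease of the loss, which is why the sketch returns a negative $\delta\mu$ for a positive $PH_1$ drift.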
The Lamb parameters can be considered constant over short time intervals; therefore, the stationary mathematical relationships remain valid over such intervals. Accordingly, the required analytical relationships are evaluated at a high frequency rate and decimated afterwards down to the desired low rate; details can be found in the related literature \cite{DiVirgilio2019,DiVirgilio2020}.
The second step of the analysis determines $\omega_s$ with a linear regression to optimize the subtraction of the $\omega_\xi$ terms, $\omega_{ns1}$ and $\omega_{ns1}\times\delta\mu$.
In this way, $\omega_s$ is recovered, but it is still necessary to identify and subtract other known signals. The G RLG is well isolated, but it is in any case affected by global and local motion of the Earth crust, such as the diurnal polar motion and the solid Earth tides. In the present analysis the direct effect of the environmental disturbances on the mechanical apparatus is considered negligible, owing to the monolithic and stable ZERODUR\footnote{ZERODUR is a glass with very low thermal expansion coefficient, that can be considered practically negligible at room temperature.} structure. It is assumed that all external effects, hereafter denoted geodetic components, can be described by sufficiently accurate models. The geodetic components are well known, but they are added to the linear regression along with the other terms in order to avoid biases induced by previous analyses. The terms relevant to describe the main geodetic components are taken from model data provided by the G group. Furthermore, it is also possible to take into account the angular rotation around the vertical in the local reference frame at the G latitude and longitude; these data are based on the publications of the International Earth Rotation and Reference Systems Service (IERS).\footnote{\texttt{https://hpiers.obspm.fr/eop-pc/index.php?index=C04\&lang=en}.} We denote the relevant terms as $FGEO$. It is important to remark that $FGEO$ contains the Chandler and the annual wobble, whose values are published with several days of delay. For this reason these signals are not included in the near-realtime data files of G. Finally, we have verified that the calibration and alignment of G are accurate at the level of $0.3\%$.
\subsection{Details of data analysis}\label{details}
The whole analysis is based on linear regression (LR) \cite{Kay,Neter,Sen,direnzo}. Only terms with p-values below $0.4$ are kept in the linear regression, carried out by using the MATLAB routine \texttt{fitlm}, with the option \texttt{RobustOpt} on. It has been checked that the final p-values are always below $0.2$.
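As an illustration of the term-selection step, the following sketch applies the $0.4$ p-value threshold with plain ordinary least squares and normal-approximation p-values on synthetic data; this is a simplification of the robust \texttt{fitlm} fit actually used:

```python
import math
import numpy as np

# Sketch of the p-value screening used in the LR step (the paper uses MATLAB's
# fitlm with robust fitting; this plain-OLS version with normal-approximation
# p-values on synthetic data is a simplification).
rng = np.random.default_rng(0)
n = 2000
x1 = rng.standard_normal(n)        # stand-in explanatory term (e.g. a laser term)
x2 = rng.standard_normal(n)        # a term with no real contribution
y = 2.0 * x1 + 0.1 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x1, x2])
coef = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ coef
sigma2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t_stat = coef / se
# Two-sided p-value under a normal approximation of the t statistic
p = np.array([math.erfc(abs(t) / math.sqrt(2)) for t in t_stat])

kept = [i for i in range(X.shape[1]) if p[i] < 0.4]  # retention threshold from the text
print("p-values:", p, "kept columns:", kept)
```

The genuinely contributing term is retained with a p-value far below the threshold, while terms that carry no information are screened out on subsequent passes.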
The reconstruction of the AC and DC signals is based on the Hilbert transform: data are band-pass filtered around the beat note (in a range of $\pm12$ Hz around $\omega_m$) and Blackman windowing is implemented. This part of the analysis has been validated by processing a known ideal sinusoidal signal, without added noise. It has been checked that the Hilbert transform routine recovers the expected frequency within 7 nHz before decimation, and within 5 nHz after decimation down to the rate of $0.016667$ Hz (60 s measurement time).
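This validation can be reproduced with a noiseless test tone; the following sketch builds the analytic signal via FFT as a stand-in for the band-pass plus Hilbert-transform routine of the analysis (tone frequency and record length are illustrative):

```python
import numpy as np

# Recover the frequency of a noiseless sinusoid from its analytic signal.
# The analytic signal is built via FFT (a stand-in for the Hilbert-transform
# routine); instantaneous frequency comes from the unwrapped phase.
fs, f0, n = 2000.0, 348.0, 20000     # 10 s at 2 kHz, integer number of cycles
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

X = np.fft.fft(x)
h = np.zeros(n)
h[0], h[n // 2] = 1.0, 1.0
h[1:n // 2] = 2.0                    # keep (and double) positive frequencies only
xa = np.fft.ifft(X * h)              # analytic signal

phase = np.unwrap(np.angle(xa))
f_inst = np.diff(phase) * fs / (2 * np.pi)
f_rec = f_inst[n // 4: 3 * n // 4].mean()   # trim edges before averaging
print(f_rec)
```

For an ideal tone the recovered frequency matches the input to numerical precision, consistent with the sub-$\mu$Hz agreement quoted in the text for the actual routine.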
Relevant terms in the analysis are evaluated at 2 kHz, the rate of the available data, and decimated down to $0.016667$ Hz rate, taking into account the information provided by the G group.
\begin{table}[]
\centering
\begin{tabular}{l|l}
Gas pressure & 10 mbar\\
Area of the beam at waist & $0.51\times0.74\times 10^{-6}$ m$^2$\\
Mirror transmission & 0.2 ppm\\
Perimeter of the cavity & 16 m\\
Kinetic discharge temperature & 360 K\\
Photodiode quantum efficiency & 0.5\\
Trans-impedance amplifier gain & $1.1\times10^{8}$ V/A\\
Beat note mean value & 348.516 Hz\\
Average loss $\langle\mu\rangle$ & $6.51280\times 10^{-5}$\\
Scale factor & $6.3211125158\times 10^6$ Hz s/rad
\end{tabular}
\caption{Parameters of the G RLG setup.}
\label{tab:Gparameters}
\end{table}
A set of explanatory variables is evaluated: the laser explanatory variables $\omega_\xi$, $\omega_{ns1}$, $\omega_{ns1} \times \delta\mu$, and the available geodetic terms.
The LR procedure has been repeated using different schemes. In the first scheme the LR uses the whole set of terms all at once. In the second scheme the LR has been applied in two steps: the first step evaluates $\omega_s$ using only the laser terms, leading to an initial evaluation of the laser corrections.
In the second step the geodetic terms are used as explanatory variables. When available, $FGEO$ is added as a known signal and used as a check. The different schemes always produce very similar results; in the following, the first scheme is used, since it is the more conservative one.
Care has been taken to avoid local minima in the regression, which sometimes occurred.
\subsection{The data}
The two data sets each contain 3 full days stored at 2 kHz rate: days 99, 100, and 101 of 2020, and days 121, 122, and 123 of 2022. The two data sets are different, and the 2020 one presents more short-duration glitches than the other.
The G data, all the related analysis steps, the employed geophysical models, and environmental monitors are stored on a daily basis in a file at 0.016667 Hz rate. Details of the data associated with each column of the published files can be found in the Appendix; in the following, the column numbers will be used in the analysis description. Column 34 provides the measured Earth rotation with the estimated backscatter correction applied and the theoretical diurnal polar motion and solid Earth tides model subtracted. Since the absolute orientation of the G structure is not known well enough, these latter corrections have to be considered as preliminary. Several tiltmeters are used in order to establish the long term changes of the sensor orientation. These in turn have to be corrected for temperature-related effects and for mass attraction derived from a global weather model \cite{kluegel}. In particular the latter comes with several days of delay, hence it cannot be corrected in realtime.
The initial value of the ring laser orientation is taken from a local survey, which provides values with a substantial error, much larger than 5 ppm. Typical values are 49.144802 degrees for the latitude and 12.87630 degrees for the longitude.
While the analysis is based on the methods developed for GINGERINO, there are several differences in the experimental readout system. In GINGERINO the interferogram is the beat note taken at one corner and the monobeams are both measured at the same neighboring corner. For G, the available interferogram is the sum of the two beams at the beam combiner, and the monobeams are each measured at a different corner in order to avoid perturbations from back reflections. In this case the $\omega_\xi$ terms are effective in correcting differences in the mirror transmission, and other differences due to the measurement at different corners.
\section{Analysis results}
The beat note $\omega_m$ is evaluated with our method and compared with the one recorded in the published file, contained in column 3.
There are small differences in the standard deviation, which in the present analysis turns out to be smaller by $3$--$4$ $\mu$Hz, probably due to the band-pass filter around the beat note applied before the frequency reconstruction. This is relevant to reduce the impact of the laser systematic terms $\omega_{K1}$ and $\omega_{K2}$ \cite{DiVirgilio2019, DiVirgilio2020}, first and second order expansion, since the second harmonic contribution, which is very difficult to model and subtract, is effectively eliminated in this way.
Remarkably, in the present data sets irregular signals occur at the transition between one day and the next. In fact, to keep the number of data points per day constant, some points are missing at the end of each daily file, a standard feature of the MiniSeed data retrieval. In the analysis the missing points are replaced with zeroes.
\subsection{Days 99-101, 2020}
We have analysed days 99, 100 and 101 starting from raw data. Published files, each containing one day of data, are extracted from a continuous logging sequence in the MiniSeed data format; to prevent loss of precision, a set of data in a MATLAB-readable format has been used. Analogously to the procedure used for GINGERINO, data are band-passed around $\omega_m = 348$ Hz, in a $\pm 12$ Hz interval. The relevant parameters (AC and DC of the monobeams, relative phase of the modes $\epsilon$, and the beat note $\omega_m$) are evaluated at 2 kHz rate. The different terms of the analysis, $\omega_{s0}$, $\omega_{\xi}$, and $\omega_{ns1}$, are evaluated using the available information summarized in Table \ref{tab:Gparameters}, including the photodiode gain, the parameters of the G RLG (size and gas pressure), and the measured average loss $\langle\mu\rangle$. Six $\omega_\xi$ terms are considered, to take into account errors in the evaluation of $\epsilon$, $I_{S1,2}$, and $PH_{1,2}$ of the monobeams.
The first step of the analysis evaluates $\omega_{s0}$ using Eq.~\ref{approx}. Remarkably, the $\omega_{s0}$ values remain rather stable within each single day, but small discontinuities are evident between one day and the next, because some points around midnight are missing.
For this reason data around midnight have been eliminated from the analysis: no other cuts have been applied to the data.
Figure \ref{fig:corr1} shows $\omega_s$, with the mean value subtracted, evaluated by the LR procedure for the three days (red solid line); the discontinuities between different days are no longer evident. $\omega_m$, with the mean value subtracted, is also shown (blue solid line): small differences from $\omega_s$ are visible.
The top panel of Fig.~\ref{fig:corr2} reports the $\omega_{ns1}$ corrections evaluated by the LR procedure. The total geodetic components are plotted in the bottom panel, as evaluated by the LR procedure without including Chandler wobble data.
\subsection{Days 121-123, 2022}
A second, more recent, set of three days (days 121, 122, and 123 of 2022) has also been analysed. In this second data set the G RLG was operating under ideal conditions and at roughly 20\% lower beam power. The signals are cleaner, although small discontinuities between different days are still present; fewer points had to be eliminated around midnight to cure the problem. $\omega_s$ has been evaluated analogously to the first data set. The data indicate that the signal is more stable compared with the previous set; accordingly, the null shift terms have little impact, but they remain meaningful for the Allan deviation analysis. Figure \ref{fig:422} and Fig.~\ref{fig:corr1} are very similar to each other.
\section{Residuals and Allan deviation of the two data sets}
The two data sets exhibit very similar results; however, more points have been eliminated in the 2020 case, probably because of the higher beam power setting. The amount of data retained for the analysis is $93.4\%$ of the total for the 2020 data set and $95.8\%$ for the 2022 one. All cuts had to be applied around midnight. Figure \ref{fig:res} shows the distributions of the residuals for both sets; they are very similar to each other. This indicates that the noise from the laser dynamics has been effectively identified and subtracted by the procedure in both cases.
Figure \ref{fig:Allan1} shows the Allan\footnote{The function allan of Matlab has been used.} deviation of the residuals for both data sets, sampled at 0.016667 Hz. The green solid lines display results obtained by applying the LR procedure at once on the whole three days data sets. The procedure has been also applied to each single day separately: orange solid lines report the corresponding results (day 99, year 2020, and day 121, year 2022, are considered as examples). Remarkably, plots cross the 1 part in $10^9$ threshold in less than one day of integration time, albeit with a larger standard deviation.
The two panels of Fig.~\ref{fig:Allan1} show the Allan deviation of the residuals for the two data sets. Curves obtained in the present analysis are always below those resulting from the standard one, with the exception of one of the days, around $\tau \simeq 8\times 10^3$ s. When the $FGEO$ contributions are taken into account, the present analysis leads to an even smaller Allan deviation.
At the present stage we do not investigate the nature of the residuals,
since the focus is here to investigate whether the Allan deviation goes below the fundamental physics threshold, i.e., below about 1 part in $10^9$, while keeping the analysis as simple as possible. Moreover, owing to its monolithic structure, G is certainly less prone to those mechanical coupling effects accounted for, in GINGERINO analysis, by the extra term based on the product of the residuals and the tiltmeter signal.
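For white residuals the Allan deviation is expected to decrease as $\tau^{-1/2}$. The following sketch illustrates this scaling on synthetic residuals, using a simplified non-overlapping estimator (not the MATLAB \texttt{allan} routine used for the figures):

```python
import numpy as np

# Simplified non-overlapping Allan deviation applied to synthetic white
# residuals of unit variance, illustrating the tau^(-1/2) decrease that lets
# the averaged measurement integrate below a fixed threshold.
def adev(x, m):
    """Allan deviation at averaging factor m (tau = m / sampling rate)."""
    means = x[: len(x) // m * m].reshape(-1, m).mean(axis=1)
    return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)   # synthetic white residuals, unit variance
a1, a100 = adev(x, 1), adev(x, 100)
print(a1, a100, a1 / a100)         # ratio close to sqrt(100) = 10
```

The factor-of-ten drop between $\tau$ and $100\tau$ is the white-noise signature; any flattening of the measured curves would instead indicate residual systematics.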
As a check, the geophysical components have been subtracted from $\omega_m$ using the LR, without taking into account the laser terms $\omega_\xi$ and $\omega_{ns1}$. With the 2022 data set, definitely less affected by laser dynamics, the resulting Allan deviation is a factor of 2 worse at $10^4$ seconds, as shown by the dashed red line superimposed on Fig.~\ref{fig:Allan1} RIGHT.
With the 2020 data set the result is even worse, indicating that, although G is based on an extremely rigid optical cavity, it is necessary to identify and subtract the laser systematic terms in order to reach, and go beyond, the 1 part in $10^9$ precision level in the Earth rotation rate measurement.
\section{Conclusions}
The analysis based on the model developed for GINGERINO has been extended to the G RLG, rewriting the mathematical relationships with the main aim of accounting for the characteristic values of the G apparatus.
In this way, corrections to the laser systematics have been subtracted in a deterministic way, following the model based on the stationary solution of the RLG equations, with the loss $\mu$ of the optical cavity determined by statistical means. The signals of geodetic origin have always been subtracted assuming linear relationships, minimizing the residuals via linear regression.
For the full set of three days of 2020, the Allan deviation drops below one part in $10^9$ in less than 1 day of integration time.
The data points close to midnight are eliminated to overcome a problem in the data retrieval subroutine; the separate analysis of each single day exhibits Allan deviations going below 1 part in $10^9$ even more rapidly.
The analysis has been repeated with a second, more recent set of three days of 2022 (days 121--123), where the ring laser was operated at lower power. In this case the monobeam signals are cleaner and the cavity appears to be more stable; accordingly, the effects of $\omega_{ns1}$ are smaller, but they remain significant for reaching the lower Allan deviation.
G is based on a monolithic structure in ZERODUR, a material with very low thermal expansion, with mirrors optically contacted to the structure. For this reason the cavity is extremely stable; nevertheless, tiny laser systematic effects are present, and it is necessary to subtract them in order to improve the performance beyond the 1 part in $10^9$ level. Laser systematics cancellation is certainly even more relevant for RLGs based on a HL design, such as GINGERINO and ROMY, since in that case the cavity losses, the quantity effectively ruling the laser systematics, are affected by variations of the mirror distances and of their relative alignment.
\appendix
\section{The G daily file with data and results}
On a daily basis, G data sampled at 0.0166667 Hz rate and relevant results are published in a file.%
Contents of each column are listed in Table \ref{tab:column}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
\hline
Column & Quantity & Units\\
\hline
\hline
1& Epoch& [MJD]\\
\hline
2& Epoch& [day]\\
\hline
3& Sagnac (single tone extractor)& [Hz]\\
\hline
4&Sagnac rms & [Hz] \\
\hline
5& Phase between SW-port and Sagnac beam combiner port& [rad]\\
\hline
6& Phase between SE-port and Sagnac beam combiner port& [rad]\\
\hline
7& SW-AC &[V]\\
\hline
8&SW-DC& [V]\\
\hline
9& SW-AC/DC & -\\
\hline
10& SE-AC& [V]\\
\hline
11& SE-DC& [V]\\
\hline
12& SE-AC/DC&-\\
\hline
13& SW-SE-Phase& [rad]\\
\hline
14& estimated Sagnac correction factor&-\\
\hline
15& estimated Sagnac correction value& [mHz]\\
\hline
16& Sagnac BS-corrected& [Hz]\\
\hline
17& SE-DDS-Amplitude AC-level for driving LED& [V]\\
\hline
18& SW-DDS-Amplitude AC-level for driving LED& [V]\\
\hline
19& SW-Phase to DDS driving LED& [rad]\\
\hline
20& SE-Phase to DDS driving LED& [rad]\\
\hline
21&Sagnac geophys. model contribution subtracted& [Hz]\\
& (Oppolzer-Terms $\&$ Tilt-NS deformation effect)& \\
\hline
22& Sagnac of SE-monobeam estimation (Single Tone extractor)& [Hz]\\
\hline
23& Sagnac of SW-monobeam estimation (Single Tone extractor)& [Hz]\\
\hline
24& Geophysical Model: Oppolzer terms NS& [rad]\\
\hline
25& Geophysical Model: Oppolzer terms EW& [rad]\\
\hline
26& Geophysical Model: Theoretical tilt NS (attraction part)& [rad]\\
\hline
27& Geophysical Model: Theoretical tilt NS (deformation part)& [rad]\\
\hline
28& Observed tilt NS& [rad]\\
\hline
29& Geophysical Model: Theoretical tilt EW (attraction part)& [rad]\\
\hline
30& Geophysical Model: Theoretical tilt EW (deformation part)& [rad]\\
\hline
31& Observed tilt EW& [rad]\\
\hline
32& Geophysical Model: Sagnac geophys. model contribution& [$\mu$Hz]\\
& (Oppolzer-Terms $\&$ Tilt-NS deformation effect)& \\
\hline
33& Sagnac geophys. model reduced & [Hz]\\
&(Oppolzer-Terms $\&$ Tilt-NS deformation effect)& \\
\hline
34& Sagnac geophys. model and BS applied& [Hz]\\
\hline
35&Pressure vessel barometric pressure & [hPa]\\
&(Paroscientific at pressure vessel supply)& \\
\hline
36& Pressure vessel temperature& [$^o$C]\\
\hline
37& Pressure vessel humidity& [$\%$rH]\\
\hline
38& Ringlaser room barometric pressure & [hPa]\\
&(Vaisala outside pressure vessel)& \\
\hline
39& Ringlaser room temperature& [$^o$C]\\
\hline
40& Ringlaser room humidity& [$\%$rH]\\
\hline
41& Control room temperature& [$^o$C]\\
\hline
42& Control room humidity& [$\%$rH]\\
\hline
43& Plasma brightness& [V]\\
\hline
\hline
\end{tabular}
\caption{Content of columns in the G data and results file.}
\label{tab:column}
\end{table}
|
Title:
Scaling of electron heating by magnetization during reconnection and applications to dipolarization fronts and super-hot solar flares |
Abstract: Electron ring velocity space distributions have previously been seen in
numerical simulations of magnetic reconnection exhausts and have been suggested
to be caused by the magnetization of the electron outflow jet by the compressed
reconnected magnetic fields [Shuster et al., ${\it Geophys.~Res.~Lett.}, {\bf
41}$, 5389 (2014)]. We present a theory of the dependence of the major and
minor radii of the ring distributions solely in terms of upstream (lobe) plasma
conditions, thereby allowing a prediction of the associated temperature and
temperature anisotropy of the rings in terms of upstream parameters. We test
the validity of the prediction using 2.5-dimensional particle-in-cell (PIC)
simulations with varying upstream plasma density and temperature, finding
excellent agreement between the predicted and simulated values. We confirm the
Shuster et al. suggestion for the cause of the ring distributions, and also
find that the ring distributions are located in a region marked by a plateau,
or shoulder, in the reconnected magnetic field profile. The predictions of the
temperature are consistent with observed electron temperatures in
dipolarization fronts, and may provide an explanation for the generation of
plasma with temperatures in the 10s of MK in super-hot solar flares. A possible
extension of the model to dayside reconnection is discussed. Since ring
distributions are known to excite whistler waves, the present results should be
useful for quantifying the generation of whistler waves in reconnection
exhausts.
| https://export.arxiv.org/pdf/2208.00559 |
\title{Scaling of electron heating by magnetization during reconnection and applications to dipolarization fronts and super-hot solar flares}
\authors{M. Hasan Barbhuiya\affil{1}, P. A. Cassak\affil{1}, M. A. Shay\affil{2}, Vadim Roytershteyn\affil{3}, M. Swisdak\affil{4}, Amir Caspi\affil{5}, Andrei Runov\affil{6}, Haoming Liang\affil{7}}
\affiliation{1}{Department of Physics and Astronomy and the Center for KINETIC Plasma Physics, West Virginia University, WV 26506, USA}
\affiliation{2}{Department of Physics and Astronomy and the Bartol Research Center, University of Delaware, Newark, DE 19716, USA}
\affiliation{3}{Space Science Institute, Boulder, CO 80301, USA}
\affiliation{4}{Institute for Research in Electronics and Applied Physics, University of Maryland, College Park, MD 20742, USA}
\affiliation{5}{Southwest Research Institute, Boulder, CO 80302, USA}
\affiliation{6}{Department of Earth and Space Sciences, University of California Los Angeles, CA 90095, USA}
\affiliation{7}{Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, Huntsville, AL 35899, USA}
\correspondingauthor{M. Hasan Barbhuiya}{[email protected]}
\begin{keypoints}
\item We predict major and minor radii of ring distributions during reconnection in terms of upstream parameters and confirm with PIC simulations
\item We find that ring distributions occur at a shoulder (plateau) in the reconnected magnetic field in the simulations
\item The predicted temperatures are comparable to observed values in dipolarization fronts in Earth's magnetotail and in super-hot solar flares
\end{keypoints}
\section*{Plain Language Summary}
Solar flares and geomagnetic substorms are naturally occurring eruptions in space that can impact humans on Earth due to space weather. Both are caused by magnetic reconnection, during which magnetic field lines break and release energy into the surrounding ionized gas (plasma). From past research, we know that electrons near the reconnection site get magnetized in the strong magnetic fields that have already undergone reconnection, leading to a characteristic ring distribution of their velocities where all particles have similar speed in the plane perpendicular to the magnetic field. We predict the speed of the particles in terms of the ambient properties of the easily measured surrounding plasma, and we confirm the prediction with numerical simulations. We show that the rings are located in a region where there is a leveling off of the magnetic field strength, which is a signature that can be used to identify ring distributions in future satellite measurements. We then use the result to predict temperatures in geomagnetic substorms and solar flares, finding that there is reasonable agreement. This suggests that we can understand the observed temperatures in terms of the ambient plasma properties, which will make it easier to predict these temperatures going forward.
\section{Introduction}
\label{sec:intro}
Energy conversion by magnetic reconnection, and its aftereffects, are of significant importance in numerous magnetospheric and solar processes \cite{Birn07,Gonzalez16}. Two examples are solar flares, which are energetic eruptions in the solar corona caused by reconnection \cite{Priest02}, and geomagnetic storms and substorms, during which energy from the interplanetary magnetic field gets stored and released via reconnection in Earth's magnetotail
\cite{Angelopoulos08}. Some of the magnetic energy released during reconnection appears as bulk flow energy of a %
plasma jet. In Earth's magnetotail, the energy in the jet is
ultimately injected into the inner magnetosphere where it can greatly impact magnetospheric dynamics and has important space weather implications \cite{McPherron79,Pulkkinen07}. Analogous dynamics takes place in magnetospheres of other planets \cite{Smith18,Xu21} and in sunward jets that occur during solar flares \cite{Reeves08}.
In Earth's magnetotail, the reconnected magnetic field on the Earthward side of the reconnection site dipolarizes as it releases its stored energy \cite{Fu20}. The Earthward reconnection jet impinges on the pre-existing and relatively dense plasma sheet, which acts as an obstacle to the jet \cite{hesse&birn_1991_JGR}. The jet's kinetic energy compresses the reconnecting magnetic field, producing a dipolarization front (DF) \cite{ohtani_2004_JGR,runov_2009_GRL,sitnov_JGR_2009,runov_2010_Planet_Sci,Runov11,sitnov:2011,Hwang11,Schmid11,runov_2013_JGR,Fu12,Fu13,sitnov:2013}.
\textcolor{black}{(It has been argued that a more appropriate name for DFs is ``reconnection jet fronts,'' but we retain the name dipolarization fronts to conform to the majority of the literature.)}
Characteristic properties of DFs at the Earthward jet include a steep increase in the magnetic field component $B_z$ normal to the plasma sheet and a steep decrease in plasma density as one goes in the tailward direction. Here, we use Geocentric Solar Magnetospheric (GSM) coordinates, for which $x$ is Sunward, $y$ is the duskward direction normal to $x$ and Earth's magnetic dipole, and $z$ completes the right-handed coordinate system in the northward direction. DFs have been seen in the mid-tail plasma sheets associated with bursty bulk flows (BBFs) \cite{angelopoulos_1992_JGR}. Energy in the compressed magnetic field in DFs has been observed to convert into particle kinetic energy \cite{angelopoulos_2013_Sci} and particle heating \cite{runov_2015_JGR} while the DFs move Earthward. %
One of the many consequences of DFs, and the focus of this study, is that electrons are significantly heated near the fronts. %
An electron temperature $T_e$ close to 1.8 keV was observed in a DF event by Time History of Events and Macroscale Interactions during Substorms (THEMIS), a factor of $\sim$3 higher than the electron temperature before the spacecraft crossed the DF, with a small perpendicular temperature anisotropy $T_{e,\perp} > T_{e,\|}$, where $\perp$ and $||$ denote the directions perpendicular and parallel to the local magnetic field $\vec{B}$ \cite{runov_2010_Planet_Sci}. Later observations revealed electron temperatures in the DFs in the range of 1--4 keV \cite{runov_2015_JGR}. Observational studies \cite{Fu11,Pan12,ashour-abdalla_observations_2011,Liu19} attributed such heating to adiabatic processes such as Fermi and betatron acceleration. Moreover, observations of electron velocity distribution functions in DFs reveal various non-isotropic electron pitch-angle distributions (PADs) \cite{Wu06,Fu12b,Tang21}. So-called pancake PADs have a perpendicular temperature anisotropy
\cite{Wu13}. They were attributed to betatron acceleration in the compressed magnetic field of the DF
\cite{Xu18}. Also observed are so-called rolling pin PADs, which are a combination of a cigar PAD (with particles moving parallel and antiparallel to the local magnetic field, generated by Fermi acceleration in the bent magnetic field \cite{Wang14}) and a pancake PAD \cite{liu_explaining_2017}. Analytical theory suggests particle distributions with a perpendicular temperature anisotropy are unstable to wave generation, including whistler waves \cite{Gary85}. Whistler waves have been detected near DFs using satellite observations and cause non-adiabatic electron heating through wave-particle interactions \cite{LeContel09,Deng10,Viberg14,Li15,Yoo19}. A later observational study \cite{Grigorenko20} revealed that whistler waves heat electrons to
1--5 keV
in rolling pin PADs.
Electron dynamics in DFs have also been studied extensively in %
numerical simulations. Motivated by observations, particle-in-cell (PIC) simulations have been used to study two broad classes of DFs: (i) flux rope (FR) type DFs with multiple X-lines, and (ii) flux bundle (FB) type DFs with a single transient X-line \cite{divin:2007,sitnov_JGR_2009, lu_2016_JGR}.
The energization mechanism for electrons in FR-type DFs was found to be repeated reflections within the double-peaked $B_z$ structure present when there are two X-lines, while in FB-type DFs it is betatron acceleration caused by the compressed $B_z$ \cite{Birn13,lu_2016_JGR}.
A strong electron temperature anisotropy with $T_{e,\perp} > T_{e,||}$ appears in the magnetic flux pile-up region of FR-type DFs in PIC simulations, and this anisotropy was shown to generate whistler waves \cite{fujimoto_2008_whistler}. Electron velocity distribution functions in the electron diffusion region (EDR) and the downstream region were systematically investigated using PIC simulations \cite{Shuster2014,Bessho2014}. It was shown that the perpendicular temperature anisotropy is associated with electron ring distributions, {\it i.e.,} distributions that are toroidal in velocity space. These studies suggested the ring distributions form when electron outflow jets from reconnection get remagnetized by the stronger normal magnetic field $B_z$ in the DF.
In subsequent studies \cite{shuster_2015, Wang_2016_electron}, it was argued that this magnetization by the reconnected magnetic field heats the electrons downstream of the EDR.
In another PIC simulation study \cite{egedal_2016_PoP}, electron ring distributions were found to grow
in size when moving
downstream from the X-line as a result of betatron heating.
Recent PIC simulations \cite{huang_formation_2021} suggest that as the DF moves downstream,
first pancake PADs appear (as a result of betatron acceleration), followed by rolling pin PADs (when particles
undergo Fermi reflections along with betatron acceleration), and culminating in cigar PADs (when Fermi acceleration becomes the dominant heating mechanism). Thus, electron ring distribution functions are associated with elevated temperatures, wave generation, and subsequent heating via wave-particle interactions in the region of DFs in Earth's magnetotail.
In the solar corona, reconnection during solar flares produces sunward jets (``reconnection outflows'') that have some similarities to DFs \cite{Reeves08}. These jets are associated with both particle acceleration and plasma heating. Solar flares routinely exhibit temperatures of $\sim$10--25~MK ($\sim$0.9--2.2~keV), generally thought to result from collisional energy transfer by particles accelerated to tens or hundreds of keV in or near the reconnection region impacting the dense chromosphere and heating the ambient plasma, whereupon it expands to fill the newly-reconnected flare loop in a process
called chromospheric evaporation \cite{Holman11}. However, a growing body of evidence suggests that the hottest plasmas in the flare thermal distribution are heated directly in the corona \cite{Fletcher11,Cheung19}. While this likely occurs to some extent in flares of all intensities \cite{Warmuth16}, it appears most pronounced for so-called ``super-hot'' flares, where peak temperatures exceed 30~MK ($\sim$2.6~keV), significantly hotter than the component heated by chromospheric evaporation. Spectroscopic imaging analyses show that the super-hot plasma appears earlier and higher in the flare loop/arcade than the evaporative component \cite{Caspi10, Caspi15}. The densities of the super-hot component are $\sim$10 times smaller than the evaporative component, but $\sim$10 times larger than the background coronal plasma \cite{Caspi10}, suggestive of significant plasma compression. Such super-hot temperatures also appear to be associated exclusively with strong coronal magnetic fields exceeding 100~G \cite{Caspi14} and have a quasi-impulsive time profile, suggesting the mechanism for the heating of the super-hot plasma is directly connected to the magnetic reconnection process itself \cite{Caspi10}. Many super-hot plasma heating mechanisms have been suggested, including Ohmic pre-heating followed by Fermi and betatron acceleration from collapsing magnetic traps \cite{Caspi10b}, gas dynamic shock heating from relaxation of the reconnected magnetic loop \cite{Longcope11, Longcope16}, Fokker-Planck collisions \cite{Allred20}, and others [\cite{Warmuth16} and references therein], but there is not yet a widely-accepted model.
We are not aware of any studies which give a first-principles prediction of the temperatures of the hot electrons downstream of reconnection exhausts as a function of the upstream plasma conditions, \textit{i.e.}, the upstream (lobe) magnetic field, electron temperature and density. Such a prediction requires an understanding of the processes causing the complex electron distribution functions in reconnection exhausts.
In this study, for reasons justified in what follows, we focus on %
electron ring distributions in the region of the dipolarization front. Our starting point is the suggestion \cite{Shuster2014,Bessho2014} that electron ring distributions are formed by the remagnetization of electron jets from reconnection. We quantitatively predict the major and minor radii of the ring distributions solely in terms of plasma parameters in the region upstream of the reconnecting region. In particular, if the ring distributions are formed by the magnetization of electron jets, the major radius is governed by the electron Alfv\'en speed of the electron outflow jet, and the minor radius is governed by the electron thermal speed. To test the predictions, we perform a parametric study using two-dimensional (2D) PIC simulations in which the upstream density and upstream temperature are independently varied. We find ring distributions appear in all ten simulations we perform, and the major and minor radii depend on the upstream plasma parameters in the predicted manner. We further show that the associated electron temperature and temperature anisotropy largely scale according to analytical predictions of the major and minor radii, with the perpendicular temperature in excellent agreement and the parallel temperature being more complicated because there are counterpropagating electron beams along the magnetic field that are not incorporated in the present model. We find the electron ring distributions are associated with the highest electron temperature observed in the simulations, justifying their systematic study here. We confirm that the location at which electron ring distributions appear is associated with the location where the radius of curvature of the magnetic field exceeds the gyroradius based on the bulk flow speed, validating the suggestion by \citeA{Shuster2014} and \citeA{Bessho2014} that the ring distributions form as a result of remagnetization of the electrons. 
We also show the ring distributions are suppressed by the presence of a background guide field, as is expected if they are caused by remagnetization. Moreover, we show that electron ring distributions consistently appear where there is a plateau, or shoulder, in the profile of the normal magnetic field $B_z$ downstream of the reconnection exhaust, which may be a useful signature for future observational studies. Finally, we show that the electron temperatures predicted from the theory are comparable to observed temperatures when applied to dipolarization fronts in Earth's magnetotail and super-hot solar flares in the solar corona.
This manuscript is organized as follows. Section~\ref{sec:theo} relates the major and minor radii of the ring distributions to upstream (lobe) plasma parameters and provides the associated analytical expressions of the temperature of ring distributions. Section~\ref{sec:sims} describes the PIC simulations used in the study. Section~\ref{sec:results} shows the simulation results, revealing ring distributions in all the simulations. Their major and minor radii are extracted and compared to the theory. The location of the ring distributions is related to features in the temperature and magnetic field profiles, and we confirm the rings are caused by remagnetization of the electron outflow jet.
We discuss applications to dipolarization fronts and super-hot solar flares in Section~\ref{sec:discussions}. We also discuss extending the theory to asymmetric reconnection for dayside magnetopause applications, and discuss implications for direct {\it in situ} observations of ring distributions. The manuscript concludes with Section~\ref{sec:conclusions}, where the key findings and limitations of our study are gathered, and future work is discussed.
\section{Theory}
\label{sec:theo}
We aim to relate the major and minor radii of ring distributions to macroscopic upstream properties of the reconnection process, {\it i.e.,} number density, temperature and magnetic field.
One form of an ideal ring velocity distribution function $f_{r}(v_\perp,v_{\|})$ is \cite{wu_1989_a,min&liu_2016a}
\begin{linenomath*}
\begin{equation}
f_{r}\left(v_{\perp}, v_{\|}\right)=\frac{n_{r}}{\pi^{3 / 2} v_{Th}^{3} \Lambda} e^{-\frac{v_{\|}^{2}}{v_{Th}^{2}}} e^{\frac{-\left(v_{\perp}-v_{\perp 0}\right)^{2}}{v_{Th}^{2}}},
\label{eq:ringVDF}
\end{equation}
\end{linenomath*}
where $n_r$ is the number density, $v_{\|}$ and $v_\perp$ are the velocity space coordinates parallel and perpendicular to the central axis of the ring distribution, $v_{\perp0}$ is the major radius of the ring distribution, %
and $v_{Th}$ is the minor radius of the ring distribution, assumed to be Gaussian and isotropic in the parallel and perpendicular directions.
The normalization factor $\Lambda$, defined by $\Lambda=r \sqrt{\pi} \operatorname{erfc}(-r)+e^{-r^{2}}$, enforces that $n_r = \int d^3v f_{r}$; here $r = v_{\perp0}/v_{Th}$ and erfc($-r$) = $(2/\sqrt{\pi})\int_{-r}^\infty e^{-z^2} dz$ is the complementary error function.
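As a numerical aside (not part of the published analysis), the normalization can be checked directly: in units where $n_r = v_{Th} = 1$, integrating Eq.~(\ref{eq:ringVDF}) over velocity space should return the density for any $r$. A minimal sketch of that check:

```python
import numpy as np
from scipy.special import erfc
from scipy.integrate import quad

def Lambda(r):
    # Normalization factor: Lambda = r*sqrt(pi)*erfc(-r) + exp(-r^2)
    return r * np.sqrt(np.pi) * erfc(-r) + np.exp(-r**2)

def ring_density(r):
    """Integrate f_r over velocity space with n_r = v_Th = 1.

    The v_par Gaussian integrates to sqrt(pi) analytically, leaving
    n = 2*pi*sqrt(pi) * Int_0^inf v_perp * exp(-(v_perp - r)^2) dv_perp
        / (pi^{3/2} * Lambda(r)).
    """
    integrand = lambda vperp: vperp * np.exp(-(vperp - r) ** 2)
    I, _ = quad(integrand, 0.0, np.inf)
    return 2.0 * np.pi * np.sqrt(np.pi) * I / (np.pi ** 1.5 * Lambda(r))

densities = [ring_density(r) for r in (0.1, 1.0, 3.0)]
```

Each entry of `densities` comes out equal to 1 to integration accuracy, confirming that $\Lambda$ enforces $n_r = \int d^3v\, f_r$.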
It was previously suggested \cite{Shuster2014,Bessho2014} that electron ring distributions form when the electron jet from reconnection gets magnetized by the strong normal (reconnected) magnetic field occurring as a result of compression at the dipolarization front. In principle, the same effect can happen for ions, but we only see rings in our simulations for electrons so we focus on them here.
We expect the major radius of the ring distribution
$v_{\perp 0}$ to be the electron outflow speed before the beam gets magnetized, which scales as the electron Alfv\'en speed $c_{Aup,e}$ \cite{Shay01,hoshino01a} based on the reconnecting magnetic field strength $B_{up,e}$ at the upstream edge of the EDR,
\begin{linenomath*}
\begin{equation}
v_{\perp 0}=\frac{B_{up, e}}{\sqrt{4 \pi m_{e} n_{up}}},
\label{eq:vperp0upOnly}
\end{equation}
\end{linenomath*}
where $m_e$ is the electron mass and $n_{up}$ is the density upstream of the EDR which is comparable to the density upstream of the ion diffusion region (IDR) and therefore the upstream (lobe) plasma.
For the minor radius $v_{Th}$,
we propose that
it is governed by the thermal speed $v_{Th}$ of the electrons upstream of the reconnection site, {\it i.e.,}
\begin{linenomath*}
\begin{equation}
v_{Th} = \sqrt{\frac{2 k_{B} T_{e,up}}{m_{e}}},
\label{eq:vThupOnly}
\end{equation}
\end{linenomath*}
where $k_B$ is Boltzmann's constant and $T_{e,up}$ is the temperature upstream of the EDR, which is essentially the same as the (lobe) temperature upstream of the IDR at the early times when reconnection that forms a dipolarization front takes place. This effectively assumes that
the increase in temperature that takes place as electrons flow through the EDR or across separatrices as they go into the exhaust
\cite{Shay14} is small.
Using Eqs.~(\ref{eq:vperp0upOnly}) and (\ref{eq:vThupOnly}), we write the parameter $r$ in terms of upstream parameters as
\begin{linenomath*}
\begin{equation}
r = \frac{B_{up,e}}{\sqrt{8\pi n_{up} k_B T_{e,up}}},
\label{eq:rupOnly}
\end{equation}
\end{linenomath*}
\begin{comment}
Here $\beta_{e,UP}$ is the electron plasma beta based on upstream conditions written as
\begin{linenomath*}
\begin{equation}
\beta_{e,UP} = \frac{k_B T_{e,UP} n_{UP}}{B^2_{UP}/8\pi}.
\label{eq:BetaupOnly}
\end{equation}
\end{linenomath*}
\end{comment}
which is related to a form of the upstream electron plasma beta, $\beta_{e,up}$, as $r = \beta_{e,up}^{-1/2}$. Using these expressions, we have the parameters necessary to write Eq.~(\ref{eq:ringVDF}) solely in terms of upstream plasma parameters.
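For concreteness, the sketch below evaluates Eqs.~(\ref{eq:vperp0upOnly})--(\ref{eq:rupOnly}) in Gaussian cgs units and verifies the identity $r = \beta_{e,up}^{-1/2}$; the upstream values of $B_{up,e}$, $n_{up}$, and $T_{e,up}$ are assumed, magnetotail-lobe-like numbers chosen for illustration, not simulation inputs:

```python
import numpy as np

# Physical constants, Gaussian cgs units
q_e = 4.803e-10     # elementary charge [statcoul]
m_e = 9.109e-28     # electron mass [g]
k_B = 1.381e-16     # Boltzmann constant [erg/K]

# Hypothetical upstream (lobe) values, assumed for illustration only
B_up_e = 1.0e-4     # reconnecting field at the EDR edge: 10 nT in G
n_up   = 0.1        # upstream density [cm^-3]
T_e_up = 5.8e6      # upstream electron temperature [K] (~500 eV)

# Ring major radius: electron Alfven speed based on B_up_e
v_perp0 = B_up_e / np.sqrt(4.0 * np.pi * m_e * n_up)
# Ring minor radius: upstream electron thermal speed
v_Th = np.sqrt(2.0 * k_B * T_e_up / m_e)
# r written directly in terms of upstream parameters
r = B_up_e / np.sqrt(8.0 * np.pi * n_up * k_B * T_e_up)

# Upstream electron plasma beta; r should equal beta_e_up**(-1/2)
beta_e_up = 8.0 * np.pi * n_up * k_B * T_e_up / B_up_e**2
```

For these assumed values, $v_{\perp 0} \approx 3\times 10^9$ cm/s, $v_{Th} \approx 1.3\times 10^9$ cm/s, and $r \approx 2.2$.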
The perpendicular and parallel temperatures $T_\perp$ and $T_{||}$ associated with the ring distribution in Eq.~(\ref{eq:ringVDF}) are calculated in the standard way using the second velocity moment of $f_r$, {\it i.e.}, $T_{\perp}=[m /(2n_r k_{B})] \int d^{3} v(\vec{v}_{\perp}-\vec{u}_{\perp})^2 f_{r}$ and $T_{||}=[m /(n_r k_{B})] \int d^{3} v(v_{||}-u_{||})^2 f_{r}$,
where
$\vec{u}_{\perp}$ and $u_{||}$ are the perpendicular and parallel components of the bulk flow velocity calculated from the first velocity moment of the distribution function,
$\vec{u}_\perp = (1/n_r) \int d^3v \vec{v}_\perp f_r$ %
and $u_{\|} = (1/n_r) \int d^3v v_{\|} f_r$. Since both $\vec{u}_\perp$ and $u_{\|}$ are zero for $f_r$ as given in Eq.~(\ref{eq:ringVDF}), the
resulting $T_{\perp}$ and $T_{\parallel}$ are \cite{wu_1989_a}
\begin{linenomath*}
\begin{equation}
T_{\perp}=\mathcal{M}T_{e,up},~T_{\parallel}=T_{e,up}
\label{eq:ringVDFTeperppara},
\end{equation}
\end{linenomath*}
where
\begin{linenomath*}
\begin{eqnarray}
\mathcal{M} & = & \frac{2 e^{-r^{2}}\left(1+r^{2}\right)+\sqrt{\pi} r\left(3+2 r^{2}\right) \operatorname{erfc}(-r)}{2\Lambda} \\
& = & \frac{3}{2} + r^2 - \frac{e^{-r^{2}}}{2\Lambda}.
\label{eq:ringVDFTeperpMdef}
\end{eqnarray}
\end{linenomath*}
A plot of ${\cal M}$ as a function of $r$ is shown in Fig.~\ref{fig:ContourPlotsTeperp}(a).
The effective temperature $T_{{\rm eff}}= (2T_{\perp}+ T_{\|})/3$ is
\begin{linenomath*}
\begin{equation}
T_{{\rm eff}}= T_{e,up} \left(\frac{2 \mathcal{M} + 1}{3} \right).
\label{eq:ringVDFTe}
\end{equation}
\end{linenomath*}
The temperature anisotropy, defined as $A_{\perp} = T_{\perp}/T_{\parallel}-1$, is
\begin{linenomath*}
\begin{equation}
A_{\perp}= \mathcal{M}-1.
\label{eq:ringVDFTeAniso}
\end{equation}
\end{linenomath*}
Thus, Eq.~(\ref{eq:ringVDFTe}) is equivalent to $T_{{\rm eff}}= T_{e,up} (1 + 2 A_\perp/3)$ for this distribution. These expressions give the properties associated with the ring distribution in terms of upstream parameters.
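The two forms of $\mathcal{M}$ given above can be cross-checked numerically. A minimal sketch (illustrative only, with $T_{e,up}$ normalized to 1) that also probes the limiting behavior $\mathcal{M}(0) = 1$ (isotropy, $A_\perp = 0$) and $\mathcal{M} \to 3/2 + r^2$ for large $r$:

```python
import numpy as np
from scipy.special import erfc

def Lambda(r):
    # Lambda = r*sqrt(pi)*erfc(-r) + exp(-r^2)
    return r * np.sqrt(np.pi) * erfc(-r) + np.exp(-r**2)

def M_full(r):
    # First form: [2 e^{-r^2}(1+r^2) + sqrt(pi) r (3+2r^2) erfc(-r)] / (2 Lambda)
    num = 2.0 * np.exp(-r**2) * (1.0 + r**2) \
        + np.sqrt(np.pi) * r * (3.0 + 2.0 * r**2) * erfc(-r)
    return num / (2.0 * Lambda(r))

def M_compact(r):
    # Simplified form: M = 3/2 + r^2 - e^{-r^2} / (2 Lambda)
    return 1.5 + r**2 - np.exp(-r**2) / (2.0 * Lambda(r))

def T_eff(r, T_e_up=1.0):
    # Effective temperature of a pure ring: T_eff = T_e_up (2M + 1)/3
    return T_e_up * (2.0 * M_full(r) + 1.0) / 3.0

rs = np.linspace(0.0, 5.0, 101)
```

At $r = 0$ the distribution is isotropic ($\mathcal{M} = 1$, $T_{\rm eff} = T_{e,up}$), while for $r \gtrsim 3$ the exponential correction is negligible and $\mathcal{M} \approx 3/2 + r^2$.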
It has been shown \cite{Shuster2014,egedal_2016_PoP} that ring-type distributions in PIC simulations of reconnection are not always ideal like in Eq.~(\ref{eq:ringVDF}); some also have a %
Maxwellian core population. \textcolor{black}{It is possible that this population is related to the initial current sheet population in the simulations, but validating this conjecture is not carried out for the present study. It is not clear if this population is a numerical artifact or also present in Nature. Since it is not the focus of the present study and has been seen in previous independent studies, we simply include it in our analysis to give more accurate comparisons to the simulations.} Thus, we derive the temperatures associated with a distribution $f = f_r + f_M$ that is the sum of the ideal ring distribution $f_r$ from Eq.~(\ref{eq:ringVDF}) and a Maxwellian distribution $f_M =(n_{M}/\pi^{3 / 2} v_{T h, M}^{3}) e^{-v^{2}/v_{Th,M}^{2}}$ with density $n_M$ and temperature $T_M$ associated with the thermal speed
$v_{Th,M}=(2k_B T_M/m)^{1/2}$. The zeroth velocity moment of this distribution gives the total local density as $n=n_r + n_M$. The temperatures generalizing Eqs.~(\ref{eq:ringVDFTeperppara}) and
(\ref{eq:ringVDFTe}) are
\begin{linenomath*}
\begin{equation}
T_{\perp }=\mathcal{M}\frac{n_r T_{e,up}}{n} + \frac{n_M T_M}{n}, ~ T_{\parallel} =\frac{n_r T_{e,up}}{n}+\frac{n_M T_M}{n}
\label{eq:ring+coreVDFTeperppara}
\end{equation}
\end{linenomath*}
\begin{linenomath*}
\begin{equation}
T_{{\rm eff}}=\left(\frac{2 \mathcal{M}+1}{3}\right) \frac{n_r T_{e,up}}{n} + \frac{n_M T_M}{n},
\label{eq:ring+coreVDFTe}
\end{equation}
\end{linenomath*}
while the temperature anisotropy in Eq.~(\ref{eq:ringVDFTeAniso}) becomes
\begin{linenomath*}
\begin{equation}
A_{\perp}=\frac{(\mathcal{M}-1)n_r T_{e,up}}{n_r T_{e,up} + n_M T_M}.
\label{eq:ring+coreVDFTeAniso}
\end{equation}
\end{linenomath*}
A contour plot of $T_{\perp}$ as a function of $r$ and $v_{Th}$ in the limit that $n_M=n_r$ and $v_{Th,M}=v_{Th}$ is shown for reference in Fig.~\ref{fig:ContourPlotsTeperp}(b). These expressions will be useful when we analyze ring distributions in our PIC simulations.
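As a quick consistency check on the ring-plus-core moments, the sketch below (illustrative only, with $k_B$ absorbed into the temperatures) implements Eqs.~(\ref{eq:ring+coreVDFTeperppara})--(\ref{eq:ring+coreVDFTeAniso}) and verifies that they reduce to the pure-ring results of Eqs.~(\ref{eq:ringVDFTeperppara})--(\ref{eq:ringVDFTeAniso}) when $n_M \to 0$, and that $A_\perp = T_\perp/T_\parallel - 1$ holds:

```python
import numpy as np
from scipy.special import erfc

def M_of_r(r):
    # M = 3/2 + r^2 - e^{-r^2}/(2 Lambda), Lambda = r*sqrt(pi)*erfc(-r) + e^{-r^2}
    Lam = r * np.sqrt(np.pi) * erfc(-r) + np.exp(-r**2)
    return 1.5 + r**2 - np.exp(-r**2) / (2.0 * Lam)

def ring_core_temps(r, n_r, T_up, n_M, T_M):
    """Temperatures for f = f_r + f_M (ring plus Maxwellian core)."""
    n = n_r + n_M
    M = M_of_r(r)
    T_perp = (M * n_r * T_up + n_M * T_M) / n
    T_par  = (n_r * T_up + n_M * T_M) / n
    T_eff  = (2.0 * T_perp + T_par) / 3.0
    A_perp = (M - 1.0) * n_r * T_up / (n_r * T_up + n_M * T_M)
    return T_perp, T_par, T_eff, A_perp
```

Note that the Maxwellian core dilutes the anisotropy: for fixed $r$, $A_\perp$ decreases as $n_M T_M$ grows relative to $n_r T_{e,up}$.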
\begin{comment}
\subsection{The magnetic field strength where ring distributions arise}
\label{subsec:ByScalingFarDwnstrm}
Here, we estimate the reconnected (normal) magnetic field strength at key locations in the downstream region of reconnection associated with dipolarization fronts. We use GSM coordinates in which $x$ is the (sunward) direction of the reversing magnetic field, $z$ is aligned with Earth's magnetic axis in the northward direction, and $y$ is the cross-tail direction that completes the right handed coordinate system.
We first analyze the peak reconnected magnetic field strength at the dipolarization front. The exhaust jet from reconnection impinges on a pre-existing current sheet, and it slows as it compresses the reconnected magnetic field $B_z$.
The total kinetic energy density at the downstream edge of the diffusion region resulting from reconnection is $(1/2) m_i n_{up} V_{i,out}^2$, where $V_{i,out}$ is the outflow speed and the energy is mostly in the ions with mass $m_i$.
The magnetic energy $\propto B_z^2$ at this location is weak, at the 1\% level compared to the bulk kinetic energy.
The magnetic energy density at the pileup region where the reconnected magnetic field $B_{z,DF}$ is compressed is $B_{z,DF}^2/8\pi$, and the bulk flow energy goes to zero as the reconnection jet stops. From conservation of energy, ignoring changes in the electron or ion thermal energy between these two locations, these two energies are comparable, giving
\begin{linenomath*}
\begin{equation}
\frac{B_{z,DF}^{2}}{8 \pi} \sim \frac{1}{2} m_{i} n_{up} V_{i,out}^{2}.
\label{eq:ByScalingFarDwnstrm}
\end{equation}
\end{linenomath*}
Since $V_{i,out} \sim c_{Aup}$ where $c_{Aup} = B_{up} / (4\pi m_i n)^{1/2}$ is the Alfv\'en speed based on the reconnecting magnetic field $B_{up}$ upstream of the {\it ion} diffusion region (in the lobe), we get
\begin{linenomath*}
\begin{equation}
B_{z,DF} \sim B_{up}.
\label{eq:ByScalingFarDwnstrm2}
\end{equation}
\end{linenomath*}
We use a similar analysis to estimate the reconnected magnetic field strength $B_{z,ring}$ where electron ring distributions arise. The appearance of electron ring distributions is associated with an increase in electron temperature $T_e$, so we retain thermal effects in the energy budget.
A similar argument based on conservation of energy gives
\begin{linenomath*}
\begin{equation}
\left[\frac{1}{2} m_{e} n V_{e,out}^{2} %
+ n k_B T_e \right]_{EDR} \sim \left[\frac{B_{z}^{2}}{8 \pi}+ %
n k_B T_e \right]_{ring}.
\label{eq:ByScalingWithRing}
\end{equation}
\end{linenomath*}
where \textit{EDR} denotes the downstream edge of the electron diffusion region and {\it ring} denotes the location where electrons are magnetized to form ring distributions, and $V_{e,out}$ is the electron outflow speed. In writing this, the magnetic energy is assumed small at the {\it EDR}, the ion kinetic and thermal energy density is assumed to not change much between the two points, and the bulk electron velocity at the ring is assumed to be small.
Solving Eq.~\ref{eq:ByScalingWithRing} for $B_{z,ring}$ gives
\begin{linenomath*}
\begin{equation}
B_{z,ring} \sim \sqrt{B_{up,e}^2- 8 \pi [(n k_B T_e)_{ring} - (n k_B T_e)_{EDR}]}
\label{eq:ByScalingWithRing2}
\end{equation}
\end{linenomath*}
where we use
$V_{e,out} \sim c_{Aup,e}$ as before.
For $T_{e,EDR}$, we assume it scales with the upstream temperature $T_{e,up}$ because the heating in the EDR is largely associated with counterpropagation of electron beams \cite{something}.
For scaling purposes, $n_{EDR}$ and $n_{ring}$ are assumed to be the
upstream value $n_{up}$. Using these assumptions and Eq.~\ref{eq:ringVDFTe} for the temperature of a pure ring distribution
for $T_{e,ring}$, Eq.~\ref{eq:ByScalingWithRing2} becomes
\begin{linenomath*}
\begin{equation}
B_{z,ring} \sim
B_{up,e} \sqrt{1- \frac{2 (\mathcal{M}-1)}{3 r^2}}
\label{eq:ByScalingWithRing4}
\end{equation}
\end{linenomath*}
where we make use of Eq.~\ref{eq:rupOnly}. This expression gives the magnetic field strength where the ring occurs as a function of upstream parameters.
\end{comment}
\begin{comment}
\subsection{Where to find ring distributions}
\label{subsec:ElectronRemagTheo}
In the presence of strong vertical component of the reconnected field, outflowing electrons are remagnetized as they flow downstream.
This happens when the electron gyroradius becomes smaller than the length-scale over which the downstream reconnected field strength changes. This is the key: when electron gyroradius becomes $\sim d_e$, they are remagnetized. Moreover, these remagnetized electrons after travelling a distance $2d_e$ turn into rings as most of them start to revolve around the strong reconnected field which is mostly vertical. %
In order to find where remagnetization occurs, we use two parameters: 1) the magnetic field curvature vector $\vec{\kappa}$, inverse of its magnitude is the radius of curvature $R_c$, 2) the electron gyroradius based on the bulk flow speed $\rho_{{\rm bfs}}$ which is the more appropriate gyroradius here because these are outflowing electron beams that get remagnetized and turned into rings and when it happens, electrons lose their outflow speed
. This implies the remagnetization condition is
\begin{linenomath*}
\begin{equation}
\frac{(|\vec{\kappa}|)^{-1}}{\rho_{b f s}} \approx 1.
\label{eq:CrossoverPointCond}
\end{equation}
\end{linenomath*}
As indicated in Section \ref{subsec:ByScalingWithRings}, the peak in electron temperature happens when electron rings appear.
\subsection{Radius of curvature as a function of X: scaling relations}
\label{subsec:RadCurvScaling}
The magnetic field curvature vector is given by $\vec{\kappa} = \hat{b} \cdot \nabla \hat{b}$ where $\hat{b} = \vec{B} / B$ is the local direction of the magnetic field.
The radius of curvature $R_c = 1 / |\vec{\kappa}|$ in rectangular coordinates in a two-dimensional system with ignorable direction $z$ is
\begin{linenomath*}
\begin{equation}
R_{c}=\frac{1}{\sqrt{\left(b_{x} \partial_{x} b_{x}+b_{y} \partial_{y} b_{x}\right)^{2}+\left(b_{x} \partial_{x} b_{y}+b_{y} \partial_{y} b_{y}\right)^{2}+\left(b_{x} \partial_{x} b_{z}+b_{y} \partial_{y} b_{z}\right)^{2}}},
\label{eq:RadiusOfCurvature}
\end{equation}
\end{linenomath*}
Along the symmetry axis downstream of a reconnection site, $\vec{B} \approx B_y \hat{y}$, so $b_x \sim b_z \sim 0$ and $b_y \sim 1$. Moreover, we know from the reconnection geometry that $b_x$ and $b_z$ change predominantly over $\hat{y}$, thus $\partial_y b_x$ and $\partial_y b_z$ are much larger than $\partial_y b_y$. Thus, Eq.~(\ref{eq:RadiusOfCurvature}) is approximately given by
\begin{linenomath*}
\begin{equation}
R_{c} \approx \frac{1}{\sqrt{\left(\partial_{y} b_{x}\right)^{2}+\left(\partial_{y} b_{z}\right)^{2}}}.
\label{eq:RadiusOfCurvature2}
\end{equation}
\end{linenomath*}
Here, $\partial_y b_z \sim \Delta b_z/\Delta y$ where $\Delta b_z$ is the change in $b_z$ over a length-scale $\Delta y(X)$ which is a function of $X$. Similarly, $\partial_y b_x \sim \Delta b_x/\Delta y$. So, Eq.~(\ref{eq:RadiusOfCurvature2}) becomes
\begin{linenomath*}
\begin{equation}
R_{c} \approx \frac{1}{\sqrt{\left(\Delta b_x/\Delta y\right)^{2}+\left(\Delta b_z/\Delta y\right)^{2}}} = \frac{\Delta y(X)}{\sqrt{\left(\Delta b_x\right)^{2}+\left(\Delta b_z\right)^{2}}}.
\label{eq:RadiusOfCurvature3}
\end{equation}
\end{linenomath*}
The change in $b_x$ and $b_z$ equal 1 in quadrature, which means $\left(\Delta b_x\right)^{2}+\left(\Delta b_z\right)^{2} \sim 1$. From the separatrix geometry, we have $\Delta y(X) = X \tan(E')$ where $E'$ is the normalized reconnection rate. Thus, we estimate
\begin{linenomath*}
\begin{equation}
R_{c} \sim \Delta y(X) \sim X \tan(E').
\label{eq:RadiusOfCurvature4}
\end{equation}
\end{linenomath*}
\subsection{Bulk flow speed gyroradius as a function of X: scaling relations}
\label{subsec:GyroRadbfsScaling}
The electron gyroradius based on the bulk flow speed is written as $\rho_{{\rm bfs}} = v_{{\rm bfs}}/\Omega_{ce}$ where $v_{{\rm bfs}}$ is the bulk flow speed, $\Omega_{ce}$ is the electron gyro-frequency $\Omega_{ce} = e B/m_e c$, $m_e$ is the electron mass, and $c$ is speed of light. Since $B \sim B_y(X)$ and $v_{{\rm bfs}} \sim V_{ex}$ downstream, where $V_{ex}$ is the electron outflow speed as seen in Fig.~\ref{Fig:ByScalingWithRing}, we can write
\begin{linenomath*}
\begin{equation}
\rho_{b f s} \sim \frac{V_{e x} m_{e} c}{e B_{y}(X)}.
\label{eq:GyroRadbfs}
\end{equation}
\end{linenomath*}
The electron outflow speed scales as the electron Alfv\'en speed based on upstream EDR conditions, so
\begin{linenomath*}
\begin{equation}
V_{e x} \sim c_{A, e, u p}=\frac{B_{u p, e}}{\sqrt{4 \pi m_{e} n_{u p}}} \approx \frac{2 \frac{d_{e}}{d_{i}} B_{u p}}{\sqrt{4 \pi m_{e} n_{u p}}}.
\label{eq:ElectronOutflowSpeed}
\end{equation}
\end{linenomath*}
We use Eq.~(\ref{eq:GyroRadbfs}) and (\ref{eq:ElectronOutflowSpeed}) along with simple manipulations to simplify $\rho_{{\rm bfs}}$ and write it as a function of X
\begin{linenomath*}
\begin{equation}
\rho_{{\rm bfs}} \sim 2 \frac{d_{e}}{d_{i}} B_{u p} \frac{d_{e}}{B_{y}(X)}.
\label{eq:GyroRadbfs2}
\end{equation}
\end{linenomath*}
\subsection{Finding where electrons remagnetize as a function of reconnection geometry}
\label{subsec:CrossoverPoint}
For electrons to remagnetize, Eq.~(\ref{eq:CrossoverPointCond}) needs to be satisfied. Physically, we know that when this happens, $\rho_{{\rm bfs}} =1d_e$, which using Eq.~(\ref{eq:GyroRadbfs2}) gives an interesting result that when electrons remagnetize, $B_y \sim 2 (d_e/d_i) B_{up}$. Lastly, using Eq.~(\ref{eq:RadiusOfCurvature4}), when electron remagnetization happens, we can write that
\begin{linenomath*}
\begin{equation}
\frac{X \tan \left(E^{\prime}\right)}{d_{e}} \sim 1 \Rightarrow X \sim \frac{d_{e}}{\tan \left(E^{\prime}\right)}.
\label{eq:XcpCond}
\end{equation}
\end{linenomath*}
In Section \ref{sec:results}, we compare Eq.~(\ref{eq:XcpCond}) to simulation results.
\end{comment}
\section{Simulations}
\label{sec:sims}
We use the PIC code {\tt p3d} \cite{zeiler2002} to perform simulations of symmetric antiparallel magnetic reconnection that are 2.5D in position space and 3D in velocity space. {\tt p3d} advances the electromagnetic fields in time with the trapezoidal leapfrog method \cite{guzdar93a}, while the particles are advanced with a relativistic Boris stepper \cite{birdsall91a}. The multigrid technique \cite{Trottenberg00} is used to clean the divergence of the electric field every 10 particle time-steps.
In the simulations, lengths are normalized to the ion inertial scale $d_{i0} = c/\omega_{pi0}$ based on a reference density $n_0$ that is the peak density of the initial current sheet population, where $\omega_{pi0} = (4 \pi n_0 e^2 /m_i)^{1/2}$, $e$ is the ion charge, and $c$ is the speed of light. Magnetic fields are normalized to the initial asymptotic upstream reconnecting magnetic field $B_0$. Velocities are normalized to the Alfv\'en speed $c_{A0} = B_0/(4 \pi m_i n_0)^{1/2}$. Times are normalized to the inverse ion cyclotron frequency $\Omega_{ci0}^{-1}= (e B_{0} / m_{i} c)^{-1}$. Temperatures are normalized to $m_i c_{A0}^2/k_B$. Reduced velocity distribution functions are normalized to $n_0/c_{A0}^2$.
The simulation coordinate system is defined such that reconnection outflows are along $\pm \hat{x}$ and inflows are along $\pm\hat{z}$, with periodic boundary conditions in both directions. The simulations are initialized with two Harris current sheets and a uniform background plasma population.
The initial magnetic field profile is
\begin{linenomath*}
\begin{equation}
B_x(z) = \tanh{\left(\frac{z-l_z/4}{w_0}\right)}-\tanh{\left(\frac{z-3l_z/4}{w_0}\right)} -1,
\end{equation}
\end{linenomath*}
with no initial out-of-plane guide magnetic field unless otherwise stated. Here, $w_0$ is the thickness of the current sheet and $l_z$ is the length of the computational domain in the $\hat{z}$ direction. The temperature and density of the background populations can be varied independently of the current sheet population. The initial electron and ion density profiles are
\begin{linenomath*}
\begin{equation}
n(z) = \frac{1}{2(T_{e,CS}+T_{i,CS})} \left[ \operatorname{sech}^{2}\left(\frac{z-l_z/4}{w_0}\right) + \operatorname{sech}^{2}\left(\frac{z-3l_z/4}{w_0}\right) \right] + n_{up},
\end{equation}
\end{linenomath*}
where $n_{up}$ is the initial density of the background plasma. The current sheet electron temperature $T_{e,CS}$ is uniform with a value of 1/12, and the current sheet ion temperature $T_{i,CS}$ is uniform with a value of $5\,T_{e,CS}$.
The speed of light $c$ is 15, and the electron to ion mass ratio is $m_e/m_i = 0.04$. There are $4096 \times 2048$ grid cells in all the simulations, initialized with 100 weighted particles per grid (PPG). A weak initial magnetic perturbation of the form $\delta B_x = -B_{pert} \sin{(2 \pi x/l_x)} \sin{(4\pi z/l_z)}$ and $\delta B_z = B_{pert}l_z/(2 l_x) \cos{(2 \pi x/l_x)} [1-\cos{(4\pi z/l_z)}]$ with $B_{pert} = 0.025$ is used to seed an X- and O-line pair in each of the two current sheets, where $l_x$ is the computational domain size in the $\hat{x}$ direction.
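For concreteness, the initial configuration can be sketched numerically. The following Python fragment (an illustrative reconstruction on a coarsened grid, not the {\tt p3d} source) builds the double Harris-sheet field and pressure-balanced density profiles and verifies that the seed perturbation is divergence-free; parameter values follow the $T_{TOT,up}$ scan of Table~\ref{table:setOfSims}.

```python
import numpy as np

# Parameters from the T_TOT,up scan (normalized units); the production
# runs use 4096 x 2048 cells -- a coarser grid suffices for illustration.
lx, lz = 51.20, 25.60
nx, nz = 512, 256
w0 = 0.50
Te_cs, Ti_cs = 1.0/12.0, 5.0/12.0
n_up = 0.2
B_pert = 0.025

x = np.linspace(0.0, lx, nx, endpoint=False)
z = np.linspace(0.0, lz, nz, endpoint=False)
X, Z = np.meshgrid(x, z, indexing="ij")

# Double Harris-sheet reconnecting field B_x(z)
Bx = np.tanh((Z - lz/4)/w0) - np.tanh((Z - 3*lz/4)/w0) - 1.0

# Pressure-balanced current sheet density plus uniform background;
# the prefactor makes the peak current sheet density equal to 1 (= n_0)
n = (np.cosh((Z - lz/4)/w0)**-2 + np.cosh((Z - 3*lz/4)/w0)**-2) \
    / (2.0*(Te_cs + Ti_cs)) + n_up

# Seed perturbation for the X- and O-line pair
dBx = -B_pert*np.sin(2*np.pi*X/lx)*np.sin(4*np.pi*Z/lz)
dBz = B_pert*lz/(2*lx)*np.cos(2*np.pi*X/lx)*(1.0 - np.cos(4*np.pi*Z/lz))

# The perturbation is analytically divergence-free
div = np.gradient(dBx, x, axis=0) + np.gradient(dBz, z, axis=1)
print(float(np.abs(div).max()) < 1e-3)   # True (finite-difference residual)
print(round(float(n.max()), 2))          # 1.2, i.e., 1 + n_up
```

The printed peak density $1 + n_{up}$ follows from the Harris pressure balance encoded in the $1/[2(T_{e,CS}+T_{i,CS})]$ prefactor.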
\begin{table}
\caption{Numerical parameters for two sets of simulations with varying (top) upstream total temperature $T_{TOT,up}$ and (bottom) upstream number density $n_{up}$. $l_x$ and $l_z$ are system sizes along $\hat{x}$ and $\hat{z}$, respectively, $w_0$ is the initial current sheet half-thickness, $\Delta x$ is the grid scale along $\hat{x}$ and $\hat{z}$, and $\Delta t$ is the time step.}
\centering
\begin{tabular}{ccccc}
\hline
$T_{TOT,up}$ & $l_x \times ~l_z$ & $w_0$ & $\Delta x$ & $\Delta t$ \\
\hline
0.2 & 51.20 $\times$ ~25.60 & 0.50 & 0.0125 & 0.00100 \\
0.4 & 51.20 $\times$ ~25.60 & 0.50 & 0.0125 & 0.00100 \\
0.6 & 51.20 $\times$ ~25.60 & 0.50 & 0.0125 & 0.00100 \\
0.8 & 51.20 $\times$ ~25.60 & 0.50 & 0.0125 & 0.00100 \\
1.0 & 51.20 $\times$ ~25.60 & 0.50 & 0.0125 & 0.00100 \\
\hline
$n_{up}$ & $l_x \times ~l_z$ & $w_0$ & $\Delta x$ & $\Delta t$ \\
\hline
0.2 & 51.20 $\times$ ~25.60 & 0.50 & 0.0125 & 0.00100 \\
0.4 & 47.41 $\times$ ~23.71 & 0.46 & 0.0116 & 0.00093 \\
0.6 & 44.35 $\times$ ~22.17 & 0.43 & 0.0108 & 0.00087 \\
0.8 & 41.81 $\times$ ~20.91 & 0.41 & 0.0102 & 0.00082 \\
1.0 & 39.68 $\times$ ~19.84 & 0.39 & 0.0097 & 0.00078 \\
\hline
\end{tabular}
\label{table:setOfSims}
\end{table}
Two sets of five simulations are performed. Table~\ref{table:setOfSims} lists relevant simulation parameters, including the system size $l_x \times l_z$,
the initial current sheet half-thickness $w_0$, the grid scale $\Delta x$ in both
directions, and the time step $\Delta t$. In all simulations, the ion to electron temperature ratio $T_{i,up}/T_{e,up}$ of the background plasma is initially 5. One set of simulations varies $T_{TOT,up} = T_{i,up} + T_{e,up}$ while the initial background density is kept fixed at $n_{up}$ = 0.2. The other set varies $n_{up}$, with the initial background temperatures kept fixed at $T_{e,up}$ = 1/12 and $T_{i,up}$ = $5 T_{e,up}$. The smallest length scale in each simulation is the electron Debye length $\lambda_{De}$ based on the total initial density at the center of the current sheet, $1 + n_{up}$. Thus, $\lambda_{De}$ decreases by a factor of $(1.2/2)^{1/2}$ as $n_{up}$ is increased from 0.2 to 1, \textit{i.e.,} it is $22.5\%$ lower for the $n_{up}=1$ simulation than for the $n_{up}=0.2$ simulation. Accordingly, for the $n_{up} = 1$ simulation, the system size, grid length, initial current sheet thickness, and time step are also reduced by $22.5\%$ (as listed in Table \ref{table:setOfSims}). A similar approach is used to set the simulation parameters for the other $n_{up}$ values.
\textcolor{black}{Since we use periodic boundary conditions, the minimum system size that allows the ions to fully couple back to the reconnection process is approximately 40 $d_{i0}$ \cite{Pyakurel19}.} Because $l_x$ is smaller than necessary for the ions to fully couple back to the reconnected magnetic field, this study focuses on electron dynamics. In some of the simulations, the upper current sheet develops secondary islands which do not coalesce with the primary island by the time the system reaches steady-state; hence we focus on the lower current sheet. Finally, we note that the ion and electron inertial lengths $d_i$ and $d_e$ based on the upstream (background) density are related to the length scale used for normalization via $d_i = d_{i0}/\sqrt{n_{up}}$ and $d_{e}=0.2 \ d_{i}$ for the mass ratio used in the simulations. Since $n_{up}$ is fixed at 0.2 for the simulations with varying $T_{TOT,up}$, $d_i = 2.24 \ d_{i0}$ and $d_{e}= 0.45 \ d_{i0}$ for those simulations.
For simulations with varying $n_{up}$, the length scales change with $n_{up}$; for example, for $n_{up} = 1$, we have $d_i = d_{i0}$ and $d_{e}=0.2 \ d_{i0}$.
\textcolor{black}{Each simulation is carried out long enough for the reconnection to reach a steady-state, meaning that the reconnection rate becomes approximately constant in time.}
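The length-scale relations quoted above can be verified in a few lines; a minimal check in Python (the $22.5\%$ figure is $1-\sqrt{1.2/2}$):

```python
import math

mass_ratio = 0.04                      # m_e / m_i used in the simulations

def d_i(n_up):
    # ion inertial length based on the background density, in units of d_i0
    return 1.0 / math.sqrt(n_up)

def d_e(n_up):
    # electron inertial length: sqrt(m_e/m_i) * d_i
    return math.sqrt(mass_ratio) * d_i(n_up)

print(round(d_i(0.2), 2), round(d_e(0.2), 2))   # 2.24 0.45
print(round(d_i(1.0), 2), round(d_e(1.0), 2))   # 1.0 0.2

# Debye-length contraction as n_up goes from 0.2 to 1 (fixed temperature):
shrink = 1.0 - math.sqrt((1 + 0.2) / (1 + 1.0))
print(round(100*shrink, 1))                     # 22.5 (percent)
```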
For plotting reduced electron velocity distribution functions (rEVDFs), which are 2D velocity distributions produced from the full 3D distributions after integrating over one of the three velocity directions, a domain of size $0.5 \ d_{i0} \times 0.5 \ d_{i0}$ centered at the location of interest is used. %
A velocity space bin of size $0.1 \ c_{A0}$ is used in all velocity directions.
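In practice, such a reduction amounts to a weighted two-dimensional histogram over the particles in the sampling box. A minimal sketch (function and variable names are ours, not {\tt p3d} diagnostics):

```python
import numpy as np

def reduced_evdf(v1, v2, weights, vmax=10.0, dv=0.1):
    """Reduce particle velocities to f(v1, v2), integrating over the
    third velocity direction; dv = 0.1 c_A0 matches the bin in the text."""
    edges = np.arange(-vmax, vmax + dv, dv)
    f, _, _ = np.histogram2d(v1, v2, bins=[edges, edges], weights=weights)
    return f/dv**2, edges      # phase-space density in n0/c_A0^2 units

# Sanity check: a Maxwellian sample integrates back to its total weight
rng = np.random.default_rng(0)
npart = 200_000
v = rng.normal(0.0, 1.5, size=(3, npart))
f, edges = reduced_evdf(v[0], v[1], np.full(npart, 1.0/npart))
print(round(float(f.sum())*0.1**2, 3))   # 1.0: the box density is recovered
```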
\section{Methods and Results}
\label{sec:results}
\subsection{Presence of ring distributions}
\label{subsec:RingExistence}
A central result of this study is that all ten simulations reveal electron ring distributions beyond the downstream edge of the EDR, near the dipolarization fronts. This is ascertained by plotting rEVDFs in the plane perpendicular to the local magnetic field. Since the magnetic field in the region of interest is predominantly in the $\hat{z}$ direction,
we identify $\hat{x} \approx (\hat{u} \times \hat{b}) \times \hat{b} \equiv \perp1$, $\hat{y} \approx \hat{u} \times \hat{b} \equiv \perp2$, and $\hat{z} \approx \hat{b} \equiv \parallel$, where $\hat{b}$ and $\hat{u}$ are the unit vectors along the magnetic field $\vec{B}$ and the bulk flow velocity $\vec{u}$.
Defining the X-line location as $(x_0,z_0)$, the rEVDFs are plotted along the horizontal line $z = z_0$ as a function of $x$ from the X-line to the magnetic island.
At the earliest times in the steady-state reconnection time interval for all simulations, we find
that rEVDFs near the X-line have striations, and they are rotated by the reconnected magnetic field $B_z$ as one moves in the outflow direction within the EDR.
Beyond the downstream edge of the EDR, ring-like features begin to arise in the distributions as some electrons complete at least one full gyration around $B_z$, leading to swirls and arcs (not shown), and finally to electron ring distributions for which most electrons complete at least one full gyration. These results are consistent with previous simulation studies \cite{Bessho2014,Shuster2014,shuster_2015,egedal_2016_PoP}.
The panels of Fig.~\ref{fig:ringVDFsALLRUNS} show rEVDFs as a function of $v_{\perp1}$ and $v_{\perp2}$ for representative ring distributions seen in all ten simulations, with varying $T_{TOT,up}$ on the left from 0.2 to 1 in (a)-(e) and varying $n_{up}$ on the right from 0.2 to 1 in (f)-(j). The title on each panel provides the locations $x-x_0$ and times $t$ at which each rEVDF is plotted. The plotted rEVDFs reveal that there is a noticeable agyrotropy in the ring distributions, but the major and minor radii are well-formed.
\textcolor{black}{It is likely that the cause of the agyrotropy is that not all particles complete one full gyration, as also seen in previous studies \cite{Shuster2014}, but we do not study this feature further in the present study.}
Looking at the rEVDFs in other planes (not shown), we find that along with the ring population and the colder Maxwellian core \textcolor{black}{also seen in previous simulation studies \cite{Shuster2014}}, a population of counterstreaming beams is also present in every simulation \textcolor{black}{in} every rEVDF. \textcolor{black}{As elevated values of $T_{e,||}$ that would be associated with parallel propagating beams are not seen at the reconnecting magnetic field reversal region in the study by \citeA{Shay14} [see their Fig.~2(d)], we believe it is likely that this population is an artifact of our simulation size being smaller than in that previous study, allowing accelerated electrons to be transmitted through the boundary to the location where we measure distributions, but we leave verifying this conjecture for future work.} These rEVDFs reveal that the ring distributions follow clear qualitative trends: with increasing background temperature $T_{TOT,up}$, the rings stay approximately the same size but are thicker in the $v_{\perp1} - v_{\perp2}$ plane [Fig.~\ref{fig:ringVDFsALLRUNS}(a)-(e)], whereas with increasing background density $n_{up}$, the rings shrink in size [Fig.~\ref{fig:ringVDFsALLRUNS}(f)-(j)] while maintaining a similar thickness.
\subsection{Parametric dependence of ring distribution major and minor radii}
\label{subsec:RingPartFitting}
We now quantitatively investigate the parametric dependence of the ring distributions by extracting their major and minor radii from the simulations.
For each distribution in Fig.~\ref{fig:ringVDFsALLRUNS}, we take separate 1D cuts of the rEVDF along $v_{\perp1}=0$ and $v_{\perp2}=0$. For each 1D cut, we fit three Gaussians to the distribution given by $\sum_{i=1}^3 a_i e^{-[(x-b_i)/c_i]^2}$ using the \textit{Curvefit} tool in \textit{MATLAB R2020a}. The outer two Gaussians are used to fit the ring portion of the distribution and the central Gaussian is used to fit the core. The coefficients $a_i$ are used to calculate the ring and core densities $n_r$ and $n_M$, the $b_i$ give the bulk flow of each component of the distribution and are related to $v_{\perp0}$, and the $c_i$ give the associated thermal speeds $v_{Th}$ and $v_{Th,M}$.
As a case study, 1D cuts and the associated fits are shown in Fig.~\ref{fig:ThreeGaussFit} for the $n_{up} = 0.2$ simulation from Fig.~\ref{fig:ringVDFsALLRUNS}(f). The black curve is the raw distribution function and the red curve is the best fit. Because the rEVDFs are not perfectly symmetric, the best fit coefficients and associated major and minor radii $v_{\perp0}$ and $v_{Th}$ are different in the $v_{\perp1} = 0$ and $v_{\perp2} = 0$ cuts. We calculate average values for $v_{\perp0}$ and $v_{Th}$ and their standard deviations $\sigma$ derived from propagating the errors in quadrature. The best fit procedure also provides 95\% confidence bounds, which we take as another estimate of the uncertainty of the values. The results of this procedure for all ten simulations are listed in Table \ref{table:OneDfittingData}.
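The same triple-Gaussian fit can be reproduced with standard tools; here is a sketch using {\tt scipy.optimize.curve\_fit} on a synthetic cut (not the original \textit{MATLAB} script), with ring parameters chosen near the values in Table~\ref{table:OneDfittingData}:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gauss(v, a1, b1, c1, a2, b2, c2, a3, b3, c3):
    # Outer two Gaussians fit the ring; the central one fits the core.
    return (a1*np.exp(-((v - b1)/c1)**2) +
            a2*np.exp(-((v - b2)/c2)**2) +
            a3*np.exp(-((v - b3)/c3)**2))

# Synthetic 1D cut: ring at v_perp0 = 4.3 with v_Th = 1.5, plus a core
v = np.linspace(-10, 10, 401)
f = three_gauss(v, 0.3, -4.3, 1.5, 0.3, 4.3, 1.5, 0.5, 0.0, 1.0)

p0 = [0.2, -4.0, 1.0, 0.2, 4.0, 1.0, 0.4, 0.0, 1.0]   # initial guess
popt, pcov = curve_fit(three_gauss, v, f, p0=p0)

v_perp0 = 0.5*(abs(popt[1]) + abs(popt[4]))   # major radius
v_th    = 0.5*(popt[2] + popt[5])             # minor radius
print(f"{v_perp0:.2f} {v_th:.2f}")            # 4.30 1.50
```

With noiseless data and a reasonable initial guess the fit recovers the input parameters; on real cuts the asymmetry between the two outer Gaussians is what produces the $\sigma$ values quoted in Table~\ref{table:OneDfittingData}.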
\begin{table}
\caption{Data from the fitting method described in Sec.~\ref{subsec:RingPartFitting} for all simulations. The first column gives the value being varied, and $n_r, v_{\perp0}$, and $v_{Th}$ are the ring density, major radius, and minor radius. The $\sigma$ values are standard deviations from the mean from cuts in the $\perp 1$ and $\perp 2$ directions, and 95\% err is the error calculated using 95\% confidence bounds from the fit.}
\centering
\begin{tabular}{c c c c c c c c}
\hline
$T_{TOT,up}$ & $n_{r}$ & $v_{\perp0}$ & $\sigma_{v_{\perp0}} $ & 95\% err$_{v_{\perp0}}$ & $v_{Th}$ & $\sigma_{v_{Th}}$ & 95\% err$_{v_{Th}}$ \\
\hline
0.2 & 0.30 & 4.29 & 0.19 & 0.15 & 1.47 & 0.05 & 0.22 \\
0.4 & 0.36 & 4.33 & 0.27 & 0.22 & 1.81 & 0.04 & 0.32 \\
0.6 & 0.31 & 4.24 & 0.17 & 0.26 & 2.13 & 0.12 & 0.35 \\
0.8 & 0.33 & 4.23 & 0.19 & 0.49 & 2.41 & 0.05 & 0.57 \\
1.0 & 0.26 & 4.42 & 0.49 & 0.49 & 2.59 & 0.09 & 0.49 \\
\hline
$n_{up}$ & $n_{r}$ & $v_{\perp0}$ & $\sigma_{v_{\perp0}}$ & 95\% err$_{v_{\perp0}}$ & $v_{Th}$ & $\sigma_{v_{Th}}$ & 95\% err$_{v_{Th}}$ \\
\hline
0.2 & 0.28 & 4.26 & 0.32 & 0.12 & 1.99 & 0.23 & 0.17 \\
0.4 & 0.46 & 2.93 & 0.32 & 0.19 & 2.11 & 0.19 & 0.19 \\
0.6 & 0.91 & 2.52 & 0.29 & 0.19 & 2.08 & 0.15 & 0.17 \\
0.8 & 1.17 & 1.99 & 0.13 & 0.35 & 1.99 & 0.07 & 0.28 \\
1.0 & 1.28 & 1.89 & 0.07 & 0.12 & 1.94 & 0.11 & 0.12 \\
\hline
\end{tabular}
\label{table:OneDfittingData}
\end{table}
We now compare the theoretical predictions for the major and minor radii
to the simulation results. For the theoretical predictions, we need to obtain $B_{up,e}, n_{up}$ and $T_{e,up}$ to evaluate $v_{\perp0}$ in Eq.~(\ref{eq:vperp0upOnly}) and $v_{Th}$ in Eq.~(\ref{eq:vThupOnly}).
We define the upstream edge of the EDR as the location where the electron bulk inflow speed
starts to differ from the $\hat{z}$ component of the $\vec{E} \times \vec{B}$ velocity. Then, the measured plasma parameters are obtained by averaging quantities over $0.06~d_{i0}$ centered around this location. We find that the upstream parameters vary in time, changing between the transient time when reconnection onset takes place and when a steady-state is reached. We reason that the dipolarization fronts occur due to jets that arise in the transient initial phase of reconnection. Therefore, we measure the upstream parameters at early times when the reconnection rate starts to increase. For the simulations with varying $T_{TOT,up}$, this time is $t$ = 5 whereas for $n_{up}$ simulations, the time varies from $t$ = 5 for $n_{up} = 0.2$ to $t$ = 10 for $n_{up} = 1$ since increasing $n_{up}$ from 0.2 to 1 decreases the speeds
by a factor of $5^{1/2}$.
At the chosen time, we average the desired upstream quantities over five code time units. We find that the data variations are small (within 5\%) during this interval. We also confirm the densities and temperatures do not vary appreciably between the upstream value at the electron layer and the upstream value at the ion layer. The results of this procedure are listed in Table \ref{table:UpstreamData}, along with theoretical predictions of $v_{\perp0}$ using Eq.~(\ref{eq:vperp0upOnly}) and $v_{Th}$ using Eq.~(\ref{eq:vThupOnly}).
\begin{table}
\caption{Upstream plasma parameters from the simulations using the method described in Sec.~\ref{subsec:RingPartFitting}. The first column gives the value being varied, $B_{up,e}$ is the upstream magnetic field, $n_{up}$ is the upstream density, and $T_{e,up}$ is the upstream temperature at the EDR edge. The last two columns give the theoretical predictions for the major radius $v_{\perp0}$ and minor radius $v_{Th}$ based on the upstream values using Eqs.~(\ref{eq:vperp0upOnly}) and (\ref{eq:vThupOnly}), respectively.}
\centering
\begin{tabular}{c c c c c c}
\hline
$T_{TOT,up}$ & $B_{up,e}$ & $n_{up}$ & $T_{e,up}$ & Theoretical ${v_{\perp0}}$ & Theoretical $v_{Th}$\\
\hline
0.2 & 0.33 & 0.14 & 0.034 & 4.41 & 1.30 \\
0.4 & 0.34 & 0.14 & 0.068 & 4.54 & 1.84 \\
0.6 & 0.33 & 0.14 & 0.10 & 4.41 & 2.24 \\
0.8 & 0.36 & 0.16 & 0.13 & 4.50 & 2.55 \\
1.0 & 0.35 & 0.15 & 0.17 & 4.52 & 2.92 \\
\hline
$n_{up}$ & $B_{up,e}$ & $n_{up}$ & $T_{e,up}$ & Theoretical ${v_{\perp0}}$ & Theoretical $v_{Th}$\\
\hline
0.2 & 0.35 & 0.15 & 0.084 & 4.51 & 2.05 \\
0.4 & 0.36 & 0.32 & 0.086 & 3.18 & 2.07 \\
0.6 & 0.38 & 0.51 & 0.087 & 2.66 & 2.08 \\
0.8 & 0.36 & 0.69 & 0.086 & 2.17 & 2.07 \\
1.0 & 0.37 & 1.01 & 0.083 & 1.84 & 2.04 \\
\hline
\end{tabular}
\label{table:UpstreamData}
\end{table}
The simulation data and theoretical predictions are plotted in Fig.~\ref{fig:RingParamCompare}. The simulation data are displayed as black dots connected by solid black lines. The error bars are the larger of the two errors associated with each measurement given in Table~\ref{table:OneDfittingData}. The theoretical predictions, given in the last two columns of Table \ref{table:UpstreamData}, are displayed as red dots connected by red lines. The simulations with varying upstream temperature are shown in Figs.~\ref{fig:RingParamCompare}(a) and (b), displaying $v_{\perp 0}$ and $v_{Th}$, respectively, as a function of $T_{TOT,up}$. The theoretical results are within the error bars from the simulations, confirming that $v_{\perp 0}$ is not dependent on $T_{e,up}$ while $v_{Th}$ scales as $T_{e,up}^{1/2}$. Analogous results for the simulations with varying upstream density are shown in Figs.~\ref{fig:RingParamCompare}(c) and (d). The predictions again are within the error bars from the simulations, and confirm the scaling of $v_{\perp 0}$ with $n_{up}^{-1/2}$ and the independence of $v_{Th}$ on $n_{up}$.
In summary, we find excellent agreement between the predicted values of both the major and minor radii of the ring distribution and the measured values from the ten simulations.
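The theoretical columns of Table~\ref{table:UpstreamData} can be reproduced directly from the tabulated upstream values: in normalized units the predictions reduce to the electron Alfv\'en speed and electron thermal speed (our paraphrase of Eqs.~(\ref{eq:vperp0upOnly}) and (\ref{eq:vThupOnly}), which appear earlier in the paper):

```python
import math

me_mi = 0.04   # electron-to-ion mass ratio used in the simulations

def v_perp0(B_up_e, n_up):
    """Major radius: electron Alfven speed at the EDR edge (units of c_A0)."""
    return B_up_e / math.sqrt(n_up * me_mi)

def v_th(Te_up):
    """Minor radius: electron thermal speed sqrt(2 T_e,up / m_e)."""
    return math.sqrt(2.0 * Te_up / me_mi)

# Spot checks against the T_TOT,up = 0.2 and n_up = 1.0 rows of Table 2
print(f"{v_perp0(0.33, 0.14):.2f} {v_th(0.034):.2f}")   # 4.41 1.30
print(f"{v_perp0(0.37, 1.01):.2f} {v_th(0.083):.2f}")   # 1.84 2.04
```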
\begin{comment}
We now compare the predicted perpendicular and parallel temperatures associated with the ring distributions to the simulated values. As detailed in Section \ref{subsec:ringTheo}, the presence or absence of a Maxwellian core modifies the associated electron temperature. Hence, we first treat the presence of core distributions by showing rEVDFs for two of the simulations, namely the $n_{up}=0.2$ and $1$ simulations, which exemplify the different features seen in the ten simulations. Figure~\ref{fig:AllPlanesVDFs_nBG02and1} contains rEVDFs in the (a,d) $v_{\perp1} - v_{||}$, (b,e) $v_{||} - v_{\perp2}$ and (c,f) $v_{\perp1} - v_{\perp2}$ planes from the simulations with $n_{up} = 0.2$ in (a)-(c) and $n_{up} = 1$ in (d)-(f).
The $n_{up}=0.2$ simulation
reveals field-aligned counter-streaming beams
with bulk speeds close to the $v_{\perp0}$ value of the ring population.
Compared to the simulation results in Figure 2 of \citeA{Shuster2014}, the phase space densities of the counter-streaming beam populations in our simulations are quite small. We also see that there is an actual core population centered at $(v_{\perp1},v_{\perp2})\simeq (0,0)$. Thus, when $v_{||}$ is integrated out to produce the rEVDF in $v_{\perp1}-v_{\perp2}$ plane, the ``core'' has contribution from the actual Maxwellian core and weak (but present) beams. These features are contrasted against the $n_{up}=1$ case in Figs.~\ref{fig:AllPlanesVDFs_nBG02and1}(d)-(f). \textcolor{magenta}{The counter-streaming beams are at much lower speeds ($\sim 2.24$) and there is an absence of a true core population as appears in the $n_{up}=0.2$ simulation. Thus, when $v_{||}$ is integrated out to produce rEVDFs in the $v_{\perp1}-v_{\perp2}$ plane, the core has a contribution only from the beams.} Of the ten simulations, we find the $n_{up}=0.8$ and 1 simulations have an absence of a true core population, while the other eight simulations do have a core. We note the ring rEVDFs seen in the $n_{up}=0.8$ simulation is similar to the ring rEVDFs seen in \citeA{Shuster2014}, where they observe rings with only counter-streaming beams and no true Maxwellian core.
\end{comment}
We now compare the electron temperatures associated with the ring distributions with the analytical expressions from Section~\ref{sec:theo}, using Eqs.~(\ref{eq:ring+coreVDFTeperppara}) and (\ref{eq:ring+coreVDFTe}) to find the predicted $T_{e,\perp}$, $T_{e,||}$, and $T_{e,{\rm eff}}$.
For the core population parameters, we use the fitting results for the central Gaussian described earlier in this section. We find that the core population thermal speed $v_{Th,M}$ values are not those associated with the upstream electron temperatures, but a study of how the core population parameters scale with upstream plasma parameters is beyond the scope of this work.
In the simulations, ring distributions are seen over a finite region of space, so the presented temperature values are mean values over that range. The error is estimated as the standard deviation of the mean.
The results are shown in Fig.~\ref{fig:TeCompareTheovsSims}, with simulation results in black and theoretical results in red. The perpendicular temperatures, in panel (a) for simulations with varying $T_{TOT,up}$ and (d) for simulations with varying $n_{up}$, show excellent agreement between the theory and simulations. For the parallel electron temperature in panels (b) and (e), we observe a sizable difference between the simulated and predicted values. This is attributed to our theory not accounting for the parallel propagating counter-streaming beams mentioned in the previous subsection.
However,
we do find some qualitative agreement.
Since $T_{e,||}$ has a smaller weight than $T_{e,\perp}$ in $T_{e,{\rm eff}}$, we find good qualitative agreement between simulation results and predicted values of $T_{e,{\rm eff}}$ for all ten simulations, shown in panels (c) and (f). The results for varying $n_{up}$ in panel (f) have very good quantitative agreement, as well. In summary, we find that the temperature in the region where rings are present increases with increasing upstream temperature %
and decreases with increasing upstream density, %
and the model based on ring distributions is quite effective at predicting the scaling and the absolute perpendicular temperatures.
\subsection{Relation of ring distributions to temperature and magnetic field profiles}
\label{subsec:Overview}
We now consider the location of the electron ring distributions in relation to the plasma parameter profiles in the region downstream of the EDR. Some plasma parameter profiles in the downstream region are shown in Fig.~\ref{fig:TePeakScaling}. Panels (a) and (f) show 2D plots of $T_{e,{\rm eff}}$ from the $T_{TOT,up}$ = 0.2 simulation and the $n_{up}$ = 0.2 simulation, both at $t=38$. In both cases, the highest electron temperatures observed in the simulation are in the dipolarization front region, between positions $x-x_0$ of -10 and -15. There are also high temperature regions along the separatrix, but these are potentially impacted by the periodic boundary conditions of the simulation and are not treated further here. From previous work \cite{Fu12b,egedal_2016_PoP}, we expect higher temperatures to arise from betatron acceleration of the electrons in the compressed magnetic field. However, the rEVDFs at later times during the steady-state time period (not shown) reveal the ring distributions do not increase in size in our simulations. We believe we do not observe this because our computational domain is smaller than in the previous study, preventing ions from coupling back to the magnetic field in the exhaust region.
The rest of the panels show comparisons of horizontal cuts of various quantities along the line $z = z_0$ for all $T_{TOT,up}$ (left plots) and $n_{up}$ (right plots) simulations. The times $t$ that each profile is taken are given in panels (b) and (g). Panels (b) and (g) show the perpendicular electron temperature $T_{e,\perp}$, revealing similar profiles %
for each upstream temperature with peak values near the dipolarization front, increasing with upstream temperature and decreasing with higher density. Panels (c) and (h) show the temperature anisotropy $A_{e,\perp}$.
We observe strong electron temperature anisotropies with all the upstream temperature simulations having similar values. We also find a systematic reduction in $A_{e,\perp}$ with increasing upstream densities in the dipolarization front region.
Panels (d) and (i) show the reconnected magnetic field $B_z$. The profiles have the characteristic appearance of a dipolarization front, with a sharply peaked value at the front that decreases towards the X-line. Importantly, in all simulations, we observe a plateau, or shoulder, in $B_z$ that occurs upstream of the dipolarization front. Blue vertical lines are used to highlight the shoulder in $B_z$ for the $T_{TOT,up}=0.2$ and $n_{up}=0.2$ simulations. We find that for all the simulations, the $B_z$ shoulder is spatially correlated with the regions of high $T_{e,\perp}$ and $A_{e,\perp}$.
Finally, panels (e) and (j) show the horizontal electron velocity $V_{ex}$, showing the characteristic increase in speed with distance from the X-line before rolling over and decreasing for all simulations as electron outflows exit the EDR. The horizontal velocity is close to zero in the region of peaked perpendicular temperature and the shoulder in $B_z$.
The spatial profiles in Fig.~\ref{fig:TePeakScaling} are very similar to previous simulations by \citeA{fujimoto_2008_whistler} (see their Figure 2), \textit{i.e.,} the peak in $A_{e,\perp}$ (due to an enhancement in $T_{e,\perp}$) appears in the magnetic pileup region where the electron outflow speed goes to zero.
We now discuss the locations of the ring distributions relative to these profiles. We find that the ring distributions shown in Fig.~\ref{fig:ringVDFsALLRUNS} are co-located with the shoulder region of $B_z$ for all simulations. For simulations with increasing upstream temperature, the shoulder regions in $B_z$ are in similar locations and the ring distributions accordingly appear over a similar region in all five simulations (see the location of the ring distributions in the left column of Fig.~\ref{fig:ringVDFsALLRUNS}). However, as upstream density is increased, the shoulder in $B_z$ appears closer to the X-line and so does the location of ring distributions (see the location of ring distributions in the right column of Fig.~\ref{fig:ringVDFsALLRUNS}). For all simulations, we find that the shoulder in $B_z$ has an extent of $\sim 1~d_{i0}$, with a field strength of $\sim 0.5 B_0$.
A possible mechanism for the presence of a shoulder in $B_z$ at the location of the ring distributions is the diamagnetic effect of the electrons that are magnetized by the strong reconnected magnetic field. The associated current reduces the magnetic field strength in the region where rings are present and increases the field strength outside. This change to the magnetic field appears as a plateau on the $B_z$ profile as it ramps up with distance from the X-line.
To estimate the amount by which the reconnected magnetic field decreases in the presence of ring distributions, we use conservation of energy. Using Eq.~(\ref{eq:ringVDFTeperpMdef}) and (\ref{eq:ring+coreVDFTeperppara}) to rewrite Eq.~(\ref{eq:ring+coreVDFTe}) for the effective temperature of electrons as an energy equation gives
\begin{linenomath*}
\begin{equation}
\frac{3}{2} k_B T_{e,\mathrm{eff}} \simeq \frac{3}{2} k_B T_{e,up} + \frac{1}{2}m_e c_{Aup,e}^2 + \left(1 - \frac{e^{-r^2}}{2 \Lambda} \right) k_B T_{e,up}.
\label{eq:energyconvRing}
\end{equation}
\end{linenomath*}
The left-hand side gives the plasma energy at the location where rings are seen because the electron bulk speed vanishes, so all of the energy is thermal. The first two terms on the right-hand side approximately describe the thermal plus kinetic energy of electrons as they leave the EDR. The last term on the right-hand side is associated with the thermal energy arising from the generation of the ring distribution. This extra energy is approximately the energy lost by the magnetic field as it decreases due to the diamagnetism of the remagnetized electrons. The term in parentheses goes from 0.5 to 1 as $r = v_{\perp 0}/v_{Th}$ goes from 0 to $\infty$.
In order to conserve total energy, we expect the magnetic field energy to decrease by
\begin{linenomath*}
\begin{equation}
\Delta \left(\frac{B^2}{8 \pi}\right) \sim \left(1 - \frac{e^{-r^2}}{2 \Lambda} \right) k_B T_{e,up},
\label{eq:Bzshoulder}
\end{equation}
\end{linenomath*}
where $\Delta (B^2/8 \pi)$ is the change in magnetic field energy.
Assuming the change in the magnetic field is weak, this decrease is approximately $B \Delta B/(4 \pi)$ where $\Delta B$ is the change in the magnetic field.
In the normalized units of our simulations, $B \simeq 0.5$ at the shoulder, and $r \ge 1$ so $(1-e^{-r^2}/(2 \Lambda))$ is close to 1. For the varying $n_{up}$ simulations where $T_{e,up}$ is kept fixed at 0.0833, this prediction gives a change in magnetic field of $\Delta B \simeq 0.2$. For the varying $T_{TOT,up}$ simulations where $T_{e,up}$ goes from 0.033 to 0.167, this prediction gives a change in magnetic field of $\Delta B \sim 0.1 - 0.3$. From the profiles of $B_z$ in Fig.~\ref{fig:TePeakScaling}(d) and (i), we find that the difference of the profile from a linearly increasing ramp away from the X-line is approximately 0.1 - 0.3, in reasonable agreement with the prediction.
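A quick numerical version of this estimate, taking $B \simeq 0.5$ at the shoulder and the parenthetical ring factor $\approx 1$ for $r \ge 1$; in code units the $4\pi$ and the order-unity density factor drop out of the comparison, so that to this level of accuracy $\Delta B \sim T_{e,up}/B$:

```python
def delta_B(Te_up, B=0.5, ring_factor=1.0):
    # B * dB ~ ring_factor * k_B T_e,up  (normalized units, rough estimate)
    return ring_factor * Te_up / B

# Varying-n_up simulations: T_e,up fixed at 1/12
print(round(delta_B(1.0/12.0), 2))                         # 0.17
# Varying-T_TOT,up simulations: T_e,up from 0.033 to 0.167
print(round(delta_B(0.033), 2), round(delta_B(0.167), 2))  # 0.07 0.33
```

Both numbers are consistent with the $\Delta B \simeq 0.2$ and $\Delta B \sim 0.1 - 0.3$ values quoted above.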
\subsection{Confirmation that ring distributions are caused by remagnetization}
\label{subsec:ElecRemag}
We now confirm the proposed model that electron rings are associated with their remagnetization in the reconnected magnetic field \cite{Shuster2014,Bessho2014}. We calculate two quantities as a function of $x$: (1) the magnetic field radius of curvature $R_c = |(\hat{b} \cdot \nabla) \hat{b}|^{-1}$, where $\hat{b}$ is the unit vector along the local magnetic field, and (2) the electron gyroradius $\rho_{{\rm bfs}} = V_{ex} / \Omega_{ce}$ based on the horizontal bulk flow speed $V_{ex}$ and the local electron gyrofrequency $\Omega_{ce} = e B / m_e c$. The bulk flow speed is the appropriate speed because the ring distributions are proposed to be formed by outflowing electron beams that get remagnetized. %
The condition for remagnetization is
$\kappa \equiv \sqrt{R_c/\rho_{{\rm bfs}}} \approx 1$ \cite{Buchner89}. %
We plot $R_c/\rho_{{\rm bfs}}$
as a function of $x-x_0$ in Fig.~\ref{fig:ElecRemag}(a)
for the $n_{up}=0.2$ simulation at $t=38$. A horizontal red dashed line marks where $R_c / \rho_{{\rm bfs}} = 1$, which is $x - x_0 \approx -9$ as marked by the vertical red dashed line. Fig.~\ref{fig:ElecRemag}(b) shows $\rho_{{\rm bfs}}$ as a function of $x-x_0$. Its value where $R_c / \rho_{{\rm bfs}} = 1$ is $\approx 0.5 \ d_{i0}$, which for this simulation is $\approx 1.1 \ d_e$.
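The crossover location itself can be read off a sampled $R_c/\rho_{{\rm bfs}}$ profile by linear interpolation between adjacent grid points. A generic sketch with synthetic data (not the simulation output), where the profile is constructed to cross 1 at $x - x_0 = -9$:

```python
import math

def crossover(x, ratio, level=1.0):
    """First x where the sampled ratio R_c/rho_bfs rises through `level`,
    found by linear interpolation between adjacent samples."""
    for i in range(1, len(x)):
        if ratio[i - 1] < level <= ratio[i]:
            frac = (level - ratio[i - 1]) / (ratio[i] - ratio[i - 1])
            return x[i - 1] + frac * (x[i] - x[i - 1])
    return None  # never crosses the level

# Synthetic monotonically rising profile that crosses 1 at x - x0 = -9
xs = [-15.0 + 0.05 * i for i in range(301)]
rs = [math.exp((xv + 9.0) / 3.0) for xv in xs]
print(crossover(xs, rs))   # ~ -9.0
```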
We now compare this to the location where ring distributions are observed in this simulation. Ring distributions are seen throughout the blue shaded region of Fig.~\ref{fig:TePeakScaling}(g)-(j). This is located $\simeq 2 d_e$ downstream of the location where $R_c / \rho_{{\rm bfs}} = 1$.
Since the gyroradius of the electron beam is $\sim 1d_e$, the ring distributions are observed one gyro-diameter downstream of the location where the remagnetization condition is first met. This same behavior is seen in each of the other nine simulations studied here (not shown). This confirms that the remagnetization of the electron outflow jet is responsible for the generation of the ring distributions.
A further test of whether the ring distributions are caused by remagnetization of electron exhaust beams is that they should cease to be present when a sufficiently strong out-of-plane (guide) magnetic field is added.
To test this, we perform simulations with initial guide fields $B_g$ of $0.05$ and $0.25$ for $n_{up}=0.2$, with all other parameters the same as before. An analysis similar to that in Fig.~\ref{fig:ElecRemag} (not shown) reveals that for the $B_g=0.05$ simulation, $R_c/\rho_{{\rm bfs}}$ is very similar to the no-guide-field case: away from the X-line, $R_c/\rho_{{\rm bfs}}$ increases and then crosses 1, signaling remagnetization of the electron outflow jet. The plasma parameter profiles are similar to those seen in Fig.~\ref{fig:TePeakScaling} for the no-guide-field case (not shown). A scan of rEVDFs as described in previous sections shows ring distributions in the region of a $B_z$ shoulder (not shown). However, for the $B_g=0.25$ simulation, $R_c/\rho_{{\rm bfs}}$ (not shown) is never less than 1 in the downstream region, implying that electrons are never demagnetized, so no remagnetization occurs downstream. We also find no ring EVDFs (not shown) in our scan. This provides additional evidence that the rings are formed by remagnetization of the electron exhaust beams.
\section{Discussion and Applications}
\label{sec:discussions}
The results of this research are potentially useful for a variety of reasons. By relating the properties of the ring distribution to the upstream (lobe) plasma parameters in Sec.~\ref{sec:theo}, we can make quantitative predictions of the electron temperatures achieved downstream of reconnection exhausts, such as a dipolarization front or a solar flare reconnection outflow. We can also approximately account for the betatron acceleration that is expected to occur following the generation of ring distributions \cite{Fu12b,egedal_2016_PoP}. We characteristically see the $B_z$ shoulder at a magnetic field strength of about 0.5 as shown in Fig.~\ref{fig:TePeakScaling}, and it further compresses to a strength of 1. If betatron acceleration were to occur and assuming that the magnetic moment is conserved, we expect the perpendicular temperature to increase by a factor of $\sim 2$ from our predicted values.
To apply the theory to real systems, we also need to estimate the magnetic field $B_{up,e}$ at the upstream edge of the electron layer from the asymptotic magnetic field strength $B_{up}$. There is no widely accepted theory for this, so we discuss two possible options. In model 1, we use \begin{linenomath*}
\begin{equation}
B_{up,e} \approx 2\left(\frac{m_e}{m_i}\right)^{1/2} B_{up},
\end{equation}
\end{linenomath*}
which captures that the electron outflow velocity at the EDR is often observed to be approximately twice the ion Alfv\'en speed. In model 2, we use \cite{liu_FirstPrinciple_2022}
\begin{linenomath*}
\begin{equation}
B_{up,e} \approx \left(\frac{m_e}{m_i}\right)^{1/4} B_{up},
\end{equation}
\end{linenomath*}
which follows from conservation of magnetic flux at the electron and ion layers.
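The two estimates differ only in the mass-ratio scaling. A short numerical sketch, assuming a hydrogen plasma (the function name and nT units are our own illustrative choices):

```python
import math

ME_OVER_MI = 1.0 / 1836.15  # electron-to-proton mass ratio

def B_up_e(B_up, model=1, me_over_mi=ME_OVER_MI):
    """Magnetic field at the upstream edge of the electron layer,
    estimated from the asymptotic field B_up (same units in and out)."""
    if model == 1:
        # electron outflow ~ twice the ion Alfven speed
        return 2.0 * math.sqrt(me_over_mi) * B_up
    # model 2: conservation of magnetic flux at the electron/ion layers
    return me_over_mi**0.25 * B_up

# Magnetotail-like asymptotic field of 20 nT:
print(B_up_e(20.0, model=1))  # ~0.9 nT
print(B_up_e(20.0, model=2))  # ~3.1 nT
```

Model 2 gives a field roughly three times larger than model 1 at the proton mass ratio, which is the origin of the factor-of-a-few ranges quoted below.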
We first consider Earth's magnetotail, where there is typically only a weak guide field and typical plasma parameters may be taken as $B_{up} \approx 20$ nT, $n_{up} \approx 0.1$ cm$^{-3}$, and $T_{e,up} \approx 700$ eV, although there is significant uncertainty in all three values. Using the expressions in Sec.~\ref{sec:theo}, we find the predicted $v_{\perp0}$ to be $(2.8 - 9.2) \times 10^8$ cm/s. Here and in what follows, the first number in the provided range is obtained using model 1 and the second using model 2. We also get $v_{Th}=1.6 \times 10^9$ cm/s, so the perpendicular and effective temperatures associated with ring distributions are $T_{\perp} = 890-1270$ eV and $T_{{\rm eff}} = 850-1100$ eV, with an anisotropy of $A_{e,\perp} = 0.2-0.7$. For comparison, the DF studied in Fig.~4 of \citeA{runov_2010_Planet_Sci} had electron temperatures reaching about 1800 eV with perpendicular temperature $T_{e,\perp}\sim 2000$ eV.
Doubling our prediction to account for betatron acceleration, we find the predicted values are broadly consistent with the observations. %
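The magnetotail numbers above follow from the ring-radius expressions of Sec.~\ref{sec:theo}: major radius $v_{\perp0} = B_{up,e}/\sqrt{4\pi m_e n_{up}}$ (electron Alfv\'en speed at the electron-layer edge) and minor radius $v_{Th} = \sqrt{2 k_B T_{e,up}/m_e}$. A sketch of the Gaussian-cgs arithmetic (function name and unit conventions are our own):

```python
import math

# Gaussian-cgs constants
M_E = 9.109e-28          # electron mass [g]
EV = 1.602e-12           # erg per eV
ME_OVER_MI = 1.0 / 1836.15

def ring_radii(B_up_nT, n_up_cm3, T_e_up_eV, model=1):
    """Predicted ring major radius v_perp0 and minor radius v_Th [cm/s],
    using model 1 or 2 for the electron-layer upstream field."""
    scale = 2.0 * math.sqrt(ME_OVER_MI) if model == 1 else ME_OVER_MI**0.25
    B_up_e_G = scale * B_up_nT * 1e-5      # nT -> G
    v_perp0 = B_up_e_G / math.sqrt(4.0 * math.pi * M_E * n_up_cm3)
    v_th = math.sqrt(2.0 * T_e_up_eV * EV / M_E)
    return v_perp0, v_th

# Magnetotail parameters from the text: 20 nT, 0.1 cm^-3, 700 eV
for model in (1, 2):
    v0, vth = ring_radii(20.0, 0.1, 700.0, model)
    print(f"model {model}: v_perp0 ~ {v0:.2e} cm/s, v_Th ~ {vth:.2e} cm/s")
```

This recovers $v_{\perp0} \approx (2.8-9.2)\times 10^8$ cm/s and $v_{Th} \approx 1.6\times 10^9$ cm/s as quoted above.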
We next consider implications for reconnection in solar flares. The presence of a guide field may suppress the mechanism in the present study entirely. However, a range of guide fields is observed including examples with little to no guide field \cite{Qiu17}. Moreover, a leading model for the observed heating from MHD simulation studies also requires a low guide field strength \cite{Longcope10,Longcope16}. We assume typical values of a background coronal temperature of $T_{e,up} = 1$ MK, a density of $n_{up} = 10^9$ cm$^{-3}$, and an ambient magnetic field for a large flare of $B \sim 100$ G, the latter of which is consistent with values inferred from radio and other measurements for large flares \cite{Asai2006,krucker10a,Caspi14}. The associated upstream magnetic field at the electron layer is estimated to be $B_{up,e} = 4.6-15.6$ G using model 1 and 2. Then, the predicted major and minor radii of the ring distributions are $v_{\perp0}=1.4-4.6\times10^9$ cm/s and $v_{Th}=5.5\times10^8$ cm/s. This implies $r = 3-8$, $A_{e,\perp} = 7-70$, and $T_{e,\perp} = 8-70$ MK. Since the coronal plasma $\beta$ is small, $r$ is significantly larger than 1, much higher than its magnetotail counterpart, leading to a much more dramatic increase in temperature due to remagnetizing the electrons. Taking an asymptotic expansion for the large $r$ limit of Eq.~(\ref{eq:ringVDFTeperpMdef}) gives
\begin{linenomath*}
\begin{equation}
\mathcal{M} \approx \frac{3}{2} + r^2.
\end{equation}
\end{linenomath*}
Using Eqs.~(\ref{eq:vThupOnly}) and (\ref{eq:rupOnly}) for $v_{Th}$ and $r$, Eq.~(\ref{eq:ringVDFTe}) gives an expression for $T_{{\rm eff}}$ for large $r$ as
\begin{linenomath*}
\begin{equation}
T_{e,{\rm eff}} \approx T_{e,up} \left(\frac{4}{3}+\frac{B_{up,e}^2}{12 \pi n_{up} k_BT_{e,up}} \right).
\label{eq:ringVDFTelarge_r}
\end{equation}
\end{linenomath*}
Evaluating this expression in terms of the typical coronal parameters provided above, we get %
\begin{linenomath*}
\begin{equation}
T_{e,{\rm eff}} = 1.33 {\rm \ MK} \left(\frac{T_{e,up}}{1 {\rm \ MK}}\right) + (4.2 {\rm \ MK} - 45 {\rm \ MK}) \left(\frac{B_{up}}{100 {\rm \ G}}\right)^2 \left(\frac{n_{up}}{10^9 {\rm \ cm}^{-3}}\right)^{-1},
\label{eq:ringVDFTelarge_r_normalized}
\end{equation}
\end{linenomath*}
where the range in the second term corresponds to models 1 and 2 for $B_{up,e}$. Therefore, for the typical coronal parameters employed here, the predicted effective temperature is $T_{e,{\rm eff}} = 5-46$ MK using models 1 and 2. This relation predicts that the temperature scales approximately as $B_{up}^2$.
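A sketch evaluating Eq.~(\ref{eq:ringVDFTelarge_r}) for the typical flare parameters above (Gaussian-cgs arithmetic; the helper name is illustrative):

```python
import math

K_B = 1.381e-16            # Boltzmann constant [erg/K]
ME_OVER_MI = 1.0 / 1836.15

def T_eff_MK(B_up_G, n_up_cm3, T_up_MK, model=1):
    """Large-r effective temperature T_eff ~ (4/3) T_up
    + B_up,e^2 / (12 pi n_up k_B), returned in MK."""
    scale = 2.0 * math.sqrt(ME_OVER_MI) if model == 1 else ME_OVER_MI**0.25
    B_up_e = scale * B_up_G
    T_up_K = T_up_MK * 1e6
    T_eff_K = (4.0 / 3.0) * T_up_K \
        + B_up_e**2 / (12.0 * math.pi * n_up_cm3 * K_B)
    return T_eff_K / 1e6

# Typical large-flare parameters: 100 G, 1e9 cm^-3, 1 MK
print(T_eff_MK(100.0, 1e9, 1.0, model=1))   # ~5 MK
print(T_eff_MK(100.0, 1e9, 1.0, model=2))   # ~46 MK
```

The quadratic dependence on $B_{up}$ is manifest: doubling the field quadruples the second term.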
The temperatures predicted here, even when doubled to account for betatron acceleration, are in the same range as the 10s of MK observed during super-hot flares \cite{Caspi10,Caspi14,Warmuth16}. In our model, the heating mechanism is the reconnection process itself; significant heating sets in for magnetic fields of about 100~G and above; and the temperature increases with magnetic field strength. These features are broadly consistent with the relationships derived from a statistical study of X-ray observations of intense flares \cite{Caspi14}. We therefore suggest it may be possible that the super-hot temperatures in such flares are generated by electron beams getting remagnetized in reconnected fields, and potentially heated further by betatron acceleration as the reconnected magnetic field continues to compress. This compression likely leads to higher densities than the ambient coronal value, as has been previously suggested \cite{Caspi10b,Longcope11}.
The proposed mechanism would also help explain the observed association of super-hot temperatures with coronal non-thermal emission and energy content \cite{Caspi15,Warmuth16}. Significant future work is needed to further explore the viability of the present model for explaining observed temperatures in super-hot solar flares, including a parametric test of Eq.~(\ref{eq:ringVDFTelarge_r_normalized}), determining whether this mechanism is consistent with the high level of compression seen in observations, studying whether the small regions where the ring distributions are generated can couple to the large scales endemic to solar flares, and determining whether guide field strengths in solar flares would magnetize the ring distributions.%
The results of this study could also be applicable to Earth's dayside magnetopause, where ring distributions and whistler mode generation were recently observed both in simulations of asymmetric reconnection with a guide field and in Magnetospheric Multiscale (MMS) Mission observations \cite{Yoo19,Choi2022}. The theory presented in this study is exclusively for symmetric reconnection, but dayside reconnection is typically asymmetric. We expect the mechanism for ring distribution generation to be similar in asymmetric reconnection. We hypothesize that in asymmetric reconnection, the speed that sets the major radius $v_{\perp0}$ in Eq.~(\ref{eq:vperp0upOnly}) becomes the asymmetric version of the Alfv\'en speed that controls the outflow speed of asymmetric reconnection,
\begin{linenomath*}
\begin{equation}
v_{\perp 0,asym}=\frac{B_{up,asym,e}}{\sqrt{4 \pi m_{e} n_{up,asym}}},
\label{eq:vperp0upOnlyasym}
\end{equation}
\end{linenomath*}
and the thermal speed that sets the minor radius is replaced by
\begin{linenomath*}
\begin{equation}
v_{Th,asym} = \sqrt{\frac{2 k_{B} T_{e,up,asym}}{m_{e}}},
\label{eq:vThupOnlyasym}
\end{equation}
\end{linenomath*}
where $B_{up,asym,e} = B_{up,1,e}B_{up,2,e}/(B_{up,1,e}+B_{up,2,e})$ and $n_{up,asym} = (n_{up,1} B_{up,2,e} + n_{up,2} B_{up,1,e})/(B_{up,1,e}+B_{up,2,e})$ \cite{Cassak07d,Cassak08b} and $T_{e,up,asym} = (T_{e,up,1} n_{up,1} B_{up,2,e} + T_{e,up,2} n_{up,2} B_{up,1,e})/(n_{up,2} B_{up,1,e}+n_{up,1} B_{up,2,e})$ \cite{Shay14}. It is beyond the scope of the present study to test this hypothesis, but it would be interesting to do so for future work.
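The hybrid upstream quantities can be computed directly from the expressions above; as a sanity check, the symmetric limit should reduce to $B_{up,asym,e}=B_{up,e}/2$, $n_{up,asym}=n_{up}$, and $T_{e,up,asym}=T_{e,up}$. A minimal sketch:

```python
def asym_upstream(B1, B2, n1, n2, T1, T2):
    """Hybrid upstream quantities for asymmetric reconnection, using the
    weightings quoted in the text (B1, B2 are the electron-layer upstream
    fields on the two sides; n and T the corresponding densities and
    electron temperatures)."""
    B_asym = B1 * B2 / (B1 + B2)
    n_asym = (n1 * B2 + n2 * B1) / (B1 + B2)
    T_asym = (T1 * n1 * B2 + T2 * n2 * B1) / (n2 * B1 + n1 * B2)
    return B_asym, n_asym, T_asym

# Symmetric limit: fields 10, densities 1, temperatures 5 on both sides
print(asym_upstream(10.0, 10.0, 1.0, 1.0, 5.0, 5.0))  # (5.0, 1.0, 5.0)
```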
We now discuss implications for direct measurements of ring distributions in reconnection events, especially at dipolarization fronts that are accessible to {\it in situ} observations. The simulations suggest that the physical size of the region where ring distributions are present is relatively small. In the simulations, the range over which rings are seen is about $1 \ d_i$, corresponding to approximately $720$ km (based on a lobe density of 0.1 cm$^{-3}$) in Earth's magnetotail. Temporally, we expect that they appear transiently at the dipolarization front. Simulations of reconnection in large domains do not reveal temperature peaks in the downstream region in the steady state \cite{Shay14}. Moreover, since ring distributions are unstable to wave generation \cite{Gary85}, they are expected to decay rapidly, making their direct observation even more challenging. It is also difficult to observe ring distributions when the major radius is smaller than the minor radius, {\it i.e.,} when $r<1$. For typical parameters in Earth's magnetotail, $r$ is theoretically expected to be approximately 0.2 - 0.6, so \textit{in situ} observations of rings are challenging but potentially possible. Rings are more likely to be identifiable in large-$r$ (low electron plasma beta) systems.
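The $\sim 720$ km figure follows from the ion inertial length $d_i = c/\omega_{pi}$ for a hydrogen plasma at the quoted lobe density; a quick check in Gaussian-cgs units:

```python
import math

# Gaussian-cgs constants
M_I = 1.673e-24     # proton mass [g]
E_CHG = 4.803e-10   # elementary charge [esu]
C = 2.998e10        # speed of light [cm/s]

def d_i_km(n_cm3):
    """Ion inertial length c/omega_pi for a hydrogen plasma, in km."""
    omega_pi = math.sqrt(4.0 * math.pi * n_cm3 * E_CHG**2 / M_I)
    return C / omega_pi / 1e5  # cm -> km

# Lobe density of 0.1 cm^-3, as in the text:
print(d_i_km(0.1))   # ~720 km
```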
To illustrate the challenges of direct measurement of a ring distribution, we describe an unsuccessful attempt to identify one in Earth's magnetotail using the THEMIS spacecraft \cite{Angelopoulos2009}. On February 27, 2009, four of the five THEMIS spacecraft traversed a DF between 0750 and 0800 UT \cite{runov_2010_Planet_Sci}, and burst mode data were available during this time. Their Figs.~4 and 5 reveal classic signatures of a DF, with a significant decrease in density and an increase in $B_z$ (in GSM coordinates). The P1 (THEMIS B) spacecraft passed through the DF at 07:51:26~UT, shown on the left side of their Fig.~4, with the vertical dashed line denoting the DF. Immediately upstream of the DF (around 07:51:30~UT), the electron temperature in both directions perpendicular to the magnetic field exceeds the parallel electron temperature, making this location a candidate for having an electron ring distribution.
To determine whether there is an electron ring distribution at this time, we investigate the EVDFs in the time interval when $T_{e,\perp} > T_{e,||}$. The distributions are averaged over two spacecraft spin periods (6~s), between 07:51:30 and 07:51:36~UT, to get better statistics than a single spin. The low-energy cutoff due to spacecraft charging is $\sim 60$~eV, which is smaller than the predicted major radius for this event, so it should in principle be possible to resolve a ring distribution if it is present. Two-dimensional cuts of the EVDF are produced from recombined ElectroStatic Analyzer (ESA) and Solid State Telescope (SST) data in this time range (not shown). Clear signatures of counterstreaming electron beams along the magnetic field are seen in both $\perp-\parallel$ planes. When the raw data are smoothed, a weak signature of what appears to be a ring population is seen. %
However, a closer examination of the uncombined ESA-only burst mode data with no smoothing reveals that the weak ring population signal is not present in the $\perp1-\perp2$ cut
where it should be, judging from the $\perp-\parallel$ plane cuts.
There are a few reasons for the misidentification of a ring distribution structure. In the $\perp1-\perp2$ plane, there is a substantial population of low-energy particles which are of ionospheric origin. When the distribution function is smoothed, this population gives the appearance of a ring. However, the ionospheric population is not what would cause the appearance of a ring distribution by the mechanism studied here and must be excluded. The reason that $T_{e,\perp} > T_{e,\|}$ for this distribution is that the more diffuse magnetotail population is
rather elongated in the $\perp$ directions. %
To determine if this higher-energy magnetotail population is part of a ring distribution, we look at the $\|-\perp$ planes. The strong field-aligned counterpropagating beams make it difficult to tell whether removing that population would leave a ring in the high-energy population, but the population in question does not clearly disappear at more field-aligned angles. Consequently, we are unable to definitively claim there is an electron ring distribution in this particular THEMIS event.
We suggest that observing a ring distribution in situ likely requires higher temporal resolution than available to THEMIS, but it may be accessible to MMS \cite{Schmid16,Liu18,Zhao19,Grigorenko20,Ma20} which has a much higher temporal resolution.
\section{Conclusions}
\label{sec:conclusions}
The appearance of ring distributions of electrons has been previously identified in particle-in-cell simulations near dipolarization fronts \cite{Shuster2014,Bessho2014} and for dayside reconnection \cite{Choi2022}. It was suggested that they are caused by remagnetization of the electrons in the reconnected magnetic field \cite{Shuster2014,Bessho2014}. In this study, we carry out a theoretical and numerical analysis that verifies and quantifies this prediction. Our analysis gives the major and minor radii of the ring distribution in terms of upstream conditions that dictate the properties of the reconnection, \textit{i.e.,} the plasma density, electron temperature, and reconnecting magnetic field strength. In particular, the major radius is given by the electron Alfv\'en speed based on the magnetic field and density upstream of the electron current layer, while the minor radius is governed by the electron thermal speed in the upstream region.
We employ 2.5D PIC simulations to test our predictions using five simulations with varying upstream temperature (with the upstream density held fixed) and five simulations with varying upstream density (with the upstream temperature held fixed). We find ring distributions in all 10 simulations.
We extract the major and minor radii of the ring distributions for all ten simulations
by fitting Gaussians to 1D cuts of the reduced distributions. We find that the major radius $v_{\perp0}$ is independent of upstream temperature but decreases for increasing upstream density, while the minor radius $v_{Th}$ increases for increasing upstream temperature and is independent of upstream density. The results are qualitatively and quantitatively consistent with the theoretical predictions, with agreement within one standard deviation of the theoretical predictions for all simulations.
Next, we use the major and minor radii of the ring distributions to compare the electron temperature associated with ring distributions to analytical predictions. We find that the predicted and measured perpendicular electron temperature agrees very well, within 12\%.
The parallel electron temperature is consistently different by about a factor of 2 between theory and simulation because the simulated plasma also contains counterstreaming beams in the parallel direction that are omitted from the analytical model. Since the perpendicular electron temperature contributes to the total electron temperature more than the parallel, the simulated total temperature is within 20\% of the theoretical predictions.
By investigating the plasma parameter profiles in the region where the ring distributions are observed, we find the ring distributions, and their associated perpendicular temperature anisotropy, are spatially coincident with a plateau, or shoulder, in the profile of the reconnected magnetic field $B_z$. The shoulder in $B_z$ is present where the ring distributions are because the remagnetized electrons are diamagnetic, slightly lowering $B_z$ within the electron orbit and slightly increasing $B_z$ outside it, thereby setting up a plateau in the $B_z$ profile. A simple calculation using conservation of energy reproduces the approximate perturbed magnetic field due to this effect.
We show that the ring distributions appear approximately two electron gyroradii (one diameter of the gyromotion) downstream from the location where the electrons are remagnetized by the strong reconnected magnetic field, {\it i.e.,} the location where the radius of curvature of the magnetic field exceeds the gyroradius of the electrons based on the bulk flow speed. This result is consistent with the prediction that the ring distributions are associated with reconnection jets that are remagnetized by the reconnected field in a dipolarization front \cite{Shuster2014,Bessho2014}. We further confirm this by showing that the ring distributions become weaker and then are completely suppressed as an increasingly strong guide field is added.
Finally, we discuss applications of the present results in magnetospheric and solar settings. For dipolarization fronts in Earth's magnetotail, the electron temperatures predicted by the scaling analysis presented here are in the few keV range (when subsequent heating via betatron acceleration is accounted for), which is comparable to the observed electron temperatures. When applied to solar flares, we predict electron temperatures up to 10s of MK for very energetic flares, and an increase in temperature with the square of the reconnecting magnetic field. Such temperatures are consistent with those observed in super-hot flares, which are highly likely to come from the coronal reconnection process but for which there is not yet a widely accepted mechanism for their production. We further motivate a possible extension of the present work to antiparallel asymmetric systems, which may be important for applications to the dayside magnetopause.
The direct {\it in situ} measurement of ring distributions in the magnetotail is expected to be difficult, but potentially possible. Various characteristic pitch-angle distributions have been observed in dipolarization fronts \cite{liu_explaining_2017,liu_rapid_2017} and studied using simulations \cite{huang_formation_2021}.
It is possible that pancakes and/or the perpendicular features of rolling pins are ring distributions,
and testing this would be interesting future work. We note that a pitch-angle distribution plot of a ring distribution would have a pancake-type structure, but it is not possible using a pitch-angle distribution plot to confirm the lack of low energy particles that is characteristic of a ring distribution.
Rather, a direct investigation of the velocity distribution function is required.
Based on a case study using THEMIS observations, we find that it is difficult to identify ring distributions. Higher temporal resolution, such as that afforded by MMS, would facilitate their identification.
It is known that the significant anisotropy arising in ring distributions makes them unstable to the generation of waves, especially whistlers \cite{Gary85,Umeda2007, fujimoto_2008_whistler, Winske&Daughton2012_whistler}. More broadly, \citeA{Grigorenko20} showed that electrons at 1--5 keV with a perpendicular temperature anisotropy generate whistler waves near DFs. By knowing the major and minor radii of the ring distributions in terms of upstream parameters, the temperature anisotropy can be calculated, which allows for a quantitative estimate of the linear growth rate of these modes. Such information is an important aspect of understanding particle acceleration and heating as a result of wave-particle interactions \cite{roytershteyn&delzanno2018}.
While whistler waves associated with temperature anisotropies are regularly measured {\it in situ} in Earth's magnetosphere, much less has been studied for the possibility of whistler wave generation associated with solar flares. There has been theoretical work on understanding whistler wave generation in solar coronal loops \cite{vocks_whistler_2006}. In their work, the whistlers are generated from loss cone distributions rather than the mechanism discussed here. Since the characteristic length scale for the ring distributions is $d_e$, we expect the frequency of whistler waves associated with ring distributions to be comparable to the electron cyclotron frequency $\Omega_{ce} = e B / m_e c$. For the characteristic solar flare plasma parameters used here, we find that the whistler frequencies would be at least on the order of 0.3 GHz. Interestingly, an observational study has seen a long-lived source at 0.327 GHz \cite{aurass_gle_2006}. Whether the mechanism discussed here can account for observed frequencies and whether this can be used as remote evidence in favor of the model presented here would be an interesting topic for future work.
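The $\sim 0.3$ GHz figure follows from the electron cyclotron frequency $f_{ce} = eB/(2\pi m_e c)$ evaluated at the $\sim 100$ G flare-scale field assumed above; a quick check in Gaussian-cgs units:

```python
import math

M_E = 9.109e-28     # electron mass [g]
E_CHG = 4.803e-10   # elementary charge [esu]
C = 2.998e10        # speed of light [cm/s]

def f_ce_GHz(B_G):
    """Electron cyclotron frequency f_ce = e*B/(2*pi*m_e*c), in GHz."""
    return E_CHG * B_G / (2.0 * math.pi * M_E * C) / 1e9

# Flare-scale field of ~100 G:
print(f_ce_GHz(100.0))   # ~0.28 GHz
```

The frequency scales linearly with $B$, so stronger flare fields push the expected whistler emission to correspondingly higher frequencies.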
There are many avenues for future work. The present simulations are two-dimensional; we do not expect the fundamental aspects of the results to change in three dimensions, especially given that there is no guide field in the system studied here, but it would be interesting to confirm that 3D effects known to occur in magnetotail-type settings \cite{pritchett:2013,Sitnov14} do not alter the conclusions. The initial conditions of the present simulations did not include an equilibrium normal magnetic field, which is important for magnetotail reconnection \cite{Lembege82}; we do not anticipate this normal magnetic field would appreciably change the results herein, but it should be verified. The simulation domain size we employ is too small to allow ions to fully couple back to the plasma, so future work should confirm that the results are valid for larger system sizes. For dayside magnetopause applications, the proposed generalization incorporating asymmetries needs to be tested. For solar corona applications, electron-ion collisions may need to be taken into account, and observations should be used to test the functional dependence of the temperature on the magnetic field strength during solar flares predicted here, as well as whether a guide field suppresses such high temperatures. The physical size of the region where electrons are remagnetized is expected from the simulations to be relatively small, so questions about how ring distributions thermalize and whether they control the temperature over a greater volume, as would be necessary to explain the temperatures seen in super-hot flares, would be excellent topics for future work.
Future work to quantify the rate of production of anisotropy-driven wave modes such as whistlers and their interaction with the downstream plasma would be important for applications.
\acknowledgments
M.H.B. acknowledges insightful discussions with Benjamin Woods. P.A.C. gratefully acknowledges support from NSF Grant PHY-1804428, NASA Grant 80NSSC19M0146, and DOE Grant DE-SC0020294. M.A.S. acknowledges NASA LWS Grant 80NSSC20K1813 and NSF AGS-2024198. V.R. acknowledges DOE grant DE-SC001931. A.C. was supported by NASA grants NNX17AI71G, 80NSSC19K0287 and 80NSSC22M0111, and by NSF grant 1841039. H.L. acknowledges the partial support of a NASA Parker Solar Probe contract SV4-84017, an NSF EPSCoR RII-Track-1 Cooperative Agreement OIA-1655280, a NASA IMAP subaward under NASA contract 80GSFC19C0027, and NASA awards 80NSSC20K1783 and 80NSSC21K0003. This research uses resources of the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility supported by the Office of Science of the US Department of Energy under Contract no. DE-AC02-05CH11231. Simulation data used in this manuscript are available on Zenodo (https://doi.org/10.5281/zenodo.6383101).
\bibliography{DiFrpaper}
|
Title:
VLT/UVES Observation of the Outflow in Quasar SDSS J1439-0106 |
Abstract: We analyze the VLT/UVES spectrum of the quasar SDSS J143907.5-010616.7,
retrieved from the UVES Spectral Quasar Absorption Database. We identify two
outflow systems in the spectrum: a mini broad absorption line (mini-BAL) system
and a narrow absorption line (NAL) system. We measure the ionic column
densities of the mini-BAL ($v=-1550$ km s$^{-1}$) outflow, which has excited
state absorption troughs of Fe II. We determine that the electron number
density $\log{n_e}=3.4^{+0.1}_{-0.1}$ based on the ratios between the excited
and ground state abundances of Fe II, and find the kinetic luminosity of the
outflow to be $\lesssim 0.1 \%$ of the quasar's Eddington luminosity, making it
insufficient to contribute to AGN feedback.
| https://export.arxiv.org/pdf/2208.07405 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
editorials, notices -- miscellaneous
\end{keywords}
\begingroup
\let\clearpage\relax
\tableofcontents
\endgroup
\newpage
\section{Introduction}
The journal \textit{Monthly Notices of the Royal Astronomical Society} (MNRAS) encourages authors to prepare their papers using \LaTeX.
The style file \verb'mnras.cls' can be used to approximate the final appearance of the journal, and provides numerous features to simplify the preparation of papers.
This document, \verb'mnras_guide.tex', provides guidance on using that style file and the features it enables.
This is not a general guide on how to use \LaTeX, of which many excellent examples already exist.
We particularly recommend \textit{Wikibooks \LaTeX}\footnote{\url{https://en.wikibooks.org/wiki/LaTeX}}, a collaborative online textbook which is of use to both beginners and experts.
Alternatively there are several other online resources, and most academic libraries also hold suitable beginner's guides.
For guidance on the contents of papers, journal style, and how to submit a paper, see the MNRAS Instructions to Authors\footnote{\label{foot:itas}\url{http://www.oxfordjournals.org/our_journals/mnras/for_authors/}}.
Only technical issues with the \LaTeX\ class are considered here.
\section{Obtaining and installing the MNRAS package}
Some \LaTeX\ distributions come with the MNRAS package by default.
If yours does not, you can either install it using your distribution's package manager, or download it from the Comprehensive \TeX\ Archive Network\footnote{\url{http://www.ctan.org/tex-archive/macros/latex/contrib/mnras}} (CTAN).
The files can either be installed permanently by placing them in the appropriate directory (consult the documentation for your \LaTeX\ distribution), or used temporarily by placing them in the working directory for your paper.
To use the MNRAS package, simply specify \verb'mnras' as the document class at the start of a \verb'.tex' file:
\begin{verbatim}
\documentclass{mnras}
\end{verbatim}
Then compile \LaTeX\ (and if necessary \bibtex) in the usual way.
\section{Preparing and submitting a paper}
We recommend that you start with a copy of the \texttt{mnras\_template.tex} file.
Rename the file, update the information on the title page, and then work on the text of your paper.
Guidelines for content, style etc. are given in the instructions to authors on the journal's website$^{\ref{foot:itas}}$.
Note that this document does not follow all the aspects of MNRAS journal style (e.g. it has a table of contents).
If a paper is accepted, it is professionally typeset and copyedited by the publishers.
It is therefore likely that minor changes to presentation will occur.
For this reason, we ask authors to ignore minor details such as slightly long lines, extra blank spaces, or misplaced figures, because these details will be dealt with during the production process.
Papers must be submitted electronically via the online submission system; paper submissions are not permitted.
For full guidance on how to submit a paper, see the instructions to authors.
\section{Class options}
\label{sec:options}
There are several options which can be added to the document class line like this:
\begin{verbatim}
\documentclass[option1,option2]{mnras}
\end{verbatim}
The available options are:
\begin{itemize}
\item \verb'letters' -- used for papers in the journal's Letters section.
\item \verb'onecolumn' -- single column, instead of the default two columns. This should be used {\it only} if necessary for the display of numerous very long equations.
\item \verb'doublespacing' -- text has double line spacing. Please don't submit papers in this format.
\item \verb'referee' -- \textit{(deprecated)} single column, double spaced, larger text, bigger margins. Please don't submit papers in this format.
\item \verb'galley' -- \textit{(deprecated)} no running headers, no attempt to align the bottom of columns.
\item \verb'landscape' -- \textit{(deprecated)} sets the whole document on landscape paper.
\item \verb"usenatbib" -- \textit{(all papers should use this)} this uses Patrick Daly's \verb"natbib.sty" package for citations.
\item \verb"usegraphicx" -- \textit{(most papers will need this)} includes the \verb'graphicx' package, for inclusion of figures and images.
\item \verb'useAMS' -- adds support for upright Greek characters \verb'\upi', \verb'\umu' and \verb'\upartial' ($\upi$, $\umu$ and $\upartial$). Only these three are included, if you require other symbols you will need to include the \verb'amsmath' or \verb'amsymb' packages (see section~\ref{sec:packages}).
\item \verb"usedcolumn" -- includes the package \verb"dcolumn", which includes two new types of column alignment for use in tables.
\end{itemize}
Some of these options are deprecated and retained for backwards compatibility only.
Others are used in almost all papers, but again are retained as options to ensure that papers written decades ago will continue to compile without problems.
If you want to include any other packages, see section~\ref{sec:packages}.
\section{Title page}
If you are using \texttt{mnras\_template.tex} the necessary code for generating the title page, headers and footers is already present.
Simply edit the title, author list, institutions, abstract and keywords as described below.
\subsection{Title}
There are two forms of the title: the full version used on the first page, and a short version which is used in the header of other odd-numbered pages (the `running head').
Enter them with \verb'\title[]{}' like this:
\begin{verbatim}
\title[Running head]{Full title of the paper}
\end{verbatim}
The full title can be multiple lines (use \verb'\\' to start a new line) and may be as long as necessary, although we encourage authors to use concise titles. The running head must be $\le~45$ characters on a single line.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Authors and institutions}
Like the title, there are two forms of author list: the full version which appears on the title page, and a short form which appears in the header of the even-numbered pages. Enter them using the \verb'\author[]{}' command.
If the author list is more than one line long, start a new line using \verb'\newauthor'. Use \verb'\\' to start the institution list. Affiliations for each author should be indicated with a superscript number, and correspond to the list of institutions below the author list.
For example, if I were to write a paper with two coauthors at another institution, one of whom also works at a third location:
\begin{verbatim}
\author[K. T. Smith et al.]{
Keith T. Smith,$^{1}$
A. N. Other,$^{2}$
and Third Author$^{2,3}$
\\
$^{1}$Affiliation 1\\
$^{2}$Affiliation 2\\
$^{3}$Affiliation 3}
\end{verbatim}
Affiliations should be in the format `Department, Institution, Street Address, City and Postal Code, Country'.
Email addresses can be inserted with the \verb'\thanks{}' command which adds a title page footnote.
If you want to list more than one email, put them all in the same \verb'\thanks' and use \verb'\footnotemark[]' to refer to the same footnote multiple times.
Present addresses (if different to those where the work was performed) can also be added with a \verb'\thanks' command.
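A minimal sketch of these conventions, with placeholder names and addresses, showing one \verb'\thanks' note carrying two e-mail addresses and a second author pointing at the same footnote via \verb'\footnotemark':

```latex
\author[K. T. Smith et al.]{
Keith T. Smith$^{1}$\thanks{E-mail: [email protected] (KTS);
[email protected] (ANO)}
and A. N. Other$^{1}$\footnotemark[1]
\\
$^{1}$Affiliation 1}
```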
\subsection{Abstract and keywords}
The abstract is entered in an \verb'abstract' environment:
\begin{verbatim}
\begin{abstract}
The abstract of the paper goes here.
\end{abstract}
\end{verbatim}
\noindent Note that there is a word limit on the length of abstracts.
For the current word limit, see the journal instructions to authors$^{\ref{foot:itas}}$.
Immediately following the abstract, a set of keywords is entered in a \verb'keywords' environment:
\begin{verbatim}
\begin{keywords}
keyword 1 -- keyword 2 -- keyword 3
\end{keywords}
\end{verbatim}
\noindent There is a list of permitted keywords, which is agreed between all the major astronomy journals and revised every few years.
Do \emph{not} make up new keywords!
For the current list of allowed keywords, see the journal's instructions to authors$^{\ref{foot:itas}}$.
\section{Sections and lists}
Sections and lists are generally the same as in the standard \LaTeX\ classes.
\subsection{Sections}
\label{sec:sections}
Sections are entered in the usual way, using \verb'\section{}' and its variants. It is possible to nest up to four section levels:
\begin{verbatim}
\section{Main section}
\subsection{Subsection}
\subsubsection{Subsubsection}
\paragraph{Lowest level section}
\end{verbatim}
\noindent The other \LaTeX\ sectioning commands \verb'\part', \verb'\chapter' and \verb'\subparagraph{}' are deprecated and should not be used.
Some sections are not numbered as part of journal style (e.g. the Acknowledgements).
To insert an unnumbered section use the `starred' version of the command: \verb'\section*{}'.
See appendix~\ref{sec:advanced} for more complicated examples.
\subsection{Lists}
Two forms of lists can be used in MNRAS -- numbered and unnumbered.
For a numbered list, use the \verb'enumerate' environment:
\begin{verbatim}
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
\end{verbatim}
\noindent which produces
\begin{enumerate}
\item First item
\item Second item
\item etc.
\end{enumerate}
Note that the list uses lowercase Roman numerals, rather than the \LaTeX\ default Arabic numerals.
For an unnumbered list, use the \verb'description' environment without the optional argument:
\begin{verbatim}
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
\end{verbatim}
\noindent which produces
\begin{description}
\item First item
\item Second item
\item etc.
\end{description}
Bulleted lists using the \verb'itemize' environment should not be used in MNRAS; it is retained for backwards compatibility only.
\section{Mathematics and symbols}
The MNRAS class mostly adopts standard \LaTeX\ handling of mathematics, which is briefly summarised here.
See also section~\ref{sec:packages} for packages that support more advanced mathematics.
Mathematics can be inserted into the running text using the syntax \verb'$1+1=2$', which produces $1+1=2$.
Use this only for short expressions or when referring to mathematical quantities; equations should be entered as described below.
\subsection{Equations}
Equations should be entered using the \verb'equation' environment, which automatically numbers them:
\begin{verbatim}
\begin{equation}
a^2=b^2+c^2
\end{equation}
\end{verbatim}
\noindent which produces
\begin{equation}
a^2=b^2+c^2
\end{equation}
By default, the equations are numbered sequentially throughout the whole paper. If a paper has a large number of equations, it may be better to number them by section (2.1, 2.2 etc.). To do this, add the command \verb'\numberwithin{equation}{section}' to the preamble.
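For example, a preamble sketch for per-section equation numbering (\verb'\numberwithin' is provided by the \verb'amsmath' package, so that package must be loaded):

```latex
\documentclass{mnras}
\usepackage{amsmath}              % provides \numberwithin
\numberwithin{equation}{section}  % equations numbered 2.1, 2.2, ...
```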
It is also possible to produce un-numbered equations by using the \LaTeX\ built-in \verb'\['\textellipsis\verb'\]' and \verb'$$'\textellipsis\verb'$$' commands; however MNRAS requires that all equations are numbered, so these commands should be avoided.
\subsection{Special symbols}
\begin{table}
\caption{Additional commands for special symbols commonly used in astronomy. These can be used anywhere.}
\label{tab:anysymbols}
\begin{tabular*}{\columnwidth}{@{}l@{\hspace*{50pt}}l@{\hspace*{50pt}}l@{}}
\hline
Command & Output & Meaning\\
\hline
\verb'\sun' & \sun & Sun, solar\\[2pt] %
\verb'\earth' & \earth & Earth, terrestrial\\[2pt]
\verb'\micron' & \micron & microns\\[2pt]
\verb'\degr' & \degr & degrees\\[2pt]
\verb'\arcmin' & \arcmin & arcminutes\\[2pt]
\verb'\arcsec' & \arcsec & arcseconds\\[2pt]
\verb'\fdg' & \fdg & fraction of a degree\\[2pt]
\verb'\farcm' & \farcm & fraction of an arcminute\\[2pt]
\verb'\farcs' & \farcs & fraction of an arcsecond\\[2pt]
\verb'\fd' & \fd & fraction of a day\\[2pt]
\verb'\fh' & \fh & fraction of an hour\\[2pt]
\verb'\fm' & \fm & fraction of a minute\\[2pt]
\verb'\fs' & \fs & fraction of a second\\[2pt]
\verb'\fp' & \fp & fraction of a period\\[2pt]
\verb'\diameter' & \diameter & diameter\\[2pt]
\verb'\sq' & \sq & square, Q.E.D.\\[2pt]
\hline
\end{tabular*}
\end{table}
\begin{table}
\caption{Additional commands for mathematical symbols. These can only be used in maths mode.}
\label{tab:mathssymbols}
\begin{tabular*}{\columnwidth}{l@{\hspace*{40pt}}l@{\hspace*{40pt}}l}
\hline
Command & Output & Meaning\\
\hline
\verb'\upi' & $\upi$ & upright pi\\[2pt] %
\verb'\umu' & $\umu$ & upright mu\\[2pt]
\verb'\upartial' & $\upartial$ & upright partial derivative\\[2pt]
\verb'\lid' & $\lid$ & less than or equal to\\[2pt]
\verb'\gid' & $\gid$ & greater than or equal to\\[2pt]
\verb'\la' & $\la$ & less than of order\\[2pt]
\verb'\ga' & $\ga$ & greater than of order\\[2pt]
\verb'\loa' & $\loa$ & less than approximately\\[2pt]
\verb'\goa' & $\goa$ & greater than approximately\\[2pt]
\verb'\cor' & $\cor$ & corresponds to\\[2pt]
\verb'\sol' & $\sol$ & similar to or less than\\[2pt]
\verb'\sog' & $\sog$ & similar to or greater than\\[2pt]
\verb'\lse' & $\lse$ & less than or homotopic to \\[2pt]
\verb'\gse' & $\gse$ & greater than or homotopic to\\[2pt]
\verb'\getsto' & $\getsto$ & from over to\\[2pt]
\verb'\grole' & $\grole$ & greater over less\\[2pt]
\verb'\leogr' & $\leogr$ & less over greater\\
\hline
\end{tabular*}
\end{table}
Some additional symbols of common use in astronomy have been added in the MNRAS class. These are shown in tables~\ref{tab:anysymbols}--\ref{tab:mathssymbols}. The command names are -- as far as possible -- the same as those used in other major astronomy journals.
Many other mathematical symbols are also available, either built into \LaTeX\ or via additional packages. If you want to insert a specific symbol but don't know the \LaTeX\ command, we recommend using the Detexify website\footnote{\url{http://detexify.kirelabs.org}}.
Sometimes font or coding limitations mean a symbol may not get smaller when used in sub- or superscripts, and will therefore be displayed at the wrong size. There is no need to worry about this as it will be corrected by the typesetter during production.
To produce bold symbols in mathematics, use \verb'\bmath' for simple variables, and the \verb'bm' package for more complex symbols (see section~\ref{sec:packages}). Vectors are set in bold italic, using \verb'\mathbfit{}'.
For matrices, use \verb'\mathbfss{}' to produce a bold sans-serif font e.g. \mathbfss{H}; this works even outside maths mode, but not all symbols are available (e.g. Greek). For $\nabla$ (del, used in gradients, divergence etc.) use \verb'$\nabla$'.
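Putting these conventions together, a short illustrative snippet (the variable names are arbitrary):

```latex
% Vectors in bold italic, matrices in bold sans-serif,
% simple bold variables with \bmath:
\begin{equation}
    \mathbfit{y} = \mathbfss{H}\,\mathbfit{x} + \bmath{\epsilon},
    \qquad \nabla\cdot\mathbfit{B} = 0
\end{equation}
```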
\subsection{Ions}
A new \verb'\ion{}{}' command has been added to the class file, for the correct typesetting of ionisation states.
For example, to typeset singly ionised calcium use \verb'\ion{Ca}{ii}', which produces \ion{Ca}{ii}.
\section{Figures and tables}
\label{sec:fig_table}
Figures and tables (collectively called `floats') are mostly the same as built into \LaTeX.
\subsection{Basic examples}
Figures are inserted in the usual way using a \verb'figure' environment and \verb'\includegraphics'. The example Figure~\ref{fig:example} was generated using the code:
\begin{verbatim}
\begin{figure}
    \includegraphics[width=\columnwidth]{example}
    \caption{An example figure.}
    \label{fig:example}
\end{figure}
\end{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
The example Table~\ref{tab:example} was generated using the code:
\begin{verbatim}
\begin{table}
\caption{An example table.}
\label{tab:example}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
Sun & 1.00 & 1.00\\
$\alpha$~Cen~A & 1.10 & 1.52\\
$\epsilon$~Eri & 0.82 & 0.34\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
\subsection{Captions and placement}
Captions go \emph{above} tables but \emph{below} figures, as in the examples above.
The \LaTeX\ float placement commands \verb'[htbp]' are intentionally disabled.
Layout of figures and tables will be adjusted by the publisher during the production process, so authors should not concern themselves with placement to avoid disappointment and wasted effort.
Simply place the \LaTeX\ code close to where the figure or table is first mentioned in the text and leave exact placement to the publishers.
By default a figure or table will occupy one column of the page.
To produce a wider version which covers both columns, use the \verb'figure*' or \verb'table*' environment.
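For instance, a double-column figure can be sketched as follows (the graphics filename and label are placeholders):

```latex
\begin{figure*}
    \includegraphics[width=\textwidth]{wide-example}
    \caption{An example figure spanning both columns.}
    \label{fig:wide-example}
\end{figure*}
```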
If a figure or table is too long to fit on a single page it can be split into several parts.
Create an additional figure or table which uses \verb'\contcaption{}' instead of \verb'\caption{}'.
This will automatically correct the numbering and add `\emph{continued}' at the start of the caption.
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:continued} was generated using the code:
\begin{verbatim}
\begin{table}
\contcaption{A table continued from the previous one.}
\label{tab:continued}
\begin{tabular}{lcc}
\hline
Star & Mass & Luminosity\\
& $M_{\sun}$ & $L_{\sun}$\\
\hline
$\tau$~Cet & 0.78 & 0.52\\
$\delta$~Pav & 0.99 & 1.22\\
$\sigma$~Dra & 0.87 & 0.43\\
\hline
\end{tabular}
\end{table}
\end{verbatim}
To produce a landscape figure or table, use the \verb'pdflscape' package and the \verb'landscape' environment.
The landscape Table~\ref{tab:landscape} was produced using the code:
\begin{verbatim}
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & ...\\
Unit & Unit & ...\\
\hline
Data & Data & ...\\
Data & Data & ...\\
...\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\end{verbatim}
Unfortunately this method will force a page break before the table appears.
More complicated solutions are possible, but authors shouldn't worry about this.
\begin{landscape}
\begin{table}
\caption{An example landscape table.}
\label{tab:landscape}
\begin{tabular}{cccccccccc}
\hline
Header & Header & Header & Header & Header & Header & Header & Header & Header & Header\\
Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit & Unit \\
\hline
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
Data & Data & Data & Data & Data & Data & Data & Data & Data & Data\\
\hline
\end{tabular}
\end{table}
\end{landscape}
\section{References and citations}
\subsection{Cross-referencing}
The usual \LaTeX\ commands \verb'\label{}' and \verb'\ref{}' can be used for cross-referencing within the same paper.
We recommend that you use these whenever relevant, rather than writing out the section or figure numbers explicitly.
This ensures that cross-references are updated whenever the numbering changes (e.g. during revision) and provides clickable links (if available in your compiler).
It is best to give each section, figure and table a logical label.
For example, Table~\ref{tab:mathssymbols} has the label \verb'tab:mathssymbols', whilst section~\ref{sec:packages} has the label \verb'sec:packages'.
Add the label \emph{after} the section or caption command, as in the examples in sections~\ref{sec:sections} and \ref{sec:fig_table}.
Enter the cross-reference with a non-breaking space between the type of object and the number, like this: \verb'see Figure~\ref{fig:example}'.
The \verb'\autoref{}' command can be used to automatically fill out the type of object, saving on typing.
It also causes the link to cover the whole phrase rather than just the number, but for that reason is only suitable for single cross-references rather than ranges.
For example, \verb'\autoref{tab:journal_abbr}' produces \autoref{tab:journal_abbr}.
\subsection{Citations}
\label{sec:cite}
MNRAS uses the Harvard -- author (year) -- citation style, e.g. \citet{author2013}.
This is implemented in \LaTeX\ via the \verb'natbib' package, which in turn is included via the \verb'usenatbib' package option (see section~\ref{sec:options}), which should be used in all papers.
Each entry in the reference list has a `key' (see section~\ref{sec:ref_list}) which is used to generate citations.
There are two basic \verb'natbib' commands:
\begin{description}
\item \verb'\citet{key}' produces an in-text citation: \citet{author2013}
\item \verb'\citep{key}' produces a bracketed (parenthetical) citation: \citep{author2013}
\end{description}
Citations will include clickable links to the relevant entry in the reference list, if supported by your \LaTeX\ compiler.
\defcitealias{smith2014}{Paper~I}
\begin{table*}
\caption{Common citation commands, provided by the \texttt{natbib} package.}
\label{tab:natbib}
\begin{tabular}{lll}
\hline
Command & Output & Note\\
\hline
\verb'\citet{key}' & \citet{smith2014} & \\
\verb'\citep{key}' & \citep{smith2014} & \\
\verb'\citep{key,key2}' & \citep{smith2014,jones2015} & Multiple papers\\
\verb'\citet[table 4]{key}' & \citet[table 4]{smith2014} & \\
\verb'\citep[see][figure 7]{key}' & \citep[see][figure 7]{smith2014} & \\
\verb'\citealt{key}' & \citealt{smith2014} & For use with manual brackets\\
\verb'\citeauthor{key}' & \citeauthor{smith2014} & If already cited in close proximity\\
\verb'\defcitealias{key}{Paper~I}' & & Define an alias (doesn't work in floats)\\
\verb'\citetalias{key}' & \citetalias{smith2014} & \\
\verb'\citepalias{key}' & \citepalias{smith2014} & \\
\hline
\end{tabular}
\end{table*}
There are a number of other \verb'natbib' commands which can be used for more complicated citations.
The most commonly used ones are listed in Table~\ref{tab:natbib}.
For full guidance on their use, consult the \verb'natbib' documentation\footnote{\url{http://www.ctan.org/pkg/natbib}}.
If a reference has several authors, \verb'natbib' will automatically use `et al.' if there are more than two authors. However, if a paper has exactly three authors, MNRAS style is to list all three on the first citation and use `et al.' thereafter. If you are using \bibtex\ (see section~\ref{sec:ref_list}) then this is handled automatically. If not, the \verb'\citet*{}' and \verb'\citep*{}' commands can be used at the first citation to include all of the authors.
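As a sketch, using a hypothetical key \verb'brown2016' for a paper with exactly three authors:

```latex
\citet*{brown2016}  % first citation: all three authors listed
\citet{brown2016}   % thereafter: First-author et al.
```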
\subsection{The list of references}
\label{sec:ref_list}
It is possible to enter references manually using the usual \LaTeX\ commands, but we strongly encourage authors to use \bibtex\ instead.
\bibtex\ ensures that the reference list is updated automatically as references are added or removed from the paper, puts them in the correct format, saves on typing, and the same reference file can be used for many different papers -- saving time hunting down reference details.
An MNRAS \bibtex\ style file, \verb'mnras.bst', is distributed as part of this package.
The rest of this section will assume you are using \bibtex.
References are entered into a separate \verb'.bib' file in standard \bibtex\ formatting.
This can be done manually, or there are several software packages which make editing the \verb'.bib' file much easier.
We particularly recommend \textsc{JabRef}\footnote{\url{http://jabref.sourceforge.net/}}, which works on all major operating systems.
\bibtex\ entries can be obtained from the NASA Astrophysics Data System\footnote{\label{foot:ads}\url{http://adsabs.harvard.edu}} (ADS) by clicking on `Bibtex entry for this abstract' on any entry.
Simply copy this into your \verb'.bib' file or into the `BibTeX source' tab in \textsc{JabRef}.
Each entry in the \verb'.bib' file must specify a unique `key' to identify the paper, the format of which is up to the author.
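A minimal \verb'.bib' entry sketch with \verb'smith2014' as its key (all field values here are placeholders, not a real reference):

```bibtex
@ARTICLE{smith2014,
    author  = {{Smith}, A. B. and {Jones}, C. D.},
    title   = {An example paper},
    journal = {MNRAS},
    year    = 2014,
    volume  = 431,
    pages   = {123-145},
}
```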
Simply cite it in the usual way, as described in section~\ref{sec:cite}, using the specified key.
Compile the paper as usual, but add an extra step to run the \texttt{bibtex} command.
Consult the documentation for your compiler or latex distribution.
Correct formatting of the reference list will be handled by \bibtex\ in almost all cases, provided that the correct information was entered into the \verb'.bib' file.
Note that ADS entries are not always correct, particularly for older papers and conference proceedings, so may need to be edited.
If in doubt, or if you are producing the reference list manually, see the MNRAS instructions to authors$^{\ref{foot:itas}}$ for the current guidelines on how to format the list of references.
\section{Appendices and online material}
To start an appendix, simply place the \verb'\appendix' command before the next \verb'\section{}'.
This will automatically adjust the section headings, figures, tables, and equations to reflect the fact that they are part of an appendix.
It is only necessary to enter the \verb'\appendix' command once -- everything after that command is in an appendix.
Remember that appendices should be placed \textit{after} the list of references.
Unlike other astronomy class files, there are no special commands for online material.
If your paper has any online material, it should be placed in a separate file.
See our instructions to authors$^{\ref{foot:itas}}$ for guidance.
\section{Packages and custom commands}
\label{sec:packages}
\subsection{Additional packages}
Sometimes authors need to include additional \LaTeX\ packages, which provide extra features.
For example, the \verb'bm' package provides extra bold maths symbols, whilst the \verb'pdflscape' package adds support for landscape pages.
Packages can be included by adding the \verb'\usepackage{}' command to the preamble of the document (not the main body).
Please \emph{only include packages which are actually used in the paper}, and include a comment to explain what each one does.
This will assist the typesetters.
If you are using \texttt{mnras\_template.tex}, it includes a specific section for this purpose, near the start of the file with the header `authors - place your own packages here'.
For example, to include \verb'pdflscape', use:
\begin{verbatim}
\usepackage{pdflscape} % Landscape pages
\end{verbatim}
Consult the documentation for that package for instructions on how to use the additional features.
\subsection{Custom commands}
Authors should avoid duplicating or redefining commands which are already available in \LaTeX\ or \verb'mnras.cls'.
However it may sometimes be necessary to introduce a custom command e.g. as a shortcut while writing the paper.
Please \emph{only include commands which are actually used in the paper}, and include a comment to explain what each one does.
This will assist the typesetters.
Use \verb'\newcommand', \emph{not} \verb'\def', as this will avoid accidentally overwriting existing commands.
Place custom commands in the preamble of the document (not the main body).
If you are using \texttt{mnras\_template.tex}, it includes a specific section for this purpose, near the start of the file with the header `authors - place your own commands here'.
As an example, a shortcut for the unit \kms can be defined like this:
\begin{verbatim}
\newcommand{\kms}{\,km\,s$^{-1}$} % kilometres per second
\end{verbatim}
Velocities can then be written as e.g. \verb'2.3\kms' which produces 2.3\kms.
Similar shortcuts can be used for frequently quoted object designations.
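For example, a hypothetical shortcut for a frequently quoted object designation (the command name and object are placeholders):

```latex
% Preamble: shortcut for a frequently cited object.
\newcommand{\mygal}{NGC~1275}
% In the text: The nucleus of \mygal\ is variable.
```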
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
This guide replaces an earlier one originally prepared by Cambridge University Press (CUP) in 1994, and last updated in 2002 by Blackwell Publishing.
Some code segments are reproduced from, and some examples are based upon, that guide.
The authors were: A.~Woollatt, M.~Reed, R.~Mulvey, K.~Matthews, D.~Starling, Y.~Yu, A.~Richardson (all CUP), and Penny~Smith, N.~Thompson and Gregor~Hutton (all Blackwell), whose work is gratefully acknowledged.
The accompanying \bibtex\ style file was written by John Sleath, Tim Jenness and Norman Gray, without whom \bibtex\ support would not have been possible.
Some special symbols in tables~\ref{tab:anysymbols}--\ref{tab:mathssymbols} were taken from the Springer Verlag \textit{Astronomy \& Astrophysics} \LaTeX\ class, with their permission.
KTS thanks Nelson Beebe (University of Utah) for helpful advice regarding CTAN.
\section*{Data Availability}
The inclusion of a Data Availability Statement is a requirement for articles published in MNRAS. Data Availability Statements provide a standardised format for readers to understand the availability of data underlying the research results described in the article. The statement may refer to original data generated in the course of the study or to third-party data analysed in the article. The statement should describe and provide means of access, where possible, by linking to the data or providing the required accession numbers for the relevant databases or DOIs.
\appendix
\section{Journal abbreviations}
\label{sec:abbreviations}
Abbreviations for cited journals can be accessed using the commands listed in table~\ref{tab:journal_abbr}.
Although some of these may appear to be outdated or rarely cited, they have been selected to be compatible with the \bibtex\ output by the NASA Astrophysics Data System$^{\ref{foot:ads}}$, commands used by other astronomy journals, and with additional entries for journals with non-standard abbreviations in MNRAS.
For journals which are not on this list, see our instructions to authors$^{\ref{foot:itas}}$ for guidance on how to abbreviate titles.
\begin{table*}
\caption{Commands for abbreviated journal names, see appendix~\ref{sec:abbreviations}.}
\label{tab:journal_abbr}
\begin{tabular}{@{}l@{\:}l@{\:}l@{}} %
\hline
Command & Output & Journal name\\
\hline
\verb'\aap' or \verb'\astap' & \aap & Astronomy and Astrophysics$^a$\\
\verb'\aapr' & \aapr & The Astronomy and Astrophysics Review\\
\verb'\aaps' & \aaps & Astronomy and Astrophysics Supplement Series\\
\verb'\actaa' & \actaa & Acta Astronomica\\
\verb'\afz' & \afz & Astrofizika\\
\verb'\aj' & \aj & The Astronomical Journal\\
\verb'\ao' or \verb'\applopt' & \ao & Applied Optics\\
\verb'\aplett' & \aplett & Astrophysics Letters\\
\verb'\apj' & \apj & The Astrophysical Journal\\
\verb'\apjl' or \verb'\apjlett' & \apjl & The Astrophysical Journal Letters$^a$\\
\verb'\apjs' or \verb'\apjsupp' & \apjs & The Astrophysical Journal Supplement Series\\
\verb'\apss' & \apss & Astrophysics and Space Science\\
\verb'\araa' & \araa & Annual Review of Astronomy and Astrophysics\\
\verb'\arep' & \arep & Astronomy Reports$^b$\\
\verb'\aspc' & \aspc & Astronomical Society of the Pacific Conference Series\\
\verb'\azh' & \azh & Astronomicheskii Zhurnal$^c$\\
\verb'\baas' & \baas & Bulletin of the American Astronomical Society\\
\verb'\bac' & \bac & Bulletin of the Astronomical Institutes of Czechoslovakia\\
\verb'\bain' & \bain & Bull. Astron. Inst. Netherlands\\
\verb'\caa' & \caa & Chinese Astronomy and Astrophysics\\
\verb'\cjaa' & \cjaa & Chinese Journal of Astronomy and Astrophysics\\
\verb'\fcp' & \fcp & Fundamentals of Cosmic Physics\\
\verb'\gca' & \gca & Geochimica Cosmochimica Acta\\
\verb'\grl' & \grl & Geophysical Research Letters\\
\verb'\iaucirc' & \iaucirc & International Astronomical Union Circulars\\
\verb'\icarus' & \icarus & Icarus\\
\verb'\japa' & \japa & Journal of Astrophysics and Astronomy\\
\verb'\jcap' & \jcap & Journal of Cosmology and Astroparticle Physics\\
\verb'\jcp' & \jcp & Journal of Chemical Physics\\
\verb'\jgr' & \jgr & Journal of Geophysical Research\\
\verb'\jqsrt' & \jqsrt & Journal of Quantitative Spectroscopy and Radiative Transfer\\
\verb'\jrasc' & \jrasc & Journal of the Royal Astronomical Society of Canada\\
\verb'\memras' & \memras & Memoirs of the Royal Astronomical Society\\
\verb'\memsai' & \memsai & Memorie della Societa Astronomica Italiana\\
\verb'\mnassa' & \mnassa & Monthly Notes of the Astronomical Society of Southern Africa\\
\verb'\mnras' & \mnras & Monthly Notices of the Royal Astronomical Society$^a$\\
\verb'\na' & \na & New Astronomy\\
\verb'\nar' & \nar & New Astronomy Review\\
\verb'\nat' & \nat & Nature\\
\verb'\nphysa' & \nphysa & Nuclear Physics A\\
\verb'\pra' & \pra & Physical Review A: Atomic, molecular, and optical physics\\
\verb'\prb' & \prb & Physical Review B: Condensed matter and materials physics\\
\verb'\prc' & \prc & Physical Review C: Nuclear physics\\
\verb'\prd' & \prd & Physical Review D: Particles, fields, gravitation, and cosmology\\
\verb'\pre' & \pre & Physical Review E: Statistical, nonlinear, and soft matter physics\\
\verb'\prl' & \prl & Physical Review Letters\\
\verb'\pasa' & \pasa & Publications of the Astronomical Society of Australia\\
\verb'\pasp' & \pasp & Publications of the Astronomical Society of the Pacific\\
\verb'\pasj' & \pasj & Publications of the Astronomical Society of Japan\\
\verb'\physrep' & \physrep & Physics Reports\\
\verb'\physscr' & \physscr & Physica Scripta\\
\verb'\planss' & \planss & Planetary and Space Science\\
\verb'\procspie' & \procspie & Proceedings of the Society of Photo-Optical Instrumentation Engineers\\
\verb'\rmxaa' & \rmxaa & Revista Mexicana de Astronomia y Astrofisica\\
\verb'\qjras' & \qjras & Quarterly Journal of the Royal Astronomical Society\\
\verb'\sci' & \sci & Science\\
\verb'\skytel' & \skytel & Sky and Telescope\\
\verb'\solphys' & \solphys & Solar Physics\\
\verb'\sovast' & \sovast & Soviet Astronomy$^b$\\
\verb'\ssr' & \ssr & Space Science Reviews\\
\verb'\zap' & \zap & Zeitschrift fuer Astrophysik\\
\hline
\multicolumn{3}{l}{$^a$ Letters are designated by an L at the start of the page number, not in the journal name}\\
\multicolumn{3}{l}{\footnotesize$^b$ In 1992 the English translation of this journal changed its name from Soviet Astronomy to Astronomy Reports}\\
\multicolumn{3}{l}{\footnotesize$^c$ Including the English translation Astronomy Letters}\\
\end{tabular}
\end{table*}
\clearpage %
\section{Advanced formatting examples}
\label{sec:advanced}
Sometimes formatting doesn't behave exactly as expected when used in titles or section headings, and must be modified to obtain the correct appearance.
Generally the publishers can fix these problems during the typesetting process after a paper is accepted, but authors may wish to adjust these themselves to minimise the possibility of errors and/or for the benefit of the refereeing process.
Below are some examples of output, followed by the \LaTeX\ code which produces them.
Most mathematics and text formatting works as expected, but some commands might not be the correct size, bold or italic.
If so they can be finessed by hand, as in the bold mathematics here:
\boxit{\huge\bf \textit{Herschel} observations of galaxies at $\bm{\delta > 60\degr}$}
\begin{verbatim}
\title{\textit{Herschel} observations of galaxies at
$\bm{\delta > 60\degr}$}
\end{verbatim}
Most fonts do not provide bold and italic versions of small capitals, so the \verb'\ion{}{}' command doesn't produce the expected output in headings.
The effect has to be `faked' using font size commands, remembering that the running head is a different style:
\boxit{\huge\bf Abundances in H\,{\Large \textbf{II}} regions}
\begin{verbatim}
\title
[Abundances in H\,{\normalsize \textit{II}} regions]
{Abundances in H\,{\Large \textbf{II}} regions}
\end{verbatim}
Complex mathematics can cause problems with links, so might require adding a less formatted short version of the heading:
\boxit{\bf 2\quad FINDING Mg\,{\sevensize II} ABSORBERS AT $\bm{z > 2}$}
\begin{verbatim}
\section
[Finding Mg II absorbers at z > 2]
{Finding M\lowercase{g}\,{\sevensize II} absorbers
at $\lowercase{\bm{z > 2}}$}
\end{verbatim}
Using square brackets in headings can cause additional linking problems, which are solved by wrapping them in \{\textellipsis\}:
\boxit{\bf 2.1\quad [C\,{\sevensize II}] 158$\bmath{\umu}$m emission}
\begin{verbatim}
\subsection
[{[C II] 158$\umu$m emission}]
{[C\,{\sevensize II}] 158$\bmath{\umu}$m
emission}
\end{verbatim}
Use \verb'\text{}' (not \verb'\rm') for non-variables in mathematics, which preserves the formatting of the surrounding text.
For the same reasons, use \verb'\textit{}' for italics (not \verb'\it').
\boxit{\bf 3.1\quad Measuring $\bm{T}_\text{eff}$ from \textit{Gaia} photometry}
\begin{verbatim}
\subsection{Measuring $\bm{T}_\text{eff}$ from
\textit{Gaia} photometry}
\end{verbatim}
\section{Additional commands for editors only}
The following commands are available for the use of editors and production staff only.
They should not be used (or modified in the template) by authors.
\begin{description}
\item \verb'\maketitle' inserts the title, authors and institution list in the correct formatting.
\item \verb'\nokeywords' tidies up the spacing if there are no keywords, but authors should always enter at least one.
\item \verb'\volume{}' sets the volume number (default is 000)
\item \verb'\pagerange{}' sets the page range. The standard template generates this automatically, starting from 1.
\item \verb'\bsp' adds the `This paper has been typeset\textellipsis' comment at the end of the paper.
The command name refers to Blackwell Science Publishing, who were the publishers at the time when MNRAS began accepting \LaTeX\ submissions in 1993.
\item \verb'\mniiiauth{}' used by the \bibtex\ style to handle MNRAS style for citing papers with three authors. It should not be used manually.
\item \verb'\eprint{}' used by the \bibtex\ style for citing arXiv eprints.
\item \verb'\doi{}' used by the \bibtex\ style for citing Digital Object Identifiers.
\end{description}
\bsp %
\label{lastpage} |
Title:
LeXInt: Package for Exponential Integrators employing Leja interpolation |
Abstract: We present a publicly available software for exponential integrators that
computes the $\varphi_l(z)$ functions using polynomial interpolation. The
interpolation method at Leja points have recently been shown to be competitive
with the traditionally-used Krylov subspace method. The developed framework
facilitates easy adaptation into any Python software package for time
integration.
| https://export.arxiv.org/pdf/2208.08269 |
\begin{frontmatter}
\title{\lexint: Package for Exponential Integrators employing Leja interpolation}
\author[inst1]{Pranab J. Deka\corref{lod1}} \ead{[email protected]}
\author[inst1]{Lukas Einkemmer} \ead{[email protected]}
\author[inst2]{Mayya Tokman} \ead{[email protected]}
\cortext[lod1]{Corresponding author}
\affiliation[inst1]{organization = {Department of Mathematics, University of Innsbruck},
city = {Innsbruck},
postcode = {6020},
country = {Austria}}
\affiliation[inst2]{organization = {School of Natural Sciences, University of California},
city = {Merced},
postcode = {CA 95343},
country = {USA}}
\begin{keyword}
Time Integration \sep Numerical Methods \sep Exponential Integrators \sep Leja Points \sep Polynomial Interpolation
\end{keyword}
\end{frontmatter}
\section{Motivation and significance}
Time-dependent partial differential equations (PDEs) are ubiquitous in various fields of science. Integrating PDEs in time with high accuracy, whilst incurring as little computational cost as possible, is highly desirable. A substantial amount of research has been devoted to the development of numerical algorithms and codes to perform high-resolution simulations with high fidelity.
Explicit temporal integrators are widely used in many scenarios owing to the simplicity of their algorithms and implementation. However, as the number of physical processes considered in a PDE increases, or if the stiff nature of the underlying PDE becomes prominent, the performance of explicit integrators deteriorates severely owing to stability constraints. Increasing stiffness results in an ever more stringent Courant--Friedrichs--Lewy (CFL) limit on the time step size. Implicit integrators have been widely used as alternatives to explicit methods owing to their ability to take large step sizes, and they can provide a substantial boost to simulations. In many practical cases, however, one has to resort to iterative schemes to solve large systems of linear equations. Furthermore, the use of preconditioners to speed up the simulations is common practice, and in some cases the simulations fail to converge without a good preconditioner. The complexity involved in such algorithms may make them unfavourable for intricate problems.
Exponential integrators are a class of temporal integrators that linearise the underlying PDE: the linear term is solved exactly (in time) and the nonlinear term is approximated with some explicit method. An extensive review of exponential integrators has been presented by Hochbruck \& Ostermann \cite{Ostermann10}. Let us consider the initial-value problem
\begin{equation*}
\frac{\partial u}{\partial t} = f(u), \qquad u(t = 0) = u^0,
\end{equation*}
where $f(u)$ is some nonlinear function of $u$. We re-write the above equation as
\begin{equation*}
\frac{\partial u}{\partial t} = \mathcal{A} \, u + g(u),
\end{equation*}
where $\mathcal{A}$ is a matrix and $g(u)$ is the nonlinear remainder. A first-order approximation to the solution of this equation is given by \[ u^{n + 1} = u^n + \varphi_1(\mathcal{A} \Delta t) f(u^n) \Delta t. \] This is the first-order exponential Euler method. If $\mathcal{A}$ is replaced by the Jacobian of $f(u)$ evaluated at the respective time step, one obtains the Rosenbrock--Euler method, which is of second order. It is to be noted that replacing the linear part by the Jacobian allows one to obtain higher-order schemes with fewer internal stages. The $\varphi_l(z)$ functions are given by:
\[\varphi_{l + 1}(z) = \frac{1}{z} \left(\varphi_l(z) - \frac{1}{l!} \right), \quad l \geq 0, \] where \[\varphi_0(z) = e^z \] corresponds to the (matrix) exponential. We compute these $\varphi_l(z)$ functions, the most expensive part of exponential integrators, using the method of polynomial interpolation at Leja points. Details of this iterative scheme are provided in Sec. \ref{sec:software}.
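For scalar arguments, the recurrence above can be sketched directly. The snippet below is purely illustrative and is not part of \lexint\ (which evaluates $\varphi_l$ of matrices via Leja interpolation); note also that the forward recurrence suffers from cancellation for $|z| \ll 1$:

```python
import math

def phi(l, z):
    """phi_l(z) for scalar z via phi_{k+1}(z) = (phi_k(z) - 1/k!) / z,
    starting from phi_0(z) = exp(z).  Unstable for |z| << 1 (cancellation);
    a Taylor expansion should be used in that regime instead."""
    val = math.exp(z)                      # phi_0(z) = e^z
    for k in range(l):
        val = (val - 1.0 / math.factorial(k)) / z
    return val

# e.g. phi_1(1) = e - 1 and phi_2(1) = e - 2 follow from the recurrence
```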
Exponential integrators do not suffer from any CFL restrictions (unlike explicit integrators), are unconditionally stable, and can take much larger step sizes than implicit methods. This makes them highly attractive for solving time-dependent problems. Additionally, one can obtain the exact solution of a linear PDE (subject to the spatial discretisation) for any given step size. This is an added bonus over the implicit integrators, as they will always incur some error, irrespective of their order of convergence.
In this paper, we aim to make our tools and contributions available to the scientific community with the release of the \textbf{Le}ja interpolation for e\textbf{X}ponential \textbf{Int}egrators (\lexint; \url{https://github.com/Pranab-JD/LeXInt}) package. This is a culmination of the algorithms implemented and tested in our previous work, where we studied the performance of an automatic step-size controller for improved computational efficiency \cite{Deka22a} and analysed the performance of the Leja method with explicit, implicit, and Krylov-based exponential integrators for the set of magnetohydrodynamical (MHD) equations \cite{Deka22b}.
\section{Software description}
\label{sec:software}
\lexint comprises several exponential integrators suited for both constant and variable step size implementations. The integrators are implemented in a modular format: any integrator can easily be incorporated into the package, and any integrator can be used for any given problem. However, it is to be noted that the performance of the integrators may vary with the problem under consideration. We primarily focus on integrators that are based on linearising the underlying PDE by computing the Jacobian at every time step: Exponential Rosenbrock (EXPRB) and Exponential Propagation Iterative Runge--Kutta (EPIRK) integrators. As already mentioned, these integrators require fewer stages to achieve a certain order of accuracy owing to the use of the Jacobian and, as such, are computationally more efficient. We have adopted the vertical implementation procedure, proposed by Rainwater \& Tokman \cite{Tokman16}, for optimised performance. Although this was initially proposed only for the Krylov subspace algorithm, we show in Deka et al. \cite{Deka22c} that the vertical approach can also result in a substantial amount of computational savings for the Leja interpolation method.
For an adaptive step size implementation, one requires an error estimate at every time step. One of the cheapest ways to obtain such an estimate is if it is inherently embedded in the integrator, i.e. an embedded integrator. This is why we focus on embedded exponential integrators, where the error estimate does not require additional internal stages. The embedded exponential integrators implemented in \lexint include EXPRB32 \cite{Caliari09, Hochbruck09, Ostermann10}, EXPRB43 \cite{Caliari09, Hochbruck09, Ostermann10}, EXPRB53s3 \cite{Luan14}, EXPRB54s4 \cite{Luan14}, EPIRK4s3 \cite{Tokman17a, Tokman17b}, EPIRK4s3A \cite{Tokman16}, and EPIRK5P1 \cite{Tokman12}. Each of these integrators has been implemented in \lexint such that the integrator function returns both the lower-order and the higher-order solutions; the difference between these two solutions yields an estimate of the error incurred. As there are a multitude of step-size controllers in the literature, we give the user full flexibility in choosing their desired step-sizing strategy. The function also returns the number of matrix-vector products computed at a given time step, which can be considered a proxy for the computational cost. For constant step sizes, in addition to the aforementioned ones, we have implemented Rosenbrock--Euler \cite{Pope63}, EXPRB42 \cite{Luan17}, EPIRK4s3B \cite{Tokman16}, and EPIRK5P2 \cite{Tokman12}. In cases where an integrator does not possess an embedded error estimator, one can generate an error estimate using Richardson extrapolation.
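How the embedded pair feeds a step-size controller can be sketched as follows. This is a minimal illustration, not \lexint\ code: the name \texttt{step\_fn}, the max-norm error, and the classical controller constants are our own choices (\lexint\ deliberately leaves the step-sizing strategy to the user):

```python
import math

def adaptive_step(u, dt, step_fn, tol, order, safety=0.9):
    """One adaptive time step.  'step_fn(u, dt)' must return the embedded
    pair (u_low, u_high); their difference serves as the error estimate."""
    while True:
        u_low, u_high = step_fn(u, dt)
        err = max(abs(a - b) for a, b in zip(u_low, u_high))
        # classical controller: dt_new = safety * dt * (tol/err)^(1/(p+1))
        dt_new = safety * dt * (tol / max(err, 1e-300)) ** (1.0 / (order + 1))
        if err <= tol:
            return u_high, dt_new          # accept; suggest next step size
        dt = dt_new                        # reject; retry with a smaller step

# toy embedded pair for u' = -u: explicit Euler (order 1) vs the exact flow
step_fn = lambda u, dt: ([x * (1.0 - dt) for x in u],
                         [x * math.exp(-dt) for x in u])
u_new, dt_next = adaptive_step([1.0], 0.5, step_fn, tol=1e-4, order=1)
```

Here the initial step $\Delta t = 0.5$ is rejected and the controller retries with a smaller step until the estimated error falls below the tolerance.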
\subsection*{Polynomial Interpolation at Leja points}
One of the crucial aspects of an efficient implementation of exponential integrators is an adept iterative scheme. Whilst the Krylov subspace algorithm has long been proposed as an effective iterative scheme for exponential integrators \cite{Sidje98, Moler03, Higham10, Ostermann10, Tokman12, Einkemmer17}, it has the drawback of requiring the computation of inner products, which becomes a serious impediment on massively parallel architectures (GPUs). We choose the method of polynomial interpolation at Leja points \cite{Leja1957, Reichel90, Baglama98}, which has been shown to outperform Krylov-based methods \cite{Bergamaschi06, Caliari07b, Deka22b}. This can be attributed mainly to the simplicity of the algorithm. One minor drawback of the Leja interpolation method is that it needs some approximation of the spectrum. It is to be noted that one needs only a crude estimate of the largest and the smallest eigenvalues of the matrix (for linear equations) or the Jacobian (for nonlinear equations). Thus, the method of power iterations can be employed to compute the spectrum every `n' time steps. In Deka \& Einkemmer \cite{Deka22b}, we have shown this to be an efficient technique for the highly nonlinear MHD equations.
Assuming that we have an estimate of the largest ($\alpha$) and the smallest ($\beta$) eigenvalue (in magnitude), the scaling and shifting factors can be defined as $c = (\alpha + \beta)/2$ and $\gamma = (\beta - \alpha)/4$, respectively \cite{Caliari14}. The factor of $4$ emerges from the fact that we have chosen Leja points ($\xi$) in the arbitrary domain $[-2, 2]$. Now, we compute the coefficients of the interpolating polynomial using the divided-differences algorithm applied to $\exp(c + \gamma \xi)$ or $\varphi_l(c + \gamma \xi)$. Then, we form the polynomial by adding an additional term at every iteration until the desired accuracy is reached. This can be written as
\begin{align*}
p_n(z) & = p_{n - 1}(z) + d_n \, y_{n - 1}(z), \\
y_n(z) & = y_{n - 1}(z) \times \left(\frac{z - c}{\gamma} - \xi_n \right),
\end{align*}
where $p_n(z)$ is the interpolation polynomial at the $n^\mathrm{th}$ iteration. We note that the convergence of the algorithm is sensitive to the step size: if the step size is too large, the algorithm may diverge. This is prominent only for large tolerances, where a step-size controller may allow for extremely large step sizes. Therefore, we adopt the safety measure of checking that the error incurred does not exceed a certain threshold; very large values of this error are an indication of impending divergence of the algorithm. In such a case, we reject the step size and restart the time step with a smaller step size.
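A scalar sketch of this iteration is given below. It is illustrative only: the greedy grid-based construction of the Leja sequence and all function names are our own simplifications (\lexint\ itself applies the same iteration matrix-free, to vectors, and with $\varphi_l(c + \gamma \xi)$ in place of $\exp(c + \gamma \xi)$):

```python
import math

def leja_points(n, a=-2.0, b=2.0, grid=2001):
    """Greedy Leja sequence on [a, b]: each new point maximises the product
    of distances to all previously chosen points (grid search)."""
    cand = [a + (b - a) * k / (grid - 1) for k in range(grid)]
    pts = [max(cand, key=abs)]
    for _ in range(n - 1):
        pts.append(max(cand, key=lambda x: sum(math.log(abs(x - p) + 1e-300)
                                               for p in pts)))
    return pts

def divided_differences(xi, f):
    """Newton-form coefficients d_n of the polynomial interpolating f at xi."""
    d = [f(x) for x in xi]
    for j in range(1, len(xi)):
        for i in range(len(xi) - 1, j - 1, -1):
            d[i] = (d[i] - d[i - 1]) / (xi[i] - xi[i - j])
    return d

def leja_exp(z, alpha, beta, tol=1e-12, max_terms=40):
    """Evaluate exp(z) for scalar z in [alpha, beta] via the iteration
    p_n = p_{n-1} + d_n y_{n-1},  y_n = y_{n-1} ((z - c)/gamma - xi_n)."""
    c, gamma = (alpha + beta) / 2.0, (beta - alpha) / 4.0
    xi = leja_points(max_terms)
    d = divided_differences(xi, lambda x: math.exp(c + gamma * x))
    p, y = d[0], 1.0
    for n in range(1, max_terms):
        y *= (z - c) / gamma - xi[n - 1]
        p += d[n] * y
        if abs(d[n] * y) < tol * abs(p):   # desired accuracy reached
            break
    return p
```

For a spectrum contained in, say, $[-10, 0]$, each additional term refines the approximation of $e^{z}$ until the correction $|d_n\,y_{n-1}|$ drops below the tolerance.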
\lexint has two functions for interpolating $\varphi_l(z)$ on real (\texttt{`real\_Leja\_phi'}) and imaginary (\texttt{`imag\_Leja\_phi'}) Leja points. To speed up convergence, it is recommended that if the largest eigenvalue (in magnitude) of the Jacobian under consideration is real, one interpolates the exponential-like function on real Leja points, whereas if it is imaginary, the interpolation is performed on imaginary Leja points. If the magnitudes of the largest real and imaginary eigenvalues are relatively similar, one could interpolate on either set of Leja points.
To compute the exponential of a matrix, \lexint provides \texttt{`real\_Leja\_exp'} and \texttt{`imag\_Leja\_exp'}, for interpolations on real and imaginary Leja points, respectively. One can compute the exact solution (in time) for linear equations, which is why the functions for interpolation of the matrix exponential are provided only for constant step size implementation, i.e., without any error estimate. The desired accuracy can be chosen by the user by tuning the tolerance.
It is to be noted that \lexint can work with a fully matrix-free formulation as well as with any given representation of the matrix, provided that one has a well-defined right-hand-side (RHS) function (similar to how one would implement an explicit method). A matrix-free formulation is, of course, preferable from the computational viewpoint.
\section{Illustrative examples}
We show the performance of a selected number of integrators for a couple of problems, drawn from Einkemmer \cite{Einkemmer18} and Deka \& Einkemmer \cite{Deka22a}. In both problems, we consider periodic boundary conditions on $[0, 1]$. The first example is Burgers' equation,
\begin{equation*}
\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \frac{\eta}{2} \frac{\partial u^2}{\partial x},
\end{equation*}
where $\eta$ is the P\'eclet number and the initial condition is given by
\begin{equation*}
u(x, t = 0) = 1 + \exp\left(1 - \frac{1}{1-(2x - 1)^2}\right) + \frac{1}{2} \exp\left(-\frac{(x - x_0)^2}{2\sigma^2}\right),
\end{equation*}
with $x_0 = 0.9$ and $\sigma = 0.02$. We consider two different cases in terms of the number of grid points ($N$), $\eta$, and the simulation time $t_f$: (a) $N = 64, \eta = 200, t_f = 10^{-3}$ and (b) $N = 256, \eta = 10, t_f = 10^{-2}$. The second example is the Allen--Cahn equation:
\begin{equation*}
\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + 100 \, \left(u - u^3\right).
\end{equation*}
The initial conditions are chosen to be
\begin{equation*}
u(x, t = 0) = A\,(1 + \cos(2\pi x)),
\end{equation*}
with $A = 0.1$. Similar to the previous example, we consider two cases: (c) $N = 64, t_f = 0.1$ and (d) $N = 256, t_f = 0.1$, where the symbols have the usual meanings.
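As an illustration of the problem setup, a discretised RHS of the kind one would pass to a time integrator can be written as follows. The function name and the second-order centred-difference discretisation are our own illustrative choices, not \lexint\ API:

```python
import math

def allen_cahn_rhs(u, dx):
    """Periodic centred finite differences for u_t = u_xx + 100 (u - u^3)."""
    n = len(u)
    return [(u[(i - 1) % n] - 2.0 * u[i] + u[(i + 1) % n]) / dx ** 2
            + 100.0 * (u[i] - u[i] ** 3) for i in range(n)]

# initial condition u(x, 0) = A (1 + cos(2 pi x)) on [0, 1), with A = 0.1
N, A = 64, 0.1
dx = 1.0 / N
u0 = [A * (1.0 + math.cos(2.0 * math.pi * i * dx)) for i in range(N)]
```

A quick sanity check is that the homogeneous states $u = 0$ and $u = \pm 1$ make the RHS vanish identically.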
We show the order of convergence of a selected number of integrators in Fig. \ref{fig:order} for Burgers' equation with $N = 64$ and $\eta = 200$. In Figs. \ref{fig:cost_bur} and \ref{fig:cost_ac}, we show the performance of a wide range of EXPRB and EPIRK integrators for a couple of representative problems with variable step size implementation. Let us clearly state that we do not investigate the relative performance of the different integrators here; we simply show the applicability of the different integrators available in \lexint.
\section{Impact \& future aspects}
With the ever-increasing need for high-resolution large-scale simulations in computational physics, there is a demand for ever more efficient and enhanced numerical algorithms. Efficiently integrating PDEs in time goes a long way in this regard. Exponential integrators have shown remarkable progress and promise in the last couple of decades. Various classes of exponential integrators have been shown to be superior to the traditional implicit and explicit methods for a wide range of problems usually considered in the mathematical literature. Additionally, their superiority has also been demonstrated for MHD problems \cite{Tokman02, Einkemmer17, Deka22b}, kinetic plasma simulations \cite{Tuckmantel10, Dimarco11, Frenod15, Crouseilles18, Crouseilles20}, atmospheric and meteorological studies \cite{Clancy13, Gaudreault16, Mengalo18, Luan19, Schreiber19, Shashkin20, Brachet22, Pudykiewicz22}, and different fields in engineering \cite{Rambeerich09, Michels14, Wang15, Tokman17a, Chen18, Chimmalgi19, Hammoud22}.
Computing the exponential of a matrix constitutes a vital element of exponential integrators. Several approaches have been outlined, and the pros and cons of each method listed, in the reviews by Moler \& Van Loan \cite{Moler78, Moler03}. The Krylov subspace algorithm has become increasingly popular over the last few years owing to its ability to treat large systems of matrices effectively. \texttt{EXPOKIT} \cite{Sidje98}, \texttt{EXPODE} \cite{Jansing2011}, and \texttt{phipm} \cite{Niesen12} are some of the publicly available Krylov-based \texttt{MATLAB} software packages for efficiently computing the matrix exponential as well as the $\varphi_l(z)$ functions for exponential integrators. Further research in this field has shown that the method of polynomial interpolation \cite{Caliari04, Caliari07b} is highly competitive with, if not better than, the Krylov-based methods. \texttt{expleja}\footnote{\url{https://www.mathworks.com/matlabcentral/fileexchange/44039-matrix-exponential-times-a-vector}} is one of the first Leja-interpolation-based \texttt{MATLAB} software packages that computes the matrix exponential times a vector or a matrix.
With the increasing popularity of exponential integrators in various fields of computational science, we aim to provide an accessible framework for an efficient implementation of these methods. Whilst methods like Pad\'e approximation, scaling and squaring, or diagonalising the corresponding matrix and computing the exponential of the resulting eigenvalues work well for small matrices, these methods become prohibitive for large systems. Libraries based on (parts of) these methods are already available in \texttt{Python}. We provide a library based on the Leja polynomial interpolation method, which is highly favourable for the computation of exponential-like functions of large systems of matrices. As part of the software package, we present a multitude of (Leja-based) exponential integrators (from the literature) for the temporal integration of nonlinear PDEs with constant and variable step sizes. The \texttt{MATLAB} version of the Leja interpolation method for exponential integrators has been appended to the Krylov-based \texttt{EPIC} library. Using \texttt{EPIC}, it has recently been shown by Gaudreault et al. \cite{Gaudreault18} that an algorithm based on incomplete orthogonalisation of the basis vectors (\texttt{KIOPS}) may yield a considerable performance improvement over the state-of-the-art \texttt{phipm} algorithm. This publicly available package is maintained by Prof. Mayya Tokman and can be obtained from \url{https://faculty.ucmerced.edu/mtokman/#software}. We have used this package in our (ongoing) study comparing the performance of the Leja method with the \texttt{KIOPS} algorithm \cite{Deka22c}.
Our goal in releasing the present package is to provide an effective implementation of the Leja-based method and to get people started on exploring such a method in a user-friendly environment (i.e. \texttt{Python}). In the future, we will develop a parallel implementation of \lexint and include other exponential integrators that are designed specifically for parallel computing and potentially for high-performance computing (such as GPUs). \lexint will then be implemented as a part of larger software packages. As an example, in the near future, this package will be appended to the \texttt{PICARD} code \cite{Kissmann14} to solve the time-dependent cosmic-ray transport equation \cite{Strong07}.
\section*{Acknowledgements}
This work is supported, in part, by the Austrian Science Fund (FWF) project id: P32143-N32. We would like to thank Marco Caliari for providing us with the code to compute Leja points.
\bibliographystyle{elsarticle-num}
\bibliography{ref}
|
Title:
Uncorrelated Compensated Isocurvature Perturbations from kSZ Tomography |
Abstract: Compensated isocurvature perturbations (CIPs) are relative density
perturbations in which a baryon-density fluctuation is accompanied by a
dark-matter-density fluctuation such that the total-matter density is
unperturbed. These fluctuations can be produced primordially if multiple fields
are present during inflation, and therefore they can be used to differentiate
between different models for the early Universe. Kinetic Sunyaev-Zeldovich
(kSZ) tomography allows for the reconstruction of the radial-velocity field of
matter as a function of redshift. This technique can be used to reconstruct the
total-matter-overdensity field, independent of the galaxy-density field
obtained from large-scale galaxy surveys. We leverage the ability to measure
the galaxy- and matter-overdensity fields independently to construct a
minimum-variance estimator for the primordial CIP amplitude, based on a
mode-by-mode comparison of the two measurements. We forecast that a
configuration corresponding to CMB-S4 and VRO will be able to detect (at
$2\sigma$) a CIP amplitude $A$ (for a scale-invariant power spectrum) as small
as $A\simeq 5\times 10^{-9}$. Similarly, a configuration corresponding to SO
and DESI will be sensitive to a CIP amplitude $A\simeq 1\times 10^{-7}$. These
values are to be compared to current constraints $A \leq {\cal O}(0.01)$.
| https://export.arxiv.org/pdf/2208.02829 |
\preprint{}
\title{Uncorrelated Compensated Isocurvature Perturbations from kSZ Tomography}%
\author{Neha Anil Kumar}
\email{[email protected]}%
\affiliation{%
William H.\ Miller III Department of Physics and Astronomy, Johns Hopkins University,
Baltimore, MD 21218, USA
}%
\author{Selim C. Hotinli}
\affiliation{%
William H.\ Miller III Department of Physics and Astronomy, Johns Hopkins University,
Baltimore, MD 21218, USA
}%
\author{Marc Kamionkowski}
\affiliation{%
William H.\ Miller III Department of Physics and Astronomy, Johns Hopkins University,
Baltimore, MD 21218, USA
}%
\section{\label{sec:Introduction}Introduction\protect}
Improving our understanding of the statistical characteristics of the primordial density fluctuations of our Universe is one of the primary goals of upcoming large-scale structure surveys and cosmic-microwave-background (CMB) experiments. The current observations of the small-amplitude $[{\cal O}(10^{-5})]$ temperature and polarization fluctuations in the CMB are consistent with Gaussian adiabatic fluctuations, as predicted by single-field models of inflation. Nevertheless, the search for small deviations from adiabaticity or Gaussianity remains a promising direction of research that can allow us to effectively distinguish between different models of inflation and determine the number of degrees of freedom governing the dynamics of the early Universe~\citep[e.g.,][]{Baumann:2011nk, Assassi:2012zq, Chen:2012ge, Noumi:2012vr, Arkani-Hamed:2015bza, Lee:2016vti, Kumar:2017ecc, An:2017hlx, An:2017rwo, Baumann:2017jvh, Kumar:2018jxz, Anninos:2019nib}.
One such deviation that is particularly difficult to probe with CMB data alone is the class of isocurvature perturbations that leave the total matter density unchanged \cite{Gordon:2002gv,Gordon:2009wx,Holder:2009gd}. These compensated isocurvature perturbations (CIPs) may arise in various models of inflation with multiple fields \cite{Linde:1996gt,Sasaki:2006kq,Gordon:2009wx,Lyth:2002my,Gordon:2002gv,Langlois:2000ar,He:2015msa} and also during baryogenesis \cite{DeSimone:2016ofp}. In the multi-field models, the CIP fluctuations may be fully correlated with the adiabatic perturbation, completely uncorrelated, or (most generally) somewhere in between. Specifically, uncorrelated CIPs are a characteristic of the baryogenesis model \cite{DeSimone:2016ofp}.
Because CIPs leave the total matter distribution unchanged, they give rise to no CMB fluctuations at linear order. Instead, they induce higher-order effects on the CMB power spectrum \cite{Munoz:2015fdv,Heinrich:2016gqe,Smith:2017ndr,Valiviita:2017fbx,Planck:2018jri}, and on the CMB trispectrum \cite{Grin:2011tf,Grin:2011nk,Grin:2013uya}. On small distance scales, the effects of CIPs may be manifest in CMB spectral distortions \cite{Haga:2018pdl,Chluba:2013dna} or in the recombination history \cite{Lee:2021bmn}. Because these higher-order effects are harder to measure, they are rather poorly constrained by the CMB data, with the recent constraints allowing for fairly large-amplitude CIPs. However, there are various other prospects to probe different models of CIPs. For example, the effects of CIPs on baryon acoustic oscillations have been studied in Refs.~\cite{Soumagnac:2016bjk,Soumagnac:2018atx,Heinrich:2019sxl}. The effects of CIPs on 21-cm fluctuations were considered in Ref.~\cite{Gordon:2009wx}, and their implications for the velocity acoustic oscillations \cite{Munoz:2019rhi,Munoz:2019fkt} in the 21-cm power spectrum are discussed in Ref.~\cite{Hotinli:2021xln}. Finally, Refs.~\cite{Barreira:2019qdl,Barreira:2020lva} assessed the sensitivity of galaxy clustering to the amplitude of CIPs through the measurement of the scale-dependent galaxy bias they induce.
Here, we study the prospects to use kinetic Sunyaev-Zeldovich (kSZ) tomography to seek uncorrelated CIPs. kSZ tomography \cite{Zhang:2000wf, Ho:2009iw, Shao:2016uzt, Zhang:2010fa, Munshi:2015anr,Smith:2018bpn, Cayuso:2021ljq} allows for the reconstruction of the line-of-sight component of the peculiar-velocity field in a 3-dimensional volume. This is accomplished by cross-correlating the peculiar-velocity-induced temperature fluctuation (the kSZ effect \cite{Sunyaev:1980nv, Zeldovich:1969ff, Zeldovich:1969sb, Sunyaev:1980vz, Sazonov:1999zp}), in a CMB map, with a large-scale galaxy survey, allowing for a measurement of the kSZ contribution as a function of redshift. Given that the total-matter field can be reconstructed from the velocity field, kSZ tomography provides the ideal arena for testing models, like those with CIPs, in which baryons and dark matter may be set apart from each other. In recent work \cite{Hotinli:2019wdp}, the improvement coming from the inclusion of this independent tracer was explored for models of correlated CIPs in which the CIP is fully correlated with the adiabatic perturbation.
The above methodology allows us to compare the kSZ-tomography-based matter-reconstruction field to galaxy-survey data to obtain excellent constraints on the amplitude of the CIP power spectrum. In fact, because these two tracers of the large-scale matter distribution are obtained independently, we can construct an estimator that compares the amplitude of the galaxy-density fluctuation with that of the matter-density fluctuation for each Fourier amplitude. This estimator is thus not cosmic-variance limited and can, in principle (in the limit of perfect measurements), probe an arbitrarily small CIP amplitude. Given that the estimator works on a mode-by-mode basis, it also works for correlated CIPs, although it does not capitalize upon additional effects induced by the correlation~\cite{Hotinli:2019wdp}.
In this paper we explain the construction of this estimator and make forecasts on the sensitivity of kSZ tomography to the CIP power-spectrum amplitude $A$. We construct the estimator assuming that the CIP is a primordial perturbation field. We note that the CIP amplitude is, strictly speaking, degenerate with a CIP bias that relates the CIP perturbation to the galaxy-density perturbation it induces. This CIP bias is, however, expected to be of order unity and can be obtained from simulations \cite{Barreira:2019qdl,Barreira:2020lva}. Furthermore, in the event that the effects of CIPs are detected in the CMB, their effects in kSZ tomography can then be used to establish the CIP bias.
In our forecasts, we consider two baseline experiment configurations: `baseline 1' matching the expected specifications of the Vera Rubin Observatory (VRO) \cite{LSSTScience:2009jmu} and CMB-S4 \cite{CMB-S4:2016ple}, and `baseline 2' corresponding to the Dark Energy Spectroscopic Instrument (DESI) \cite{DESI:2016fyo} and Simons Observatory (SO) \cite{SimonsObservatory:2018koc, SimonsObservatory:2019qwx}. We find that baseline 1 results in a sensitivity of $\sigma_{\hat{A}} \approx 2.3 \times 10^{-9}$, where the errors represent the root-variance with which the CIP power spectrum amplitude $A$ can be determined. Similarly, we forecast that the expected sensitivity of baseline 2, based on our minimum variance estimator, is $\sigma_{\hat{A}} \approx 5.4 \times 10^{-8}$. These results indicate that it may be possible to probe CIP perturbations with an amplitude comparable to the amplitude of the primordial power spectrum $A_s$. More specifically, we find a relative uncertainty of $\sigma_{\hat{A}} / A_s \approx 1.0$ and $\sigma_{\hat{A}} / A_s \approx 25$ for each of the baselines, respectively, where we use the value of $A_s$ quoted by the \textit{Planck} 2018 CMB analysis \cite{Planck:2018jri}.
This paper is organized as follows: In Section \ref{sec:cip} we introduce our parameterization of the CIP model, and in Section \ref{sec:mve} we derive the minimum-variance estimator for the CIP amplitude. We detail the relevant models for the noise and power spectra used in our analysis in Section \ref{sec:NoiseModels}. We then present our results in Section \ref{sec:results}. Throughout, we adopt $\Lambda$CDM as the fiducial cosmology with the following parameter values, taken from \textit{Planck} 2018 \cite{Planck:2018jri}: reduced Hubble constant $h = 0.67$, baryon density parameter $\Omega_b = 0.049$, cold-dark-matter density parameter $\Omega_{\rm{cdm}} = 0.264$, spectral index $n_s = 0.965$, and amplitude of the primordial scalar power spectrum $A_s = 2.2\times10^{-9}$. Our forecasts represent a considerable improvement over current constraints $A\lesssim 0.01$ from the CMB \cite{Planck:2018jri,Grin:2013uya} and galaxy clusters \cite{Holder:2009gd,Grin:2013uya}, although they should be viewed as complementary to the cluster constraint, which probes wavenumbers primarily around $k\sim0.1$~Mpc$^{-1}$ as opposed to Hubble scales $k\sim10^{-4}$~Mpc$^{-1}$.
\section{Compensated isocurvature perturbations}
\label{sec:cip}
\subsection{Definitions and conventions}
We define the CIP field $\Delta(\vec x)$ to be the primordial fractional baryon overdensity through
\begin{equation}
\rho_b(\vec x,z) = \bar\rho_b(z)\left[ 1+\Delta(\vec x) \right],
\end{equation}
which is then accompanied by a compensating dark-matter underdensity,
\begin{equation}
\rho_c(\vec x,z) = \bar \rho_c(z) \left[ 1 - f_b \Delta(\vec x) \right].
\end{equation}
Here $\bar \rho_b(z)$ and $\bar\rho_c(z)$ are respectively the mean baryon and dark-matter densities at redshift $z$, and $f_b$ is the ratio $\Omega_b/ \Omega_c$ today. These defining relations are understood to be valid at sufficiently early times such that the dark matter and baryons have not moved significantly, either due to non-linear evolution at late times, or before recombination due to tight-coupling of baryons to photons. Therefore, this set-up leads to a modulation of the relative fraction of baryons and dark matter on large scales, while keeping the total matter density fixed.
The CIP perturbation $\Delta(\vec x)$ is a realization of a random field with power spectrum $P_{\Delta\Delta}(k)=A F(k)$, which we have written in terms of an amplitude $A$ and fiducial $k$ dependence $F(k)$. A canonical choice for the $k$ dependence is the scale-invariant power spectrum $F(k)=1/k^3$. In this case the CIP variance, smoothed in spheres of radius $R$, is \cite{Smith:2017ndr}:
\begin{equation}
\Delta_{\rm rms}^2(R) = \frac{1}{2\pi^2} \int\, k^2\, dk\, \left[ 3 j_1(kR)/(kR) \right]^2 P_{\Delta\Delta}(k),
\end{equation}
where $j_1(x)$ is the spherical Bessel function. If we take $R$ to be the CMB scale considered in Ref.~\cite{Smith:2017ndr}, then $\Delta_{\rm rms}^2 \simeq A/4$. The current constraints on this amplitude (for uncorrelated CIPs) are $\Delta_{\rm rms}^2 = 0.0037^{+0.0016}_{-0.0021}$ from Planck \cite{Planck:2018jri}, $\Delta_{\rm rms}^2 \lesssim 0.012$ (95\% CL) from the WMAP trispectrum \cite{Grin:2013uya}, and $\Delta_{\rm rms}^2 \lesssim 0.006$ from baryon fractions in galaxy clusters \cite{Holder:2009gd,Grin:2013uya}.
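As an illustration, the smoothed variance above can be evaluated numerically for the scale-invariant choice $F(k)=1/k^3$. The Python sketch below uses an illustrative smoothing radius and explicit cutoffs (our own assumptions, not the exact scales of Ref.~\cite{Smith:2017ndr}); note that a large-scale cutoff is required, since the integral diverges logarithmically as $k_{\rm min}\to 0$:

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule (avoids NumPy-version differences)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def delta_rms_sq(A, R, k_min=1e-4, k_max=10.0, n=200_000):
    """Smoothed CIP variance for P_dd(k) = A F(k) with F(k) = 1/k^3.

    R is the top-hat smoothing radius [Mpc]; k_min and k_max are
    explicit infrared/ultraviolet cutoffs (illustrative values).
    """
    k = np.logspace(np.log10(k_min), np.log10(k_max), n)
    x = k * R
    j1 = np.sin(x) / x**2 - np.cos(x) / x   # spherical Bessel j_1(x)
    window = 3.0 * j1 / x                   # top-hat window in k space
    integrand = k**2 * window**2 * (A / k**3) / (2.0 * np.pi**2)
    return _trapz(integrand, k)
```

The result is linear in $A$ by construction and grows logarithmically as the infrared cutoff $k_{\rm min}$ is lowered, which is why the quoted $\Delta_{\rm rms}^2 \simeq A/4$ holds only for a specific smoothing scale and cutoff.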
\subsection{CIPs and the galaxy perturbation}
Following Ref.~\cite{Barreira:2020lva}, the linear-order expression for the fractional galaxy-density perturbation at comoving position $\bm{x}$ and redshift $z$ can be written
\begin{equation}
\delta_g(\bm{x},z) = b_g(z) \delta_m(\bm{x},z) + b_{\rm CIP}(z) \Delta(\bm{x}),
\label{eqn:deltag}
\end{equation}
where $b_g(z)$ is the usual linear galaxy bias, and $b_{\rm CIP}(z)$ is a CIP bias that parametrizes the contribution of the CIP to the galaxy-density perturbation. The fractional matter-density perturbation $\delta_m(\bm{x},z)$ is taken to be the large-scale matter perturbation which grows proportional to the linear-theory growth factor. Given that the CIP generates no gravitational-potential perturbation, $\Delta(\bm{x})$ will remain approximately constant on large distance scales and so has no redshift dependence.
The relation between $\Delta(\bm{x})$ and $\delta_g(\bm{x},z)$, parameterized by the CIP bias $b_{\rm CIP}(z)$, can be obtained through simulations. This bias is determined by two competing effects: (1) the effect on the halo mass function, which decreases with increasing $\Delta$; and (2) the ratio of the stellar mass to the halo mass, which increases with $\Delta$. Simulation results for $b_{\rm CIP}$ depend on whether the galaxies are selected by halo mass or stellar mass. Further details can be found in Refs. \cite{Barreira:2019qdl,Barreira:2020lva}.
Given Eq.~(\ref{eqn:deltag}), the galaxy power spectrum for uncorrelated CIPs will be
\begin{equation}
P_{gg}(k,z) = b_g^2(z) P_{mm}(k,z) + b_{\rm CIP}^2(z) P_{\Delta\Delta}(k).
\label{eq: Pgg_Pmm_Pdeltadelta}
\end{equation}
Thus, the CIPs show up as an additional contribution to the galaxy power spectrum. In principle (and in practice), the CIP contribution $b_{\rm CIP}^2(z) P_{\Delta\Delta}(k)$ to the galaxy power spectrum can be inferred by comparing the observed galaxy power spectrum to the matter power spectrum obtained from the peculiar-velocity field determined from kSZ tomography. However, the measurements of both power spectra, $P_{gg}(k)$ and $P_{mm}(k)$, are cosmic-variance limited; i.e., they are both independently limited by the number of Fourier modes of the galaxy and velocity fields that can be obtained with high signal to noise. Constraining the CIP amplitude by comparing the two power spectra is therefore limited by the cosmic variance of each measured spectrum.
\section{Minimum-variance estimator}
\label{sec:mve}
With kSZ tomography, the CIP perturbation can be estimated on a mode-by-mode basis, with the cosmic variance of the common matter mode cancelling between the two tracers. In Fourier space, the estimator for the amplitude $\Delta_{\bm{k}}$ is then
\begin{equation}
\widehat{\Delta_{\bm{k}}} = \left(\widehat{\delta_{g,\bm{k}}} - b_g \widehat{ \delta_{m,\bm{k}}} \right)/b_{\rm CIP},
\end{equation}
where the overhat denotes an estimator, and we have dropped any redshift dependence for ease of notation. This estimator has a variance (under the null hypothesis $\Delta=0$),
\begin{equation}
P_{\Delta\Delta}^N(\bm{k}) = \VEV{\left|\widehat{\Delta_{\bm{k}}} \right|^2} = b_{\rm CIP}^{-2}\left[N_{gg}(k) + b_g^2 N_{mm}(k)\right],
\end{equation}
where $N_{gg}(k)$ and $N_{mm}(k)$ are the noise contributions to the galaxy and matter power spectra, respectively.
The detectability of CIPs can be assessed by determining the error $\sigma_{\widehat A}$ with which the amplitude $A$ for the CIP power spectrum can be measured. The minimum-variance estimator $\widehat A$ for the amplitude is then obtained by adding the estimators from each Fourier mode with inverse-variance weighting:
\begin{equation}
\widehat A = b_{\rm CIP}^2 \sigma_{\widehat A}^2 \sum_{\bm{k}} \frac{ \left| \widehat{\delta_{g,\bm{k}}} - b_g \widehat{ \delta_{m,\bm{k}}} \right|^2/F(k)}{ 2 \left[ P^N_{\Delta\Delta}(\bm{k})/F(k) \right]^2 }.
\end{equation}
Here,
\begin{equation}
\sigma_{\widehat A}^2 = b_{\rm CIP}^{-4} \left[\frac12 \sum_{\bm{k}} \left[ F(k)/P_{\Delta\Delta}^N(\vec k) \right]^2 \right]^{-1},
\label{eqn:variance}
\end{equation}
is the variance with which the CIP amplitude $A$ can be determined. Because the estimator is built from the difference $\widehat{\delta_{g,\bm{k}}} - b_g \widehat{\delta_{m,\bm{k}}}$ mode by mode, the common matter fluctuation cancels and no longer contributes sample variance. This estimator therefore evades the cosmic-variance limitation of the power-spectrum comparison described below Eq.~\eqref{eq: Pgg_Pmm_Pdeltadelta}, increasing the sensitivity to the CIP amplitude.
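The cancellation can be demonstrated with a toy numerical experiment. In the Python sketch below, all amplitudes, noise levels, and the random seed are illustrative assumptions (only the bias values echo Table~\ref{tab:baselineSpecs}); a CIP field is injected into synthetic Fourier modes and recovered mode by mode:

```python
import numpy as np

rng = np.random.default_rng(0)
b_g, b_cip = 1.6, 0.32           # illustrative bias values
n_modes = 100_000

delta_m = rng.normal(0.0, 1.0, n_modes)    # true matter modes
Delta   = rng.normal(0.0, 0.05, n_modes)   # injected CIP realization
noise_g = rng.normal(0.0, 0.02, n_modes)   # galaxy-survey noise
noise_m = rng.normal(0.0, 0.02, n_modes)   # kSZ-reconstruction noise

# Observed tracers: galaxies carry both matter and CIP contributions,
# while kSZ tomography returns the matter modes alone.
delta_g_hat = b_g * delta_m + b_cip * Delta + noise_g
delta_m_hat = delta_m + noise_m

# Mode-by-mode estimator: the common matter mode cancels exactly,
# so its cosmic variance never enters the residual.
Delta_hat = (delta_g_hat - b_g * delta_m_hat) / b_cip
residual = Delta_hat - Delta   # depends only on the two noise fields
```

The regression of `Delta_hat` on `Delta` is unity on average, and the residual is statistically independent of `delta_m`: this is the sample-variance cancellation the estimator exploits.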
\section{Noise Models}
\label{sec:NoiseModels}
We model the noise in the galaxy auto-power spectrum assuming that the primary contribution comes from galaxy shot noise along with photo-$z$ errors. Photo-$z$ errors are modelled as a convolution of the galaxy-density field with a Gaussian kernel in the radial direction. The galaxy noise power spectrum is then given by:
\begin{equation}
N_{gg}(k, \mu) = \frac{1}{W^2(k, \mu) n_{\rm gal}},
\end{equation}
where $n_{\rm gal}$ is the average galaxy number density of the specific survey, and the Gaussian kernel $W(k, \mu)$ is defined as
\begin{equation}
W^2(k, \mu) = e^{-k^2\mu^2\sigma^2(z)/H^2(z)},
\end{equation}
with redshift scatter $\sigma(z)$.
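This noise model is a direct transcription into code. In the Python sketch below, the numerical values used in the checks (including a Hubble rate in ${\rm Mpc}^{-1}$ units) are illustrative assumptions:

```python
import numpy as np

def N_gg(k, mu, n_gal, sigma_z, H):
    """Galaxy noise: shot noise 1/n_gal boosted by photo-z damping of
    radial (mu != 0) modes.

    sigma_z and H must share units so that sigma_z / H is a comoving
    length; sigma_z = 0 recovers the pure shot-noise (spectroscopic)
    limit 1 / n_gal.
    """
    W2 = np.exp(-k**2 * mu**2 * sigma_z**2 / H**2)
    return 1.0 / (W2 * n_gal)
```

For transverse modes ($\mu = 0$) the kernel is unity and only shot noise remains, while radial modes with $k\mu \gtrsim H/\sigma_z$ are exponentially penalized.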
The noise in the independently-calculated matter-overdensity field is derived from the kSZ velocity reconstruction noise. As shown in Ref. \cite{Smith:2018bpn}, the noise in the kSZ-tomography-based reconstruction of the velocity field is given by
\begin{equation}
N_{vv}(k_L, \mu_L) = \mu_{L}^{-2}\frac{2\pi \chi_*^2}{K_*^2}\Bigg[\int dk_S \frac{k_SP_{ge}^{\rm NL}(k_S)^2}{P_{gg}^{\rm NL}(k_S)\ C_{\ell=k_S\chi_*}^{\text{tot}}}\Bigg]^{-1},
\label{eq:NvvFullExpression}
\end{equation}
where $\chi_{*}$ refers to the comoving distance to the redshift under consideration $z_{*}$, $k_L$ refers to the long-wavelength mode, $k_S$ refers to the short-wavelength mode, and $\mu_L$ refers to the angle of the large-scale mode with respect to the line of sight, i.e., $\mu_L = \hat{\bm{k}}_L\cdot \hat{\bm{n}}$. Furthermore, $P_{gg}^{\rm NL}(k_{S})$ is the small-scale galaxy auto-power spectrum and $P_{ge}^{\rm NL}(k_{S})$ is the small-scale galaxy-electron cross-power spectrum. Finally, in the above equation we use the radial weight function $K_{*}$, given by
\begin{equation}
K_* \equiv -T_{\text{CMB}}\sigma_T\bar{n}_{e,0}e^{-\tau(\chi_*)}(1+z_*)^2,
\end{equation}
where $\bar{n}_{e,0}$ is the mean electron density today, and $\tau$ is the optical depth. It is important to note that the velocity reconstruction noise is independent of the magnitude of $k_L$.
Using the late-time, linearized, continuity-equation-based relation between the peculiar-velocity field and matter-overdensity field, we can write the noise in the matter reconstruction as
\begin{equation}
N_{mm}(k_L, \mu) = \frac{k_L^2}{(faH)_*^2}N_{vv}(k_L, \mu),
\label{eq:MatReconNvv}
\end{equation}
where $f$ refers to the linear growth rate $d \ln{G}/ d \ln{a}$, $H$ is the Hubble parameter, and $a$ is the scale factor at the redshift of interest. Since $N_{vv}$ is independent of the magnitude of $k_{L}$, the above relation implies that the noise in the reconstructed matter power spectrum is proportional to $k_{L}^2$; i.e., the noise is lowest on the largest scales.
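In code, the continuity-equation mapping is a one-line rescaling. The Python sketch below takes $N_{vv}$ as a precomputed number, since evaluating Eq.~\eqref{eq:NvvFullExpression} requires the halo-model spectra; the parameter values in the checks are illustrative:

```python
def N_mm(k_L, N_vv, f, a, H):
    """Matter-reconstruction noise from the velocity-reconstruction
    noise: N_mm = k_L^2 / (f a H)^2 * N_vv.

    Because N_vv is independent of |k_L|, N_mm grows as k_L^2, so the
    matter reconstruction is cleanest on the largest scales.
    """
    return k_L**2 / (f * a * H)**2 * N_vv
```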
The small-scale galaxy-galaxy and galaxy-electron power spectra appearing in Eq.~\eqref{eq:NvvFullExpression} are calculated within the halo model including the halo occupation distribution (HOD) \cite{Leauthaud:2011rj, Leauthaud:2011zt}. The specific modelling assumptions and parameter values used to construct the small-scale spectra can be found in Appendix A of Ref. \cite{AnilKumar:2022flx}. To ensure that the computed small-scale spectra under the HOD model are consistent with the assumed experiment specifications, we use the following prescription. In the HOD model, the galaxy sample is specified by imposing a particular threshold stellar mass $m_\star^{\rm thresh}$ of observable galaxies. For each configuration, we choose an $m_\star^{\rm thresh}$ such that the total predicted number-density of observed galaxies matches the number density for the given experiment.
Finally, in order to complete the model of the velocity-reconstruction noise, we define the CMB contribution as follows. The total CMB contribution $C_\ell^{\rm tot}$, appearing in Eq.~\eqref{eq:NvvFullExpression}, is assumed to be
\begin{equation}
C_{\ell}^{\text{tot}} = C_{\ell}^{TT} + C_{\ell}^{\text{kSZ-late-time}} + N_{\ell},
\label{eq:Cll_contributions}
\end{equation}
where $C_{\ell}^{TT}$ is the lensed CMB temperature power spectrum, $C_{\ell}^{\text{kSZ-late-time}}$ is the low-redshift contribution to kSZ, and finally $N_{\ell}$ is the instrumental-noise power spectrum of the CMB map, which is modelled as
\begin{equation}
N_{\ell} = s^2\exp\Bigg[\frac{\ell(\ell + 1)\theta_{\rm FWHM}^2}{8\ln 2}\Bigg].
\label{eq:CMBNoise}
\end{equation}
Here, $s$ denotes the instrument sensitivity and $\theta_{\rm FWHM}$ the beam resolution. We do not include a contribution from atmospheric noise, since it is expected to be subdominant to the instrument and kSZ contributions at the relevant high multipoles $\ell > 3000$.
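Since $s$ and $\theta_{\rm FWHM}$ are quoted in $\mu$K-arcmin and arcmin, both must be converted to radians before evaluating the formula. A minimal Python sketch of this instrument-noise model:

```python
import numpy as np

ARCMIN = np.pi / (180.0 * 60.0)   # arcmin -> radians

def N_ell(ell, s_uK_arcmin, theta_fwhm_arcmin):
    """White instrument noise with a Gaussian beam deconvolution.

    s is the map sensitivity in muK-arcmin and theta_FWHM the beam in
    arcmin; both are converted to radians, giving N_ell in muK^2.
    """
    s = s_uK_arcmin * ARCMIN
    theta = theta_fwhm_arcmin * ARCMIN
    return s**2 * np.exp(ell * (ell + 1) * theta**2 / (8.0 * np.log(2.0)))
```

The exponential beam factor is what drives the strong dependence of the forecasts on $\theta_{\rm FWHM}$ at the high multipoles relevant for kSZ.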
\section{Results}
\label{sec:results}
In this section, we provide forecasts for two experimental configurations, choosing a fixed, fiducial set of values for the survey parameters to model the noise expected in each case. We then present the dependence of $\sigma_{\hat{A}}$ on the survey parameters by varying each independently, to better establish directions for improvement in future surveys.
\subsection{Baseline Forecasts}
We forecast future sensitivity to the amplitude of the CIP by evaluating Eq.~\eqref{eqn:variance} for two experimental configurations: (1) a high-number-density photometric survey similar to VRO \cite{LSSTScience:2009jmu} along with a CMB experiment with specifications that match CMB-S4 \cite{CMB-S4:2016ple}, and (2) a low-number-density spectroscopic survey like DESI \cite{DESI:2016fyo} with a CMB experiment like SO \cite{SimonsObservatory:2018koc}. The set of experimental survey parameters used in our calculation has been taken from Ref. \cite{AnilKumar:2022flx} and is summarized in Table \ref{tab:baselineSpecs}.
\begin{table}%
\caption{\label{tab:baselineSpecs}%
Baseline configurations for the cross-correlated CMB and LSS experiments. Values for baseline 1 match the specifications of the VRO survey and CMB-S4. The values for baseline 2 are similar to those expected for DESI and SO. The chosen values for the CIP bias are taken from Table 1 of Ref. \cite{Barreira:2020lva}. The survey volumes are the same across the two configurations to emphasize the dependence of the results on galaxy number density and photo-$z$ errors.
}
\begin{ruledtabular}
\begin{tabular}{cccc}
& & \textrm{baseline 1} & \textrm{baseline 2}\\
\colrule
redshift & $z$ & 1.0 & 1.0\\
survey volume & $V$ & 100 $\ \text{Gpc}^{3}$ & 100$\ \text{Gpc}^3$\\
halo bias & $b_h$ & 1.6 & 1.6\\
galaxy density & $n_{\rm{gal}}$ & $10^{-2}\ \ \text{Mpc}^{-3}$ & $2\times10^{-4}\ \ \text{Mpc}^{-3}$\\
photo-$z$ error & $\sigma_z$ & 0.06 & - \\
threshold mass & $m_\star^{\rm thresh}$ & $10^{9.5}\ M_\odot$ & $10^{11}\ M_\odot$\\
CIP bias & $b_{\rm CIP}$ & 0.32 & 0.40\\
CMB resolution & $\theta_{\rm FWHM}$ & 1.5 arcmin & 1.5 arcmin\\
CMB sensitivity & $s$ & 1 $\ \mu \rm{K}-$arcmin & 5 $\ \mu \rm{K}-$arcmin\\
\end{tabular}
\end{ruledtabular}
\end{table} %
It is important to note that the CIP bias $b_{\rm CIP}$ is degenerate with the CIP amplitude. Despite this degeneracy, in our constructed estimator $\hat{A}$ and the associated variance $\sigma_{\hat{A}}$ we continue to treat $A$ and $b_{\rm CIP}$ as separate parameters, to clearly establish the dependence of $\sigma_{\hat{A}}$ on the chosen value of the bias. The exact value of $b_{\rm CIP}$ can be computed using simulations, and is expected to be of order unity, as presented in Refs. \cite{Barreira:2019qdl,Barreira:2020lva}. To remain consistent with our previous definitions, for these forecasts, we fix the value of $b_{\rm CIP}$, assuming that the galaxy samples are selected by a threshold stellar mass $m^{\rm thresh}_{\star}$. The $m^{\rm thresh}_{\star}$ values are chosen to match the predicted galaxy number density of each survey and are consistent with the small-scale galaxy power spectra used to compute the velocity-reconstruction noise for each experimental configuration. The assumed value of $m_\star^{\rm thresh}$ for each survey, along with the corresponding value of $b_{\rm CIP}$ estimated from the results in Refs. \cite{Barreira:2019qdl,Barreira:2020lva}, is also included in Table \ref{tab:baselineSpecs}.
To compute $\sigma_{\widehat A}$, instead of summing discretely over Fourier modes we evaluate Eq.~\eqref{eqn:variance} in the continuous limit:
\begin{equation}
\begin{split}
\sigma_{\widehat A}^2 &= b_{\rm CIP}^{-4}\left[\frac{V}{2}\int \frac{dk^3}{(2\pi)^3}\left(\frac{F(k)}{P_{\Delta\Delta}^N(\bm{k})}\right)^2\right]^{-1}, \\
&= b_{\rm CIP}^{-4}\left[\frac{V}{2}\int_{k_{\rm min}}^{k_{\rm max}} \int_{-1}^{1} \frac{k^2dk\ d\mu}{(2\pi)^2}\left(\frac{F(k)}{P_{\Delta\Delta}^N(\bm{k})}\right)^2\right]^{-1},
\end{split}
\label{eq: sigmaA_integral}
\end{equation}
where we have used the fact that the variance $P_{\Delta\Delta}^N$ depends only on $k$ and $\mu$, the latter dependence induced by the kSZ-based velocity reconstruction and the inclusion of photo-$z$ errors. For our forecasts, we adopt the canonical choice $F(k) = 1/k^3$. The integral over Fourier modes runs from a lower limit $k_\text{min} \equiv \pi/V^{1/3}$, set by the survey volume $V$, to an upper limit $k_\text{max} \approx 10^{-1} \ \text{Mpc}^{-1}$.
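This integral is straightforward to evaluate numerically. The Python sketch below uses a toy, $\mu$-independent noise $P^N_{\Delta\Delta}(k) = C k^2$, the $k$ dependence expected when the kSZ matter-reconstruction noise dominates [Eq.~\eqref{eq:MatReconNvv}]; the normalization $C$, volume, and bias values are placeholders, so only scalings (not absolute numbers) are meaningful:

```python
import numpy as np

def _trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sigma_A(k_min, k_max=0.1, V=1.0e11, C=1.0, b_cip=0.32, n=200_000):
    """Continuous-limit variance for F(k) = 1/k^3 and toy noise
    P^N(k) = C k^2 (no mu dependence, so the mu integral gives 2)."""
    k = np.logspace(np.log10(k_min), np.log10(k_max), n)
    ratio = (1.0 / k**3) / (C * k**2)              # F(k) / P^N(k)
    integrand = 2.0 * k**2 * ratio**2 / (2.0 * np.pi)**2
    return b_cip**-2 * (0.5 * V * _trapz(integrand, k))**-0.5
```

With this toy noise, $(F/P^N)^2 \propto k^{-10}$ and the mode sum scales as $k_{\rm min}^{-7}$, so $\sigma_{\hat A} \propto k_{\rm min}^{3.5}$ for $k_{\rm min} \ll k_{\rm max}$, matching the noiseless scaling found in the parameter variations below.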
For the configuration of VRO and CMB-S4, we find $\sigma_{\hat{A}} \approx 2.3\times10^{-9}$, which corresponds to a relative sensitivity of $\sigma_{\hat{A}}/A_s \approx 1.0$, where $A_s$ is the amplitude of the primordial power spectrum. Similarly, for the configuration of DESI and an SO-like CMB experiment, we find $\sigma_{\hat{A}} \approx 5.4 \times 10^{-8}$, corresponding to a relative uncertainty of $\sigma_{\hat{A}}/ A_s \approx 25$. For these estimates, we use the value $A_s = 2.2 \times 10^{-9}$ determined by the most recent \textit{Planck} 2018 CMB analysis \cite{Planck:2018jri}.
\subsection{Experiment Parameter Variations}
In order to assess which experimental limitations have the most significant impact on our ability to measure the CIP power spectrum amplitude, we isolate the effects of certain experimental parameters from Table \ref{tab:baselineSpecs} by varying each individually and holding all other elements of the configuration constant. The results of these variations are discussed below.
First, to highlight the scales that contribute most prominently to the signal, we plot the value of $\sigma_{\hat{A}}$ as a function of the smallest measurable Fourier mode $k_{\rm min}$. This variation corresponds to changing the largest scale recoverable from the survey volume $V$, and directly sets the lower limit of the integral in Eq.~\eqref{eq: sigmaA_integral}. The results for both baselines are presented in Fig. \ref{fig:error_dep_kmin_ngal} (left). They indicate that the inclusion of larger scales increases survey sensitivity to the CIP power-spectrum amplitude. This is expected, not only because the CIP signal is largest at small $k$ [since we have chosen $F(k)\sim 1/k^3$ for this analysis], but also because the noise in the reconstructed matter-overdensity field is smallest on the largest scales [see Eq.~\eqref{eq:MatReconNvv}]. At lower values of $k_{\rm min}$, baseline 1 performs better than baseline 2, likely due to its lower shot noise (higher $n_{\rm gal}$). However, baseline 2 performs better at higher $k_{\rm min}$, where the effects of shot noise are minimised and photo-$z$ errors dominate the baseline 1 estimates.
Next, we highlight the effects of increasing the galaxy number density $n_{\rm gal}$ on the value of $\sigma_{\hat{A}}$. The results of this variation for each baseline (holding all other experimental parameters constant) are shown in Figure \ref{fig:error_dep_kmin_ngal} (right): a higher galaxy number density allows for higher survey sensitivity to the CIP power-spectrum amplitude $A$. This behaviour follows directly from the fact that a higher galaxy number density equates to a lower shot noise, which not only allows for the measurement of larger-scale galaxy modes but also decreases the matter-reconstruction noise (through a lower overall $N_{vv}$). The two curves are roughly parallel for $10^{-4}\ {\rm Mpc}^{-3} < n_{\rm gal} < 10^{-2}\ {\rm Mpc}^{-3}$, with baseline 2 performing better in this region due to its spectroscopic redshift measurements. However, baseline 1 (VRO+CMB-S4) performs better at higher galaxy number densities, while the results for baseline 2 (DESI+SO) plateau, likely due to the difference in CMB resolutions.
More interesting is the effect of galaxy number density on the relation between $\sigma_{\hat{A}}$ and $k_{\rm min}$. Figure \ref{fig:error_dep_kmin_diff_ngal} displays $\sigma_{\hat{A}}$ as a function of $k_{\rm min}$ for different values of $n_{\rm gal}$. For these curves we assume the baseline 1 configuration for all other survey parameters and keep the value of $b_{\rm CIP}$ fixed. The displayed results indicate that a higher galaxy number density results in a steeper decrease of $\sigma_{\hat A}$ with decreasing $k_{\rm min}$; i.e., a higher $n_{\rm gal}$ allows for a greater order-of-magnitude improvement in $\sigma_{\hat A}$ with a fixed increase in survey volume. This effect is particularly evident for $10^{-3}\ {\rm Mpc}^{-1} < k_{\rm min} < 10^{-2}\ {\rm Mpc}^{-1}$. The black dashed line, labelled `No Noise', portrays the dependence of $\sigma_{\hat{A}}$ on $k_{\rm min}$ in the absence of shot noise ($n_{\rm gal} \rightarrow \infty$) and photo-$z$ errors. In this ideal case, we see that $\sigma_{\hat{A}}$ approximately scales as $k_{\rm min}^{3.5}$. This behaviour is explained by the chosen model for $P_{\Delta\Delta}(k)$ [with $F(k) = 1/k^3$] along with the $k^2$ scale dependence of the matter-reconstruction noise [Eq.~\eqref{eq:MatReconNvv}].
Conversely, assuming the same baseline configuration as above, we found that varying $k_{\rm min}$ between $10^{-3}\ {\rm Mpc}^{-1}$ and $10^{-2}\ {\rm Mpc}^{-1}$ has a minimal impact on the steepness of the dependence of $\sigma_{\hat{A}}$ on the galaxy number density. That is, even though a decreased $k_{\rm min}$ improves sensitivity to the CIP power-spectrum amplitude, a fixed increase in the galaxy number density consistently leads to a fixed order-of-magnitude improvement in $\sigma_{\hat {A}}$ for $10^{-3}\ {\rm Mpc}^{-1} < k_{\rm min} < 10^{-2}\ {\rm Mpc}^{-1}$. The results diverge significantly only for $n_{\rm gal} > 10^{-1.75}\ {\rm Mpc}^{-3}$, likely because the shot noise becomes subdominant at these higher galaxy number densities.
Finally, to highlight the effects of CMB noise on survey sensitivity to $A$, for each of the discussed baseline configurations we varied the CMB telescope sensitivity $s$ and resolution $\theta_{\rm FWHM}$ individually, holding all other experimental parameters constant. We found that, once again, the difference in galaxy number density across the two baselines severely impacts the order-of-magnitude improvement in $\sigma_{\hat{A}}$ for a fixed improvement in CMB noise parameters. Varying the CMB sensitivity from $0.25\ \mu$K-arcmin to $10\ \mu$K-arcmin produces a steady increase in $\sigma_{\hat{A}}$ by a factor of 3 for the baseline 1 configuration and a factor of 1.1 for baseline 2. Similarly, varying the CMB telescope resolution from 0.1 arcmin to 10 arcmin produces a relatively steady increase in both cases, by a factor of 250 for baseline 1 and a factor of 40 for baseline 2. This indicates that, at higher values of $n_{\rm gal}$, surveys are more sensitive to increases in CMB instrument noise.
For completeness, we also varied the photo-$z$ error assumed for baseline 1, holding all other experiment parameters fixed. Varying $\sigma_z$ from 0.0 to 2.0 increased $\sigma_{\hat A}$ by a factor of 3.5. This minimal effect is expected, given that the measurement relies primarily on signal from the largest scales. We also varied the assumed value of the Gaussian halo bias $b_{h}$ for both baseline configurations and found its effect on $\sigma_{\hat{A}}$ to be negligible.
\section{Discussion}
In this paper, we forecast future survey sensitivity to the amplitude of the CIP power spectrum $A$, assuming that the compensated perturbations are sourced primordially. The compensated nature of these isocurvature perturbations causes CIPs to contribute only at second order to the CMB, leading to poor constraints that allow for a CIP amplitude over 5 orders of magnitude larger than that of the primordial adiabatic perturbation. In contrast, the CIP amplitude contributes at leading order to the galaxy-overdensity field [see Eq.~\eqref{eqn:deltag}], making it a valuable statistic with which to probe CIPs. We therefore construct a minimum-variance estimator that compares the amplitude of the galaxy-density fluctuation to the independently obtained matter-density amplitude on a mode-by-mode basis. We show that leveraging the ability to measure the matter-overdensity field using kSZ tomography, independently of the galaxy-overdensity field, allows one to probe CIP amplitudes as small as that of the primordial adiabatic perturbation via sample-variance cancellation.
We use the minimum-variance estimator to forecast that a survey configuration corresponding to CMB-S4 and VRO results in a sensitivity of $\sigma_{\hat{A}} \approx 2.3 \times 10^{-9}$. Similarly, a configuration corresponding to SO and DESI results in a sensitivity of $\sigma_{\hat{A}} \approx 5.4 \times 10^{-8}$. These sensitivities correspond to relative uncertainties of $\sigma_{\hat{A}}/A_s \approx 1.0$ and $\sigma_{\hat{A}}/A_s \approx 25$ for each of the combinations, respectively, where $A_s$ represents the amplitude of the primordial power spectrum. For these forecasts, we assume a fixed value for the CIP bias $b_{\rm CIP}$ for each configuration, drawing from the simulation-based results presented in Refs. \cite{Barreira:2019qdl, Barreira:2020lva}. Although the CIP bias is, strictly speaking, perfectly degenerate with the CIP perturbation amplitude, we choose not to consolidate these two parameters into a single amplitude term to make explicit the dependence of $\sigma_{\hat{A}}$ on the value of $b_{\rm CIP}$. Furthermore, since this dependence is just a factor of scale, it is straightforward to map the sensitivities quoted in this paper to a different value of $b_{\rm CIP}$ or to a constraint on a consolidated amplitude parameter $b_{\rm CIP}^{2} \times A$.
The dramatic improvement in sensitivity to CIPs derives from the possibility, enabled by kSZ tomography, to measure the galaxy and total-matter fields independently and thereby circumvent the cosmic-variance limit in many other probes. Thus, even one very well measured Fourier mode allows the CIP to be probed. Our results indicate, moreover, that the sensitivity comes primarily from measurements at the largest scales, a consequence largely of the $k$ dependence of the relation between the total-matter perturbation and the peculiar velocity probed by the kSZ effect. We thus conclude that in order for the promising statistical errors forecast here to be achieved, systematic effects that might affect measurement of galaxy-density and CMB-temperature perturbations on the largest distance scales must be well under control. We also surmise that relativistic effects will need to be included in the analysis.
The sensitivity to the CIP amplitude that we forecast here compares well (within a factor of $\sim4$) with Ref.~\citep{Hotinli:2019wdp}, where the authors evaluated the prospects of probing \textit{correlated} CIP fluctuations, similarly with kSZ tomography. The most recent upper limits on the CIP amplitude are provided by the scale-dependent mass-to-light ratio from measurements of BAOs~\citep{2016PhRvL.116t1302S,Soumagnac:2018atx}, which are comparable to the constraints from the CMB~\citep{Smith:2017ndr}, of the order $\sigma_A\sim\mathcal{O}(10^{-4})$. These constraints are also comparable to the forecast sensitivities to the BAO phase shift induced by spatially varying correlated CIP fluctuations explored in Ref.~\citep{Heinrich:2019sxl}. More recently, Ref.~\citep{Hotinli:2021xln} proposed using measurements of the velocity acoustic oscillations (VAOs) during cosmic dawn~\citep{Munoz:2019rhi,Munoz:2019fkt} to probe both correlated and uncorrelated CIP fluctuations at a sensitivity reaching $\sigma_A\sim\mathcal{O}(10^{-5})$ in the foreseeable future. These studies suggest that the sensitivity of the kSZ tomography studied here and in Ref.~\citep{Hotinli:2019wdp} will likely remain orders of magnitude better than that of the CMB, BAO, and VAO signals.
Constraining the CIP amplitude at higher order will not only allow for a better understanding of whether baryon and CDM fluctuations trace the matter density but will also help rule out different, non-trivial models of many-field inflation. In fact, to accurately probe signatures of deviations from adiabaticity and Gaussianity in the early Universe, accounting for CIPs may be essential. For example, Ref.~\citep{Barreira:2020lva} shows that, depending on the degree of correlation of the CIP with the primordial adiabatic perturbation, the CIP signal may exactly match the scale-dependent signal from the $f_{\rm NL}$ term when probing the scale-dependent bias for signatures of primordial non-Gaussianity in the single-field scenario. In the curvaton scenario, depending on the correlation coefficient assumed between the CIP and the inflaton or the curvaton, we would expect similar degeneracies to arise when using the galaxy bias to simultaneously probe $f_{\rm NL}$ and $\tau_{\rm NL}$. Similar degeneracies may also affect the fidelity of lensing data extracted from the CMB, due to similarities between the effects of lensing and CIPs on the CMB two-point statistics~\citep{Heinrich:2016gqe}.
This emphasizes the importance of considering CIPs to make unbiased measurements of early-Universe characteristics. Although we do not consider the effects of non-Gaussianities in our current estimator construction and make a simple set of forecasts under the null hypothesis, we highlight the effectiveness of kSZ tomography as a probe of early-Universe cosmology. When considering more complicated models including the CIP, we expect cross-correlation tools such as kSZ tomography, multi-tracer analysis with different populations of galaxies and haloes, CMB lensing, and many others to be essential in obtaining tighter constraints under sample variance cancellation and breaking degeneracies across the varying signatures of the inflationary Universe.
\begin{acknowledgments}
We would like to thank Gabriela Sato-Polito for helpful discussions. This work was supported by NSF Grant No.\ 2112699 and the Simons Foundation. SCH is supported by the Horizon Fellowship at Johns Hopkins University.
\end{acknowledgments}
\bibliography{kSZCIP} |
Title:
Two-dimensional particle simulation of the boundary between a hot pair plasma and magnetized electrons and protons: out-of-plane magnetic field |
Abstract: By means of a particle-in-cell (PIC) simulation, we study the interaction
between a uniform magnetized ambient electron-proton plasma at rest and an
unmagnetized pair plasma, which we inject at one simulation boundary with a
mildly relativistic mean speed and temperature. The magnetic field points out
of the simulation plane. The injected pair plasma expels the magnetic field and
piles it up at its front. It traps ambient electrons and drags them across the
protons. An electric field grows, which accelerates protons into the pair
cloud's expansion direction. This electromagnetic pulse separates the pair
cloud from the ambient plasma. Electrons and positrons, which drift in the
pulse's nonuniform field, trigger an instability that disrupts the current
sheet ahead of the pulse. The wave vector of the growing perturbation is
orthogonal to the magnetic field direction and magnetic tension cannot
stabilize it. The electromagnetic pulse becomes permeable for pair plasma,
which forms new electromagnetic pulses ahead of the initial one. A transition
layer develops with a thickness of a few proton skin depths, in which protons
and positrons are accelerated by strong electromagnetic fields. Protons form
dense clumps surrounded by a strong magnetic field. The thickness of the
transition layer grows less rapidly than we would expect from the typical
speeds of the pair plasma particles and the latter transfer momentum to
protons; hence, the transition layer acts as a discontinuity, separating the
pair plasma from the ambient plasma. Such a discontinuity is an important
building block for astrophysical pair plasma jets.
| https://export.arxiv.org/pdf/2208.12075 |
\preprint{AIP/123-QED}
\title[Instability of the discontinuity]{Two-dimensional particle simulation of the boundary between a hot pair plasma and magnetized electrons and protons: out-of-plane magnetic field}
\author{M.~E.~Dieckmann}
\affiliation{Department of Science and Technology (ITN), Link\"oping University, 60174 Norrk\"oping, Sweden}
\email{[email protected]}
\author{D.~Folini}
\author{R.~Walder}
\affiliation{Univ Lyon, ENS de Lyon, Univ Lyon 1, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, F-69230, Saint-Genis-Laval, France}
\author{A.~Charlet}
\affiliation{Univ Lyon, ENS de Lyon, Univ Lyon 1, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574, F-69230, Saint-Genis-Laval, France}
\affiliation{Laboratoire Univers et Particules de Montpellier (LUPM), Universit\'e de Montpellier, CNRS/IN2P3, CC72, place Eug\`ene Bataillon, F-34095 Montpellier Cedex 5, France}
\affiliation{Astrophysics Research Center of the Open University (ARCO), The Open University of Israel, P.O. Box 808, Ra’anana 4353701, Israel}
\author{A.~Marcowith}%
\affiliation{Laboratoire Univers et Particules de Montpellier (LUPM), Universit\'e de Montpellier, CNRS/IN2P3, CC72, place Eug\`ene Bataillon, F-34095 Montpellier Cedex 5, France}
\date{\today}%
\section{\label{intro}Introduction}
Some binary systems consisting of an accreting neutron star or black hole and a companion star are sources of pair plasma.~\cite{Mirabel94,Siegert16} The pair plasma is generated by energetic processes near the inner accretion disc or through the interaction of electromagnetic radiation with the strong intrinsic magnetic fields of the compact object or its accretion disc.~\cite{Blandford77,Blandford82,Yuan14} The ejected pair plasma must eventually interact with the wind of the stellar companion. This interaction can channel the pair outflow into a jet that can reach a superluminal speed.\cite{Mirabel94,Fender04} Such pair outflows have been proposed as a possible source of galactic positrons.\cite{Prantzos11} Hydrodynamic models provide an intuitive description of the jet structure.~\cite{Aloy99,Bromberg11,Perucho08,Charlet22} They apply if the mean free path of the particles, which constitute the fluid, is small compared to the spatial scales of interest. Hydrodynamic models take into account important elementary structures like sound and rarefaction waves, shocks, and contact discontinuities. Contact discontinuities separate two fluids of different origin, composition, density, and temperature.
Figure~\ref{figure1} is a sketch of a hydrodynamic jet near its front. The plasma in the spine flow has a low mass density and a high bulk velocity. The ambient plasma far from the jet is at rest. A contact discontinuity (CD) separates both fluids. In what follows, we assume that all material enclosed by the CD is pair plasma while all material outside the CD is plasma composed of protons and electrons. We can thus distinguish both fluids by the carrier of positive charge. The streaming pair plasma is slowed down, compressed, and heated up as it approaches the CD forming a layer of hot material near it; the inner cocoon. An internal shock develops between the inner cocoon and the spine flow if the pair plasma's mean speed change exceeds the local sound speed. The thermal pressure of the hot plasma in the inner cocoon pushes the CD away from the spine flow. The moving CD accelerates the nearby ambient plasma. A shock forms at the front of the accelerated ambient material if its speed exceeds the local sound speed. Most of the jet's momentum is transferred to the CD at the jet's head, giving the jet its characteristic elongated shape.
Hydrodynamic models assume that CDs and shocks are thin compared to the scales of interest. The mean free path of particles in the interstellar medium or a stellar wind is, however, not negligibly small compared to the jet size. Therefore, binary collisions are replaced by mechanisms based on the electromagnetic fields induced by the collective motion of the plasma particles, as the means to exchange momentum and energy between particles. Certain properties of shocks and discontinuities in such material, which is known as collisionless plasma, are different from those of their hydrodynamic counterparts with potentially far-reaching astrophysical consequences. It is important to determine if and how electromagnetic fields can sustain discontinuities in a collisionless plasma and if the discontinuities remain thin compared to the spatial scales of interest.
Particle-in-cell (PIC) simulations can resolve all structures in collisionless plasma. Most previous PIC simulation studies related to jets in collisionless plasma have focused on pair plasma shocks\cite{Nishikawa03,Chang08} and electron-ion shocks,\cite{Frederiksen04,Spitkovsky08a} which correspond to the internal and external shocks in Fig.~\ref{figure1}. One finding was that magnetic fields in the transition layers of relativistic shocks can have energy densities that exceed those expected from compression of the upstream magnetic field. This magnetic energy can be released through magnetic reconnection.~\cite{Melzani14,Marcowith16} Another result is that collisionless shocks can accelerate a small fraction of plasma particles to cosmic ray energies.~\cite{Marcowith16} The onset of such an acceleration has been studied with PIC simulations.~\cite{Spitkovsky08b} Discontinuities between pair plasma and electron-proton plasma like the CD in Fig.~\ref{figure1} have not been explored to the same extent. Such a discontinuity has been observed in two-dimensional PIC simulations of a pair plasma that propagated through a magnetized electron-proton plasma~\cite{Dieckmann19,Dieckmannetal20} and studied in one spatial dimension~\cite{Dieckmann20} assuming that it is planar. In the two-dimensional simulation,~\cite{Dieckmannetal20} the discontinuity became unstable to a magnetic Rayleigh-Taylor-like instability.~\cite{Winske96,Stone07,Bret11,Liu19,Hillier16} The wave vector of the perturbation was parallel to the magnetic field and the instability increased magnetic tension, which eventually quenched the instability.
Here we examine the interface between an expanding unmagnetized pair plasma and a magnetized electron-proton plasma at rest using a two-dimensional particle-in-cell (PIC) simulation. We let the interface, which takes the role of the CD in Fig.~\ref{figure1}, grow self-consistently. The ambient electron-proton plasma is permeated by a spatially uniform magnetic field, which is oriented perpendicularly to the expansion direction of the pair cloud. Its magnetic pressure matches the electron thermal pressure. We use the same plasma conditions as in a previous simulation\cite{Dieckmannetal20} apart from a lower mean speed of the pair cloud and a magnetic field direction, which is now normal to the simulation plane. We obtain the following results. The expanding pair plasma expels the magnetic field and piles it up at its front. The moving magnetic field traps the electrons of the ambient plasma and pushes them into the pair plasma's expansion direction. Their current induces an electric field, which accelerates the protons to a speed comparable to the interface's speed. In what follows we refer to this interface as the electromagnetic pulse (EMP). It separates positrons from protons and resembles the one observed previously~\cite{Dieckmannetal20} at early simulation times.
The magnetic field points out of the simulation plane and can be rearranged by a perturbation without bending field lines. Interchange modes of the magnetic Rayleigh-Taylor instability, which grow in such a plasma configuration, do not increase magnetic tension. Hence, they tend to be more unstable and disruptive than the undular mode~\cite{Liu19} studied in Ref.~\cite{Dieckmannetal20}. We do not observe these interchange modes, because the EMP is destroyed before the magnetic Rayleigh-Taylor instability can set in. The current sheet, which confines the magnetic field ahead of the EMP, is sustained by ambient electrons that drift in the EMP's field and positrons that leaked through it. This current sheet is disrupted by a streaming instability between these particles and the protons. The growing waves have a wave vector parallel to the drift direction of the ambient electrons, which was not resolved in the previous simulations.~\cite{Dieckmann19,Dieckmann20,Dieckmannetal20} These waves grow rapidly and their saturation lets the EMP become permeable for pair plasma, broadening the transition layer between the pair cloud and the ambient plasma. The speed, at which the thickness of the transition layer between the ambient plasma and the pair plasma increases, is well below the average speed of the pair plasma particles. This, together with the observed momentum transfer from the pair plasma to the ambient plasma, implies that the transition layer still acts as a discontinuity that is thin compared to the typical particle mean free paths near relativistic jets.
The structure of our paper is as follows. Section 2 discusses the initial conditions of our simulation. Section 3 presents the early phase of the plasma collision while the late evolution is discussed in Section 4. Section 5 summarizes our findings.
\section{\label{setup}Initial conditions for the simulation}
Each plasma species in a collisionless plasma is represented by a phase space density distribution, which is a function of independent position and velocity coordinates. A PIC simulation code approximates the phase space fluid by an ensemble of computational particles (CPs), which have position and velocity coordinates and the same charge-to-mass ratio as the plasma species they represent. The current contributions of all CPs are summed up and give the macroscopic current density. This current density, the electric field, and the magnetic field are defined on a numerical grid and are connected via discretized forms of Maxwell's equations. The electromagnetic forces are interpolated to the position of each CP and update its velocity. We specify the time $T_{sim}$, during which we want to evolve the plasma, and the code subdivides it into time steps $\Delta_t$ with a duration that depends on the code's numerical scheme. The numerical cycle of the PIC code EPOCH we use is discussed in detail elsewhere.\cite{Arber2015}
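To make the numerical cycle concrete, the following toy code sketches a one-dimensional electrostatic PIC step (cloud-in-cell charge deposit, FFT-based Poisson solve, leapfrog push) in normalized units with $e=m_e=\epsilon_0=1$. It is purely illustrative and much simpler than the electromagnetic scheme of EPOCH:

```python
import numpy as np

def pic_step(x, v, q, grid_n, L, dt):
    """One cycle of a toy 1D electrostatic PIC code on a periodic grid."""
    dx = L / grid_n
    # 1) deposit charge with cloud-in-cell (linear) weighting
    g = x / dx
    i = np.floor(g).astype(int) % grid_n
    w = g - np.floor(g)
    rho = np.zeros(grid_n)
    np.add.at(rho, i, q * (1 - w) / dx)
    np.add.at(rho, (i + 1) % grid_n, q * w / dx)
    # 2) solve Poisson's equation -phi'' = rho spectrally (k = 0 mode removed)
    k = 2 * np.pi * np.fft.fftfreq(grid_n, d=dx)
    rho_k = np.fft.fft(rho)
    phi_k = np.zeros_like(rho_k)
    phi_k[1:] = rho_k[1:] / k[1:] ** 2
    E = np.real(np.fft.ifft(-1j * k * phi_k))   # E = -d phi / dx
    # 3) interpolate E to the particles and push them (leapfrog, m = 1)
    Ep = E[i] * (1 - w) + E[(i + 1) % grid_n] * w
    v = v + q * Ep * dt
    x = (x + v * dt) % L
    return x, v, E
```

A uniformly loaded, cold species produces no net field and stays at rest, which is a standard sanity check for the deposit and field solver.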
The two-dimensional simulation box is filled with an ambient plasma with the electron density $n_0$ and equally dense protons with the proton-to-electron mass ratio $m_p/m_e=$ 1836. Both species have the temperature $T_0=$ 2 keV. We normalize time to $\omega_{pi}^{-1}$ with the proton plasma frequency $\omega_{pi} = {(e^2n_0/\epsilon_0m_p)}^{1/2}$ ($e,c,\epsilon_0,\mu_0$: elementary charge, speed of light, vacuum permittivity and permeability). Space is normalized to the proton skin depth $\lambda_{i} = c/\omega_{pi}$. The simulation box with periodic boundaries resolves the spatial interval $L_x = 35$ along $x$ by 12000 grid cells and $L_y=8.75$ along $y$ by 3000 grid cells. A magnetic field with the amplitude $B_0$ is aligned with $z$. The electron thermal pressure $P_e = n_0k_B T_0$ ($k_B$: Boltzmann constant) equals the magnetic pressure $P_B=B_0^2/2\mu_0$. The proton gyrofrequency $\omega_{ci}=eB_0/(m_p\omega_{pi})$ is $2.1\times 10^{-3}$. We inject at the boundary $x=0$ a pair plasma, which consists of electrons and positrons with the temperature $50T_0$ (100 keV), has the mean speed $v_d=0.6c$ along increasing $x$, and the respective densities $n_0$ measured in the rest frame of the simulation box.
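The quoted gyrofrequency follows from the pressure-balance condition $P_e = P_B$: it implies $v_A^2 = 2k_BT_0/m_p$ and hence $\omega_{ci}/\omega_{pi} = v_A/c$, independent of $n_0$. A quick check:

```python
import math

m_p_c2 = 938.272e6   # proton rest energy in eV
T0 = 2.0e3           # ambient temperature in eV

# B0^2/(2 mu0) = n0 kB T0  =>  v_A^2 = B0^2/(mu0 n0 m_p) = 2 kB T0 / m_p,
# and the normalized gyrofrequency e B0/(m_p omega_pi) equals v_A/c.
omega_ci = math.sqrt(2.0 * T0 / m_p_c2)
print(f"omega_ci / omega_pi = {omega_ci:.2e}")   # close to the quoted 2.1e-3
```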
We initialize the electrons and protons of the ambient plasma by 25 computational particles (CPs) per cell each. We inject 16 CPs per cell per time step to represent the electrons of the pair cloud and use the same number for the positrons. The simulation evolves the plasma until the final time $T_{sim}=200$ equivalent to 200$\omega_{pi}^{-1}$.
In what follows, we present the data on a grid that is shifted relative to the simulation grid. Data from the simulation interval $L_x/2 \le x < L_x$ is moved to the interval $-L_x/2 \le x < 0$. The boundary $x=0$, where we inject the pair cloud, is centered in the data grid. All displayed field components and plasma densities have been averaged over patches of 4 by 4 grid cells to improve the signal-to-noise ratio.
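The recentring and patch averaging can be reproduced with a few lines of array manipulation; a sketch (the array size is a smaller stand-in for the $12000\times3000$ simulation grid):

```python
import numpy as np

def recenter_and_smooth(field, patch=4):
    """Shift the periodic x-axis by half the box so that x = 0 is centered,
    then average over patch x patch cells, as done for the displayed data."""
    nx, ny = field.shape
    shifted = np.roll(field, nx // 2, axis=0)   # moves [L_x/2, L_x) to [-L_x/2, 0)
    return shifted.reshape(nx // patch, patch, ny // patch, patch).mean(axis=(1, 3))

data = np.random.default_rng(0).normal(size=(1200, 300))
print(recenter_and_smooth(data).shape)   # (300, 75)
```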
\section{\label{early}Initial evolution}
Figure~\ref{figure2} shows the plasma and field distribution at the time $t=5$.
The front of the injected electrons is located at $x \approx 0.6$ and that of the positrons at $x\approx 0.8$. Injected electrons occupy a smaller interval along $x$ than positrons and their density is higher. Most ambient electrons were expelled by the injected electrons and accumulated in the interval $0.6 \le x \le 0.8$. Pair cloud particles are scattered and reflected by the magnetized ambient plasma. Some return to the periodic boundary and cross it. Electrons in Fig.~\ref{figure2}(a) have expanded to $x\approx -0.45$ and positrons in Fig.~\ref{figure2}(b) to $x=-0.6$. As in the case of the upward-moving pair cloud front, the injected electrons are denser than the positrons and the ambient electrons accumulate ahead of them. The pair cloud, which expands in both directions from the injection boundary, is not symmetric around $x=0$. The injected pair cloud loses energy to the ambient plasma on its way up, and the reflected particles interact with newly injected pair cloud particles as they return to the boundary. Hence, the pair cloud in the half-space $x<0$ starts its expansion later, is closer to thermal equilibrium, and has a lower mean speed than the one in $x>0$.
Positrons fill an interval in Fig.~\ref{figure2}(b) that is 1.5 wide. A particle moving at the mean speed $v_d$ of the pair cloud would have propagated the distance $5v_d/c=3$ (in units of $\lambda_i$) at the time shown in Fig.~\ref{figure2}. The slowdown of the pair cloud by the ambient plasma increases its density beyond 1. Figure~\ref{figure2}(d) reveals a pile-up of protons in the interval $0.6 \le x \le 0.9$, which is trailed by a depletion at lower $x$. Another barely visible proton accumulation is located in the interval $-0.6 \le x \le -0.25$. Figure~\ref{figure2}(e) reveals why injected and ambient electrons remain separated. The expanding pair cloud expels the magnetic field and piles it up at its front, forming the structure we call the EMP. The magnetic field is amplified to about 3$B_0$ in the interval $0.6 \le x \le 0.9$. The normalized gyroradius $r_{g}(v_0) = v_0m_e/(3eB_0\lambda_i)$ of an ambient electron in the amplified magnetic field with a speed $v_0$, which equals the thermal speed corresponding to the temperature 2 keV, is approximately $5.5 \times 10^{-3}$. This gyroradius is well below the thickness $\approx 0.3$ of the EMP. The gyroradius of leptons with $v_0=v_d$ is about $0.07$. The EMP is strong and wide enough to confine the injected pair cloud and trap the ambient electrons magnetically.
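The two gyroradius values can be checked in normalized units, taking $v_0=(k_BT_0/m_e)^{1/2}$ for the thermal case and the relativistic momentum $\gamma v_d$ for a lepton moving at $v_d$ (our assumptions, consistent with the quoted numbers):

```python
import math

mass_ratio = 1836.0
wci = 2.1e-3                       # e B0 / (m_p omega_pi), from Section 2
wce = wci * mass_ratio             # e B0 / (m_e omega_pi)

# thermal ambient electron, T0 = 2 keV: v0 = sqrt(kB T0 / m_e)
v_th = math.sqrt(2.0e3 / 511.0e3)              # in units of c
r_thermal = v_th / (3.0 * wce)                 # in units of lambda_i, B = 3 B0
print(f"r_g(thermal) = {r_thermal:.1e}")       # close to the quoted 5.5e-3

# lepton at v0 = v_d = 0.6c: use the relativistic momentum gamma * v_d
vd = 0.6
gamma = 1.0 / math.sqrt(1.0 - vd**2)
r_drift = gamma * vd / (3.0 * wce)
print(f"r_g(v_d) = {r_drift:.1e}")             # close to the quoted 0.07
```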
Protons will only react to the electric field in Fig.~\ref{figure2}(f) since $\omega_{ci}T_{sim}=0.42$. We can estimate the speed to which they are accelerated once we know the EMP's propagation speed, which we determine with the help of the convective electric field. The magnetic field points along $z$ and the EMP propagates with the speed $v_p$ along $x$, which gives the convective electric field $E_y = v_p B_z$. We average the electric and magnetic field components over $y$ and plot them in Fig.~\ref{figure3}.
The magnetic $B_x$ and $B_y$ components oscillate around zero. The average $B_z$ vanishes for $0 \le x \le 0.5$ and equals $B_0$ for $x>1.1$. The y-averaged scaled $E_y$ component closely follows the EMP up to $x\approx 0.7$; its rear end moves at the speed $v_p\approx c/40$. The propagating EMP drags the ambient electrons with it. Their current drives the electric field in Fig.~\ref{figure2}(f). We estimate the proton velocity change $\Delta_v$ as follows. The average electric field $E_x \approx 1$ in Fig.~\ref{figure3}(b) corresponds to $E_{p,x}=c B_0$ in physical units. Protons are at rest before the EMP with the width $\Delta_p \approx 0.3\lambda_i$ and speed $v_p$ arrives. Their approximate exposure time to its electric field in physical units is $\delta_t = \Delta_p / v_p$ or $12 / \omega_{pi}$. The Lorentz force equation gives the approximate velocity change in physical units
\begin{equation}
\Delta_v \approx \frac{eE_{p,x}}{m_p} \delta_t = \frac{e c B_0}{m_p} \delta_t = 12 c\frac{\omega_{ci}}{\omega_{pi}}\approx v_p.
\label{estimate}
\end{equation}
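Evaluating Eq.~(\ref{estimate}) with the numbers from Section~\ref{setup}:

```python
wci_over_wpi = 2.1e-3        # e B0 / (m_p omega_pi)
v_p = 1.0 / 40.0             # EMP speed in units of c
width = 0.3                  # EMP width in units of lambda_i

dt = width / v_p             # exposure time in units of 1/omega_pi
dv = wci_over_wpi * dt       # velocity change in units of c
print(f"dt = {dt:.1f} / omega_pi, dv = {dv:.3f}c vs v_p = {v_p:.3f}c")
```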
Figure~\ref{figure4} considers the time $t=10$. The pair cloud is confined by EMPs on both sides of the boundary $x=0$. The central position along $x$ of each EMP oscillates as a function of $y$. The electric field points orthogonally to the front of the EMP, which is becoming increasingly distorted. It can thus also have a component $E_y\neq 0$. Most plots only show $E_x$ because we are primarily interested in how protons are accelerated along the mean expansion direction of the pair cloud.
Figure~\ref{figure4}(d) evidences that the amplitude of the proton density modulation along $x$ has increased; the one near the EMP in the half-space $x>0$ has reached the amplitude $0.1$. The proton density modulation continues to increase.
Figure~\ref{figure5} sheds light on the mechanism that deformed the initial EMP in Fig.~\ref{figure4} and will eventually lead to its destruction. We focus on the distributions of the electrons, positrons, and relevant electric field components near the right part of the initial EMP in Fig.~\ref{figure4}. We display the square root of the lepton densities in Fig.~\ref{figure5}(a-c). Ambient electrons in Fig.~\ref{figure5}(a) were piled up at $x\approx 1$ by the expanding pair cloud.
The magnetic field of the EMP is strong enough to confine the bulk of the pair cloud. The sizeable force the $E_x$ component exerts on particles slows down the electrons of the pair cloud and accelerates its positrons. As a result, the electron density in Fig.~\ref{figure5}(b) decreases sharply at the transition layer and hardly any electrons reach $x>1.2$. Some positrons cross the EMP and rotate in the homogeneous magnetic field of the ambient plasma until the moving EMP catches up with them. The motion of these particles gives rise to a net current along the negative $y$-direction ahead of the EMP. This current, together with that of the ambient electrons that drift in the electromagnetic field of the EMP and its gradient, gives rise to the sharp decrease of the magnetic field amplitude at the EMP's front.
The temperature of the ambient electrons and their mean speed $v_p \approx c/40$ relative to the EMP are well below the temperature and mean speed $v_d$ of the pair cloud; their gyroradius is thus much smaller than the width of the EMP and we can approximate their trajectory as gyrations around a drifting guiding center. Protons, on the other hand, are practically unmagnetized. We estimate the drift speed $v_{eb} = E_x/B_z$ of ambient electrons using the values $E_x \approx 1$ and $B_z \approx 2.5$ obtained from Fig.~\ref{figure4}. The drift speed $v_{eb}=0.4c$ is about 6 times larger than the initial thermal speed of the ambient electrons. The current due to the drifting electrons has the same direction as the contribution from the gyrating positrons and both add up. The mildly relativistic drift speed of the ambient electrons, together with their large density ahead of the EMP, implies that they contribute most of the current ahead of the EMP.
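With the fields normalized as in the figures ($E$ in units of $cB_0$, $B$ in units of $B_0$), the drift estimate reads:

```python
import math

Ex = 1.0      # in units of c*B0, read off Fig. 4
Bz = 2.5      # in units of B0
v_eb = Ex / Bz                        # E x B drift speed in units of c
v_th = math.sqrt(2.0e3 / 511.0e3)     # 2 keV electron thermal speed, units of c
print(f"v_eb = {v_eb}c, ratio to thermal speed = {v_eb / v_th:.1f}")
```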
Such a fast relative drift between ions and magnetized ambient electrons leads to the growth of electrostatic upper-hybrid waves and electron-cyclotron waves.~\cite{Dieckmann00} They cause the electron-scale oscillations of the electric $E_x$ and $E_y$ components ahead of the EMP in Fig.~\ref{figure5}(d, e). These waves are strong enough to bunch positrons, as can be seen in Fig.~\ref{figure5}(c). This implies that the current density ahead of the EMP and, hence, the magnetic field gradient change with $y$. Figure~\ref{figure5} demonstrates that the wave fields are not distributed uniformly along the front of the EMP and that their field amplitude exceeds that of the EMP. These waves are strong and nonuniform and electrostatic drift instabilities are thus the most likely reason for the EMP's deformation. Figure~\ref{figure5}(e) also shows electric field patches, like the one at $y\approx 7.25$ and $x\approx 1$, which are a consequence of the tilt of the EMP.
The modulation of the EMP continues to grow and Fig.~\ref{figure6} reveals the consequences of this deformation. Fingers in the pair cloud have extended far beyond the EMP. The injected electrons have expelled ambient electrons and the pair cloud has piled up the magnetic field at the border of the fingers. The electric field, driven by the current of the expelled ambient electrons, has started to accelerate protons well upstream of the initial EMP. In what follows, we refer with initial EMP to the one that formed first. The term EMP refers to the electromagnetic pulse that marks the border between the pair cloud and unperturbed ambient plasma.
The reduced proton density behind the initial EMPs in Fig.~\ref{figure6}(d) proves their ability to accelerate protons. The initial EMP in the lower half-plane is deformed but it still confines the pair cloud.
Figure~\ref{figure7} presents the phase space density distribution associated with the proton density distribution in Fig.~\ref{figure6}(d).
We find solitary waves at the positions of the initial EMPs. Protons at the crests of the oscillations reach energies of about 300 keV. This energy corresponds to the speed $v_p=c/40$, which confirms the estimate of Eq.~(\ref{estimate}) and underlines that proton inertia sets the speed of an EMP. Some protons are reflected by the EMP that moves to increasing $x$. They are accelerated to $2v_p$, which gives them an energy of about 1 MeV in the simulation frame. Figure~\ref{figure2} demonstrates that the proton density peak coincides with that of the magnetic pressure, which is characteristic of fast magnetosonic modes.
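The quoted energies follow from non-relativistic kinematics at $v_p = c/40$:

```python
m_p_c2 = 938.272e6           # proton rest energy, eV
v_p = 1.0 / 40.0             # in units of c

E_crest = 0.5 * m_p_c2 * v_p**2            # protons moving with the EMP
E_reflected = 0.5 * m_p_c2 * (2 * v_p)**2  # specularly reflected protons at 2 v_p
# ~293 keV and ~1.2 MeV, consistent with the quoted ~300 keV and ~1 MeV
print(f"{E_crest/1e3:.0f} keV, {E_reflected/1e6:.1f} MeV")
```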
The ion-acoustic speed is $c_s={(k_B(\gamma_eT_e+\gamma_pT_p)/m_p)}^{1/2}$. Ion-acoustic oscillations are slow on electron time scales, which allows electrons to interact with many such waves during one oscillation and be scattered by plasma thermal noise. Ion acoustic waves accelerate protons only in the direction of the wave vector. It is thus widely assumed that electrons have three degrees of freedom and protons one on the time scales of interest. These degrees of freedom give the adiabatic constants $\gamma_e=5/3$ and $\gamma_p=3$, respectively. The electron and proton temperatures in the ambient plasma are both $T_0$ and $c_s\approx 3.2 \times 10^{-3}c$. The Alfv\'en speed $v_A=B_0/{(\mu_0n_0m_p)}^{1/2}$ in the ambient plasma is $v_A \approx 2\times 10^{-3}c$.
The speed $v_p$ of the EMP is about 6.6 times the fast magnetosonic speed $v_{fms}={(c_s^2+v_A^2)}^{1/2}$. The solitary wave in the proton distribution is thus way too fast to be a fast magnetosonic soliton. It has this speed because it is accelerated continuously by the electric field of the EMP. At early times and near $x=0$, the electric field has a low amplitude and it hardly accelerates protons. In time, the EMP moves away from $x=0$ and its electric field and the proton velocity change increase, which results in the observed proton energy profile close to the boundary at $x=0$ in Fig.~\ref{figure7}.
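The characteristic speeds quoted in this section can be reproduced directly:

```python
import math

m_p_c2 = 938.272e6        # proton rest energy, eV
T0 = 2.0e3                # eV, electron and proton temperature
g_e, g_p = 5.0 / 3.0, 3.0 # adiabatic constants

c_s = math.sqrt((g_e * T0 + g_p * T0) / m_p_c2)   # ion-acoustic speed / c
v_A = math.sqrt(2.0 * T0 / m_p_c2)                # Alfven speed / c (pressure balance)
v_fms = math.sqrt(c_s**2 + v_A**2)                # fast magnetosonic speed / c
v_p = 1.0 / 40.0
print(f"c_s = {c_s:.1e}c, v_A = {v_A:.1e}c, v_p/v_fms = {v_p/v_fms:.1f}")
```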
\section{\label{late}Long term evolution}
It is important to know if the transition layer, which forms after the collapse of the initial EMP, is able to maintain a separation of the pair plasma and the ambient plasma or at least slow down their mixing. If this is the case, the transition layer still acts as a discontinuity. In what follows, we track the evolution of the interfaces between the pair cloud and the ambient plasma. We consider first the interface in the half-space $x>0$.
\subsection{Forward expansion}
Figure~\ref{figure8} presents the plasma and field data at $t=100$.
The plasma and field distributions show three domains. Domain 1 with $0 \le x \le 3$ is characterized by a dense pair cloud with a per species density of about $6$ and a low density of the ambient plasma. The amplitude of the initial EMP grew in time and it became strong enough to accelerate and expel protons at $x\approx 0.5$. Hardly any protons are left in the interval $0.5 \le x \le 2$. Domain 3 is the ambient plasma, which has not yet been affected by the expanding pair cloud. The electromagnetic fields in domains 1 and 3 are not zero. The initial magnetic field is still present in the unperturbed ambient plasma. Statistical fluctuations of the charge density give rise to electric field fluctuations in both outer domains while current density fluctuations are responsible for the magnetic noise in the domain occupied by the pair cloud. Figure~\ref{figure3} demonstrated that the spatial average of the amplitude of these fluctuations is zero in the pair cloud. The fluctuations are strong in the pair cloud and weak in the ambient plasma because the spatially averaged power $B_z^2$ increases with the temperature. Domain 2 is located between the outer two. Pair cloud plasma and ambient plasma coexist in this domain and their interaction drives strong coherent electromagnetic fields and waves.
Figure~\ref{figure9} (Multimedia view) animates the distributions of $B_z$, $E_x$ and the plasma densities in time and shows them at the time $T_{sim}=200$. We find the same subdivision into three domains of the plasma and field distributions. Domain 3, which is ambient plasma that has not yet been affected by the pair plasma, is found at large $x$. Most protons have been expelled from domain 1 in $1 \le x \le 5$. This interval has been filled by a dense pair plasma with a mean positron density of about 6. Fingers, which extend from the boundary $x=0$ into the pair cloud, reach the peak density $8$. Their origin is an instability between the pair cloud and the protons near the boundary. Their large inertia lets protons react slowly to the streaming pair plasma. In time, filaments form in the proton density distribution at $0 \le x \le 1$ in Fig.~\ref{figure9}(d). Pair cloud particles, which stream across these filaments, must maintain quasi-neutrality. They rearrange themselves into filaments, which emerge in the form of fingers. These fingers are confined by an in-plane magnetic field (not shown).
Domain 2 is the transition layer between the pure pair plasma and the ambient plasma. It is characterized by a clumpy proton distribution, which is found in the interval $2.5 \le x \le 9.5$ in Fig.~\ref{figure8} and in the interval $5 \le x \le 15$ in Fig.~\ref{figure9}. Its center along $x$ has thus propagated from $x\approx 6$ to $x\approx 10$ during 100 time units, which yields the speed $0.04c$. It exceeds the speed $v_p = c/40$ of the initial EMP but it remains well below the mean speed $v_d=0.6c$ of the pair cloud. The width of the transition layer increases during this time from 7 to 10, which yields the expansion speed $0.03c$. The transition layer slows down the pair cloud's expansion by a factor of 15 and mixing between both species is slow.
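The mean and expansion speeds quoted above follow directly from the clump intervals. The short sketch below is a consistency check of ours, not part of the simulation code; lengths are in proton skin depths and times in inverse proton plasma frequencies, so speeds come out in units of $c$.

```python
# Consistency check (ours, not the simulation code): recover the transition
# layer's mean and expansion speeds from the proton-clump intervals at t = 100
# and t = 200, in the paper's normalized units.

def layer_speeds(interval_t1, interval_t2, dt):
    """Speed of the interval center and rate of growth of its width."""
    c1 = 0.5 * (interval_t1[0] + interval_t1[1])
    c2 = 0.5 * (interval_t2[0] + interval_t2[1])
    w1 = interval_t1[1] - interval_t1[0]
    w2 = interval_t2[1] - interval_t2[0]
    return (c2 - c1) / dt, (w2 - w1) / dt

# Forward-moving layer: [2.5, 9.5] at t = 100 and [5, 15] at t = 200.
mean_v, exp_v = layer_speeds((2.5, 9.5), (5.0, 15.0), 100.0)
print(mean_v, exp_v)  # 0.04 and 0.03, the speeds quoted in the text
```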
The thickness of the transition layer exceeds by far the gyroradius $\approx 0.2$ of leptons moving at the speed $v_d$ in a magnetic field with the strength $B_0$. Figure~\ref{figure9}(e) demonstrates that the pair plasma created channels, from which the coherent background magnetic field was expelled. The pair plasma streams freely through these channels devoid of magnetic fields. Ambient electrons that were pushed forward by the pair plasma created the electric field at the pair cloud's boundary, changing it into an EMP.
Patches within domain 2, which are filled with a strong magnetic field, coincide with clumps of ambient plasma in Figs.~\ref{figure9}(c-e). Figure~\ref{figure9}~(Multimedia view) reveals that these clumps are ambient plasma, which was displaced and compressed by the expanding fingers of pair plasma. Some patches are what remains from the initial EMP but the expanding pair cloud also creates magnetized clumps of ambient plasma well ahead of the location of the initial EMP. The magnetic field within these patches reaches its peak amplitude on the side that is facing the inflowing pair cloud. Ambient plasma has a residual magnetic field that points in the positive z-direction. It deflects electrons and positrons of the pair cloud into opposite directions and the ensuing net current amplifies the residual field to a peak amplitude that is more than 5 times larger than $B_0$. Figure~\ref{figure8} and Fig.~\ref{figure9} demonstrate that these strong magnetic fields are not driving detectable coherent electric fields and that they cannot fully separate protons from positrons. We note in this context that, unlike the magnetized ambient plasma ahead of the initial EMP, the size of the proton and magnetic field accumulations is not large compared to a gyroradius of a lepton with the speed $v_d$. These magnetic boundaries are thus not EMPs. Given that the magnetic pressure associated with the amplified magnetic fields is an order of magnitude larger than the initial thermal pressure $2P_e$ of the ambient plasma, protons will react to the magnetic pressure gradient force.
\subsection{Backward expansion}
Figure~\ref{figure10} presents the field and plasma distribution at the time $t=100$. The front of the dense part of the pair cloud at $x\approx -3$ is approximately planar. Fingers in the pair cloud density emerge at the boundary $x= 0$ and the longest have reached the position $x=-1$. Their cause is the aforementioned instability between the pair cloud and the protons, which were too close to the boundary to be accelerated by the initial EMP.
The proton density in Fig.~\ref{figure10}(d) peaks at $-4\le x \le -3$. These protons were piled up by the initial EMP.
Although the initial EMP was also destroyed by the streaming instability, the transition layer that emerged out of it remained more compact than the forward-moving one. A strong magnetic field has developed on the side of the proton accumulation that faces the pair cloud flow. It has been amplified by the current of the injected pair particles that drifted in the residual magnetic field of the ambient plasma. Electric fields with strength $-0.5$ have been induced by ambient electrons, which were pushed downwards by the expanding pair cloud. Only weak electromagnetic fields exist in domain 1 above the initial EMP, which contains the pair cloud and only a small number of protons. Pair plasma streamed through the deformed initial EMP and expanded far upstream of domain 1 in the form of two fingers, the larger of which reached $x\approx -6.5$ at $y\approx 8$. As before, ambient electrons were expelled by the moving magnetic field in Fig.~\ref{figure10}(e) that kept them separated from the injected electrons. Consequently, the density of the ambient electrons is reduced inside both fingers and increased to about 1.5-2 at their boundaries. Protons reacted to the electric field induced by the moving ambient electrons and new EMPs grew near the boundaries of the fingers. Domain~2 in $-7 \le x \le -4$ is again characterized by the simultaneous presence of protons, positrons, and strong electromagnetic fields. Pair cloud particles are confined to the fingers while Fig.~\ref{figure8} showed a less orderly distribution of these particles in the transition layer at the front of the forward-moving pair cloud. Figure~\ref{figure10}(f) reveals rapid electric field oscillations near the boundaries of the pair cloud fingers at $x\approx -6.5$ and $y\approx 7$ and at $x\approx -6$ and $y\approx 1.8$. Their electrostatic nature and short wavelength suggest that they arise from the same streaming instability that destroyed the initial EMP.
Figure~\ref{figure11} shows the plasma and field distribution at $T_{sim}=200$. The front of the dense part of the pair cloud (domain 1) propagated from $x=$ -3 to -6. Unlike the case shown by Fig.~\ref{figure9}, we observe periodic stable structures in domain 2, which is now located in the interval $-9 \le x \le -6$. These structures can be seen in all displayed plasma and field components.
Figure~\ref{figure11}~(Multimedia view) evidences their stability. The lower pressure exerted by the pair cloud on the ambient plasma in the half-space $x<0$ leads to a less turbulent structure of domain~2 compared to the one in Fig.~\ref{figure9}. We use again the distribution of proton clumps to quantify the mean speed and expansion speed of the transition layer. We find such clumps in the interval $-6 \le x \le -1.5$ at $t=100$ and in the interval $-9 \le x \le -4$ at $t=200$, giving a mean speed modulus $2.75 \times 10^{-2}c$ and expansion speed $1.25 \times 10^{-2}c$.
Figure~\ref{figure12} shows how $P_{mag}(x,t) = B_z^2(x,t)/(2\mu_0 P_e)$ and the densities of positrons and protons evolve in time. All quantities were averaged over $y$.
Before $t\approx 15$, the positron density in Fig.~\ref{figure12}(b) has well-defined fronts on both sides of $x=0$; the pair cloud is confined on both sides by planar initial EMPs. The proton reaction to the expanding pair cloud becomes strong at $t\approx 15$. After $t=15$, we observe qualitatively different interactions between the pair cloud and the ambient plasma on both sides of $x=0$. A diffuse transition layer exists in the half-space $x>0$. It broadens rapidly during $15 \le t \le T_{sim}$ and extends up to $x\approx 15$ at the time $T_{sim}$, which gives its front the mean expansion speed $\approx v_d/8$. Positrons in the half-space $x>0$ reached their peak density in the intervals from which the protons were expelled. Narrow peaks of $P_{mag}$ and proton density exist in the half-space $x<0$. They have reached the position $x\approx -6$ at $t=200$, giving them a propagation speed $\approx v_d/20$. The speeds of the fronts of the forward and backward moving pair clouds correspond to the sums of the mean speeds and expansion speeds of the transition layers, which we estimated from the distributions of the proton clumps. The initial EMP, which propagates to the left at an almost constant speed, is effective at sweeping out the protons. Even though the spatially uniform initial magnetic field has been expelled by the expanding pair cloud, we get a value $P_{mag} \approx 0.5$ in both domains~1 around $x=0$ due to the strong incoherent thermal fluctuations.
\subsection{Particle distribution functions}
We determine the energy, to which protons were accelerated during the simulation, and assess how their acceleration affected the energy distributions of the leptons. It is useful to give some reference values for the lepton energy. Leptons that move with mean speed $v_d$ of the pair cloud have an energy of about 130 keV. A pair cloud particle, which moves at the thermal speed $\approx 0.45c$ in the rest frame of the pair cloud, has the speed 0.82 $c$ and energy 750 keV in the simulation frame.
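The reference values above follow from a few lines of special relativity. The sketch below is a consistency check of ours, not part of the simulation; it assumes an electron rest energy of 511 keV and reproduces the $\approx 130$ keV kinetic energy at $v_d$ and the $0.82c$ speed from relativistic velocity addition.

```python
# Consistency check (ours): lepton reference energies and speeds quoted above.
import math

ME_KEV = 511.0  # electron rest energy in keV

def gamma(beta):
    return 1.0 / math.sqrt(1.0 - beta**2)

def kinetic_keV(beta):
    return (gamma(beta) - 1.0) * ME_KEV

v_d = 0.6    # mean speed of the pair cloud, in units of c
v_th = 0.45  # thermal speed in the cloud's rest frame

# Relativistic velocity addition of the mean and thermal speeds.
v_sum = (v_d + v_th) / (1.0 + v_d * v_th)

print(kinetic_keV(v_d))  # ~128 keV, i.e. "about 130 keV"
print(v_sum)             # ~0.827, the quoted 0.82c
```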
Figure~\ref{figure13}~(Multimedia view) and Fig.~\ref{figure14}~(Multimedia view) follow the energy distributions of electrons and positrons in time. Both lepton species have a diffuse and spatially uniform energy distribution for $-5\le x \le 5$ at $T_{sim}=200$, reaching a peak energy just above 1 MeV. Their energy range is the one expected for the injected pair cloud particles. Electrons and positrons have a similar energy spread. We identified the interval $-5 \le x \le 5$ as domain 1, in which positrons contribute most of the positive charge. The energy spread of electrons and positrons increases in the transition layers near both fronts of the positron cloud. Significant numbers of electrons reach energies of about 2 MeV while positrons reach and exceed the maximum of the displayed energy range.
We attribute this difference to the electric field of the EMPs near proton accumulations in the transition layer or at its front. This electric field accelerates protons and positrons and decelerates injected electrons in the expansion direction of the pair cloud, which also caused the different density distributions of both pair cloud species in Fig.~\ref{figure5}.
Figure~\ref{figure15} compares the energy distributions of the electrons and positrons, which were integrated over the box.
The peak in the electron distribution at low energies does not have a positronic counterpart; these are the ambient electrons outside the transition layer. Both curves follow each other closely between about 100 keV and 1 MeV. These are mostly pair cloud particles that have not interacted yet with the ambient plasma and are close to thermal equilibrium. Both species show an exponential fall-off at large energies with electrons showing a faster decrease. These energetic particles were accelerated in the transition layers.
The solitary waves, which we observed in Fig.~\ref{figure7} before the initial EMPs were destroyed, do not exist anymore at $T_{sim}=200$ in Fig.~\ref{figure16}. They have been replaced by broad layers, in which protons have been accelerated. Despite their different structure, both transition layers accelerate protons to about the same peak energy of about 4 MeV.
Figure~\ref{figure12} demonstrated that the fronts of the transition layers expanded at the respective speeds $v_d/8$ and $v_d/20$. Protons moving at these speeds have the kinetic energies of 2.6 MeV and 420 keV. The fastest protons in Fig.~\ref{figure16} outrun both transition layers provided that their velocity vector is aligned with the expansion direction of the transition layer. These protons will eventually interact with the ambient plasma either through beam instabilities or by means of a fast magnetosonic shock thereby creating an outer cocoon filled with hot protons. Fast magnetosonic shocks form and evolve over time scales $\omega_{ci}^{-1}\gg T_{sim}$, which we cannot resolve with a 2D simulation.
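The proton kinetic energies quoted above follow from the relativistic energy-momentum relation. The check below is ours, not the simulation code, and assumes a proton rest energy of 938.272 MeV.

```python
# Consistency check (ours): proton kinetic energies at the two front speeds.
import math

MP_MEV = 938.272  # proton rest energy in MeV

def kinetic_MeV(beta):
    """Relativistic kinetic energy (gamma - 1) m_p c^2 in MeV."""
    return (1.0 / math.sqrt(1.0 - beta**2) - 1.0) * MP_MEV

v_d = 0.6  # mean speed of the pair cloud, in units of c
print(kinetic_MeV(v_d / 8))   # ~2.6 MeV for the forward front
print(kinetic_MeV(v_d / 20))  # ~0.42 MeV (420 keV) for the backward front
```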
Figures~\ref{figure8}-\ref{figure11} revealed a domain structure similar to that of the hydrodynamic jet sketched in Fig.~\ref{figure1}. Domain 1 filled with unmagnetized, dense, and slowly expanding pair plasma corresponds to the inner cocoon. The mean speed of the pair cloud is set by the propagation speed $v_p \ll v_d$ of its front, where pair cloud particles are reflected. The unperturbed ambient plasma in the simulation (domain 3) and in the hydrodynamic jet model are the same; an outer cocoon bounded by an external shock could not form during the simulation time.
\section{Discussion}
We examined with a 2D simulation the expansion of a pair cloud into a magnetized ambient plasma. The pair plasma expelled the magnetic field and piled it up ahead of it. The piled-up magnetic field trapped ambient electrons and pushed them into the expansion direction of the pair cloud. On the short time scale resolved by the simulation, the protons could not react to the magnetic field. They were thus unable to balance the current of the moving ambient electrons. An electric field was induced that accelerated protons into the expansion direction of the pair cloud. We used the term electromagnetic pulse (EMP) for the combination of a magnetic field, which confines the pair cloud, and the electric field that accelerates the protons. Our periodic boundaries allowed pair plasma to cross them and expand into the ambient plasma on the other side of the simulation box. This setup allowed us to study with one simulation the expansion into the ambient plasma of two pair clouds and EMPs with different bulk properties.
Initially, the EMPs at the fronts of both expanding pair plasma clouds were planar due to the uniform injection of pair plasma. The propagating EMPs grew from a small amplitude and their electric fields could not accelerate protons at early times and near the boundary. After a few inverse proton plasma frequencies, the electromagnetic field of both EMPs became strong enough to couple them to the protons. Their reaction limited the speed of the EMPs. Since we continuously injected pair plasma with a mean speed far greater than that of the EMP, the pair plasma density behind each EMP increased. Eventually, a balance was established between the pressure of the pair cloud and the ram pressure the protons exerted on the moving EMP. The EMP and the accelerated protons moved at a few percent of the speed of light.
The rapid drift of the ambient electrons at the EMP's front and their interaction with the protons resulted in a streaming instability. Electrostatic waves grew in the current sheaths ahead of the EMPs and interacted with the ambient electrons, positrons that had leaked through the EMP, and protons. The interaction of their nonuniform wave fields with the plasma gave rise to a spatially varying dissipation of current density ahead of the initial EMP and, hence, to a spatially varying magnetic field gradient; the EMP could not remain planar. Its deformation grew in time and could not be stabilized by increasing magnetic tension. Such instabilities have also been observed at discontinuities between an expanding plasma and an ambient plasma in space plasmas~\cite{Winske89} and in simulations of laser-generated plasma.~\cite{DieckmannTD}
The following picture emerged. Close to the injection boundary, positrons contributed most of the positive charge. Protons took that role far from the injection boundary. In a jet model, these two domains would correspond to the inner cocoon and the unperturbed ambient plasma. Our simulation was too short to capture the growth of an outer cocoon, which forms on time scales in excess of a few inverse proton gyrofrequencies.~\cite{Dieckmann20} Apart from the magnetic field, which was generated by the instability between the pair plasma and the protons near the injection boundary, the plasma in our inner cocoon was free of any detectable coherent magnetic field. This was expected because we injected unmagnetized pair plasma. Protons and positrons interacted in a transition layer between both domains. Interactions were mediated by EMPs as well as by strong magnetic fields, which separated the pair plasma from accumulations of ambient plasma. Their origin was a residual magnetic field, which was amplified by the drift current of the pair cloud. The width of the transition layer was of the order of a few proton skin depths.
How does this width compare to the size of relativistic jets? Let us assume that the relativistic jet moves through a stellar wind like the one emitted by our sun. At the Earth's orbit, the solar wind density is about 5 $\mathrm{cm}^{-3}$. The particles' mean free path, which sets the thickness of a contact discontinuity, is about 1 astronomical unit.~\cite{Goldstein05} The simulation time $T_{sim}=200$ corresponds to 68 milliseconds and the spatial unit to 100 km. Although the thickness of the transition layer ($\sim 10$ spatial units) is large in our simulation, it is 6 orders of magnitude smaller than the mean free path of the solar wind and, hence, well capable of forming a discontinuity that is infinitesimally thin on jet scales.
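The quoted unit conversions can be checked from the proton plasma frequency of a 5 $\mathrm{cm}^{-3}$ wind. The sketch below is ours (standard SI constants assumed): the time unit is $\omega_{pp}^{-1}$ and the spatial unit is the proton skin depth $c/\omega_{pp}$.

```python
# Consistency check (ours): unit conversions for a solar-wind density of 5 cm^-3.
import math

E = 1.602176634e-19      # elementary charge [C]
EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]
MP = 1.67262192e-27      # proton mass [kg]
C = 2.99792458e8         # speed of light [m/s]

n = 5.0e6  # 5 cm^-3 expressed in m^-3
omega_pp = math.sqrt(n * E**2 / (EPS0 * MP))  # proton plasma frequency [rad/s]

time_unit = 1.0 / omega_pp  # simulation time unit [s]
skin_depth = C / omega_pp   # simulation spatial unit [m]

print(200 * time_unit * 1e3)  # T_sim = 200 in ms: ~68 ms
print(skin_depth / 1e3)       # spatial unit in km: ~100 km
```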
Two important questions could not be addressed by our simulation and must be left to future work. Firstly, the transition layer should be susceptible to a magnetized Rayleigh-Taylor instability because a pair plasma is pushing much heavier protons. This instability can grow even if ions are unmagnetized~\cite{Winske96} and if the thickness of the transition layer is comparable to the wavelength of its growing modes.~\cite{Brown88,Hillier16} We do not observe a Rayleigh-Taylor instability here, which may be a result of the spatially nonuniform plasma distribution within the transition layer or the steady increase of its thickness. Secondly, given that the long-term evolution of EMPs differs depending on whether the magnetic field in the ambient plasma is oriented in or orthogonal to the simulation plane, and given the interplay of these modes,~\cite{Hillier16} it would be important to study the structure of the transition layer in a 3D geometry.
\section*{CONFLICT OF INTEREST}
The authors have no conflicts to disclose.
\section*{ACKNOWLEDGEMENTS}
The simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the NSC and on the centers of the Grand Equipement National de Calcul Intensif (GENCI) under grant number A0090406960. The first author also acknowledges financial support from a visiting fellowship of the Centre de Recherche Astrophysique de Lyon.
\section*{DATA AVAILABILITY}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section*{REFERENCES}
\bibliography{manuscript}%
|
Title:
What do gravitational wave detectors say about polymer quantum effects? |
Abstract: We compute the expected response of detector arms of gravitational wave
observatories to polymerized gravitational waves. The mathematical and
theoretical features of these waves were discussed in our previous work. In the
present manuscript, we find both perturbative analytical, and full
nonperturbative numerical solutions to the equations of motion of the detector
arms using the method of geodesic deviations. These results show the
modifications to both frequency and amplitude of the signal measured by the
detector. Furthermore, we study the detectability of these signals in LISA by
analyzing the modes in the frequency space.
| https://export.arxiv.org/pdf/2208.09739 |
\title{What do gravitational wave detectors say about polymer quantum effects?}
\author{Angel Garcia-Chung}
\email{[email protected]}
\affiliation{Departamento de F\'isica, Universidad Aut\'onoma Metropolitana - Iztapalapa, \\ San Rafael Atlixco 186, Ciudad de M\'exico 09340, M\'exico.}
\affiliation{Tecnol\'ogico de Monterrey, Escuela de Ingenier\'ia y Ciencias, Carr. al Lago de Guadalupe Km. 3.5, Estado de Mexico 52926, Mexico.}
\author{Matthew F. Carney}
\email{[email protected]}
\affiliation{Department of Physics and McDonnell Center for the Space Sciences,
Washington University, St. Louis, MO 63130, USA}
\author{James B. Mertens}
\email{[email protected]}
\affiliation{Department of Physics and McDonnell Center for the Space Sciences,
Washington University, St. Louis, MO 63130, USA}
\author{Aliasghar Parvizi}
\email{[email protected]}
\affiliation{Department of Physics, University of Tehran, North Karegar Ave., Tehran 14395-547, Iran.}
\affiliation{School of Physics, Institute for Research in Fundamental Sciences (IPM),
P.O. Box 19395-5531, Tehran, Iran}
\author{Saeed Rastgoo}
\email{[email protected]}
\affiliation{Department of Physics, University of Alberta, Edmonton, Alberta T6G 2E1, Canada}
\affiliation{Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta T6G 2G1, Canada}
\affiliation{Theoretical Physics Institute, University of Alberta, Edmonton, Alberta T6G 2E1, Canada}
\author{Yaser Tavakoli}
\email{[email protected]}
\affiliation{Department of Physics,
University of Guilan, Namjoo Blv.,
41335-1914 Rasht, Iran}
\affiliation{School of Astronomy, Institute for Research in Fundamental Sciences (IPM), P. O. Box 19395-5531, Tehran, Iran}
\date{\today}
\section{Introduction}
We are living in the exciting era of multimessenger observatories where we are able to obtain signals from high energy phenomena via electromagnetic waves, neutrinos, and particularly, gravitational waves (GW). The most recent of these messengers, GWs, have opened up an unprecedented window of opportunity to study phenomena that could not be investigated experimentally prior to the discovery of these waves. Gravitational waves produced by the merger of compact objects [citation] have certainly revealed much about the properties of the objects that produce them, but these waves have the potential to reveal other aspects of the cosmos as well.
Perhaps the most exciting aspect of GWs for theoretical and fundamental physics is the possibility they provide for testing quantum gravity effects \citep{Addazi:2021xuf,LISA:2022kgy,LISACosmologyWorkingGroup:2022jok}. Indeed, the lack of experimental evidence for quantum gravity is and has been one of the most important issues in fundamental physics. However, recent and upcoming GW observatories such as LIGO, VIRGO, and LISA, give us the exciting possibility of finding phenomenological signatures of quantum gravity such as the quantum nature of spacetime, and potentially, the physics of the very early universe in the quantum gravity regime, to name a few. These instruments are thus very welcome additions to our multimessenger observatory arsenal and are crucial in advancement of the research in quantum gravity (for a nonexhaustive list of possibilities in phenomenology of quantum gravity with GWs see \cite{Amelino-Camelia:1998mjq,Giddings:2016tla,Arzano:2016twc,Addazi:2018uhd,Calcagni:2019ngc,Calcagni:2019kzo,Calcagni:2020tvw,Calcagni:2020ume,Bojowald:2007cd,Grain:2009eg,Dapor:2020jvc,Addazi:2018uhd,Barrau:2018rts,Maselli:2018fay,LISA:2022kgy,LISACosmologyWorkingGroup:2022jok}).
As mentioned above, GWs may be messengers by which we can study the possible fine/quantum structure of spacetime. This can be done either in a semiclassical regime or a fully quantum one. The semiclassical regime can be divided into two approaches. In the first semiclassical approach, the background spacetime over which the GW is propagating is quantized/discretized while the GW itself is considered as a classical wave. This classical wave, then, can be used to probe the fine structure of the (background) spacetime. In the second semiclassical approach, the background spacetime is classical while the GW is quantized. This is legitimate as a semiclassical approach since the GW is part of the metric or spacetime itself and hence its quantization may yield some information about the quantum nature of spacetime. This is the approach we use in this work and in our previous ones \citep{Garcia-Chung:2020zyq,Garcia-Chung:2021doi}. In fact, this method can also be used for the semiclassical approach in which Gamma Ray Bursts (GRB) propagate over spacetime \citep{Bonder:2017ckx}. A full quantum treatment can also be divided into two approaches. In the first approach both the background spacetime and the perturbations are quantized (while before quantization, one has divided the classical spacetime into a background and a perturbation). In the second approach, one first quantizes the whole spacetime nonperturbatively, obtains a semiclassical limit, and then in this limit divides the effective metric into a background and a foreground, and then studies the propagation of the latter on the former. This last approach is one we will consider in a future study.
There are many ways one can quantize the background spacetime or the perturbations. In this work we use a quantization method known as polymer quantization. This is a nonperturbative method of quantization in which, usually, either the configuration variable or its momentum does not admit a representation on the Hilbert space; instead, only a certain exponentiated form of that classical variable is represented there. More precisely, on the Hilbert space one member of the canonical pair is represented as an element of the algebra of the theory and the other as a group element. One thus loses the infinitesimal transformation in one of the variables, and the existence of only finite transformations leads to the discretization or quantization of the system (for more details and several examples, see e.g., \citep{Ashtekar:2002sn,Tecotl:2015cya,Morales-Tecotl:2016ijb}). %
Polymer quantization itself goes hand in hand with loop quantum gravity (LQG) \citep{Thiemann:2007pyv, Ashtekar:2004eh, Rovelli:2004tv}, which is also a nonperturbative method of quantizing the classical spacetime. There have been several studies exploring the potential observational prospects of the quantized spacetime structure in LQG \cite{Brizuela:2016gnz,Agullo:2015tca,Parvizi:2021ekr,Dapor:2020jvc,Garcia-Chung:2020zyq}. %
Following what we described in the previous paragraphs, our goal in this work is to predict LQG-inspired polymer effects that may be observed in GW detectors. In this approach, we consider a polymer quantized GW propagating on a classical spacetime and study the dynamics of the detector arms interacting with such a wave. %
More precisely, we assume that upon the creation of the GWs as perturbations in the gravitational field due to high energy phenomena such as the merger of black holes, the quantum nature of spacetime leaves its imprints on the waves as LQG-polymer signatures. Mathematically this is translated into the Fourier modes of GWs being polymer quantized. Then, when these waves travel the large astronomical or cosmological distances towards our planet, their dynamics is governed by an effective polymer description. In both of the above stages the dynamics is governed by equations with non-linear corrections which depend on the polymer parameters. Once these GWs reach our detectors, they interact with the detectors' arms. As mentioned above, in this work we study the dynamics of the detector arms interacting with such waves. We will also analyze the detectability of these effects in LISA and will estimate the quantum gravity (or polymer) scale needed for such a detection.
This paper is organized as follows. In Sec.~\ref{sec:GRW-H} we present the Hamiltonian formalism of GWs in classical theory. We also give a brief introduction to polymer quantization and present the quantum Hamiltonian corresponding to the GWs and the associated effective Hamiltonian. We will review the effective theory for the wave propagation which we will need in the consequent sections. In Sec.~\ref{sec:phenomenology} we study the detector response to, and hence the resulting strain of, an effective polymerized GW. We first present the perturbative (in detector deviation) analytical solutions from which one can read off the modifications to the amplitude, frequency, and the speed of propagation of these effective GWs, up to the highest order in polymer (i.e., quantum gravity) scale. We then move on to the full nonperturbative (in detector deviation) solution to show that these effects are indeed nonperturbatively present, and no unexpected nonperturbative, secular type effects arise.
In Sec.~\ref{sec:LISA} we analyze a black hole-black hole binary (BHB) system in our model and study the frequency space of the resulting signal. This allows us to discuss the detectability of the polymer quantum gravity effects in such waves in LISA. Finally in Sec.~\ref{sec:Discussion} we conclude and discuss potential future directions.
\section{Hamiltonian formalism for GWs}
\label{sec:GRW-H}
In this section, we review the classical theory of GWs propagating on a flat spacetime background following \cite{Garcia-Chung:2020zyq}, polymer quantize this Hamiltonian, and compute the effective evolution equations for such waves.
\subsection{Classical theory}
GWs are the result of weak field approximation to Einstein's field
equations. On a flat background, these are generated by
a small metric perturbation to the Minkowski background spacetime. Given the unperturbed Einstein-Hilbert gravitational action
\begin{equation}
S_{\rm grav}\ =\ \frac{1}{2\kappa^2}\int d^4x \sqrt{-g}\, \mathcal{R} \, ,
\label{Eq:EH-Action}
\end{equation}
with $\kappa^2\equiv 8\pi G/c^4$,
the general perturbed metric is written as
\begin{equation}
g_{\mu\nu} = \mathring{g}_{\mu\nu} +\, h_{\mu\nu}\, =\ \eta_{\mu\nu} +\, h_{\mu\nu}\, ,
\label{Eq:metric-pert}
\end{equation}
where $\mathring{g}_{\mu\nu}=\eta_{\mu\nu}$ is the background metric, in this case the Minkowski metric,
and $h_{\mu\nu}$ denotes a small perturbation over $\eta_{\mu\nu}$. Moreover, we have
\begin{equation}
h^{\mu\nu}\, =\, \eta^{\mu\lambda}\eta^{\nu\tau}h_{\lambda\tau}.
\end{equation}
In order to reduce the number of terms in the linearized Einstein field equations,
it is convenient to express the Einstein tensor
in terms of the {\em trace-reversed} metric perturbation
\begin{equation}
\bar{h}_{\mu\nu}\, :=\, h_{\mu\nu} - \frac{1}{2}\eta_{\mu\nu}h\, ,
\end{equation}
where,
$h=h^{~\mu}_{\mu}=\eta^{\mu\nu}h_{\mu\nu}$.
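As a quick numerical check (ours, not part of the derivation), one can verify that the trace of $\bar{h}_{\mu\nu}$ is indeed $-h$, which is why it is called trace-reversed; we assume the signature $(-,+,+,+)$.

```python
# Check (ours): for a random symmetric perturbation h_{mu nu} and the Minkowski
# metric eta = diag(-1, 1, 1, 1), the trace of
#   hbar_{mu nu} = h_{mu nu} - (1/2) eta_{mu nu} h
# equals -h, hence the name "trace-reversed".
import random

eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
eta_inv = eta  # the Minkowski metric is its own inverse

random.seed(1)
h = [[0.0] * 4 for _ in range(4)]
for i in range(4):
    for j in range(i, 4):
        h[i][j] = h[j][i] = random.uniform(-1, 1)

def trace(m):
    """eta^{mu nu} m_{mu nu}."""
    return sum(eta_inv[i][j] * m[i][j] for i in range(4) for j in range(4))

tr_h = trace(h)
hbar = [[h[i][j] - 0.5 * eta[i][j] * tr_h for j in range(4)] for i in range(4)]
print(trace(hbar))  # equals -tr_h up to rounding
```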
Using the {\em Lorentz gauge}
\begin{equation}
\partial_{\mu}\bar{h}^{\mu\nu}=0,
\label{LorentzGauge}
\end{equation}
the linearized Einstein field equations in terms of $\bar{h}_{\mu\nu}$ are expressed as a wave equation. Additionally, by imposing the \emph{(synchronous) transverse-traceless} gauge
\begin{equation}
\bar{h}=0,\quad \quad \bar{h}_{0\mu}=0, \quad \quad {\rm and} \quad \quad \nabla_{i}\bar{h}^{ij}=0,
\end{equation}
the metric perturbation looks like a transverse wave.
In other words, we consider only spatial, transverse and traceless perturbations propagating on the unperturbed flat background.
A wave traveling along, say, the $x^{3}$ direction, can be separated into two polarizations of scalar modes $\bar{h}_{+}(x)$ and $\bar{h}_{\times}(x)$ as
\begin{equation}
\bar{h}_{ij}(x) \, =\, \bar{h}_{+}(x) e_{ij}^{+} + \bar{h}_{\times}(x)e_{ij}^{\times},
\label{polarizedmetric1}
\end{equation}
where,
\begin{align}
e^{+}= \left(\begin{array}{cc}
1 & 0\\
0 & -1
\end{array}\right) \quad \quad \text{and} \quad \quad
e^{\times}= \left(\begin{array}{cc}
0 & 1\\
1 & 0
\end{array}\right).
\end{align}
At second order in linear perturbation, in a traceless-transverse gauge, the perturbed action corresponding to this system becomes \citep{Bardeen:1980kt}
\begin{align}
S_{\rm GW}\ \simeq \frac{1}{8\kappa^2}\int d^4x \sqrt{-\eta}\,
\bar{h}_{ij}\, \mathring{\Box}\, \bar{h}^{ij} \, ,
\label{Eq:EH-Action2}
\end{align}
where $\mathring{\Box}\equiv \eta^{\mu\nu}\partial_\mu \partial_\nu$. The equations of motion associated with this action are,
\begin{equation}
\mathring{\Box}\, \bar{h}_{ij}(x) =0.
\label{eq:GWs}
\end{equation}
Substituting Eq.~(\ref{polarizedmetric1}) into the perturbed action (\ref{Eq:EH-Action2}),
the Lagrangian density at second order in linear perturbations
becomes
\begin{equation}
{\cal L}_{\check{h}}=\frac{1}{2}\sum_{\lambda =+,\times} \check{h}_{\lambda}\mathring{\Box} \check{h}_{\lambda}+{\cal O}(\check{h}_\lambda^{2}),
\label{eq:Lagrangian-Perturb}
\end{equation}
where,
\begin{equation}
\check{h}_\lambda(x) \coloneqq \frac{\bar{h}_\lambda(x)}{2\kappa}\, .
\end{equation}
The effective action of the independent polarization modes, provided by the Lagrangian density \eqref{eq:Lagrangian-Perturb}, is that of two massless scalar fields.
Thus, the equation of motion for the (scalar) perturbation $\check{h}_{\lambda}(x)$, with a fixed $\lambda$, is simply the familiar Klein-Gordon equation,
\begin{equation}
\mathring{\Box}\, \check{h}_{\lambda}(x) =0.
\label{Eq:Field}
\end{equation}
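As a quick sanity check (an illustration, not part of the derivation), one can verify symbolically that a plane wave with dispersion $\omega=|\mathbf{k}|$ solves Eq.~(\ref{Eq:Field}) on the flat background $\eta_{\mu\nu}={\rm diag}(-1,1,1,1)$; the sketch below uses units with $c=1$.

```python
import sympy as sp

# Plane wave with omega = |k| (units c = 1) on the flat background
# eta = diag(-1, 1, 1, 1): check that Box h = 0 holds identically.
t, x1, x2, x3, k1, k2, k3 = sp.symbols('t x1 x2 x3 k1 k2 k3', real=True)
w = sp.sqrt(k1**2 + k2**2 + k3**2)          # omega = |k|
h = sp.cos(w*t - k1*x1 - k2*x2 - k3*x3)     # one Fourier mode of h_lambda
box_h = -sp.diff(h, t, 2) + sum(sp.diff(h, x, 2) for x in (x1, x2, x3))
assert sp.simplify(box_h) == 0
```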
Our aim henceforth will be to study the polymer quantum theory of scalar perturbations $\check{h}_{\lambda}(x)$ --satisfying the Klein-Gordon equation (\ref{Eq:Field})-- propagating on a flat spacetime.
From the Lagrangian density \eqref{eq:Lagrangian-Perturb}, one can obtain the momentum $ \check{\pi}_{\lambda}$ conjugate to the field $\check{h}_{\lambda}(x)$.
The classical solutions of the equation of
motion (\ref{Eq:Field}) can then be expanded on a spatial hypersurface $x^0=$constant, in Fourier modes as
\begin{subequations}
\label{eq:H-lambda0}
\begin{align}
\check{h}_{\lambda}(x^0,\mathbf{x})\, &=\, \frac{1}{\ell^{3/2}}\sum_{\mathbf{k}\in\mathscr{L}}\mathfrak{h}_{\lambda,\mathbf{k}}(x^0)e^{i\mathbf{k}\cdot\mathbf{x}},\label{eq:H-lambda}\\
\check{\pi}_{\lambda}(x^0,\mathbf{x})\, &=\, \frac{1}{\ell^{3/2}}\sum_{\mathbf{k}\in\mathscr{L}}\Pi_{\lambda,\mathbf{k}}(x^0)e^{i\mathbf{k}\cdot\mathbf{x}},\label{eq: pi-lambda}
\end{align} \label{eq:lambda-tot}
\end{subequations}
where the wave vector $\mathbf{k}=(k_{1},k_{2},k_{3})\in(2\pi\mathbb{Z}/\ell)^{3}$ spans a
three-dimensional lattice $\mathscr{L}$ \citep{Ashtekar:2009mb}. We assume that the allowed Fourier components are those with wave vectors in the reciprocal space of an elementary cubical cell $\mathcal{V}$, equipped with coordinates $x^j\in(0, \ell)$, and denote by $V_o=\ell^3$ the volume of $\mathcal{V}$. All integrations in the Fourier expansion are then restricted to this volume. This assumption removes the fictitious infinities that would otherwise arise from infinite-volume integrals in the $\mathbb{R}^3$ topology. In other words, it provides a natural infrared cutoff in our framework. Although this makes our results cutoff-dependent, it is physically relevant for the gravitational system studied in this paper: in the end, we are free to choose the scale of the cutoff appropriate to the given system. The advantage of choosing a three-dimensional lattice $\mathscr{L}$ is to avoid the discussion of boundary conditions for the fields. More precisely, it is an assumption on the boundary conditions and the compactness of the sources, which simplifies the calculations. In a separate study, we will use this framework in the context of inflation and extend our analysis to incorporate the $\mathbb{R}^3$ topology.
The Fourier coefficients
are canonically conjugate, satisfying the Poisson brackets $\{\mathfrak{h}_{\lambda,\mathbf{k}},\Pi_{\lambda,\mathbf{k}^{\prime}}\}=\delta_{\mathbf{k},-\mathbf{k}^{\prime}}$.
Moreover, the reality conditions on the field $\check{h}_{\lambda}(x^0,\mathbf{x})$ imply that
$\mathfrak{h}_{\lambda,\mathbf{k}}=\mathfrak{h}^{\ast}_{\lambda,-\mathbf{k}}$
and $\Pi_{\lambda,\mathbf{k}}=\Pi^{\ast}_{\lambda,-\mathbf{k}}$
are satisfied for each mode.
These conditions further indicate that not all the modes $\mathfrak{h}_{\lambda,\mathbf{k}}$
of the GWs are independent. In other words, when decomposing each
field mode $\mathfrak{h}_{\lambda,\mathbf{k}}$ and its conjugate
momentum $\Pi_{\lambda,\mathbf{k}}$ as,
\begin{subequations}
\begin{align}
\mathfrak{h}_{\lambda,\mathbf{k}} &\coloneqq \frac{1}{\sqrt{2}}\big(\mathfrak{h}_{\lambda,\mathbf{k}}^{(1)}+i\mathfrak{h}_{\lambda,\mathbf{k}}^{(2)}\big),\label{app-phi-pi-1a}\\
\Pi_{\lambda,\mathbf{k}} &\coloneqq \frac{1}{\sqrt{2}}\big(\Pi_{\lambda,\mathbf{k}}^{(1)}+i\Pi_{\lambda,\mathbf{k}}^{(2)}\big),\label{app-phi-pi-1b}
\end{align}
\end{subequations}
the reality conditions on $\mathfrak{h}_{\lambda,\mathbf{k}}$ and $\Pi_{\lambda,\mathbf{k}}$ enable
us to split the lattice $\mathscr{L}$ into positive and negative sectors $\mathscr{L}_{+}$ and $\mathscr{L}_{-}$, respectively.
Thereby, any summation over $\mathbf{k}\in\mathscr{L}$ can be decomposed into
its positive (for $\mathbf{k}\in \mathscr{L}_{+}$) and negative (for $\mathbf{k}\in \mathscr{L}_{-}$) parts.
Associated with these separate sectors, we can now define new variables ${\cal A}_{\lambda,\mathbf{k}}$
and ${\cal E}_{\mathbf{\lambda,k}}$ as
\begin{subequations}
\begin{align}
{\cal A}_{\lambda,\mathbf{k}} &\coloneqq \begin{cases}
\mathfrak{h}_{\lambda,\mathbf{k}}^{(1)} & \textrm{for}\quad\mathbf{k}\in\mathscr{L}_{+};\\
\mathfrak{h}_{\lambda,-\mathbf{k}}^{(2)} & \textrm{for}\quad\mathbf{k}\in\mathscr{L}_{-},
\end{cases}\label{def-q}\\
{\cal E}_{\mathbf{\lambda,k}} &\coloneqq \begin{cases}
\Pi_{\lambda,\mathbf{k}}^{(1)} & \textrm{for}\quad\mathbf{k}\in\mathscr{L}_{+};\\
\Pi_{\lambda,-\mathbf{k}}^{(2)} & \textrm{for}\quad\mathbf{k}\in\mathscr{L}_{-},
\end{cases}\label{def-p}
\end{align}
\end{subequations}
which are canonically conjugate,
\begin{equation}
\left\{ {\cal A}_{\lambda,\mathbf{k}},{\cal E}_{\mathbf{\lambda^{\prime},k}^{\prime}}\right\} =\delta_{\mathbf{k}\mathbf{k}^{\prime}}\delta_{\lambda\lambda^{\prime}}.
\label{eq:PB-AE}
\end{equation}
Using the Lagrangian \eqref{eq:Lagrangian-Perturb}, we can now express the Hamiltonian of the perturbation field, in terms of the new variables (\ref{def-q}) and (\ref{def-p}), as
\begin{align}
H\, &=\, \frac{1}{2}\sum_{\lambda=+,\times}\sum_{\mathbf{k}\in\mathscr{L}}\left[{\cal E}_{\mathbf{\lambda,k}}^{2}+k^{2}{\cal A}_{\lambda,\mathbf{k}}^{2}\right] \nonumber \\
\, &\eqqcolon\, \sum_{\lambda=+,\times}\sum_{\mathbf{k}\in\mathscr{L}} H_{\lambda, \mathbf{k}},
\label{eq:Hamiltonian-FLRW-2}
\end{align}
where $k=|\mathbf{k}|$.
Eq.~\eqref{eq:Hamiltonian-FLRW-2} represents the Hamiltonian
of a set of decoupled harmonic oscillators defined by conjugate pairs
$({\cal A}_{\lambda,\mathbf{k}},{\cal E}_{\mathbf{\lambda,k}})$ associated
with a mode $\mathbf{k}$ for a fixed polarization $\lambda$, and satisfying the relation \eqref{eq:PB-AE}. In the next subsection we will provide the effective polymer Hamiltonian associated with the above classical Hamiltonian.
\subsection{Polymer quantum theory: Effective dynamics\label{PolySubSection}}
The polymer quantization of the Hamiltonian \eqref{eq:Hamiltonian-FLRW-2} requires three main ingredients: (i) the Weyl algebra of quantum observables, (ii) the polymer Hilbert space together with the representation of the observables, and (iii) the polymer analog of the momentum operator. The first two ingredients are rather natural for many quantum descriptions (more details about the Weyl algebra are provided further below), but the third ingredient requires some clarification.
Polymer quantum mechanics and loop quantum cosmology (LQC) are very similar quantization schemes at the mathematical level. They are singular representations of the Weyl algebra, in the sense that the quantum states cannot be transformed under infinitesimal translations, because the associated generators do not exist on the Hilbert space; the Hilbert space admits only the generators of finite transformations. Because the momentum operator in a mechanical system is usually the generator of infinitesimal translations, these quantization schemes do not provide a representation for the momentum operator. That is why the third ingredient is needed. More details on polymer quantization and its relation with LQC and LQG can be found in the literature \cite{Ashtekar:2002sn, CorichiVZ, VelhinhoJM, GarciaChung}.
As we mentioned before, the first ingredient is the Weyl algebra of quantum observables. In this context, the Weyl algebra is the set of abstract operators whose multiplication encodes the canonical commutation relations in exponentiated form, sometimes denoted as
\begin{align} \label{WeylAlgebraMultiplication}
\widehat{W}(a_1,b_1) \widehat{W}(a_2,b_2)= e^{- \frac{i}{2\hbar}(a_1 b_2- b_1 a_2)} \widehat{W}(a_1+a_2, b_1+b_2).
\end{align}
Here, the elements $\widehat{W}(a,b)$ denote the generators of the Weyl algebra, labelled by $a,b \in \mathbb{R}$, and are formally given as
\begin{align}\label{WeylAlgebraGenerator}
\widehat{W}(a,b) := \widehat{e^{\frac{i}{\hbar}(a x + b p)}}.
\end{align}
Note that the operator symbol (hat) acts over the entire exponential rather than over the functions $x$ or $p$. With this, we imply that it is the entire exponential function that should be considered as an abstract operator. Historically, the linear form of the canonical commutation relations, sometimes denoted as, e.g., $[\widehat{x}, \widehat{p}] = i \hbar$, is more familiar. However, it is not suitable for exploring a possible discrete nature of space, because a discrete space forbids the standard notions of infinitesimal translations. This is the main reason to consider the Weyl algebra in polymer quantum mechanics.
On the other hand, the approach we follow is one in which only one of the fundamental operators has discrete eigenvalues, either the position operator or the momentum operator. As a result, in Eq.~(\ref{WeylAlgebraGenerator}), instead of considering the full Weyl algebra generator with both labels $a, b \neq 0$, we can take $a=0$ or $b=0$ and denote the resulting generators as
\begin{align}
\widehat{W}(a,0) = \widehat{V}(a), \qquad \widehat{W}(0,b) = \widehat{U}(b),
\end{align}
\noindent and the canonical commutation relations in Eq.~(\ref{WeylAlgebraMultiplication}) take the form
\begin{align}
\left[ \widehat{U}(b), \widehat{x} \right] = \hbar \, b \, \widehat{U}(b), \qquad \mbox{or} \qquad \left[ \widehat{p}, \widehat{V}(a) \right] = \hbar \, a \, \widehat{V}(a).
\end{align}
Now that we have clarified the main aspects of the Weyl algebra structure, we are ready to adapt this mathematical description to the canonical variables describing the gravitational waves in our model.
We will consider two cases in this work and refer to them as polarizations. The first case, which we call ``polymer ${\cal E}_{\mathbf{\lambda,k}}$'', is where there is no infinitesimal operator ${\cal E}_{\mathbf{\lambda,k}}$ and the operator ${\cal A}_{\mathbf{\lambda,k}}$ has discrete eigenvalues. The second polarization, called ``polymer ${\cal A}_{\lambda,\mathbf{k}}$'', is the case where no infinitesimal operator ${\cal A}_{\lambda,\mathbf{k}}$ exists and the eigenvalues of ${\cal E}_{\mathbf{\lambda,k}}$ are discrete.
The observables in the polymer ${\cal E}_{\mathbf{\lambda,k}}$ case are given by $\widehat{{\cal A}}_{\lambda,\mathbf{k}}$ and $\widehat{U}_{\mathbf{\lambda,k}}(\mu)$. Here, the operator $\widehat{U}_{\mathbf{\lambda,k}}(\mu)$ is the generator of finite (discrete) translations. Note that when considered in the standard Schr\"odinger representation, this operator resembles the exponential of the momentum operator. The parameter $\mu$ is a dimensionful parameter encoding the discreteness of the operator $\widehat{{\cal A}}_{\lambda,\mathbf{k}}$. In this context, since $\widehat{{\cal A}}_{\lambda,\mathbf{k}}$ is related to the perturbation of the metric tensor, the parameter $\mu$ is thus associated with the discreteness of the spacetime. The commutation relation for these operators reads
\begin{align}
\left[ \widehat{U}_{\mathbf{\lambda,k}}(\mu), \widehat{{\cal A}}_{\lambda,\mathbf{k}} \right] = \hbar \, \mu \, \widehat{U}_{\mathbf{\lambda,k}}(\mu).
\end{align}
The observables for the polymer ${\cal A}_{\lambda,\mathbf{k}}$ case are given by $\widehat{{\cal E}}_{\mathbf{\lambda,k}}$ and $\widehat{V}_{\lambda,\mathbf{k}}(\nu) $. As mentioned, in this case the eigenvalues of $\widehat{{\cal E}}_{\mathbf{\lambda,k}}$ are discrete. Analogous to the previous case, the parameter $\nu$ is the polymer scale related to the discreteness of the canonical conjugate momentum to the metric. The commutator for these observables is of the form
\begin{align}
\left[ \widehat{V}_{\lambda,\mathbf{k}}(\nu), \widehat{{\cal E}}_{\mathbf{\lambda,k}} \right] =- \hbar \, \nu \, \widehat{V}_{\lambda,\mathbf{k}}(\nu).
\end{align}
Although the representation of these operators is given in two different Hilbert spaces, they are very similar at the mathematical level. The Hilbert spaces for the polymer ${\cal E}_{\mathbf{\lambda,k}}$ and the polymer ${\cal A}_{\mathbf{\lambda,k}}$ polarizations are given, respectively, by
\begin{align}
{\cal H}_{{\rm poly}\, {\cal E}} = L^2(\mathbb{R}_d, d {\cal A}_c ), \qquad {\cal H}_{{\rm poly}\, {\cal A}} = L^2(\mathbb{R}_d, d {\cal E}_c ).
\end{align}
In both cases the configuration spaces are given by the real line with discrete topology, denoted by $\mathbb{R}_d$, and the measure is the countable measure on these discrete real lines. This violates the conditions of the Stone-von Neumann theorem \cite{Ashtekar:2002sn, VelhinhoJM, CorichiVZ} and yields a polymer Hilbert space unitarily inequivalent to the usual Hilbert space of the standard Schr\"odinger representation. Consequently, the polymer theory gives rise to new physics compared to the standard quantization scheme. Since the predictions of standard quantum mechanics fit very well with experiments on systems with finite degrees of freedom, the predictions of polymer quantum mechanics should also fit those experiments very well. This criterion leads to bounds on the scale at which the polymer effects take over, which in practice means bounds on the polymer scale parameters $\mu$ or $\nu$. Clearly, such a restriction is
not required in the polymer quantization of GWs, where
no measurement of the
quantum nature of GWs has been performed.
As usual, there is always a margin
of error or discrepancy between the theory and experiment, even if
the theory and experimental results match to a very high degree of
accuracy. This leaves room for possible new physics. In our case this
new physics is the polymer quantum theory. Hence it is worth investigating
the polymer quantum mechanics predictions for the GWs detectors, in
the hope that for certain values of the polymer parameters, the predictions
of the polymer model fit with the data to a degree higher than that
provided by the standard non-polymer models.
At the core of our analysis lies the assumption that the polymer effects
are small deviations when compared to the main contributions described
by the standard non-polymer models. Such models fit, to a high degree
of accuracy (more than $5\sigma$), with the detectors data using
the classical description of the Fourier modes of the GWs. Hence,
we expect $\mu$ or $\nu$ to be small such that in the limit, when
$\mu,\nu\rightarrow0$, we recover the contributions of the standard
quantum mechanical models. Since the polymer parameters have to be
considered very small, an effective description provides the minimum
insight we need to begin with. In other words, instead of moving towards
the full quantum polymer description we move directly to the effective
(classical) description already presented in \cite{Garcia-Chung:2020zyq}.
In this effective description, the Hamiltonian of each mode and polarization
in Eq. \eqref{eq:Hamiltonian-FLRW-2} is modified in order to incorporate
the first order contribution of the polymer parameters. This results
in two polymer effective (classical) Hamiltonians, one for each of
the two representations of the polymer model. The polymer ${\cal E}_{\mathbf{\lambda,k}}$
Hamiltonian is of the form
\begin{align}
H_{\lambda,{\bf k}}^{({\cal E})}=\frac{2\hbar^{2}}{\mu^{2}}\sin^{2}\left(\frac{\mu\,{\cal E}_{\mathbf{\lambda,k}}}{2\hbar}\right)+\frac{1}{2}k^{2}\,{\cal A}_{\mathbf{\lambda,k}}^{2},\label{PolymerE}
\end{align}
whereas the Hamiltonian for the polymer ${\cal A}_{\mathbf{\lambda,k}}$
is
\begin{align}
H_{\lambda,{\bf k}}^{({\cal A})}=\frac{1}{2}{\cal E}_{\mathbf{\lambda,k}}^{2}+\frac{2\hbar^{2}k^{2}}{\nu^{2}}\sin^{2}\left(\frac{\nu\,{\cal A}_{\mathbf{\lambda,k}}}{2\hbar}\right).\label{PolymerA}
\end{align}
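As a consistency check, Hamilton's equations for these polymer Hamiltonians can be verified symbolically. The sketch below is an illustration (not from the original references): it sets $\hbar=1$ and writes the polymer $\mathcal{A}$ potential with the $k^{2}$ factor that reproduces the harmonic Hamiltonian \eqref{eq:Hamiltonian-FLRW-2} in the small-parameter limit.

```python
import sympy as sp

# Units hbar = 1. The k^2 factor multiplies the polymer A potential so
# that the small-nu limit reproduces (1/2)E^2 + (1/2)k^2 A^2.
A, E, k, mu, nu = sp.symbols('A E k mu nu', real=True)

H_E = 2/mu**2*sp.sin(mu*E/2)**2 + sp.Rational(1, 2)*k**2*A**2    # polymer E
H_A = sp.Rational(1, 2)*E**2 + 2*k**2/nu**2*sp.sin(nu*A/2)**2    # polymer A

# Hamilton's equations: dA/dt = dH/dE,  dE/dt = -dH/dA
assert sp.simplify(sp.diff(H_E, E) - sp.sin(mu*E)/mu) == 0        # dA/dt
assert sp.simplify(-sp.diff(H_E, A) + k**2*A) == 0                # dE/dt
assert sp.simplify(-sp.diff(H_A, A) + k**2*sp.sin(nu*A)/nu) == 0  # dE/dt

# The small-mu limit recovers the harmonic mode Hamiltonian
assert sp.simplify(sp.limit(H_E, mu, 0) - (E**2 + k**2*A**2)/2) == 0
```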
Using these Hamiltonians, we can find the equations of motion (EoM)
as usual using Poisson brackets in each case (i.e., the polymer $\mathcal{E}$
and polymer $\mathcal{A}$ cases). We summarize these equations and
their solutions (without loss of generality only for the $+$ polarization)
in the following:
\begin{itemize}
\item[a)] In polymer $\mathcal{E}$ case, the EoM read
\begin{align}
\frac{d{\cal A}_{+,\mathbf{k}}}{dt}= & \frac{\hbar}{\mu}\,\sin\left(\frac{\mu}{\hbar}\,\mathcal{E}_{+,\mathbf{k}}\right), & \frac{d{\cal E}_{+,\mathbf{k}}}{dt}= & -k^{2}\mathcal{A}_{+,\mathbf{k}}.\label{eq:EoM-eff-E-2}
\end{align}
Combining these, the second order effective EoM for the GWs become
\begin{equation}
\ddot{\mathcal{A}}_{+,\mathbf{k}}=-k^{2}\mathcal{A}_{+,\mathbf{k}}\cos\left(\frac{\mu}{\hbar}\,\mathcal{E}_{+,\mathbf{k}}\right).\label{eq:EoM-eff-E-nonH}
\end{equation}
Now we consider a situation in which $(\mu/\hbar)\mathcal{E}_{+,\mathbf{k}}$
is small. By expanding the sine and cosine functions up to order $\mathcal{O}\left((\mu\mathcal{E}_{+,\mathbf{k}}/\hbar)^{2}\right)$,
and after the re-scaling $\mathcal{A}\to(\hbar/\mu)\bar{\mathcal{A}}$,
Eq.~(\ref{eq:EoM-eff-E-nonH}) is approximated by
\begin{equation}
\ddot{\bar{\mathcal{A}}}_{+,\mathbf{k}}+k^{2}\bar{\mathcal{A}}_{+,\mathbf{k}}\,\approx\,\dfrac{k^{2}}{2}\bar{\mathcal{A}}_{+,\mathbf{k}}\,\dot{\bar{\mathcal{A}}}_{+,\mathbf{k}}^{2}\,.\label{eq:EoM-eff-E-pert}
\end{equation}
By using the Poincar\'e-Lindstedt method \cite{amore2005improved},
we can compute a perturbative solution free of secular (growing)
terms at a given order. This yields
\begin{align}
\bar{\mathcal{A}}_{+,\mathbf{k}}^{(\mathcal{E})}(t)\approx & \bar{\mathcal{A}}_{I}\left[\left(1-\frac{\bar{\mathcal{A}}_{I}^{2}k^{2}}{32}\right)\cos\left(kc\sqrt{1-\frac{\bar{\mathcal{A}}_{I}^{2}k^{2}}{8}}\,t\right)\right.\nonumber \\
& \quad\quad\quad\quad\left.-\frac{\bar{\mathcal{A}}_{I}^{2}k^{2}}{64}\cos\left(3kc\sqrt{1-\frac{\bar{\mathcal{A}}_{I}^{2}k^{2}}{8}}\,t\right)\right].\label{eq:EoM-eff-E-sol-app1}
\end{align}
In terms of the original perturbation scalar, $\bar{h}(t)$, the above
solution is rewritten as
\begin{align}
\bar{h}_{+,\mathbf{k}}^{(\mathcal{E})}(t)\approx & \bar{h}_{I}\left[\left(1-\frac{\bar{h}_{I}^{2}\bar{\mu}^{2}k^{2}}{32\hbar^{2}}\right)\cos\left(kc\sqrt{1-\frac{\bar{h}_{I}^{2}\bar{\mu}^{2}k^{2}}{8\hbar^{2}}}\,t\right)\right.\nonumber \\
& \quad\quad\quad\quad\left.-\frac{\bar{h}_{I}^{2}\bar{\mu}^{2}k^{2}}{64\hbar^{2}}\cos\left(3kc\sqrt{1-\frac{\bar{h}_{I}^{2}\bar{\mu}^{2}k^{2}}{8\hbar^{2}}}\,t\right)\right],\label{eq:EoM-eff-E-sol-app2}
\end{align}
where we have defined a new polymer parameter $\bar{\mu}\equiv\mu\ell^{3/2}/2\kappa$
with the dimension of length. %
\item[b)] For the polymer $\mathcal{A}$ case, the EoM derived from the Hamiltonian
\eqref{PolymerA} read
\begin{align}
\frac{d{\cal A}_{+,\mathbf{k}}}{dt}= & \mathcal{E}_{+,\mathbf{k}}, & \frac{d{\cal E}_{+,\mathbf{k}}}{dt}= & -\frac{\hbar k^{2}}{\nu}\,\sin\left(\frac{\nu}{\hbar}\mathcal{A}_{+,\mathbf{k}}\right),\label{eq:EoM-eff-A-2}
\end{align}
thereby, the second order effective EoM for $\mathcal{A}_{+,\mathbf{k}}$
becomes
\begin{equation}
\ddot{\mathcal{A}}_{+,\mathbf{k}}+\frac{\hbar k^{2}}{\nu}\,\sin\left(\frac{\nu}{\hbar}\mathcal{A}_{+,\mathbf{k}}\right)=0.\label{eq:EoM-eff-A-nonH}
\end{equation}
For small $(\nu/\hbar)\mathcal{A}_{+,\mathbf{k}}$, the equation above,
up to $\mathcal{O}\left((\nu\mathcal{A}_{+,\mathbf{k}}/\hbar)^{3}\right)$, becomes
\begin{equation}
\ddot{\bar{\mathcal{A}}}_{+,\mathbf{k}}+k^{2}\bar{\mathcal{A}}_{+,\mathbf{k}}=\frac{k^{2}}{6}\bar{\mathcal{A}}_{+,\mathbf{k}}^{3},\label{eq:EoM-eff-A-pert}
\end{equation}
where, again we have used the re-scaling $\mathcal{A}\to(\hbar/\nu)\bar{\mathcal{A}}$.
\noindent Using the Poincar\'e-Lindstedt method, the solution to Eq.~(\ref{eq:EoM-eff-A-pert})
is approximated as
\begin{equation}
\bar{\mathcal{A}}_{+,\mathbf{k}}^{(\mathcal{A})}(t)\,\approx\,\bar{\mathcal{A}}_{I}\left(1-\frac{\bar{\mathcal{A}}_{I}^{2}}{96}\right)\cos\left(kc\sqrt{1-\frac{\bar{\mathcal{A}}_{I}^{2}}{8}}\,t\right)-\frac{\bar{\mathcal{A}}_{I}^{3}}{192}\cos\left(3kc\sqrt{1-\frac{\bar{\mathcal{A}}_{I}^{2}}{8}}\,t\right),\label{eq:EoM-eff-A-sol-app}
\end{equation}
which again, written in terms of the original variable $\bar{h}_{+,\mathbf{k}}$,
reads
\begin{align}
\bar{h}_{+,\mathbf{k}}^{(\mathcal{A})}(t)\, & \approx\,\bar{h}_{I}\left[\left(1-\frac{\bar{h}_{I}^{2}\bar{\nu}^{2}}{96\,\hbar^{2}}\right)\cos\left(kc\sqrt{1-\frac{\bar{h}_{I}^{2}\bar{\nu}^{2}}{8\hbar^{2}}}\,t\right)\right.\nonumber \\
& \quad\quad\quad\quad\left.-\frac{\bar{h}_{I}^{2}\bar{\nu}^{2}}{192\hbar^{2}}\cos\left(3kc\sqrt{1-\frac{\bar{h}_{I}^{2}\bar{\nu}^{2}}{8\hbar^{2}}}\,t\right)\right].\label{eq:EoM-eff-A-sol-app2}
\end{align}
Here, similar to the previous case we have defined a new dimensionless
(in natural units) polymer parameter $\bar{\nu}\equiv\nu\ell^{3/2}/2\kappa$.
\end{itemize}
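The accuracy of the Poincar\'e-Lindstedt approximation can be checked numerically. The sketch below (illustrative parameter values, units with $c=\hbar=1$) integrates the polymer $\mathcal{A}$ equation $\ddot{\bar{\mathcal{A}}}+k^{2}\bar{\mathcal{A}}=(k^{2}/6)\bar{\mathcal{A}}^{3}$ with a fourth-order Runge-Kutta scheme and compares the result with the approximate solution quoted above.

```python
import math

# Illustrative check (c = hbar = 1): integrate the polymer A mode equation
#   Abar'' = -k^2 Abar + (k^2/6) Abar^3
# with RK4 and compare against the Poincare-Lindstedt solution in the text.
k, aI = 1.0, 0.2            # arbitrary wavenumber and small initial amplitude

def f(t, y):                # y = (Abar, dAbar/dt)
    return (y[1], -k**2*y[0] + (k**2/6)*y[0]**3)

def rk4(t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, (y[0] + h/2*k1[0], y[1] + h/2*k1[1]))
    k3 = f(t + h/2, (y[0] + h/2*k2[0], y[1] + h/2*k2[1]))
    k4 = f(t + h,   (y[0] + h*k3[0],   y[1] + h*k3[1]))
    return (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

w = k*math.sqrt(1 - aI**2/8)                 # polymer-corrected frequency
def approx(t):
    return aI*(1 - aI**2/96)*math.cos(w*t) - (aI**3/192)*math.cos(3*w*t)

y, t, h, err = (approx(0.0), 0.0), 0.0, 0.005, 0.0
while t < 20.0:
    y = rk4(t, y, h)
    t += h
    err = max(err, abs(y[0] - approx(t)))
# err stays at the level of the neglected higher-order terms
```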
It is instructive to calculate the speed of GWs in both the polymer $\cal{A}$ and polymer $\cal{E}$ cases. To do so, we first calculate the dispersion relations of the GWs including the polymer corrections. To leading order, these are given by
\begin{align}
\omega^{(\mathcal{A})} & \approx kc\left(1 - \frac{\bar{h}_I^2 \bar{\nu}^2}{8\hbar^2} \right)^{1/2}, \label{rel:disperionA}\\
\omega^{(\mathcal{E})} & \approx kc\left(1 - \frac{\bar{h}_I^2 \bar{\mu}^2 k^2}{8\hbar^2} \right)^{1/2}. \label{rel:disperionE}
\end{align}
The speed of propagation of GWs is given by the relation $v = d\omega/dk$. Thus, for our dispersion relations (\ref{rel:disperionA}) and (\ref{rel:disperionE}) we get
\begin{align}
v^{(\mathcal{A})} &\approx c \left(1 - \frac{\bar{h}_I^2 \bar{\nu}^2}{16\hbar^2} \right), \\
v^{(\mathcal{E})} &\approx c \left(1 - \frac{3 \bar{h}_I^2 \bar{\mu}^2 }{16\hbar^2}\, k^2 \right). \label{rel:speedE}
\end{align}
The relation \eqref{rel:speedE} represents (in polymer $\cal{E}$ case) a phenomenological aspect of the effect of polymer quantization on the propagation of GWs. It shows that different modes of GWs will travel with different speeds under such effects. In particular, all the modes propagate subluminally, and the higher the energy of a mode, the lower its speed.
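The leading-order group velocities above can be re-derived symbolically. In the sketch below, the symbols $\epsilon_{A}$ and $\epsilon_{E}$ are shorthand for the small combinations $\bar{h}_I^2 \bar{\nu}^2/8\hbar^2$ and $\bar{h}_I^2 \bar{\mu}^2/8\hbar^2$, respectively.

```python
import sympy as sp

# Group velocity v = d(omega)/dk to first order in the polymer corrections.
# eps_A and eps_E stand for hbarI^2*nubar^2/(8 hbar^2) and
# hbarI^2*mubar^2/(8 hbar^2), treated as small parameters.
k, c = sp.symbols('k c', positive=True)
eps_A, eps_E = sp.symbols('epsilon_A epsilon_E', positive=True)

w_A = k*c*sp.sqrt(1 - eps_A)          # polymer A: k-independent correction
w_E = k*c*sp.sqrt(1 - eps_E*k**2)     # polymer E: correction grows with k^2

v_A = sp.diff(w_A, k).series(eps_A, 0, 2).removeO()
v_E = sp.diff(w_E, k).series(eps_E, 0, 2).removeO()

assert sp.simplify(v_A - c*(1 - eps_A/2)) == 0
assert sp.simplify(v_E - c*(1 - 3*eps_E*k**2/2)) == 0
```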
In the next section we will study another phenomenological aspect of the polymer GWs, namely, the effective geodesic deviation equation describing the detector's arms motion. This equation is coupled to the background perturbation whose dynamic is given by solutions (\ref{eq:EoM-eff-E-sol-app2}) and (\ref{eq:EoM-eff-A-sol-app2}) and reveals the behavior of the detector's arms in this model.
\section{Effective dynamics of the arm length
\label{sec:phenomenology}}
In this section, to investigate the consequences of polymer quantization for the propagation of GWs, we first present the geodesic deviation equation and find the effective evolution of the detector's arms. Using that, we study the corresponding solutions for the detector's arm length with analytical and numerical techniques.
\subsection{Geodesic deviation equation}
\label{sec:GeodesicDeviation}
The GW detector arms can be modeled as two free-falling (identical) masses whose geodesic separation is sensitive to the Riemann tensor induced by gauge-invariant metric perturbations (or strain) $\mathcal{A}_{\lambda, \mathbf{k}}$, i.e., the incident GWs in our setting. The geodesic equations of such masses follow from the action \cite{maggiore2008gravitational}
\begin{align}
S_{\xi}
&= - m \int_{\gamma_{A}(t)}d\tau - m \int_{\gamma_{B}(t^{\prime})}d\tau^{\prime}, \nonumber\\
&= -m \int_{\gamma_{A}(t)} \sqrt{-g_{\mu\nu}\,dx^\mu\,dx^\nu} -m \int_{\gamma_{B}(t^{\prime})} \sqrt{-g^{\prime}_{\mu\nu}\,{dx^{\prime}}^\mu\,{dx^{\prime}}^\nu}\,, \label{def:action-Masses-1}
\end{align}
where $\gamma_A(t)$ and $\gamma_B(t^{\prime})$ are timelike geodesics of the particles $A$ and $B$, respectively. We introduce Fermi normal coordinates along the geodesic $\gamma_{A}(t)$ of particle $A$, parameterized by time $t$, such that at time $t$ the particle is situated at the point $P=(t,\mathbf{x}=0)$. In such a frame,
the Fermi normal coordinates of particle $B$ (moving on geodesic
$\gamma_{B}$) are given by $\xi^{\mu}=(t,\xi^{i}(t))$ in the vicinity
of the point $P$. Thus $\xi^{i}$ represents the deviation parametrized
by $t$, i.e., $\xi^{i}$ connects two points with the same value
of $t$ on the two geodesics. In this configuration we can recast
the action \eqref{def:action-Masses-1} in terms of the deviation
variables and only focus on the particle with geodesic $\gamma_{B}$
while ignoring the dynamics of the particle with geodesic $\gamma_{A}$.
This is because, as is well-known, in these coordinates on the worldline
of particle $A$ we have $g_{\mu\nu}|_{\gamma_{A}}=\eta_{\mu\nu}$.
We then write the above action as
\begin{eqnarray}
S_{\xi}=-m\int_{\gamma_{B}}dt\sqrt{-g_{\mu\nu}^{\prime}\left(\xi^{i},t\right)\,\dot{\xi}^{\mu}\,\dot{\xi}^{\nu}}\,.\label{def:action-Masses-2}
\end{eqnarray}
Around the point $P$, the metric $g_{\mu\nu}^{\prime}$ in the neighborhood
of $\gamma_{A}$ can be expanded as \cite{2007reto.book.....P}
\begin{align}
ds^{2}\simeq & -\left(1+R_{0i0j}\,\xi^{i}\xi^{j}+\mathcal{O}(\xi^{3})\right)dt^{2}-\left(\frac{4}{3}R_{0jik}\xi^{j}\xi^{k}+\mathcal{O}(\xi^{3})\right)\,dtdx^{i}\nonumber \\
& +\left(\delta_{ij}-\frac{1}{3}R_{ikj\ell}\,\xi^{k}\xi^{\ell}+\mathcal{O}(\xi^{3})\right)dx^{i}dx^{j}.\label{def:metric-fermi}
\end{align}
In the proper detector frame on the Earth, the metric \eqref{def:metric-fermi} receives other contributions from Newtonian effects such as Newtonian gravity, Coriolis and centrifugal forces, the suspension mechanism, and the Sagnac effect. These effects are many orders of magnitude larger than GWs but change very slowly. To detect GWs, we therefore work in a higher-frequency window in which the noise from these other sources is very small and the main contribution to the metric \eqref{def:metric-fermi} comes from GWs; we can then obtain the geodesic deviation induced mainly by GWs.
Substituting the metric \eqref{def:metric-fermi} into Eq.~\eqref{def:action-Masses-2}, the action for the geodesic deviation becomes,
\begin{equation}
S_{\xi} \simeq \int_{\gamma_{B}} dt \left[ \frac{m}{2} \dot{\xi}_{i}\,\dot{\xi}^{i} + \frac{m}{4} \ddot{\bar{h}}_{ij} (\mathbf{0},t) \xi^i \xi^j \right] \, , \label{GDAction-Ali}
\end{equation}
where we have dropped the non-dynamical terms. To first order in the metric perturbations, in the TT gauge, we have $R_{0i0j}(t,\mathbf{0})=- \ddot{\bar{h}}_{ij} (t,\mathbf{0})/2$, in which the Riemann tensor is evaluated at the point $P$.
The Hamiltonian of the geodesic deviation can be obtained with the help of the Legendre transformation of the action \eqref{GDAction-Ali} as
\begin{align}
H(t) = \frac{1}{2m} \sum_{i=1}^{2}P^2_{\xi^i} - \frac{m}{4} \ddot{\bar{h}}_{ij} (\mathbf{0},t) \xi^i \xi^j, \label{eq:Hamiltonian-interaction}
\end{align}
leading to the EoM of the form
\begin{align}
\dot{\xi}_i &= \frac{1}{m} P_{\xi^i},\label{HamiltonEqs0} \\
\dot{P}_{\xi^i} &= \frac{m}{2}\, \ddot{\bar{h}}_{ij}(\mathbf{0},t) \, \xi^j .\label{HamiltonEqs}
\end{align}
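For the plus polarization, where $\ddot{\bar{h}}_{ij}\,\xi^{i}\xi^{j}=\ddot{\bar{h}}_{+}\big((\xi^{1})^{2}-(\xi^{2})^{2}\big)$, these Hamilton's equations can be verified symbolically; the sketch below treats $\ddot{\bar{h}}_{+}$ as a fixed symbol \texttt{hdd}.

```python
import sympy as sp

# Plus polarization: hdd stands for ddot{bar h}_+, so that
# ddot{bar h}_{ij} xi^i xi^j = hdd*(xi1^2 - xi2^2).
m, hdd = sp.symbols('m hdd', real=True)
xi1, xi2, P1, P2 = sp.symbols('xi1 xi2 P1 P2', real=True)

H = (P1**2 + P2**2)/(2*m) - sp.Rational(1, 4)*m*hdd*(xi1**2 - xi2**2)

assert sp.simplify(sp.diff(H, P1) - P1/m) == 0            # xidot^1 = P1/m
assert sp.simplify(-sp.diff(H, xi1) - m*hdd*xi1/2) == 0   # Pdot_1 = +(m/2) hdd xi1
assert sp.simplify(-sp.diff(H, xi2) + m*hdd*xi2/2) == 0   # Pdot_2 = -(m/2) hdd xi2
```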
By using the Hamilton's equations (\ref{HamiltonEqs0}) and (\ref{HamiltonEqs}),
and replacing $\bar{h}_{ij}(t,\mathbf{0})$ with its Fourier mode decomposition \eqref{eq:H-lambda},
\begin{equation}
\bar{h}_{ij}(t,\mathbf{0})\, =\, \frac{2 \kappa}{\ell^{3/2}}\sum_{\lambda, \mathbf{k}}\mathcal{A}_{\lambda,\mathbf{k}}(t) e^{\lambda}_{ij},
\label{variables-new}
\end{equation}
we obtain the geodesic deviation equation as
\begin{align}
\ddot{\xi}^i = \frac{\kappa}{\ell^{3/2}} \sum_{\lambda}\sum_{\mathbf{k}} \ddot{\mathcal{A}}_{\lambda, \mathbf{k}}\, e^{\lambda}_{ij} \, \xi^j.
\label{arm-EoM}
\end{align}
This equation represents the interaction between the detector and the (effective) perturbed metric. Moreover, it gives the tidal acceleration of $\xi$ in the presence of GWs. For each mode ${\bf k}$, the equations of motion become%
\begin{subequations}
\begin{align}
\ddot{\xi}^1_{\bf k} &= \frac{\kappa}{\ell^{3/2}} \left[ \ddot{\mathcal{A}}_{+, \mathbf{k}}\, \xi^1_{\bf k} + \ddot{\mathcal{A}}_{\times, \mathbf{k}} \, \xi^2_{\bf k} \right], \label{eq:arm1} \\
\ddot{\xi}^2_{\bf k} &= \frac{\kappa}{\ell^{3/2}} \left[ - \ddot{\mathcal{A}}_{+, \mathbf{k}}\, \xi^2_{\bf k} + \ddot{\mathcal{A}}_{\times, \mathbf{k}} \, \xi^1_{\bf k} \right]. \label{eq:arm2}
\end{align} \label{eq:arm-tot}
\end{subequations}
In the rest of this section, our aim will be to analyze the solutions of Eqs.~(\ref{eq:arm1}) and (\ref{eq:arm2}) when the behaviour of the strain $\mathcal{A}_{\lambda, \mathbf{k}}$ is known. More precisely, the strain is provided by the solutions of the effective evolution equations generated by the polymer effective Hamiltonians (\ref{PolymerE}) or (\ref{PolymerA}).
As mentioned in subsection \ref{PolySubSection}, polymer representations usually come in two polarizations:
either $\hat{\mathcal{A}}$ is not well-defined but $\hat{\mathcal{E}}$ is, or vice versa.
In the polarization where $\hat{\mathcal{A}}$ is not well-defined, the spectrum
of its conjugate variable $\hat{\mathcal{E}}$ becomes discrete. This is basically because there is no $\hat{\mathcal{A}}$ on the Hilbert space to generate infinitesimal transformations of $\hat{\mathcal{E}}$. The converse statement holds for the case where $\hat{\mathcal{E}}$ is not well-defined. We will consider both cases in what follows. However, note that in LQG, the connection is holonomized/polymerized and the triad is discretized. In our notation, $\mathcal{A}$ corresponds to the metric perturbations [see Eq.~\eqref{def-q}], hence a polarization where ${\cal E}_{\mathbf{\lambda,k}}$ is polymerized, resulting in ${\cal A}_{\lambda,\mathbf{k}}$ becoming discrete, is more in line with LQG \cite{Garcia-Chung:2020zyq}. This is the case we are most interested in.
\subsection{Perturbative analysis}
\label{solutions:perturbative}
Assuming initial length $\xi_0$ for the detector's arm, we can set $\xi(t) = \xi_0 + \delta \xi(t)$, in which $\delta \xi(t)$ is the displacement induced by the GWs. This, applied to each mode and each polarization, yields
\begin{subequations}
\begin{align}
\xi^1_{\bf k}(t) &= \xi^1_0 +\delta \xi^1_{\bf k}(t), \label{rel:xi1}\\
\xi^2_{\bf k}(t) &= \xi^2_0 +\delta \xi^2_{\bf k}(t). \label{rel:xi2}
\end{align}\label{rel:xi-tot}
\end{subequations}
Using Eqs.~\eqref{eq:arm-tot} to the leading order, we obtain the following equations of motion for the arm's displacements,
\begin{subequations}
\begin{align}
\delta \ddot{\xi}^1_{\bf k} &= \frac{\kappa}{\ell^{3/2}} \left[ \ddot{\mathcal{A}}_{+, \mathbf{k}}\, \xi^1_0 + \ddot{\mathcal{A}}_{\times, \mathbf{k}} \, \xi^2_0 \right], \\
\delta \ddot{\xi}^2_{\bf k} &= \frac{\kappa}{\ell^{3/2}} \left[ - \ddot{\mathcal{A}}_{+, \mathbf{k}}\, \xi^2_0 + \ddot{\mathcal{A}}_{\times, \mathbf{k}} \, \xi^1_0 \right],
\end{align}
\end{subequations}
where $\delta \xi^{1, 2}_{\mathbf{k}}$ and $\mathcal{A}_{\lambda, \mathbf{k}}$ (with $\lambda=+,\times$) are treated as small perturbations. We now integrate these equations twice to obtain the general solutions
\begin{subequations}
\begin{align}
\delta {\xi}^1_{\bf k}(t) &= \frac{\kappa}{\ell^{3/2}} \left[ {\mathcal{A}}_{+, \mathbf{k}}(t)\, \xi^1_0 + {\mathcal{A}}_{\times, \mathbf{k}}(t) \, \xi^2_0 \right] + v^1_0 \, t, \label{rel:deltaxi1}\\
\delta {\xi}^2_{\bf k}(t) &= \frac{\kappa}{\ell^{3/2}} \left[ - {\mathcal{A}}_{+, \mathbf{k}}(t)\, \xi^2_0 + {\mathcal{A}}_{\times, \mathbf{k}}(t) \, \xi^1_0 \right] + {v}^2_0 \, t , \label{rel:deltaxi2}
\end{align} \label{rel:deltaxi-tot}
\end{subequations}
where ${v}^1_0$ and ${v}^2_0$ are integration constants. Note that in this configuration, one test mass is initially at the origin and the other at the position $(\xi^1_0, \xi^2_0)=(\xi_0 \cos\theta, \xi_0 \sin\theta)$. Therefore, GWs propagating in the direction perpendicular to the $\xi^1$-$\xi^2$ plane induce the displacements
\begin{subequations}
\begin{align}
{\xi}^1_{\bf k}(t) &= \frac{\kappa}{\ell^{3/2}} \left[ {\mathcal{A}}_{+, \mathbf{k}}(t)\, \xi_0 \cos\theta + {\mathcal{A}}_{\times, \mathbf{k}}(t) \, \xi_0 \sin\theta \right] + \xi_0 \cos\theta, \label{detector-pert1} \\
{\xi}^2_{\bf k}(t) &= \frac{\kappa}{\ell^{3/2}} \left[ - {\mathcal{A}}_{+, \mathbf{k}}(t)\, \xi_0 \sin\theta + {\mathcal{A}}_{\times, \mathbf{k}}(t) \, \xi_0 \cos\theta \right] + \xi_0 \sin\theta , \label{detector-pert2}
\end{align} \label{detector-pert-tot}
\end{subequations}
on the detector's arm. Let us now assume that the metric perturbation has only a ${\mathcal{A}}_{+, \mathbf{k}}(t)$ polarization. Then the solutions (\ref{detector-pert-tot}) reduce to
\begin{subequations}
\begin{align}
{\xi}^1_{\bf k}(t) &= \left[1+ \frac{\kappa}{\ell^{3/2}} {\mathcal{A}}_{+, \mathbf{k}}(t) \right] \xi_0 \cos\theta, \label{rel:xi1Plus} \\
{\xi}^2_{\bf k}(t) &= \left[1 - \frac{\kappa}{\ell^{3/2}} {\mathcal{A}}_{+, \mathbf{k}}(t) \right] \xi_0 \sin\theta .\label{rel:xi2Plus}
\end{align} \label{rel:xiPlus-tot}
\end{subequations}
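As a quick numerical illustration (not part of the derivation), the arm-end response of Eqs.~\eqref{rel:xiPlus-tot} can be evaluated for the classical plus-polarized waveform; all numerical values below ($h_I$, $k$, $\xi_0$, $\theta$) are illustrative stand-ins, not values from this work.

```python
import numpy as np

# Sketch of the plus-polarization response: xi^1 = (1 + h/2) xi0 cos(theta),
# xi^2 = (1 - h/2) xi0 sin(theta), with the classical h(t) = h_I cos(k c t).
# All parameter values are illustrative.

def arm_displacements(xi0, theta, h):
    """Arm-end coordinates for a plus-polarized dimensionless strain h(t)."""
    xi1 = (1.0 + 0.5 * h) * xi0 * np.cos(theta)
    xi2 = (1.0 - 0.5 * h) * xi0 * np.sin(theta)
    return xi1, xi2

h_I, k, c = 1e-21, 2.0 * np.pi, 1.0   # illustrative units with c = 1
xi0, theta = 4.0e3, np.pi / 4         # ~4 km separation at 45 degrees
t = np.linspace(0.0, 2.0, 1000)
h = h_I * np.cos(k * c * t)           # classical plus-polarized waveform
xi1, xi2 = arm_displacements(xi0, theta, h)

# The fractional length change along xi^1 oscillates with amplitude h_I/2.
dL_over_L = xi1 / (xi0 * np.cos(theta)) - 1.0
print(np.isclose(dL_over_L.max(), h_I / 2, rtol=1e-3))
```

The oscillation amplitude of the fractional displacement is $h_I/2$, as expected from Eqs.~\eqref{rel:xi1Plus}--\eqref{rel:xi2Plus}.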
To study the effects of the polymer quantum dynamics on the arm's length, we will substitute the effective solutions of GWs [cf. Eqs.~(\ref{eq:EoM-eff-E-sol-app1}) and (\ref{eq:EoM-eff-A-sol-app})] into Eqs.~\eqref{rel:xiPlus-tot} to obtain the evolution of detector's arms
\begin{subequations}
\begin{align}
{\xi}^1_{\bf k}(t) &= \left[1+ \frac{1}{2} \bar{h}^{(\mathcal{E} / \mathcal{A})}_{+,\mathbf{k}}(t) \right] \xi_0 \cos\theta, \label{rel:xi1Plus1H} \\
{\xi}^2_{\bf k}(t) &= \left[1 - \frac{1}{2} \bar{h}^{(\mathcal{E} / \mathcal{A})}_{+,\mathbf{k}}(t) \right] \xi_0 \sin\theta ,\label{rel:xi2Plus2H}
\end{align}\label{rel:xiPlus-tot2}
\end{subequations}
and then compare the geometry of the arm's displacements induced by the plus polarization of GWs, with those given by the classical GR.
The displacements in the detector's arms corresponding to the plus polarization of GWs, under the effect of the classical and (case $\mathcal{A}$) polymer GWs [cf. Eqs.~\eqref{rel:xiPlus-tot2} together with the solution (\ref{eq:EoM-eff-A-sol-app2})], are depicted in Fig.~\ref{fig:GWPlusPolyA}.
Fig.~\ref{fig:GWPlusPhasesA} represents the time evolution of the displacements in the classical and polymer scenarios, whereas Fig.~\ref{fig:GWPlusPhasesDiffA} shows a comparison between the classical and polymer cases at three instants of time. Note that for the classical solution we used the expression
\[\bar{h}^{\rm class}_{+,\mathbf{k}}(t) = \bar{h}_I \cos\left(kc t \right).\]
Likewise, Fig.~\ref{fig:GWPlusPolyE} depicts the behaviour of the displacements in the detector's arm arising from the plus polarization of the polymer $\cal{E}$ solution [cf. Eq.~\eqref{eq:EoM-eff-E-sol-app2}]. A comparison with the classical case is also made. It is interesting to mention that the polymer corrections in the solution \eqref{eq:EoM-eff-E-sol-app2} depend on the mode of the GW, which means that different modes of a GW induce different displacements on the detector's arms. This phenomenological property might have observational consequences, as will be discussed at the end of this subsection and in Sec.~\ref{sec:LISA}. In Fig.~\ref{fig:GWPlusPhasesDiffE} we can see that the case-$\cal{E}$ polymer corrections produce larger effects than the polymer $\cal{A}$ corrections at the same three time instants with the same control parameters (cf. Fig.~\ref{fig:GWPlusPhasesDiffA}).
An analysis of polymer corrections in different scenarios is in order. From Eqs.~\eqref{eq:EoM-eff-A-sol-app2} and \eqref{eq:EoM-eff-E-sol-app2} we see that
the amplitude of the GWs attenuates as
$\delta \bar{h}^{(\mathcal{A})} \sim \bar{h}_I^3 \bar{\nu}^2/\hbar^2$,
in the Polymer $\mathcal{A}$ case, while it attenuates as
$\delta \bar{h}^{(\mathcal{E})} \sim \bar{h}_I^3 k^2 \bar{\mu}^2/\hbar^2$,
in the Polymer $\mathcal{E}$ case.
Moreover, if $\bar{\mu} \sim \bar{\nu}$, the attenuation of the perturbation $\bar{h}_{+,\mathbf{k}}$ in the polymer $\mathcal{E}$ case is about $3k^2$ times larger than that in the polymer $\mathcal{A}$ case.
On the other hand, by assuming that in a binary system $\bar{h}_I \sim 10^{-23}$ \cite{Flanagan:2005yc}, we find that the corrections in the amplitude of the GWs due to polymer effects are around $\delta \bar{h}_I^{(\mathcal{A})} \sim \, \bar{\nu}^2$ (in the Polymer $\mathcal{A}$ case) and $\delta \bar{h}_I^{(\mathcal{E})} \sim \, k^2\bar{\mu}^2$ (in the Polymer $\mathcal{E}$ case).
High-energy sources can emit GWs with frequencies around $f \sim 10^4$~Hz \cite{Flanagan:2005yc}; thus, in the polymer $\mathcal{E}$ case, short-wavelength GWs would decay faster than long-wavelength ones.
In the present setting, the length of the detector's arm, $\xi_0$, can amplify tiny amplitudes of GWs. Current technology limits the arm length of GW detectors; at high frequencies, $1 \lesssim f \lesssim 10^4$ Hz, it is about $3\, {\rm km}\lesssim \xi_0 \lesssim 4\,{\rm km}$ (see \cite{LIGOScientific:2014pky, VIRGO:2014yos, KAGRA:2018plz} for the LIGO, VIRGO and KAGRA collaborations, respectively).
Therefore, it would be better to investigate lower frequencies with larger arm lengths, where GWs have stronger amplitudes. The mission of the LISA space probe is to detect and measure GWs produced by mergers of supermassive black holes \cite{amaro2017laser}, so LISA might be a suitable candidate for detecting the tiny effects of the polymer corrections in the gravitational waveforms. We will investigate the imprints of the polymer quantization schemes in LISA in Sec.~\ref{sec:LISA}; first, however, it is instructive to do an order analysis of the polymer corrections in the lower frequency ranges. If the amplitude of a GW is of order $\bar{h}_I \sim 10^{-15}$, then the attenuation coming from polymer quantization in case $\mathcal{E}$ will be of order $\delta \bar{h}^{(\mathcal{E})}_I \sim 10^{-45} \, \left(\bar{\mu}/\hbar \right)^2$ for $k \sim 1$. In this case, if we assume $\bar{\mu} \sim 10^{10}\hbar$ (recall that, in principle, the rescaled parameter $\bar{\mu}$ can be larger than the bare polymer scale $\mu$; see \eqref{eq:EoM-eff-E-sol-app2} for more details), the attenuation will be $\delta \bar{h}^{(\mathcal{E})}_I \sim 10^{-25}$, which is not far from the current observational capability. Exact numerical details in this regard will be presented in Sec.~\ref{sec:LISA}.
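The order-of-magnitude estimate above, $\delta \bar{h}^{(\mathcal{E})} \sim \bar{h}_I^3 k^2 (\bar{\mu}/\hbar)^2$, can be checked with a one-line arithmetic computation; the numbers are those quoted in the text.

```python
# Arithmetic check of the low-frequency attenuation estimate quoted in the
# text: delta_h^(E) ~ h_I^3 * k^2 * (mu_bar/hbar)^2, with h_I ~ 1e-15,
# k ~ 1 and mu_bar ~ 1e10 * hbar.

h_I = 1e-15
k = 1.0
mu_bar_over_hbar = 1e10

delta_h_E = h_I**3 * k**2 * mu_bar_over_hbar**2
print(f"{delta_h_E:.0e}")   # prints 1e-25, matching the estimate in the text
```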
\subsection{Numerical analysis}
In this section we examine numerical (i.e., non-perturbative) solutions to the polymer and classical equations of motion in order to validate the treatment presented in the previous section. Here we directly integrate equations \eqref{eq:arm1} and \eqref{eq:arm2} numerically for the same choice of parameters and conditions considered in Sec.~\ref{sec:phenomenology}, for the polymer $\mathcal{E}$ case. We additionally choose the polymer scale $\mu$ and the GW amplitude to be unrealistically large so as to qualitatively demonstrate the impact of nonlinear effects.
We will restrict our analysis to the case where the GW amplitude $\bar{h}_I$ is small, $\mathcal{O}(10^{-8})$, so that nonlinear gravitational contributions to the behavior should be below the level of numerical roundoff; note, though, that we still consider a numerical solution to the full equations of motion, \eqref{eq:arm1} and \eqref{eq:arm2}. This nevertheless reduces any terms quadratic in the amplitude to effectively zero. We then consider the polymer scale to be large, so as to qualitatively demonstrate polymer effects on the arm behavior, although in practice we do not expect such large polymer-scale values.
As noted in our previous work \citep{Garcia-Chung:2020zyq,Garcia-Chung:2021doi}, polymer GWs will undergo both a frequency and amplitude shift relative to the classical solution. Over large distances, this can appear as an order unity phase shift in the GWs. The linearized equations of motion suggest the detector arm will directly probe the GW (e.g. Eqs.~\eqref{rel:xi1Plus} and \eqref{rel:xi2Plus}), with only a nonlinear coupling term that will be extremely small.
As suggested by the form of Eqs.~\eqref{eq:arm1} and \eqref{eq:arm2}, the solution for the arm separation is directly related to that of the gravitational waveform itself when the amplitudes involved are small. We see precisely this behavior in Figure \ref{fig:SepNumComp}. Note that this is for a very large polymer scale where nonlinear corrections are important; as the polymer scale (and GW amplitude) become smaller as expected for observations of GWs, the two solutions are found to agree, and the perturbative picture is recovered.
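The agreement between the numerical and perturbative treatments in the small-amplitude regime can be illustrated with a toy integration of the linearized arm equation, $\delta\ddot{\xi} = C\,\ddot{\mathcal{A}}(t)$ with $C = \kappa\,\xi_0/\ell^{3/2}$. This is not the paper's actual numerical code (the full Eqs.~\eqref{eq:arm1}--\eqref{eq:arm2} are not reproduced here), and the values of $C$, $A_0$ and $\omega$ are illustrative.

```python
import numpy as np

# Toy check of the perturbative solution (rel:deltaxi1): integrate
#   d^2(delta_xi)/dt^2 = C * d^2 A/dt^2
# with a hand-rolled RK4 and compare with the closed form C * A(t).
# Parameter values are illustrative, not from the paper.

C, A0, omega = 0.3, 1.0, 2.0
A = lambda t: A0 * np.cos(omega * t)
Addot = lambda t: -A0 * omega**2 * np.cos(omega * t)

def rk4_step(t, y, dt):
    # y = (delta_xi, delta_xi_dot); classic fourth-order Runge-Kutta step
    f = lambda s, u: np.array([u[1], C * Addot(s)])
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 1e-3, 10000
t, y = 0.0, np.array([C * A(0.0), 0.0])   # ICs chosen so that v0 = 0
for _ in range(steps):
    y = rk4_step(t, y, dt)
    t += dt

# With v0 = 0 the closed-form solution is delta_xi(t) = C * A(t) exactly.
residual = abs(y[0] - C * A(t))
print(residual < 1e-8)
```

As in Figure \ref{fig:SepNumComp}, the direct integration reproduces the closed-form arm displacement when the amplitudes involved are small.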
\section{How loud polymer effects will be in LISA\label{sec:LISA}}
The Laser Interferometer Space Antenna (LISA) is a proposed space probe for GW signals and will open the mHz band for the exploration of GWs. Sensitivity curves can be used for surveying the types of gravitational systems that can be observed by the LISA mission \cite{amaro2017laser, Moore:2014lga}. Here we use the sensitivity curve to explore the polymer effects in LISA detectors. We will compute the signal-to-noise ratio for simple binary systems and calculate the order of the polymer corrections. We consider sky-averaged sensitivities using the latest quantities and methods described for the design parameters in Refs.~\cite{Larson:1999we, Cornish:2001bb}.
In the frequency domain, the strain induced by the GWs can be written as
\begin{equation}
\bar{h}_{\mathbf{k}}(f) = {\cal R}^+(f)\, \bar{h}_{+, \mathbf{k}}(f) + {\cal R}^\times(f)\, \bar{h}_{\times, \mathbf{k}}(f) \, , \label{rel:strain}
\end{equation}
in which ${\cal R}^+(f)$ and ${\cal R}^\times(f)$ are the detector response functions for each polarization, and $\bar{h}_{+, \mathbf{k}}$, $\bar{h}_{\times, \mathbf{k}}$ are given by Eqs.~\eqref{eq:EoM-eff-E-sol-app2} and \eqref{eq:EoM-eff-A-sol-app2} for each scheme of polymerization, $\mathcal{E}$ or $\mathcal{A}$. The averaged magnitude of the spectral power of the signal in the detector, $\left\langle \bar{h}_{\mathbf{k}}(f) \bar{h}_{\mathbf{k}}^*(f) \right\rangle$, is related to the magnitude of the spectral power of each polarization, $|\bar{h}_{+, \mathbf{k}}(f)|^2$ and $|\bar{h}_{\times, \mathbf{k}}(f)|^2$, as
\begin{eqnarray}\label{averagedh}
\left\langle \bar{h}_{\mathbf{k}}(f) \bar{h}_{\mathbf{k}}^*(f) \right\rangle = {\cal R}(f)\left(|\bar{h}_{+, \mathbf{k}}(f)|^2 +|\bar{h}_{\times, \mathbf{k}}(f)|^2\right),
\end{eqnarray}
where, ${\cal R}(f)$ is the averaged detector response function. The majority of sources that LISA can detect are binary systems with different mass ratios. For simplicity, we consider spinless binary systems with comparable masses \cite{Cornish:2017vip}. In order to plot the dimensionless characteristic strain, given by $h_c(f) = \sqrt{f\;S(f)}$ where $S(f) = 16/5\, f \bar{h}_{\mathbf{k}}^2(f)$, we first need to calculate $S(f)$ using the amplitude of the wave, $\bar{h}_{\mathbf{k}}(f)$, in the frequency domain (given by Eq.~\eqref{rel:strain}). The orbit of the binary system might be inclined relative to the line of sight; the factor $16/5$ comes from averaging over the inclination and polarizations of GWs \cite{Robson:2018ifk}.
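The characteristic-strain definition above reduces to a simple composition of functions; the following sketch evaluates it for made-up frequency-domain amplitudes, only to exhibit the scaling.

```python
import numpy as np

# Sketch of the characteristic strain used in the text:
#   S(f) = (16/5) f |h_k(f)|^2  (16/5 from inclination/polarization averaging),
#   h_c(f) = sqrt(f S(f)).
# The strain values below are illustrative.

def characteristic_strain(f, h_k):
    S = 16.0 / 5.0 * f * h_k**2
    return np.sqrt(f * S)

f = np.logspace(-4, -1, 4)        # mHz band, in Hz
h_k = 1e-19 / np.sqrt(f)          # illustrative frequency-domain amplitude
h_c = characteristic_strain(f, h_k)

# h_c = sqrt(16/5) * f * h_k, so the ratio h_c / (f h_k) is constant:
print(np.allclose(h_c / (f * h_k), np.sqrt(16.0 / 5.0)))
```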
The coalescence of two black holes has three stages: the inspiral (post-Newtonian regime), merger (relativistic regime), and ring-down (relativistic perturbative regime) phases. We only have a theoretical model for the inspiral phase; thus we use this model to compute the effects of the polymerization of GWs during the inspiral phase (a phenomenological template can be found in Ref.~\cite{Ajith:2007qp} for each phase of this gravitational source).
We assume that the source is classical and that it produces GWs with the initial amplitude and the frequency-evolution template of the inspiral phase. Later, through propagation, the GW waveform effectively receives corrections for each polymer quantization scheme from Eqs.~\eqref{eq:EoM-eff-A-sol-app2} and \eqref{eq:EoM-eff-E-sol-app2}. Thus, the initial amplitude will be \cite{Creighton:2011zz}
\begin{eqnarray}
\bar{h}_I &\equiv& 4\; \frac{\left( G{\cal M}/c^3\right)^{5/3}}{D/c} (\pi f_{\rm merg})^{2/3} \, \left(\frac{f}{f_{\rm merg}}\right)^{2/3} , \label{rel:hI}
\end{eqnarray}
in which
\begin{equation}
{\cal M}= (m_1 m_2)^{3/5}/(m_1 + m_2)^{1/5}\,,
\label{eq:phase1}
\end{equation}
and $D$ is the luminosity distance. The transition frequency $f_{\rm merg}$ denotes the beginning of the merger phase \cite{Robson:2018ifk}. The template for the evolution of the frequency, to leading (Newtonian) order, is \cite{Creighton:2011zz}
\begin{equation}
\dot{f} = \frac{96}{5}\, \left( G{\cal M}/c^3\right)^{5/3} \pi^{8/3} f_{\rm merg}^{11/3} \, \left(\frac{f}{f_{\rm merg}}\right)^{11/3} . \label{rel:fdot}
\end{equation}
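Eqs.~\eqref{rel:hI}, \eqref{eq:phase1} and \eqref{rel:fdot} translate directly into code; the sketch below evaluates them for an illustrative equal-mass binary (the masses, distance and frequency are stand-ins, not the systems analyzed in this work).

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

# Sketch of Eqs. (rel:hI), (eq:phase1) and (rel:fdot); source parameters
# below are illustrative.

def chirp_mass(m1, m2):
    # Eq. (eq:phase1)
    return (m1 * m2)**0.6 / (m1 + m2)**0.2

def h_initial(mchirp, D, f):
    # Eq. (rel:hI); the f_merg factors cancel, leaving (pi f)^{2/3}.
    return 4.0 * (G * mchirp / c**3)**(5.0 / 3.0) * (np.pi * f)**(2.0 / 3.0) * c / D

def f_dot(mchirp, f):
    # Eq. (rel:fdot), leading Newtonian order; f_merg factors cancel likewise.
    return 96.0 / 5.0 * (G * mchirp / c**3)**(5.0 / 3.0) * np.pi**(8.0 / 3.0) * f**(11.0 / 3.0)

m = 1e6 * M_sun                      # illustrative equal-mass MBHB
mc = chirp_mass(m, m)
print(np.isclose(mc, m / 2**0.2))    # equal masses: Mchirp = m / 2^{1/5}
```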
According to the effective solutions \eqref{eq:EoM-eff-A-sol-app2} and \eqref{eq:EoM-eff-E-sol-app2}, the phenomenological waveform for propagating GWs will be,
\begin{equation}
\bar{h}_{\mathbf{k}}(t) = \bar{h}^{\rm eff}_{1} \cos \left(\phi^{\rm eff}(t)\right) + \bar{h}^{\rm eff}_{2} \cos \left(3\phi^{\rm eff}(t)\right), \label{rel:waveformeff}
\end{equation}
where the phase of the waveform evolves approximately as $\phi^{\rm eff} (t) \simeq 2 \pi(f t + \tfrac{1}{2} \dot{f} t^2 + {\cal O}(t^3))$.
As stated before, we assume that the classical source produces the initial amplitude of the propagating wave and determines the dynamics of its phase, and that the wave receives polymer corrections through propagation in spacetime. This is because the EoMs for GWs are homogeneous (source-free) equations and only describe the propagation of the waves, not their production. Consequently, we assume that the waves are produced by distant sources and that their dynamics receives corrections as they travel through quantized spacetime. Thus we use \eqref{rel:hI} as the initial amplitude and \eqref{rel:fdot} as the frequency evolution of the source during the inspiral phase.
Using these conditions in the solutions \eqref{eq:EoM-eff-E-sol-app2} or \eqref{eq:EoM-eff-A-sol-app2} we compute the polymer corrections in GWs produced by the two massive black hole binaries (MBHBs).
The waveform \eqref{rel:waveformeff} and the expected order of the polymer corrections are shown in Fig.~\ref{fig:ht}. We replaced the wave number $k$ with the frequency $f$ in the solution \eqref{eq:EoM-eff-E-sol-app2}, using the dispersion relation \eqref{rel:disperionE}. Figs.~\ref{fig:WForm} and \ref{fig:ft} depict, respectively, the waveform \eqref{rel:waveformeff} and the frequency evolution \eqref{rel:fdot}. Figs.~\ref{fig:WFPolymerE} and \ref{fig:WFPolymerA} demonstrate the general behavior of the difference functions $\delta h^{\cal E}(t)$ and $\delta h^{\cal A}(t)$. As the chirp continues, the corrections in the polymer $\cal E$ scheme amplify more than those in the polymer $\cal A$ scheme. This point is specifically illustrated in Figs.~\ref{fig:WFPolymerELog} and \ref{fig:WFPolymerALog}: the amplitudes of the absolute values of the difference functions $\delta h^{\cal E}(t)$ and $\delta h^{\cal A}(t)$ increase over time, at different rates for polymer $\mathcal{A}$ and $\mathcal{E}$. To see how loud the polymer corrections will be in LISA, we need to Fourier transform the solutions \eqref{eq:EoM-eff-A-sol-app2} and \eqref{eq:EoM-eff-E-sol-app2}.
After a few steps we get
\begin{equation}
{\cal F} \left(\bar{h}_{+,\mathbf{k}}(t) \right) = \bar{h}_{+,\mathbf{k}}(f) = \frac{\bar{h}_I(1-\delta)}{2 \sqrt{\dot{f}}} - \frac{\bar{h}_I \delta}{4 \sqrt{3\dot{f}}} ,\label{fourierEA}
\end{equation}
where $\delta$ stands for the correction provided by each of the two polymer schemes:
\begin{equation}
\delta^{({\cal E})} \equiv \frac{\bar{h}_I ^2 \bar{\mu}^2 k^2}{32 \hbar^2} \quad \text{and} \quad \delta^{({\cal A})} \equiv \frac{\bar{h}_I ^2 \bar{\nu}^2}{96 \hbar^2}.
\end{equation}
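The relative size of the two correction factors follows immediately from their definitions; the sketch below evaluates them and Eq.~\eqref{fourierEA}, with illustrative values for $\bar{h}_I$, $k$ and the rescaled polymer scales.

```python
# Sketch of the polymer correction factors entering Eq. (fourierEA).
# Values of h_I, k and the rescaled polymer scales are illustrative.

def delta_E(h_I, k, mu_bar_over_hbar):
    return h_I**2 * mu_bar_over_hbar**2 * k**2 / 32.0

def delta_A(h_I, nu_bar_over_hbar):
    return h_I**2 * nu_bar_over_hbar**2 / 96.0

def h_fourier(h_I, fdot, delta):
    # Eq. (fourierEA)
    return h_I * (1.0 - delta) / (2.0 * fdot**0.5) - h_I * delta / (4.0 * (3.0 * fdot)**0.5)

h_I, k, scale = 1e-15, 2.0, 1e10   # scale plays the role of mu_bar/hbar = nu_bar/hbar
ratio = delta_E(h_I, k, scale) / delta_A(h_I, scale)
print(abs(ratio - 3.0 * k**2) < 1e-9)   # delta_E/delta_A = 3 k^2 when mu_bar = nu_bar
```

The ratio $\delta^{({\cal E})}/\delta^{({\cal A})} = 3k^2$ (for $\bar{\mu}=\bar{\nu}$) is the same $3k^2$ enhancement of the $\mathcal{E}$-scheme attenuation noted in the previous section.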
Now we can use Eq.~\eqref{fourierEA} to plot the effective strain spectral density of equal mass black hole inspiral binaries in contrast to the sensitivity curve of LISA (to understand how the sensitivity curve of LISA is calculated see Ref.~\cite{Robson:2018ifk}; here, we have used their expression and repository to calculate the LISA curve).
Fig.~\ref{fig:CS} shows the characteristic strain and the order of polymer corrections in LISA for four equal mass black hole binaries at two redshifts $z = 0.3$ and $z=0.03$, in $\cal E$ and $\cal A$ polymerization schemes.
We should note that the characteristic strains in this figure depict only the inspiral phase of the binary mergers; there are two other phases after this particular phase. In this figure, a comparison between the two polymerization schemes through the inspiral period shows that the polymer correction in the $\cal E$ scheme is amplified much more than the one in the $\cal A$ scheme, which implies that the $\cal E$ scheme has more capacity to be observed by LISA. In these analyses we have considered the largest possible values for the polymer parameters $\mu$ and $\nu$; values larger than those reported here would generate waveforms with considerably different shapes and amplitudes, while the corrections induced by the polymer effects are expected to be minuscule compared to the classical waveform. For localized sources of gravitational waves (e.g., black hole binaries), the scale $\ell$ can be set to the characteristic length of the given gravitational system.
In this regard, we have chosen $\ell = 10^{13}$~m for the gravitational mode decompositions \eqref{eq:lambda-tot}; we comment further on this point in the discussion section.
For the numerical analysis performed in this section, we need to know the mass and the distance of the binary system beforehand. Usually, GW detectors are used to find these parameters; if we want to compute analytically the expected strains of the classical and polymer-corrected GWs and compare them with the detector's observations, other indirect methods can be used to find these parameters, e.g., pulsar timing arrays \cite{Rosado:2015epa, Jenet:2003ew} or integral-field spectrographs \cite{Voggel:2022alp}.
\section{Discussion and Conclusions\label{sec:Discussion}}
GWs are the newest and one of the most important of the multi-messengers currently used to explore relativistic and quantum gravitational phenomena. These waves can potentially carry information about, among other things, early cosmology and the fine structure of spacetime.
In this manuscript we have considered a model of quantum GWs in the polymer quantization scheme \citep{Garcia-Chung:2020zyq}, and have studied some of the consequences it may have for data taken by GW observatories, particularly LISA. Using the geodesic deviation equation, we have computed the modification to the classical dynamics and the displacement of the detector arms of a GW observatory in such a model. We have studied both the fully nonperturbative (numerical) and the analytical perturbative solutions to this modified dynamics. These in turn yield information about the modification to the GW strain registered by GW observatories. We find that this model leads to a modification of the amplitude, frequency and speed of propagation of the waves. These effects can be detectable by LISA if the polymer parameters exceed certain minimum values.
We also studied the coalescing waveform of two inspiral equal-mass black holes. By analyzing the strain of this binary system in Fourier (frequency) space, we have estimated the values of the polymer parameters needed for LISA to observe the new physics corresponding to this quantum GW model. Numerical investigations in Sec.~\ref{sec:LISA} show that there is a possibility of observing polymer corrections under certain conditions. Fig.~\ref{fig:CS} shows that closer binary systems with larger masses have more potential for the observation of corrections induced by the different schemes of polymer quantization. The analysis performed in Sec.~\ref{sec:LISA} demonstrates that the minimum detectable values for the polymer scales $\mu$ and $\nu$ in the given settings are $10^{-50}\, \rm m^{1/2}$ and $10^{-58}\, \rm m^{-1/2}$ in natural units, respectively. It should be emphasized that these values are found based on the presumption that polymer effects are sub-leading-order effects and that the overall shape of the waveform should remain very close to the classical one. Thus, the reported values are the largest possible values of $\mu$ and $\nu$ for a localized gravitational system with a given scale $\ell$ in which the polymer effects can be considered as corrections to the classical prediction in the given settings. According to the sensitivity range of the LISA and LIGO/Virgo detectors for the frequency of GWs, values in the range $10^{9}\,{\rm m}$--$10^{13}\,{\rm m}$ for the cutoff $\ell$ are acceptable when considering BHBs or supermassive BHBs. Choosing $\ell$ in this interval results in values in the ranges $10^{-44}\,{\rm m^{1/2}}$--$10^{-50}\,{\rm m^{1/2}}$ and $10^{-52}\,{\rm m^{-1/2}}$--$10^{-58}\,{\rm m^{-1/2}}$ for the polymer scales $\mu$ and $\nu$, respectively.
While cosmological sources of gravitational waves can produce waves with wavelengths up to the present cosmological horizon (or, for primordial gravitational waves, up to the size of the last-scattering surface), these wavelengths are most likely well beyond the reach of any direct detectors in the near future. The scenario in the present work is not cosmological, so the Hubble scale is not relevant for our context. Thus, in our setting it is physically reasonable to ignore such wavelengths or absorb them into the homogeneous background. Here we are dealing with localized gravitational systems, i.e., binaries. In the context of cosmological sources (like primordial GWs), we need to set $\ell$ to the Hubble scale; this is what we will do in an upcoming paper in the context of inflation.
Colored curves in Fig.~\ref{fig:CS} depict only the inspiral phase of the binary merger; this means that even if the polymer corrections in the inspiral phase of the binary (dashed colored lines) are not in the sensitivity range of LISA, they may come into the detection range during the merger phase, due to the amplification of the amplitudes, especially in the polymer $\cal{E}$ scheme, in which the corrections increase more with frequency (compare the slopes of the dashed lines of Figs.~\ref{fig:CSE1} and \ref{fig:CSE2} with those in Figs.~\ref{fig:CSA1} and \ref{fig:CSA2}).
In a future work, we will put a stricter bound on the polymer parameters by performing a statistical analysis considering the data points from LIGO and comparing them with our theoretical results. One can also use the present results to compute the power spectrum of the radiation; we will pursue this in our next project.
\section*{ACKNOWLEDGMENTS}
Y.T. acknowledges the Research deputy of the University of Guilan for financial support.
This article is based upon work from the Action CA18108 -- Quantum gravity phenomenology
in the multi-messenger approach -- supported by the COST (European Cooperation in Science
and Technology). S. R. acknowledges the support of the Natural Science and Engineering Research Council of Canada, funding
reference numbers RGPIN-2021-03644 and DGECR-2021-00302.
\appendix
\section{Geodesic deviation}
Let us use the equations (\ref{def:action-Masses-2}) and (\ref{def:metric-fermi}) to derive the action for the geodesic deviation. According to these two equations, the action for the geodesic deviation takes the form
\begin{eqnarray}
S = - m \int dt \, \left\{ - g_{00}\left( \xi^j \right) - 2 g_{0i}\left( \xi^j \right) \dot{\xi}^i - g_{jk}\left( \xi^l \right) \dot{\xi}^j \dot{\xi}^k \right\}^{1/2}, \label{GDAction}
\end{eqnarray}
\noindent where the components of the metric tensor are given by (\ref{def:metric-fermi}) and, at second order in $\xi^j$, read
\begin{eqnarray}
g_{00}\left( \xi^j \right) &=& 1 + R_{0i0j} \xi^i \, \xi^j, \label{MC1}\\
g_{0i}\left( \xi^j \right) &=& \frac{4}{3} R_{0jik} \xi^j \, \xi^k, \label{MC2} \\
g_{ij}\left( \xi^j \right) &=& - \delta_{ij} + \frac{1}{3} R_{ikjl} \xi^k \, \xi^l. \label{MC3}
\end{eqnarray}
Replacing these coefficients in (\ref{GDAction}) yields
\begin{eqnarray}
S = - m \int dt \, \left\{ \left[ - 1 + R_{0i0j} \xi^i \, \xi^j + \left( \dot{\xi^i}\right)^2 \right] + {\cal O}(3, \xi) \right\}^{1/2}, \label{GDAction2}
\end{eqnarray}
\noindent where the Riemann coefficient term takes the form
\begin{equation}
R_{0i0j} = - \frac{1}{2} \ddot{h}_{ij}(t,0).
\end{equation}
Assuming the analysis yields the action given in (\ref{GDAction-Ali}), let us continue from it and move to the extended phase-space consideration. In other words, let us consider an action of the form
\begin{equation}
S = \int dt \left[ \frac{m}{2} (\dot{\xi}^i)^2 + \frac{m}{4} \ddot{h}_{ij}(t)\xi^i \, \xi^j \right]. \label{GDAction3}
\end{equation}
This action can be written as
\begin{eqnarray}
S &=& \int dt \left\{ \frac{m}{2} \left[ (\dot{\xi}^1)^2 + (\dot{\xi}^2)^2 \right] + \frac{m}{4} \left[ \ddot{h}_{11}(t) (\xi^1)^2 + 2 \ddot{h}_{12}(t)\xi^1 \, \xi^2 - \ddot{h}_{11}(t) (\xi^2)^2 \right] \right\}. \nonumber \\ \label{GDAction4}
\end{eqnarray}
\section{Canonical transformations}
In terms of new variables given by Eq.~(\ref{variables-new}), the action (\ref{GDAction-Ali}) takes the form
\begin{equation}
S_{\xi} \simeq \int_{\gamma_{B}} dt \left[ \frac{m}{2} (\dot{\xi}^{i})^2 + \frac{m \kappa}{2\ell^{3/2}} \sum_{\lambda, \mathbf{k}} \ddot{\mathcal{A}}_{\lambda,\mathbf{k}}(t)\, e^{\lambda}_{jk}\, \xi^j \xi^k \right]. \label{GDAction-Ali1}
\end{equation}
or
\begin{align}
S_{\xi} &\simeq \int_{\gamma_{B}} dt \left[ \frac{m}{2} (\dot{\xi}^{i})^2 + \frac{m \kappa}{2\ell^{3/2}} \sum_{\mathbf{k}} \left(\ddot{\mathcal{A}}_{+,\mathbf{k}}\, e^{+}_{jk}\, \xi^j \xi^k
+ \ddot{\mathcal{A}}_{\times,\mathbf{k}}\, e^{\times}_{jk}\, \xi^j \xi^k\right) \right] \nonumber \\
& =
\int_{\gamma_{B}} dt \left[ \frac{m}{2} (\dot{\xi}^{1})^2 + \frac{m}{2} (\dot{\xi}^{2})^2 +
\frac{m \kappa}{2\ell^{3/2}} \sum_{\mathbf{k}} \left(\ddot{\mathcal{A}}_{+,\mathbf{k}}\, (\xi^1)^2 - \ddot{\mathcal{A}}_{+,\mathbf{k}}\,(\xi^2)^2
+ 2\ddot{\mathcal{A}}_{\times,\mathbf{k}}\, \xi^1 \xi^2\right) \right] \nonumber \\
& =
\int_{\gamma_{B}} dt \left[P_{\xi^1}\dot{\xi}^1 + P_{\xi^2}\dot{\xi}^2 - \left(\frac{P_{\xi^1}^2}{2m} + \frac{P_{\xi^2}^2}{2m}\right) -
\frac{m \kappa}{2\ell^{3/2}} \left(\ddot{\mathcal{A}}_{+}\, (\xi^1)^2 - \ddot{\mathcal{A}}_{+}\,(\xi^2)^2
+ 2\ddot{\mathcal{A}}_{\times}\, \xi^1 \xi^2\right) \right],
\label{action-arm2}
\end{align}
where,
\begin{equation}
\dot{\xi}^i = \frac{P_{\xi^i}}{m}\, ,
\end{equation}
and we have defined
\begin{equation}
\mathcal{A}_{\lambda} \equiv \sum_{\mathbf{k}} \mathcal{A}_{\lambda, \mathbf{k}}.
\end{equation}
By employing the time-dependent canonical transformation
\begin{align}
\left(\begin{array}{c} \xi^1 \\ \xi^2 \end{array} \right) = \left(\begin{array}{cc} P_{11} & P_{12} \\ P_{21} & P_{22} \end{array} \right) \left(\begin{array}{c} \chi^1 \\ \chi^2 \end{array} \right), \quad
\left(\begin{array}{c} p_{\xi^1} \\ p_{\xi^2} \end{array} \right) = \left(\begin{array}{cc} P_{11} & P_{21} \\ P_{12} & P_{22} \end{array} \right) \left(\begin{array}{c} P_{\chi^1} \\ P_{\chi^2} \end{array} \right),
\end{align}
with the canonical map $\mathsf{P}$:
\begin{align}
\mathsf{P} = \left(\begin{array}{cc} P_{11} & P_{12} \\ P_{21} & P_{22} \end{array} \right) = \frac{1}{\sqrt{2 \gamma}\, \ddot{\mathcal{A}}_{\times}} \left(\begin{array}{cc} (\gamma + \ddot{\mathcal{A}}_{+}) \sqrt{\gamma - \ddot{\mathcal{A}}_{+}} & (\gamma - \ddot{\mathcal{A}}_{+}) \sqrt{\gamma + \ddot{\mathcal{A}}_{+}} \\ (\gamma - \ddot{\mathcal{A}}_{+}) \sqrt{\gamma + \ddot{\mathcal{A}}_{+}} & -(\gamma + \ddot{\mathcal{A}}_{+}) \sqrt{\gamma - \ddot{\mathcal{A}}_{+}} \end{array} \right),
\end{align}
where, $\gamma(t)$ is given by
\begin{align}
\gamma(t) = \sqrt{\ddot{\mathcal{A}}_{+}^2 + \ddot{\mathcal{A}}_{\times}^2}\, ,
\end{align}
we can rewrite the second term in the bracket in the action (\ref{action-arm2}) as
\begin{align}
\ddot{\mathcal{A}}_{+}\, (\xi^1)^2
+ 2\ddot{\mathcal{A}}_{\times}\, \xi^1 \xi^2 - \ddot{\mathcal{A}}_{+}\,(\xi^2)^2
&=
\begin{pmatrix}
\xi^1 & \xi^2
\end{pmatrix}
\begin{pmatrix}
\ddot{\mathcal{A}}_{+} & \ddot{\mathcal{A}}_{\times} \vspace{1mm} \\
\ddot{\mathcal{A}}_{\times} & -\ddot{\mathcal{A}}_{+}
\end{pmatrix}
\begin{pmatrix}
\xi^1 & \xi^2
\end{pmatrix}^{\rm T} \nonumber \\
&=: \begin{pmatrix}
\chi^1 & \chi^2
\end{pmatrix}
\begin{pmatrix}
\gamma & 0 \\
0 & -\gamma
\end{pmatrix}
\begin{pmatrix}
\chi^1 & \chi^2
\end{pmatrix}^{\rm T}.
\end{align}
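The diagonalization above can be checked numerically: for any values of $\ddot{\mathcal{A}}_{+}$ and $\ddot{\mathcal{A}}_{\times}$, the map $\mathsf{P}$ is orthogonal and symmetric and brings the quadratic form to $\mathrm{diag}(\gamma,-\gamma)$. The sketch below verifies this for arbitrary illustrative values (not taken from any waveform in this work).

```python
import numpy as np

# Numerical check that the canonical map P diagonalizes
#   M = [[A+, Ax], [Ax, -A+]]  as  P^T M P = diag(gamma, -gamma),
# with gamma = sqrt(A+^2 + Ax^2).  a, x below are illustrative stand-ins
# for \ddot{A}_+ and \ddot{A}_x.

a, x = 0.7, 1.3
gamma = np.hypot(a, x)

P = (1.0 / (np.sqrt(2.0 * gamma) * x)) * np.array([
    [(gamma + a) * np.sqrt(gamma - a), (gamma - a) * np.sqrt(gamma + a)],
    [(gamma - a) * np.sqrt(gamma + a), -(gamma + a) * np.sqrt(gamma - a)],
])

M = np.array([[a, x], [x, -a]])
D = P.T @ M @ P

# P is orthogonal and symmetric (P = P^T = P^{-1}), consistent with the
# momentum transformation being canonical.
print(np.allclose(D, np.diag([gamma, -gamma])) and np.allclose(P @ P, np.eye(2)))
```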
Now, in terms of the new canonical conjugate variables, i.e., $(\chi^i, P_{\chi^i})$ (it can be checked that $\{\xi^i, p_{\xi^i}\}=1=\{\chi^i, P_{\chi^i}\}$), the action (\ref{action-arm2}) becomes
\begin{align}
S_{\chi}
&=
\int_{\gamma_{B}} dt \left[ P_{\chi^1}\dot{\chi}^1 + P_{\chi^2}\dot{\chi}^2 - \left(\frac{(P_{\chi^1})^2}{2m} + \frac{(P_{\chi^2})^2}{2m}\right) -
\frac{m \kappa}{2\ell^{3/2}} \gamma(t)\Big((\chi^1)^2 - (\chi^2)^2\Big) \right] + \text{B.T.} \nonumber\\
&=: \int_{\gamma_{B}} dt \left[\left(P_{\chi^1}\dot{\chi}^1 - H_1\right) + \left(P_{\chi^2}\dot{\chi}^2 - H_2\right)\right] + \text{B.T.},
\label{HamilGDAction3a}
\end{align}
where,
\begin{align}
H_1 &:= \frac{(P_{\chi^1})^2}{2m} + \frac{m \kappa}{2\ell^{3/2}} \gamma(t)(\chi^1)^2, \\
H_2 & := \frac{(P_{\chi^2})^2}{2m} - \frac{m \kappa}{2\ell^{3/2}} \gamma(t)(\chi^2)^2.
\end{align}
It turns out that the action (\ref{HamilGDAction3a}) describes two decoupled time-dependent harmonic oscillators, with Hamiltonians $H_1$ and $H_2$.
To obtain time-independent harmonic oscillators, we have to move to the extended phase-space formalism. Of course, this has to be done for each oscillator independently of the other.
Now, we look for the equations of motion for the arm lengths and solve them:
\begin{align}
\dot{\chi}^1 &= \frac{\partial H_1}{\partial P_{\chi^1}} = \frac{1}{m} P_{\chi^1} , \qquad \dot{P}_{\chi^1} = - \frac{\partial H_1}{\partial {\chi^1}} = - \frac{m \kappa}{\ell^{3/2}} \gamma(t) \, \chi^1, \\
\dot{\chi}^2 &= \frac{\partial H_2}{\partial P_{\chi^2}} = \frac{1}{m} P_{\chi^2} , \qquad \dot{P}_{\chi^2} = - \frac{\partial H_2}{\partial {\chi^2}} = \frac{m \kappa}{\ell^{3/2}} \gamma(t) \, \chi^2 .
\end{align}
The second-order equations are given by
\begin{align}
\ddot{\chi}^1 = - \frac{ \kappa}{\ell^{3/2}} \gamma(t) \, \chi^1, \qquad \ddot{\chi}^2 = \frac{ \kappa}{\ell^{3/2}} \gamma(t) \, \chi^2.
\end{align}
\bibliography{References}
|
Title:
GRB 171205A: Hypernova and Newborn Neutron Star |
Abstract: GRB 171205A is a low-luminosity, long-duration gamma-ray burst (GRB)
associated with SN 2017iuk, a broad-line type Ic supernova (SN). It is
consistent with being formed in the core-collapse of a single CO star, or in a
widely separated binary, which we have called the Binary driven Hypernova
(BdHN) of type III. The core-collapse of the CO star forms a newborn NS
($\nu$NS) and the SN explosion. Fallback accretion transfers mass and angular
momentum to the $\nu$NS. The accretion energy injected into the expanding
stellar layers powers the prompt emission. The multiwavelength power-law
afterglow is explained by the synchrotron radiation of electrons in the SN
ejecta, powered by energy injected by the spinning $\nu$NS. We calculate the
amount of mass and angular momentum gained by the $\nu$NS, as well as the
$\nu$NS rotational evolution. The $\nu$NS spins up to a period of $58$ ms, then
releases its rotational energy powering the synchrotron emission of the
afterglow. The paucity of the $\nu$NS spin explains the low-luminosity
characteristic and that the optical emission of the SN from the nickel
radioactive decay outshines the optical emission from the synchrotron
radiation. From the $\nu$NS evolution, we infer that the SN explosion had to
occur at most $7.36$ h before the GRB trigger. Therefore, for the first time,
the analysis of the GRB data leads to the time of occurrence of the associated
SN explosion, setting a stringent delay time between the neutrino emission
associated with the SN and the electromagnetic emission of the GRB event.
| https://export.arxiv.org/pdf/2208.02725 |
\title{GRB 171205A: Hypernova and Newborn Neutron Star}
\author{Yu~Wang}
\affiliation{ICRA, Dip. di Fisica, Universit\`a di Roma ``La Sapienza'', Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\affiliation{ICRANet, Piazza della Repubblica 10, I-65122 Pescara, Italy}
\affiliation{INAF -- Osservatorio Astronomico d'Abruzzo,Via M. Maggini snc, I-64100, Teramo, Italy}
\author{L.~M.~Becerra}
\affiliation{Escuela de F\'isica, Universidad Industrial de Santander, A.A.678, Bucaramanga, 680002, Colombia }
\affiliation{ICRANet, Piazza della Repubblica 10, I-65122 Pescara, Italy}
\author{C.~L.~Fryer}
\affiliation{Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA}
\affiliation{Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, NM, 87545, USA}
\affiliation{The University of Arizona, Tucson, AZ 85721, USA}
\affiliation{Department of Physics and Astronomy, The University of New Mexico, Albuquerque, NM 87131, USA}
\affiliation{The George Washington University, Washington, DC 20052, USA}
\author{J.~A.~Rueda}
\affiliation{ICRA, Dip. di Fisica, Universit\`a di Roma ``La Sapienza'', Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\affiliation{ICRANet, Piazza della Repubblica 10, I-65122 Pescara, Italy}
\affiliation{ICRANet-Ferrara, Dip. di Fisica e Scienze della Terra, Universit\`a degli Studi di Ferrara, Via Saragat 1, I-44122 Ferrara, Italy}
\affiliation{Dip. di Fisica e Scienze della Terra, Universit\`a degli Studi di Ferrara, Via Saragat 1, I-44122 Ferrara, Italy}
\affiliation{INAF, Istituto di Astrofisica e Planetologia Spaziali, Via Fosso del Cavaliere 100, 00133 Rome, Italy}
\author{R.~Ruffini}
\affiliation{ICRA, Dip. di Fisica, Universit\`a di Roma ``La Sapienza'', Piazzale Aldo Moro 5, I-00185 Roma, Italy}
\affiliation{ICRANet, Piazza della Repubblica 10, I-65122 Pescara, Italy}
\affiliation{INAF,Viale del Parco Mellini 84, 00136 Rome, Italy}
\email{[email protected], [email protected], \\[email protected], [email protected], [email protected]}
\keywords{gamma-ray bursts: general -- black hole physics -- pulsars}
\section{Introduction} %
\label{sec:introduction}
Swift-BAT triggered and located GRB 171205A at $07:20:43$ UT on December $5$, $2017$. Swift-XRT began observing $144.7$~s after the BAT trigger \citep{2017GCN.22177....1D}. Soon after, \citet{2017GCN.22178....1I} found that the burst was located in a nearby galaxy at redshift $z=0.0368$, which was later confirmed by the VLT/X-shooter telescope \citep{2017GCN.22180....1I}. About $5$ d later, the associated type Ic supernova (SN) started to emerge and was detected by the $10.4$-m GTC telescope \citep{2017GCN.22204....1D} and the SMARTS $1.3$-m telescope \citep{2017GCN.22192....1C}.
This source has gained much observational attention since it was the third nearest GRB at the time of its discovery. \citet{2018A&A...619A..66D} performed a multi-wavelength analysis of GRB 171205A using data from the Swift and Konus-Wind satellites, covering the optical to sub-MeV energies. Their cutoff power-law fit gives a peak energy of $125$~keV and an isotropic energy of $2.18 \times 10^{49}$~erg, which implies that this burst is a low-luminosity GRB and an outlier of the Amati relation. \citet{2018ApJ...867..147W} reported the spectroscopic observation of the SN associated with the GRB, SN 2017iuk, and of the host galaxy. These observations showed that SN 2017iuk is a typical type Ic SN resembling SN 2006aj, and that the host is an early-type, star-forming galaxy of high mass, low star formation rate, and sub-solar metallicity. In this source, polarization in the millimeter and radio bands was observed for the first time during the afterglow phase, thanks to the intensive combined use of SMA, ALMA, and VLA; the measured linear polarization of $<1\%$ is indicative of Faraday depolarization \citep{2019ApJ...884L..58U, 2020ApJ...895...64L}. Observations continued for years: the ASKAP, ATCA, and $\rm \mu$GMRT radio monitoring lasted until $\sim 1000$~d, during which the radio afterglow decayed following a shallow power-law with no jet break \citep{2021MNRAS.503.1847L, 2021ApJ...907...60M}. Figure \ref{fig:LCGRB171205A} shows the multiwavelength light curve of GRB 171205A.
The origin of low-luminosity GRBs is still an open debate, and some interpretations include that these are bursts observed off-axis \citep{2004ApJ...602..886W,2006Natur.442.1014S,2006ApJ...638..930S,2016MNRAS.461.1568K,2019ApJ...871..123F,2020A&A...639L..11I}, shockwave breakout from the progenitor's shell \citep{2006Natur.442.1008C,2016MNRAS.460.1680I,2007MNRAS.375..240L,2008Natur.453..469S}, and emission from a jet-heated cocoon \citep{2015ApJ...807..172N,2017Sci...358.1559K,2018MNRAS.479..588G}. GRB 171205A, as a low-luminosity GRB at a low redshift, provides a testing ground for these theoretical models. \citet{2019Natur.565..324I} found thermal X-ray and optical emissions radiated from material whose velocity evolves from $\sim 0.3~c$ to $0.1~c$ in the first $7$ d, and with a chemical composition that differs from that of SN 2017iuk, which has a lower velocity ($<0.1~c$) as evidenced by the spectroscopic analysis. They proposed that the high-velocity material is a portion of the accelerated cocoon, which becomes transparent at $\sim 7$ d, after which the SN dominates the optical emission. \citet{2022ApJ...925..148S} performed hydrodynamic simulations of a powerful jet penetrating the progenitor star and showed that jet-induced chemical mixing can lead to the observed chemical composition of the high-velocity material. \citet{2021ApJ...907...60M} analyzed GRB 171205A with the shockwave breakout and the canonical off-axis jet models and showed that both are inconsistent with the $1000$~d observations. Compared to the observation, the shockwave breakout model predicts a longer duration, a lower peak energy, and requires a higher column density. Moreover, the radius ($\sim 10^{13}$~cm) derived from the thermal component is too large for a typical progenitor.
For the off-axis model, the discrepancies arise because the burst does not exhibit the expected off-axis properties, such as a low peak energy, a luminosity increasing during the afterglow, and a frequency-independent break in the light curve \citep{2018A&A...619A..66D}. There are alternative models; e.g., \citet{2019ApJ...870...38S} modeled the burst as mildly relativistic spherical ejecta interacting with an ambient wind-like medium, producing forward and reverse shocks and forming a thin shell. In their model, the prompt gamma-ray and X-ray emissions are produced when the optical depth of the shell reaches transparency; subsequently, the radio and X-ray emissions are produced in the shock fronts by synchrotron and inverse Compton processes. They claimed that this model can fit the prompt luminosity and duration, as well as the late-time X-ray, optical, and radio light curves.
Therefore, a satisfactory explanation of the multiwavelength data and time evolution of GRB 171205A remains an open issue. In this work, we analyze this source from the perspective of the binary-driven hypernova (BdHN) model of long GRBs. The progenitor of the GRB in the BdHN model is a binary system composed of a carbon-oxygen (CO) star and a neutron star (NS) companion. Numerical simulations of the sequence of physical processes occurring in a BdHN have been performed over the last decade and have led to a detailed picture and interpretation of the GRB observables \citep[see, e.g.,][]{2012ApJ...758L...7R, 2012A&A...548L...5I, 2014ApJ...793L..36F, 2015PhRvL.115w1102F, 2015ApJ...812..100B, 2016ApJ...833..107B, 2018ApJ...852...53R, 2019ApJ...871...14B}. The core-collapse of the CO star leads to the formation of a newborn NS ($\nu$NS) at its center and ejects the outer layers of the star in a SN explosion. The ejecta accrete onto the NS companion and, due to matter fallback, there is also accretion onto the $\nu$NS. Both accretion processes are hypercritical (i.e., highly super-Eddington) owing to the activation of a very efficient neutrino emission \citep{2016ApJ...833..107B, 2018ApJ...852..120B}. For orbital periods of a few minutes, the NS companion reaches the critical mass for gravitational collapse, leading to a Kerr black hole (BH). These BdHNe are called type I (BdHN I). BdHN I explain the energetic GRBs with isotropic energies $\gtrsim 10^{52}$ erg. The accretion processes are observed as precursors of the prompt emission \citep[see, e.g.,][]{2019ApJ...874...39W}. The gravitomagnetic interaction of the newborn Kerr BH with the surrounding magnetic field induces an electric field. For a sufficiently supercritical magnetic field, the electric field also becomes supercritical, leading to an electron-positron ($e^+e^-$) pair plasma.
The self-acceleration of this plasma to Lorentz factors $\Gamma \sim 100$ and its transparency explain the ultra-relativistic prompt emission (UPE) phase \citep[see][and references therein]{2021PhRvD.104f3043M}. The electric field accelerates electrons to ultra-relativistic energies, leading to synchrotron radiation that explains the observed GeV emission \citep{2019ApJ...886...82R, 2020EPJC...80..300R, 2021A&A...649A..75M, 2022ApJ...929...56R}. There is an additional synchrotron radiation process by relativistic electrons in the ejecta expanding in the $\nu$NS magnetic field. The $\nu$NS also injects energy into the ejecta. This synchrotron radiation explains the afterglow emission in the X-rays, optical, and radio wavelengths \citep[see, e.g.,][]{2018ApJ...869..101R, 2019ApJ...874...39W, 2020ApJ...893..148R}. Finally, the energy released by nickel decay (into cobalt) in the SN ejecta powers the bump observed in the optical in the late afterglow.
For longer orbital periods, the NS companion does not reach the critical mass, so it remains a massive, fast-rotating NS. These BdHNe are called type II (BdHN II). BdHN II explain the less energetic GRBs with isotropic energies $\lesssim 10^{52}$ erg. The physical processes and related observables associated with the presence of the BH (e.g., the UPE and the GeV emission) are clearly not observed in BdHN II. The synchrotron afterglow in the X-rays, optical, and radio wavelengths, instead, is present in both BdHN I and II because it is powered by the $\nu$NS and the SN ejecta. When considering less and less energetic GRBs, a natural question arises in the BdHN picture: do these sources originate from binaries with very long orbital periods, or from a single exploding CO star? From the practical point of view, the BdHN model predicts that for very long orbital periods, the effects associated with the presence of the binary companion become observationally irrelevant. Therefore, the source can show up as a single exploding star, and there is no observable that can discriminate the presence of a binary companion. On the other hand, we expect binaries with very long orbital periods to be disrupted by the SN explosion \citep[see, e.g.,][and references therein]{2015PhRvL.115w1102F}. Under the above circumstances, and in view of the isotropic energy of only a few $10^{49}$ erg, we here model GRB 171205A as originating in the core-collapse SN of a single CO star. We shall call these low-luminosity sources, with energies $\lesssim 10^{49}$--$10^{50}$ erg, BdHNe III.
In Sec. \ref{sec:2}, we analyze the Swift observations and fit the time-resolved spectra using the MCMC method, then generate the light curves for the prompt emission and afterglow, shown in Figs. \ref{fig:LCGRB171205A} and \ref{fig:spectrumXRTBAT}. The special feature of this burst is the presence of a thermal component in the early afterglow, where the temperature drops from about $90$~eV to $70$~eV in the first $300$~s. In Sec. \ref{sec:3}, we describe the physical picture of this burst and suggest that this low-luminosity burst originates from a strong SN (or a hypernova). The fallback accretion after the SN collapse heats up the SN ejecta, accelerating its outermost layer to mildly relativistic velocities, and the heated ejecta emits thermal radiation. This process is similar to the cocoon model, but the opening angle for the energy release of the fallback accretion is much larger than that of a traditional jet. This large opening angle is consistent with the absence of a jet-break signal in the afterglow. Meanwhile, the fallback accretion spins up the central NS, which in turn injects energy to power the afterglow by losing its rotational energy. In Sec. \ref{sec:4}, we establish the analytical solutions for the spin-up of the $\nu$NS due to the mass and angular momentum transfer during the accretion. We derive an analytical solution for the time required for the spin-up process using an accurate Pad\'e approximant in the expression of the angular velocity as a function of time (see Figs. \ref{fig:omvst} and \ref{fig:pade}). The spin period of the NS required by the theory can be obtained from the observations by assuming that the energy of the X-ray afterglow is mainly contributed by the rotational energy of the NS. From the observations of GRB 171205A, we derive that the NS is possibly accelerated to a spin period of $58$~ms, and that $0.026~M_\odot$ is accreted by the $\nu$NS via fallback. We show that this process takes $7.36$ h for a $\nu$NS born with zero spin.
In Sec. \ref{sec:5}, we model the afterglow in the X-rays, optical, and radio wavelengths as originating from synchrotron radiation in the expanding SN ejecta with the energy injection from the central $58$~ms spinning $\nu$NS pulsar (see Fig. \ref{fig:fit171205A}). The conclusions are given in Sec. \ref{sec:6}.
\section{Spectrum and Light curve}
\label{sec:2}
Swift-BAT and Swift-XRT data are retrieved from UKSSDC\footnote{\url{http://www.Swift.ac.uk}}, the data reduction is performed with HEAsoft 6.29\footnote{\url{http://heasarc.gsfc.nasa.gov/lheasoft/}}, and the exported spectra are fitted with the Multi-Mission Maximum Likelihood framework (3ML) \citep{2015arXiv150708343V}. To produce the luminosity light curve, the BAT data are binned requiring a signal-to-noise ratio (SNR) of at least $6$ and a maximum bin size of $50$~s. Each binned spectrum is then fitted by a cutoff power-law (CPL) function and integrated from $15$~keV to $150$~keV, the BAT bandwidth, to obtain the flux. From the fitted parameters and fluxes, and adopting the FLRW cosmology\footnote{The Friedmann-Lema\^itre-Robertson-Walker metric is used for computing the luminosity distance, with Hubble constant $H_0=67.4\pm0.5$~km/s/Mpc and matter density $\Omega_M = 0.315\pm0.007$ \citep{2018arXiv180706209P}.}, the k-corrected luminosity light curve is obtained \citep{2001AJ....121.2879B}. We generate the XRT light curve in the energy range $0.3$--$10$~keV following a similar procedure; the binning thresholds become at least $200$ counts and $10$~s duration per bin for the window timing (WT) mode, and at least $100$ counts and $100$~s duration per bin for the photon counting (PC) mode. All the XRT spectra are fitted by a power-law function\footnote{To have more data points for the light curve, our binning favors a sufficiently short time resolution over exact spectra. Therefore, the power-law model is used uniformly to fit the spectra, rather than the more accurate power-law plus blackbody model, whose parameters cannot all be constrained by the data of each small bin. This introduces an error of less than $5\%$, which is acceptable.} with the photoelectric absorption models of our Galaxy and the host galaxy.
The generated Swift luminosity light curves are presented in Fig. \ref{fig:LCGRB171205A}. We notice that this burst is detected from $\sim 38$~s before the BAT trigger; hence we set $T_0$ to $38$~s before the BAT trigger time. The XRT light curve later than $8\times10^4$~s is fitted by a power-law function using \textit{lmfit} \citep{matt_newville_2021_5570790}, a python package for non-linear optimization and curve fitting. We obtain a power-law index of $-1.01\pm0.06$. The extrapolation of the power-law function coincides with the initial prompt luminosity.
The $T_{90}$ of the BAT observation lasts $189.19$~s; its time-integrated spectrum can be described by a cutoff power-law model with power-law index $\alpha = -1.10\pm0.35$, while the peak energy is poorly constrained, $E_p=148.55\pm 121.97$~keV. These parameters are consistent with \citet{2018A&A...619A..66D}, who jointly fitted the BAT and Konus-\textit{Wind} data. They obtained $\alpha=0.85^{+0.54}_{-0.41}$ and $E_p = 122^{+111}_{-32}$~keV, where the uncertainty on the peak energy is tighter because Konus-\textit{Wind} covers higher energies than BAT. The integrated flux is $(1.56\pm 0.31) \times 10^{-8}$~erg~cm$^{-2}$~s$^{-1}$ in the observed $15$--$150$~keV bandwidth, and extrapolates to $(2.63\pm 0.54) \times 10^{-8}$~erg~cm$^{-2}$~s$^{-1}$ in $1$--$10^4$~keV, which corresponds to the isotropic energy $E_{\rm iso} = (1.71\pm 0.35) \times 10^{49}$~erg.
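As a quick cross-check of these numbers (ours, not part of the paper's pipeline), one can recompute the luminosity distance for $z=0.0368$ with the quoted flat-FLRW parameters and estimate the isotropic energy from the time-averaged $1$--$10^4$~keV flux over $T_{90}$; the flux $\times$ $T_{90}$ fluence and the simple trapezoidal integration are simplifying assumptions that ignore the detailed k-correction:

```python
import math

# Flat FLRW cosmology quoted in the text: H0 = 67.4 km/s/Mpc, Omega_M = 0.315.
H0, Om, z = 67.4, 0.315, 0.0368
c_kms, Mpc_cm = 299792.458, 3.0857e24

def inv_E(zp):
    # 1/E(z) for a flat Lambda-CDM universe
    return 1.0 / math.sqrt(Om * (1 + zp)**3 + (1 - Om))

# Comoving distance by trapezoidal integration (z is small, so this converges fast).
N = 10000
dz = z / N
d_C = c_kms / H0 * sum(0.5 * (inv_E(i * dz) + inv_E((i + 1) * dz)) * dz for i in range(N))
d_L = (1 + z) * d_C   # luminosity distance in Mpc

# Isotropic-equivalent energy from the mean 1-10^4 keV flux over T90
# (rough estimate: fluence ~ flux * T90, no detailed k-correction).
F, T90 = 2.63e-8, 189.19   # erg cm^-2 s^-1, s
E_iso = 4 * math.pi * (d_L * Mpc_cm)**2 * F * T90 / (1 + z)
print(f"d_L = {d_L:.1f} Mpc, E_iso = {E_iso:.2e} erg")
```

The result agrees with the quoted $E_{\rm iso} = (1.71\pm 0.35)\times 10^{49}$~erg within its uncertainty.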
The presence of a thermal component in the afterglow of GRB 171205A has been reported in several articles \citep{2017GCN.22191....1C,2018A&A...619A..66D,2019Natur.565..324I}. Our time-resolved analysis also confirms that the additional thermal component significantly improves the fit to the low-energy band of the XRT ($<1$ keV) up to $324$~s, with a fitted blackbody temperature that drops from $\sim 90$~eV to $\sim 70$~eV, with an uncertainty of $\sim 10$~eV. Afterward, the thermal spectrum gradually fades out of the XRT band ($0.3$--$10$ keV) as the temperature decreases. The WT data of XRT are unable to constrain the temperature at times later than $\sim 4000$~s, while the optical telescopes start to capture the thermal component as it cools into the optical band \citep{2019Natur.565..324I}.
There is a common time window in which BAT and XRT both observed the source, from $\sim 151$~s, when XRT had slewed to the GRB position, until $\sim 162$~s, the end of the $T_{90}$ of BAT. The BAT data at the end of the prompt emission are inadequate to constrain the cutoff energy; hence a model consisting of a power-law of index $\alpha = -2.00\pm0.17$ plus a blackbody component of $kT = 77.53\pm 8.28$~eV is implemented to fit the entire data set, as shown in Fig. \ref{fig:spectrumXRTBAT}.
The optical and radio light curves in Fig. \ref{fig:LCGRB171205A} are reproduced from \citet{2018A&A...619A..66D} and \citet{2021ApJ...907...60M}, respectively. The optical luminosity is unusually bright compared to the X-rays. \citet{2019Natur.565..324I} found that the evolution of the optical spectrum before and after 7 days is dominated by two blackbodies with different evolution laws. The $1000$-day radio light curve shows a shallow decay without any jet-break signature. We refer to \citet{2018A&A...619A..66D, 2019Natur.565..324I,2021ApJ...907...60M} for the detailed analyses and discussion of the optical and radio data, including the SN optical observations.
\section{Physical Picture}
\label{sec:3}
At a given moment, a type Ic SN occurs from the core-collapse of the CO star, forming at the same time a $\nu$NS at its center. The fallback accretion spins up the $\nu$NS (see Sec. \ref{sec:4}) while releasing the accretion energy. From \citet{2019ApJ...871...14B}, the initial accretion rate is up to a few $10^{-3}~{\rm M}_\odot~{\rm s}^{-1}$ and lasts tens of seconds, then it drops following a power-law that depends on the SN density profile. Therefore, in the initial phase of tens of seconds, the total energy generated by the accretion and injected into the stellar shells reaches $\sim 10^{52}$~erg, which is comparable to the kinetic energy of the SN ejecta inferred from the optical emission at later times. Different from the traditional jetted model of GRBs, this amount of energy is emitted into a large opening angle of probably tens of degrees; it propagates through a portion of the shells and accelerates the outermost shell to mildly relativistic velocity. For the hydrodynamics, we refer to the simulations of \citet{2018ApJ...852...53R}, which modeled the propagation of the GRB-injected energy through the expanding stellar shells. The Lorentz factor of the shockwave is lower than $5$ when it breaks out of the outermost shell at $\sim 10^{12}$~cm. The acceleration of the accretion-powered blastwave is similar to that proposed for the shock-accelerated GRB model~\citep{1974ApJ...187..333C}. In this scenario, a supernova blastwave accelerates as it propagates down the steep density gradient at the edge of a massive star~\citep{1974ApJ...187..333C,2001ApJ...551..946T}. Although these models can produce highly relativistic ejecta in idealized conditions, the bulk of the material reaches only mildly relativistic velocities. Our model mirrors this evolution, differing from this picture only in that the blastwave propagates through an exploding CO star and is not spherical.
Our aspherical outflow shares many features of the cocoon produced in jet models \citep[see e.g.][]{2001ApJ...556L..37M,2002MNRAS.337.1349R,2004ApJ...608..365Z,2017ApJ...834...28N}, in which the jet pushes the stellar shells sideways to form a hot cocoon; a part of the cocoon emerges from the shells and expands outward with mildly relativistic velocity. Hence, both our picture and the cocoon picture involve heated, high-velocity material originating from the stellar shells that expands and emits a thermal spectrum. The evolution of such a blackbody spectrum has indeed been observed by Swift-XRT and several optical telescopes, and a mass of $1.1\times 10^{-3} M_\odot$ moving above $10^5$~km~s$^{-1}$ has been inferred; see Fig. \ref{fig:spectrumXRTBAT} and \citet{2019Natur.565..324I}. The difference is that in our picture we expect a wider opening angle than in a jet, as we consider that this low-luminosity GRB originates from a strong SN or hypernova in which the central compact object is the $\nu$NS. From the observations, there is no signature of any jet break in the afterglow up to $\sim 1000$~days \citep{2021MNRAS.503.1847L, 2021ApJ...907...60M}, hence favoring a large-opening-angle description.
At this stage, our system has three energy sources: the accretion, the spinning $\nu$NS, and the high-velocity material. For the prompt emission, this low-luminosity GRB deviates from the Amati relation \citep{2002A&A...390...81A}; its peak energy ($E_p=148.55$~keV, see Fig. \ref{fig:spectrumXRTBAT}) is about one order of magnitude higher than the typical value for a weak GRB of isotropic energy $\sim 10^{49}$~erg \citep{2018A&A...619A..66D}. The deviation indicates that this burst could be an extreme case or could be formed by a different mechanism. \citet{2019Natur.565..324I} suggest that the jet deposits most of its energy in the creation of the cocoon, with only a small fraction emitted in gamma-rays. In our framework, accretion dominates the energy release once the SN explodes, and most of the energy is injected into the stellar shells, converting into internal and kinetic energy of the SN ejecta and producing the fast-moving material. The low isotropic energy ($E_{\rm iso} = 2.18\times10^{49}$ erg) of the prompt emission can be produced either by the tail of the accretion or by the fast-moving material \citep{2018MNRAS.478.4553D}. The X-ray afterglow can be accounted for, at early times, by the synchrotron emission converted from the kinetic energy of the fast-moving material, and at times after the plateau, by the release of rotational energy of the $\nu$NS that has been spun up to a period of tens of milliseconds. We have performed numerical fits of the spectrum and light curve using this scenario for several GRBs \citep[see, e.g.,][]{2018ApJ...869..101R,2019ApJ...874...39W,2020ApJ...893..148R}. This is also supported by the coincidence of the end of the plateau with the transparency of the fast-moving material at $\sim 10^5$~s.
For the optical afterglow, we share the view of \citet{2019Natur.565..324I} that the fast-expanding mass dominates the optical emission before $4$ days, after which the dominance is taken over by photons diffusing out of the massive SN ejecta heated by the nickel radioactive decay.
The above picture contains many different physical processes, most of which have been discussed in detail and simulated in the references mentioned in the text. However, the fallback accretion after the birth of the $\nu$NS, the mass change, and the spin-up process have rarely been discussed in GRB studies. Hence, we focus on modelling the properties of the newborn NS in the next section.
\section{Spin-up and fallback accretion onto the $\nu$NS}
\label{sec:4}
We now turn to estimate the spin-up of the $\nu$NS and the amount of mass it has accreted to gain enough rotational energy to power the X-ray afterglow emission, as specified in the BdHN model \citep[see, e.g.,][for the analysis of 380 BdHNe]{2021MNRAS.504.5301R}.
Assuming the X-ray luminosity as a good proxy of the bolometric luminosity of the afterglow, we can estimate the change in the $\nu$NS rotational energy from a time $t_1$ to a time $t_2 > t_1$ from the energy balance equation, i.e.
\begin{equation}\label{eq:Erotvst}
\int_{t_1}^{t_2} \dot{E}_{\rm rot}\,dt = E_{\rm rot}(t_2) - E_{\rm rot} (t_1) \approx -\int_{t_1}^{t_2} L_X dt.
\end{equation}
After an infinite time, the $\nu$NS will have lost all its rotational energy; therefore, when $t_2 \to \infty$, we have $E_{\rm rot} (t_2) \to 0$. So, taking the time $t_1$ to be a generic time $t$, and the power-law luminosity
\begin{equation}\label{eq:Lx}
L_X = A_X t^{-\alpha_X},
\end{equation}
we obtain from Eq. (\ref{eq:Erotvst}) that the $\nu$NS angular velocity evolves as
\begin{equation}\label{eq:Omvst}
\Omega (t) \approx \sqrt{\frac{2 A_X\,t^{1-\alpha_X}}{(\alpha_X-1) I}},
\end{equation}
where $I$ is the stellar moment of inertia which we have assumed constant with time, and can be estimated, for instance, using the EOS-independent approximate expression \citep{2019JPhG...46c4001W}
\begin{equation}\label{eq:ILS}
I \approx \left( \frac{G}{c^2} \right)^2 M^3 \sum_{i=1}^4\frac{b_i}{(M/M_\odot)^i},
\end{equation}
where $b_1 = 1.0334$, $b_2 = 30.7271$, $b_3 = -12.8839$, and $b_4 = 2.8841$.
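As an illustrative evaluation (ours, not the authors'), Eq. (\ref{eq:ILS}) gives a typical NS moment of inertia for $M = 1.4\,M_\odot$:

```python
# Evaluate the EOS-independent moment-of-inertia fit of Eq. (eq:ILS) in CGS units.
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # cgs constants
b = [1.0334, 30.7271, -12.8839, 2.8841]     # fit coefficients b_1..b_4

def moment_of_inertia(m):
    """I for a NS of gravitational mass m (in solar masses), in g cm^2."""
    return (G / c**2)**2 * (m * Msun)**3 * sum(bi / m**(i + 1) for i, bi in enumerate(b))

I14 = moment_of_inertia(1.4)
print(f"I(1.4 Msun) = {I14:.2e} g cm^2")   # ~1.5e45 g cm^2, a typical NS value
```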
In the case of GRB 171205A, the X-ray luminosity is fitted by a power-law at times $t>t_{\rm pl}\approx 8\times 10^4$ s, with $A_X = 1.166\times 10^{48}$ erg s$^{-1}$, and $\alpha_X = 1.01\pm0.06$. Using these values, we estimate from Eq. (\ref{eq:Omvst}) that the rotation period of the $\nu$NS at $t=t_{\rm pl}$ is $P (t_{\rm pl}) \approx 85$ ms. If we assume that the $\nu$NS is spinning down from the $\nu$NS-rise, i.e., from $t = t_{\nu \rm NS} \approx 35$ s, but the emission from it is partially absorbed by the high-velocity material which is opaque before $\sim 10^5$~s, then by extrapolating from $t=t_{\rm pl}$ backward in time to $t = t_{\nu \rm NS}$, we infer that at the $\nu$NS-rise time, the $\nu$NS rotation period was $P (t_{\nu \rm NS}) \approx 58$ ms, i.e., $\Omega (t_{\nu \rm NS}) = 108.33$ rad s$^{-1}$.
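For completeness, the step leading to Eq. (\ref{eq:Omvst}) is the integral of the power-law luminosity of Eq. (\ref{eq:Lx}) from $t$ to infinity (convergent for $\alpha_X > 1$), equated to the rotational energy $E_{\rm rot} = I \Omega^2/2$ at constant $I$:

```latex
\begin{equation}
\frac{1}{2} I\, \Omega^2(t) = E_{\rm rot}(t) \approx \int_{t}^{\infty} A_X\, t'^{-\alpha_X}\, dt'
= \frac{A_X\, t^{1-\alpha_X}}{\alpha_X - 1},
\end{equation}
```

which, solved for $\Omega$, gives Eq. (\ref{eq:Omvst}).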
We now estimate the mass accreted by the $\nu$NS before the $\nu$NS-rise needed to spin it up to the above rotation rate. The accretion rate onto the $\nu$NS, set by the amount of mass from the inner layers of the expanding matter that falls back onto the $\nu$NS and by its infalling speed, proceeds at hypercritical rates \citep[see, e.g.,][]{1996ApJ...460..801F}. The accretion process makes the $\nu$NS increase its mass-energy and rotation rate through the transfer of baryonic mass and angular momentum. The evolution of the $\nu$NS gravitational mass and angular momentum can be calculated from \citep{2019ApJ...871...14B}
\begin{align}
\dot{M}&=\left( \frac{\partial M}{\partial M_b} \right)_{J} \, \dot{M}_b + \left( \frac{\partial M} {\partial J}\right)_{M_b}\, \dot{J},\label{eq:Mdot}\\
\dot{J}&= \tau_{\rm acc},\label{eq:Jdot}
\end{align}
where $J = I \Omega$ is the angular momentum, $M$ is the gravitational mass, $M_b$ the baryonic mass, $\dot{M}_b$ is the baryonic mass accretion rate, and $\tau_{\rm acc}$ is the accretion torque.
Equation (\ref{eq:Mdot}) must be complemented with the expressions of the two partial derivatives. These relations can be calculated from the fitting formula of the NS binding energy obtained in \citet{2015PhRvD..92b3007C}
\begin{equation}\label{eq:MbMns}
\mu_b - \mu = \frac{13}{200}\mu^2\left(1-\frac{1}{130}j^{1.7} \right),
\end{equation}
where $j\equiv cJ/(GM_\odot^2)$ is the dimensionless angular momentum and $\mu = M/M_\odot$. From it, we readily obtain
\begin{align}
\left(\frac{\partial \mu}{\partial \mu_b} \right)_{j} &= \frac{1}{1+\frac{13}{100}\mu\left(1-\frac{1}{130}j^{1.7}\right)},\\
\left( \frac{\partial \mu} {\partial j}\right)_{\mu_b} &= \frac{\frac{1.7}{2000}\mu^2 j^{0.7}}{1+\frac{13}{100}\mu\left(1-\frac{1}{130}j^{1.7}\right)}.
\end{align}
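These expressions can be verified numerically against Eq. (\ref{eq:MbMns}) by central finite differences; a short self-check sketch (ours):

```python
# Check the two partial derivatives against central finite differences of
# Eq. (eq:MbMns): mu_b = mu + (13/200) mu^2 (1 - j^1.7/130).
def mu_b(mu, j):
    return mu + (13 / 200) * mu**2 * (1 - j**1.7 / 130)

def dmu_dmub(mu, j):   # analytic (d mu / d mu_b) at fixed j
    return 1 / (1 + (13 / 100) * mu * (1 - j**1.7 / 130))

def dmu_dj(mu, j):     # analytic (d mu / d j) at fixed mu_b
    return (1.7 / 2000) * mu**2 * j**0.7 / (1 + (13 / 100) * mu * (1 - j**1.7 / 130))

mu, j, h = 1.4, 0.01, 1e-6
dmub_dmu = (mu_b(mu + h, j) - mu_b(mu - h, j)) / (2 * h)   # d mu_b / d mu at fixed j
dmub_dj = (mu_b(mu, j + h) - mu_b(mu, j - h)) / (2 * h)    # d mu_b / d j at fixed mu
assert abs(1 / dmub_dmu - dmu_dmub(mu, j)) < 1e-8          # reciprocal rule
assert abs(-dmub_dj / dmub_dmu - dmu_dj(mu, j)) < 1e-8     # implicit differentiation
```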
The numerical simulations of BdHNe performed in \citet{2019ApJ...871...14B} show that the material accreted by the $\nu$NS circularizes around it in a sort of Keplerian disk structure before being accreted. Therefore, we assume that the accreted matter exerts onto the $\nu$NS the torque
\begin{equation}\label{eq:chi}
\tau_{\rm acc} = \chi\,l\,\dot{M}_b,
\end{equation}
where $l$ is the specific (i.e., per unit mass) angular momentum of the innermost stable circular orbit around the $\nu$NS, and $\chi \leq 1$ is an efficiency parameter for the angular momentum transfer. For the angular momentum of the innermost stable circular orbit, we use the approximate, EOS-independent results presented in \citet{2017PhRvD..96b4046C}
\begin{equation}
l = 2\sqrt{3}\frac{G M}{c}\left[1 \mp 0.107\left( \frac{j}{M/M_\odot} \right)^{0.85}\right].
\label{eq:lISO}
\end{equation}
We can obtain an approximate, analytic solution to Eq. (\ref{eq:Jdot}). For this task, we use the following analytic formula that fits the numerical results of the fallback accretion rate calculated in \citet{2019ApJ...871...14B}
\begin{equation}\label{eq:Mbdot}
\dot{M}_b \approx \dot{M}_0 \left(1+\bar{t} \right)^{-p},
\end{equation}
where $\dot{M}_0 = 7.2 \times 10^{-4} M_\odot$ s$^{-1}$, $t_{\rm acc} = 12$ s, $p = 1.3$, and we have introduced the notation $\bar{t} = t/t_{\rm acc}$.
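A quick numerical consistency check (ours) of this fit: its time integral reproduces the total fallback mass quoted below in Eq. (\ref{eq:mubmax}):

```python
# Integrate the fallback rate Mdot_b = Mdot0 (1 + t/t_acc)^(-p) of Eq. (eq:Mbdot).
Mdot0, t_acc, p = 7.2e-4, 12.0, 1.3        # Msun/s, s, dimensionless

# Closed form: integral from 0 to infinity = Mdot0 * t_acc / (p - 1).
analytic = Mdot0 * t_acc / (p - 1)          # 0.0288 Msun, cf. Eq. (eq:mubmax)

# Crude trapezoidal integration on a log-spaced grid out to 1e8 s.
ts = [10 ** (k / 100) for k in range(-300, 801)]
f = [Mdot0 * (1 + t / t_acc) ** (-p) for t in ts]
numeric = sum(0.5 * (f[i] + f[i + 1]) * (ts[i + 1] - ts[i]) for i in range(len(ts) - 1))
numeric += Mdot0 * ts[0]                    # the tiny [0, 1e-3 s] piece

print(f"analytic = {analytic:.4f} Msun, numeric = {numeric:.4f} Msun")
```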
For the involved rotation rates ($j \sim 0.01$), the contribution of the rotation terms in Eqs. (\ref{eq:MbMns}) and (\ref{eq:lISO}) is negligible, so we can retain only the first term in those equations. With this assumption, and integrating Eq. (\ref{eq:Mbdot}), we have
\begin{align}
\mu_b &= \mu_b(t_0) + \frac{\dot{M}_0 t_{\rm acc}}{p-1}\left[ 1 - \left( 1 + \bar{t} \right)^{1-p} \right],\label{eq:mub}\\
\mu &\approx \frac{100}{13}\left(\sqrt{1+\frac{13}{50}\mu_b} - 1 \right),\label{eq:mu}\\
l &\approx 2 \sqrt{3}\frac{G M_\odot}{c} \mu, \label{eq:japp}
\end{align}
where $\mu_b(t_0) \approx \mu_0 + (13/200) \mu_0^2$, with $\mu_0 = M(t_0)/M_\odot$ the initial $\nu$NS gravitational mass, and we have inverted Eq. (\ref{eq:MbMns}) to write the gravitational mass in terms of the baryonic mass. Equations (\ref{eq:mub}) and (\ref{eq:mu}) imply that in the limit $t\to \infty$ the baryonic and gravitational masses approach the maximum values
\begin{align}
\mu_{b,\rm max} &= \mu_b(t_0) + \frac{\dot{M}_0 t_{\rm acc}}{p-1} = \mu_b(t_0) + 0.0288,\label{eq:mubmax}\\
\mu_{\rm max} &= \frac{100}{13}\left(\sqrt{1+\frac{13}{50}\mu_{b,\rm max}} - 1 \right).\label{eq:mumax}
\end{align}
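Plugging in numbers (our illustration, for the $\mu_0 = 1.4$ case used below for Fig. \ref{fig:omvst}):

```python
import math

# Asymptotic masses from Eqs. (eq:mubmax)-(eq:mumax) for mu0 = 1.4.
mu0 = 1.4
mub0 = mu0 + (13 / 200) * mu0**2            # mu_b(t0), non-rotating limit of Eq. (eq:MbMns)
mub_max = mub0 + 0.0288                     # Eq. (eq:mubmax)
mu_max = (100 / 13) * (math.sqrt(1 + (13 / 50) * mub_max) - 1)   # Eq. (eq:mumax)

print(f"mu_b,max = {mub_max:.4f}, mu_max = {mu_max:.4f}")
# The nuNS gains ~0.024 Msun in gravitational mass out of the ~0.029 Msun
# of accreted baryonic mass; the difference is released as binding energy.
```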
We now approximate the angular momentum derivative as $\dot{J} \approx I \dot{\Omega} \approx I_{\rm max} \dot{\Omega}$, where $I_{\rm max} = I (\mu_{\rm max})$, so that Eq. (\ref{eq:Jdot}) becomes
\begin{equation}\label{eq:Omdot}
\dot{\Omega} \approx \beta \mu(t) (1+\bar{t})^{-p},\quad \beta = \frac{2 \sqrt{3} G M_\odot^2 \chi \dot{\mu}_0}{c I_{\rm max}},
\end{equation}
whose solution can be written as
\begin{equation}\label{eq:Omegasol}
\Omega(t) = \Omega(t_0) + \beta\int_{t_0}^t \mu(t) (1+\bar{t})^{-p} dt.
\end{equation}
Making the change of variable $x = (1+\bar{t})^{1-p}$, the integration of Eq. (\ref{eq:Omegasol}) is straightforward, leading to
\begin{align}\label{eq:DeltaOmega}
&\Delta \Omega = \Omega(t) -\Omega(t_0) \nonumber\\
&=\omega \left\{ x + \frac{2}{3} k \left[ \left(1+\frac{13\mu_b}{50} \right)^{3/2} - \alpha^{3/2}\right] - 1 \right\},
\end{align}
where we have defined
\begin{align}\label{eq:constants}
\omega &= \frac{100}{13}\frac{\beta\, t_{\rm acc}}{p-1},\quad \Delta \mu_b = \frac{\dot{M}_0 t_{\rm acc}}{p-1} = 0.0288,\\
k &= \frac{50}{13}\frac{1}{\Delta \mu_b} = 133.547,\quad \alpha = 1+\frac{13}{50}\mu_{b,0},
\end{align}
and we have set the initial time $t_0 = 0$ since the fallback accretion begins soon after the SN explosion \citep[see, e.g.,][]{2019ApJ...871...14B}. Figure \ref{fig:omvst} compares the approximate analytic solution (\ref{eq:DeltaOmega}) with the solution from the full numerical integration of Eqs. (\ref{eq:Mdot}) and (\ref{eq:Jdot}), in the case of $\mu(t_0)=1.4$, $\Omega(t_0)=0$, and $\chi=0.15$.
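The closed-form integration behind Eq. (\ref{eq:DeltaOmega}) can be verified against a direct quadrature of Eq. (\ref{eq:Omegasol}). In the sketch below (illustrative; not the paper's code) we set $\beta = 1$ in arbitrary units, since $\beta$ only rescales both sides, and use the example $\mu(t_0) = 1.4$:

```python
import math

# Check Eq. (DeltaOmega) against direct trapezoidal quadrature of
# Eq. (Omegasol). beta = 1 (arbitrary units); mu0 = 1.4.
T_ACC, P, DELTA_MB, MU0, BETA = 12.0, 1.3, 0.0288, 1.4, 1.0

mu_b0 = MU0 + (13.0 / 200.0) * MU0**2
alpha = 1.0 + 13.0 * mu_b0 / 50.0
k = 50.0 / (13.0 * DELTA_MB)                 # = 133.547
omega = (100.0 / 13.0) * BETA * T_ACC / (P - 1.0)

def mu_b(t):
    return mu_b0 + DELTA_MB * (1.0 - (1.0 + t / T_ACC) ** (1.0 - P))

def mu(t):
    return (100.0 / 13.0) * (math.sqrt(1.0 + 13.0 * mu_b(t) / 50.0) - 1.0)

def domega_analytic(t):
    x = (1.0 + t / T_ACC) ** (1.0 - P)
    bracket = (1.0 + 13.0 * mu_b(t) / 50.0) ** 1.5 - alpha**1.5
    return omega * (x + (2.0 / 3.0) * k * bracket - 1.0)

t_end, n = 1200.0, 120000
h = t_end / n
f = [mu(i * h) * (1.0 + i * h / T_ACC) ** (-P) for i in range(n + 1)]
numeric = BETA * h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])
print(domega_analytic(t_end), numeric)       # the two agree closely
```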
Equation (\ref{eq:DeltaOmega}) tells us that in the limit $t \to \infty$ ($x \to 0$), the $\nu$NS reaches asymptotically a maximum angular velocity gain
\begin{equation}\label{eq:DeltaOmegamax}
\Delta \Omega_{\rm max} = \omega \left\{\frac{2}{3} k \left[ \left(1+\frac{13\mu_{b,\rm max}}{50} \right)^{3/2} - \alpha^{3/2}\right] - 1 \right\},
\end{equation}
which, as expected, is larger for larger values of the angular momentum transfer efficiency parameter, $\chi$. Since we assume that after the $\nu$NS-rise the $\nu$NS is spinning down, we seek solutions with a spin-up phase that ends with an angular velocity approaching the value inferred at the $\nu$NS-rise, i.e.
\begin{equation}\label{eq:constraint}
\Omega_{\rm max} \approx \Omega(t_{\rm \nu NS}),
\end{equation}
where $\Omega_{\rm max} = \Delta \Omega_{\rm max} + \Omega(t_0)$. We have used the approximate symbol in Eq. (\ref{eq:constraint}) because, by definition, the value $\Omega_{\rm max}$ is reached only asymptotically. For practical purposes, we seek solutions in which $\Omega(t_{\rm \nu NS})=0.9\,\Omega_{\rm max}$. Therefore, given values of $M$ and $\Omega(t_{\rm \nu NS})$, the above constraint singles out a specific value of $\chi$ that yields a self-consistent spin-up phase. For instance, for a $\nu$NS mass $M=1.4 M_\odot$ and $\Omega(t_{\rm \nu NS}) = 108.33$ rad s$^{-1}$, we obtain $\chi = 0.147$.
We can also obtain a simple analytic estimate of the accreted mass by assuming that, during the spin-up phase, the accretion rate, the gravitational mass, and the moment of inertia are constant at their maximum values. Under this assumption, Eqs. (\ref{eq:Jdot}) and (\ref{eq:chi}) lead to the mass accreted in a time $\Delta t$,
\begin{equation}\label{eq:deltaMb}
\Delta \mu_b \approx \frac{c I_{\rm max} \Delta \Omega}{2 \sqrt{3} \chi G M_\odot^2 \mu_{\rm max}}.
\end{equation}
For the above parameters, Eq. (\ref{eq:deltaMb}) gives $\Delta \mu_b \approx 0.02570$. This is very close to the value obtained from the full numerical integration, $\Delta \mu_b = 0.02592$, i.e., an error of only $0.85\%$. The accuracy of Eq. (\ref{eq:deltaMb}) stems from the fact that the fallback accretion rate decreases as a power law, see Eq. (\ref{eq:Mbdot}), hence most of the baryonic mass is accreted in the first minutes of the evolution. This also explains why the above value of the accreted mass is close to the maximum accreted mass given by Eq. (\ref{eq:mubmax}), i.e., $\Delta \mu_{b, \rm max} = 0.0288$.
We now obtain an analytic expression for the time interval $\Delta t$ elapsed from the beginning of the fallback accretion up to the instant when the $\nu$NS reaches a given angular velocity, or a given angular velocity gain, $\Delta \Omega$. In principle, we can obtain it by inverting Eq. (\ref{eq:DeltaOmega}). However, that equation is highly non-linear, so to obtain a relatively simple expression we use an accurate Pad\'e approximant for the quantity involving the baryonic mass, i.e.
\begin{align}
\left(1+\frac{13\mu_b}{50} \right)^{3/2} &= b^{3/2}\left(\tilde{\alpha} + 1 - x\right)^{3/2} \approx {\cal F},\nonumber\\
{\cal F} &= \frac{\sqrt{2}}{4}b^{3/2} \frac{2 \bar{\alpha}^{5/2}+5\bar{\alpha}^{3/2} X}{2\bar{\alpha}-X},\label{eq:Pade}
\end{align}
where $b=(13/50)\Delta \mu_{b,\rm max}$, $\tilde{\alpha} = \alpha/b$, $\bar{\alpha} = 1+2 \tilde{\alpha}$, and we have introduced the variable $X=1/2-x$. For the same example of Fig. \ref{fig:omvst}, we show in Fig. \ref{fig:pade} the excellent performance of the Pad\'e approximant (\ref{eq:Pade}), which reproduces the exact expression with a tiny error of only $\sim 10^{-9}$.
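The quoted accuracy of the Pad\'e approximant can be reproduced numerically. Note that, writing $\mu_b = \mu_b(t_0) + \Delta\mu_{b,\rm max}(1-x)$, the quantity being approximated is $(1+13\mu_b/50)^{3/2} = b^{3/2}(\tilde\alpha + 1 - x)^{3/2}$. The sketch below (illustrative only) scans $x \in (0,1]$ for the example values:

```python
import math

# Relative error of the Pade approximant (eq:Pade) over x in (0, 1],
# for mu0 = 1.4 and Delta mu_b,max = 0.0288.
MU0, DELTA_MB = 1.4, 0.0288
mu_b0 = MU0 + (13.0 / 200.0) * MU0**2
alpha = 1.0 + 13.0 * mu_b0 / 50.0
b = 13.0 * DELTA_MB / 50.0
a_t = alpha / b                      # alpha-tilde
a_bar = 1.0 + 2.0 * a_t              # alpha-bar

def exact(x):
    """(1 + 13 mu_b / 50)^(3/2) with mu_b = mu_b0 + Dmu_b (1 - x)."""
    return b**1.5 * (a_t + 1.0 - x) ** 1.5

def pade(x):
    X = 0.5 - x
    return (math.sqrt(2.0) / 4.0) * b**1.5 * \
        (2.0 * a_bar**2.5 + 5.0 * a_bar**1.5 * X) / (2.0 * a_bar - X)

worst = max(abs(pade(i / 1000.0) / exact(i / 1000.0) - 1.0)
            for i in range(1, 1001))
print(worst)    # a few 1e-9, as quoted in the text
```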
Using the approximant, Eq. (\ref{eq:DeltaOmega}) becomes a second-order polynomial in the variable $X$, here redefined as $X = 1-x$, whose solution is straightforward, leading to the time interval:
\begin{equation}\label{eq:Deltat}
\Delta t = t_{\rm acc} \left[\left(1-X\right)^{\frac{1}{1-p}}-1\right],
\end{equation}
where
\begin{align}
X &= \frac{- B + \sqrt{B^2 - 4 A\,C}}{2 A},\label{eq:X}\\
A &= 12\, b\, \omega,\\
B &= (8 \alpha^{3/2} - 6 b - 24 \bar{\alpha} b + 10 \sqrt{2} \bar{\alpha}^{3/2} b^{3/2})\omega + 12 b \Delta \Omega, \\
C &= (4 \sqrt{2} \bar{\alpha}^{5/2} b^{3/2}-4 \alpha^{3/2} - 16 \alpha^{3/2} \bar{\alpha} - 5 \sqrt{2} \bar{\alpha}^{3/2}b^{3/2} )\omega \nonumber \\
&- 6 b (1 + 4 \bar{\alpha})\Delta \Omega.
\end{align}
The relevance of this time interval is that it allows us to compute the time elapsed to reach the angular velocity at the $\nu$NS-rise, $\Omega (t_{\nu\rm NS})$. Since the latter is close to the maximum value reachable by the fallback accretion, this time interval gives an estimate of the time elapsed since the SN explosion, $\Delta t_{\rm SN}$. For the present example, we obtain $\Delta t_{\rm SN} = \Delta t (\Delta \Omega) \approx 7.36$ h, where $\Delta \Omega = \Omega(t_{\nu \rm NS}) - \Omega(t_0) = 108.33$ rad s$^{-1}$. The full numerical integration leads to $7.20$ h, which implies that the approximate Eq. (\ref{eq:Deltat}) estimates the time interval with an error of only $2.2\%$.
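End-to-end, Eqs. (\ref{eq:Deltat})--(\ref{eq:X}) can be checked by recovering a known elapsed time. The sketch below (illustrative; $\beta=1$ in arbitrary units, example values as above) computes the exact spin gain after $1200$ s from Eq. (\ref{eq:DeltaOmega}), selects whichever root of the quadratic lies in the physical range $0 \le X < 1$ (the branch that vanishes as $\Delta\Omega \to 0$), and recovers the elapsed time:

```python
import math

# Recover a known elapsed time (1200 s) from the quadratic solution
# and Eq. (Deltat). beta = 1 (arbitrary units); mu0 = 1.4.
T_ACC, P, DELTA_MB, MU0, BETA = 12.0, 1.3, 0.0288, 1.4, 1.0

mu_b0 = MU0 + (13.0 / 200.0) * MU0**2
alpha = 1.0 + 13.0 * mu_b0 / 50.0
b = 13.0 * DELTA_MB / 50.0
a_bar = 1.0 + 2.0 * alpha / b
k = 1.0 / b                                  # = 50 / (13 Dmu_b)
omega = (100.0 / 13.0) * BETA * T_ACC / (P - 1.0)
s2 = math.sqrt(2.0)

def domega_exact(t):
    """Eq. (DeltaOmega), with no Pade approximation."""
    x = (1.0 + t / T_ACC) ** (1.0 - P)
    mub = mu_b0 + DELTA_MB * (1.0 - x)
    return omega * (x + (2.0 / 3.0) * k *
                    ((1.0 + 13.0 * mub / 50.0) ** 1.5 - alpha**1.5) - 1.0)

target = domega_exact(1200.0)                # spin gain after 1200 s

A = 12.0 * b * omega
B = (8.0 * alpha**1.5 - 6.0 * b - 24.0 * a_bar * b
     + 10.0 * s2 * a_bar**1.5 * b**1.5) * omega + 12.0 * b * target
C = (4.0 * s2 * a_bar**2.5 * b**1.5 - 4.0 * alpha**1.5
     - 16.0 * alpha**1.5 * a_bar - 5.0 * s2 * a_bar**1.5 * b**1.5) * omega \
    - 6.0 * b * (1.0 + 4.0 * a_bar) * target

disc = math.sqrt(B * B - 4.0 * A * C)
X = next(r for r in ((-B + disc) / (2.0 * A), (-B - disc) / (2.0 * A))
         if 0.0 <= r < 1.0)                  # physical root
dt = T_ACC * ((1.0 - X) ** (1.0 / (1.0 - P)) - 1.0)
print(X, dt)                                 # dt ~ 1200 s
```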
\section{Synchrotron and pulsar emission}\label{sec:5}
We now turn to the specific modeling of the multiwavelength afterglow of GRB 171205A. In the present scenario, the afterglow originates from the synchrotron radiation of the SN ejecta plus the pulsar emission of the $\nu$NS; the SN ejecta also receives energy injected by the $\nu$NS. Numerical calculations of this model applied to the description of the afterglow of specific GRBs can be found in \citet{2018ApJ...869..101R, 2019ApJ...874...39W, 2020ApJ...893..148R}. An analytic treatment of the model has been presented in \citet{2022arXiv220200316R}, and \citet{2022arXiv220705619W} have applied it to model the afterglow of GRB 180720B. Here, we follow the latter to estimate the synchrotron emission of GRB 171205A in the X-rays, optical, and radio bands, as well as the $\nu$NS pulsar emission.
\subsection{Synchrotron emission by the expanding ejecta}\label{sec:synch1}
The distribution of radiating electrons per unit energy, $N(E,t)$, is obtained from the solution of the kinetic equation \citep{1962SvA.....6..317K}
\begin{equation}\label{eq:kinetic}
\frac{\partial N(E, t)}{\partial t}=-\frac{\partial}{\partial E}\left[\dot{E}\,N(E,t)\right] + Q(E,t),
\end{equation}
where $Q(E,t)$ is the number of injected electrons into the ejecta per unit time $t$, per unit energy $E$, and $\dot E$ is the electron energy loss rate.
Following \citet{2022arXiv220200316R, 2022arXiv220705619W}, we adopt the solution to Eq. (\ref{eq:kinetic})
\begin{align}\label{eq:N3}
&N(E,t)\approx \begin{cases}
\frac{q_0}{\beta B_{*,0}^2 (\gamma-1)}\hat{t}^{2 n} E^{-(\gamma+1)}, & t < t_q\\
\frac{q_0 (t_q/t_*)^{k}}{\beta B_{*,0}^2 (\gamma-1)}\hat{t}^{2 n-k} E^{-(\gamma+1)}, & t_q < t < t_b,
\end{cases}
\end{align}
where $E_b < E < E_{\rm max}$, being
\begin{equation}\label{eq:Eb}
E_b = \frac{\hat{t}^{2 n-1}}{{\cal M} t_*^n},\quad
t_b = t_* ({\cal M} t_*^n E_{\rm max})^{\frac{1}{2n-1}}.
\end{equation}
The model parameters are defined as follows. The ejecta expands self-similarly with the radiating layer being $r=R_* = R_{*,0}\,\hat{t}^n$, $\hat{t} \equiv t/t_*$, $t_* = R_*/v_*$, $v_* = n R_*(t)/t = v_{*,0} \hat{t}^{n-1}$, $n$ is the expansion index, $B_*(t) = B_{*,0} R_{*,0}/r = B_{*,0}\hat{t}^{-n}$ is the magnetic field strength at $r=R_*$, ${\cal M}\equiv \beta B^2_{*,0}/2$, $t_* \equiv R_{*,0}/v_{*,0}$, $\beta = 2e^4/(3 m_e^4 c^7)$. We assume the injection power-law distribution $Q(E,t)=Q_0(t)E^{-\gamma}$ \citep{1962SvA.....6..317K, 1973ApJ...186..249P, 1979rpa..book.....R, 2011hea..book.....L}, where $\gamma$ and $E_{\rm max}$ are parameters to be determined from the observational data, and $Q_0(t)$ can be related to the power released by the $\nu$NS and injected into the ejecta from
$L_{\rm inj}(t)=L_0 (1+t/t_q)^{-k} = \int_{0}^{E_{\rm max}} E\,Q(E,t) dE$, so $Q_0(t) = q_0\left(1+t/t_q\right)^{-k}$, where $q_0 \equiv (2-\gamma)L_0/E_{\rm max}^{2-\gamma}$.
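The normalization $q_0 \equiv (2-\gamma)L_0/E_{\rm max}^{2-\gamma}$ follows from requiring $\int_0^{E_{\rm max}} E\,Q(E,t)\,dE = L_{\rm inj}(t)$ at $t=0$. A quick numerical sketch (illustrative only; energies in units of $m_e c^2$, values from Table \ref{tab:parameters}) confirms it:

```python
# Check q0 = (2 - gamma) L0 / Emax^(2 - gamma): the injected power
# integral int_0^Emax E Q(E) dE = q0 int E^(1 - gamma) dE must equal L0.
GAMMA = 1.55
L0 = 3.0e46          # erg / s
EMAX = 4.0e4         # in units of m_e c^2 (Table values)

q0 = (2.0 - GAMMA) * L0 / EMAX ** (2.0 - GAMMA)

# Closed form of the integral.
closed = q0 * EMAX ** (2.0 - GAMMA) / (2.0 - GAMMA)

# Midpoint quadrature of the (integrable) power law E^(1 - gamma).
n = 200000
h = EMAX / n
numeric = q0 * h * sum(((i + 0.5) * h) ** (1.0 - GAMMA) for i in range(n))
print(closed, numeric)   # both ~3e46 erg/s
```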
The bolometric synchrotron radiation power of a single electron is given by \citep[see, e.g.,][]{2011hea..book.....L}
\begin{equation}\label{eq:Psyn}
P_{\rm syn}(E,t) = \beta B_*^2(t) E^2 \approx \frac{\beta}{\alpha} B_* \nu,
\end{equation}
where in the last equality we have used the fact that most of the radiation is emitted at frequencies near the so-called critical frequency, $\nu_{\rm crit} = \alpha B_* E^2$, where $\alpha = 3 e/(4\pi m_e^3 c^5)$. Therefore, the synchrotron luminosity radiated at frequencies from $\nu_1$ to $\nu_2 > \nu_1$ can be written as
\begin{align}\label{eq:Lnu}
L_{\rm syn}(\nu_1,\nu_2; t) &= \int_{\nu_1}^{\nu_2} J_{\rm syn}(\nu,t)d\nu\approx \nu J_{\rm syn}(\nu,t),\nonumber \\
&
\approx \frac{\beta}{2} \alpha^{\frac{p-3}{2}} \eta B_{*,0}^{\frac{p+1}{2}}\hat{t}^{\frac{2 l- n(p+1)}{2}}\nu^{\frac{3-p}{2}}.
\end{align}
where $\nu_1=\nu$ and $\nu_2=\nu+\Delta\nu$, with $\Delta\nu$ the bandwidth. Here, $J_{\rm syn}$ is the spectral density, given by $J_{\rm syn}(\nu,t)d\nu\approx P_{\rm syn}(\nu,t) N(E,t)dE$ \citep[see, e.g.,][]{2011hea..book.....L}. In Eq. (\ref{eq:Lnu}), we have made the approximation $\Delta\nu/\nu\ll 1$ because of the power-law character of the spectral density. Although the synchrotron radiation of a single electron is beamed along the particle's velocity, we consider here a large number of electrons with an isotropic distribution of pitch angles, hence leading to an isotropic total synchrotron luminosity.
\subsection{Newborn NS evolution and pulsar emission}\label{sec:synch2}
The $\nu$NS is subjected to the angular momentum loss driven by the magnetic field braking. In the point dipole+quadrupole magnetic field model presented in \citet{2015MNRAS.450..714P}, the total magnetic torque is given by
\begin{align}
\tau_{\rm mag} &= \tau_{\rm dip} + \tau_{\rm quad},\label{eq:taumag}\\
\tau_{\rm dip} &= -\frac{2}{3} \frac{B_{\rm dip}^2 R^6 \Omega^3}{c^3} \sin^2\alpha,\\
\tau_{\rm quad} &= -\frac{32}{135} \frac{B_{\rm quad}^2 R^8 \Omega^5}{c^5} \sin^2\theta_1 (\cos^2\theta_2+10\sin^2\theta_2),
\end{align}
where $\alpha$ is the inclination angle of the magnetic dipole moment with respect to the rotation axis, and the angles $\theta_1$ and $\theta_2$ specify the geometry of the quadrupole field. The strength of the magnetic dipole field is $B_{\rm dip}$. The dipole pure axisymmetric mode ($m = 0$) is set by $\alpha = 0$, and the pure $m=1$ mode by $\alpha = \pi/2$. The strength of the quadrupole magnetic field is $B_{\rm quad}$. The quadrupole $m=0$ mode is set by $\theta_1 = 0$, the $m=1$ mode by $\theta_1 = \pi/2$ and $\theta_2=0$, while the $m=2$ mode is set by $\theta_1 = \theta_2 = \pi/2$. For the fit of the data, we adopt the $m=1$ mode for the dipole, while the quadrupole can range between the $m=1$ and $m=2$ modes. Therefore, we can write the total magnetic torque (\ref{eq:taumag}) as
\begin{equation}\label{eq:taumagfinal}
\tau_{\rm mag} = -\frac{2}{3} \frac{B_{\rm dip}^2 R^6 \Omega^3}{c^3}\left( 1 + \xi^2 \frac{16}{45} \frac{R^2 \Omega^2}{c^2} \right),
\end{equation}
where $\xi$ is the quadrupole to dipole magnetic field strength ratio defined by
\begin{equation}\label{eq:eta}
\xi \equiv \sqrt{\cos^2\theta_2+10\sin^2\theta_2} \frac{B_{\rm quad}}{B_{\rm dip}},
\end{equation}
and the spindown luminosity as
\begin{equation}\label{eq:Lsd}
L_{\rm sd} = \Omega\,|\tau_{\rm mag}| = \frac{2}{3} \frac{B_{\rm dip}^2 R^6 \Omega^4}{c^3}\left( 1 + \xi^2 \frac{16}{45} \frac{R^2 \Omega^2}{c^2} \right).
\end{equation}
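As an order-of-magnitude illustration of Eq. (\ref{eq:Lsd}) (a sketch, not a fit; the $\nu$NS radius $R = 10^6$ cm is an assumed fiducial value, not quoted in the text), the values $B_{\rm dip} = 3\times 10^{13}$ G and $P = 58$ ms from Table \ref{tab:parameters} give a plateau luminosity of a few $10^{39}$ erg s$^{-1}$:

```python
import math

# Eq. (Lsd) in cgs units, for the fit values of Table (parameters).
# R = 1e6 cm is an assumed fiducial nuNS radius (not quoted in the text).
C_LIGHT = 2.99792458e10      # cm / s

def spindown_luminosity(b_dip, radius, omega_spin, xi=0.0):
    """Spindown luminosity (erg/s); xi is the quadrupole-to-dipole ratio."""
    base = (2.0 / 3.0) * b_dip**2 * radius**6 * omega_spin**4 / C_LIGHT**3
    corr = 1.0 + xi**2 * (16.0 / 45.0) * (radius * omega_spin / C_LIGHT) ** 2
    return base * corr

omega_spin = 2.0 * math.pi / 0.058     # P = 58 ms -> ~108.3 rad/s
L_sd = spindown_luminosity(3.0e13, 1.0e6, omega_spin)
print(omega_spin, L_sd)                # ~108.3 rad/s, ~3e39 erg/s
```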
The evolution of the $\nu$NS is obtained from the energy conservation equation
\begin{equation}\label{eq:Erot}
-(\dot{W}+\dot{T}) = L_{\rm tot} = L_{\rm inj} + L_{\rm sd},
\end{equation}
where $W$ and $T$ are, respectively, the $\nu$NS gravitational and rotational energy.
\begin{table}
\centering
\begin{tabular}{l|r}
Parameter & Value \\
\hline
$\gamma$ & $1.55$\\
$k$ & $1.13$\\
$L_0$ ($10^{46}$ erg s$^{-1}$)& $3.00$\\
$E_{\rm max}$ ($10^4 \ m_e c^2$) & $4.00$\\
$t_q$ (s) & $100.00$\\
$n$ & $0.89$ \\
$R_{*,0}$ ($10^{12}$ cm) & $1.00$ \\
$v_{*,0}$ ($10^{8}$ cm s$^{-1}$) & $5.00$ \\
$B_{*,0}$ ($10^{6}$ G) & $1.00$\\
$B_{\rm dip}$ ($10^{13}$ G) & $3.00$ \\
$P$ (ms) & $58.00$\\
\hline
\end{tabular}
\caption{Values of the parameters of the synchrotron model that fit the multiwavelength observational data of GRB 171205A, as shown in Fig. \ref{fig:fit171205A}.}
\label{tab:parameters}
\end{table}
Table \ref{tab:parameters} lists the values of the model parameters that fit the afterglow of GRB 171205A in the X-rays, optical, and radio energy bands, as shown in Fig. \ref{fig:fit171205A}.
The first relevant feature to notice is that the afterglow luminosity fades with time approximately as the power law $t^{-1}$. This power law is shallower than in GRBs of higher luminosity, in which the luminosity falls as $t^{-1.3}$ (see, e.g., GRB 130427A or GRB 190114C in \citealp{2018ApJ...869..101R, 2020ApJ...893..148R}). The pulsar emission from magnetic braking predicts a luminosity with a sharper power law: for a pure magnetic dipole the luminosity falls as $t^{-2}$, and for a pure magnetic quadrupole as $t^{-3/2}$ (see equations of Sec. \ref{sec:synch2} and \citealp{2018ApJ...869..101R, 2020ApJ...893..148R}). Therefore, models based on pulsar emission from magnetic braking alone (even including higher-order multipole fields) are unable to fit the afterglow luminosity of GRB 171205A. This is a first indication of the necessity of an additional mechanism, in this case the synchrotron radiation. The second relevant feature is that the afterglow in the X-rays and in the radio bands shows the same power-law index (see the red, gray, and brown data points), as expected from the synchrotron model.
The optical data show, instead, a flat behavior followed by the bump that characterizes the peak of the SN emission powered by the decay of nickel in the ejecta \citep{1996snih.book.....A,2019Natur.565..324I}. The predicted synchrotron optical luminosity lies below the data (see blue curve and blue data points), which implies that the observed optical luminosity is the sum of the synchrotron radiation and the emergent SN. In BdHNe III, like GRB 171205A, which are low-luminous sources, the $\nu$NS is not a very fast rotator, so it injects less energy into the ejecta than in BdHNe I (e.g., GRB 130427A, 180720B, or 190114C; see \citealp{2018ApJ...869..101R, 2020ApJ...893..148R}) and BdHNe II (e.g., GRB 190829A; see \citealp{2022arXiv220705619W}). Consequently, the synchrotron emission is not very luminous, and the emergent optical SN is able to outshine the optical synchrotron luminosity. SN 2017iuk is similar to the SNe associated with high-luminous GRBs, indicating that the pre-SN progenitor (i.e., the CO star) leading to the $\nu$NS in its core-collapse event is similar for all long GRBs, irrespective of their energetics (Aimuratov et al., to be submitted).
In the X-rays, the synchrotron luminosity fades off after a few $10^6$ s, when $h \nu_{\rm crit}$ falls below a keV. At later times, the power-law behavior continues in the optical and in the radio bands. The pulsar emission is characterized by a plateau followed by a power-law decay (at times longer than the characteristic spindown timescale). For a plateau luminosity comparable to (but smaller than) the synchrotron power-law luminosity, the sum of the two contributions can lead to a luminosity with a shallower power-law behavior than the pure synchrotron one. The afterglow of GRB 171205A does not show any sign of a change in the power law of the synchrotron emission (see Fig. \ref{fig:fit171205A}), so we cannot obtain a precise value of the magnetic field strength and structure. In Fig. \ref{fig:fit171205A}, we have adopted $58$ ms as the initial rotation period of the $\nu$NS and a pure dipole field ($\xi=0$) of $B_{\rm dip} = 3\times 10^{13}$ G to guide the eye of the reader. For magnetic fields $\gtrsim 5\times 10^{13}$ G, the plateau luminosity of the pulsar emission contributes appreciably to the total X-ray luminosity, affecting the goodness of the fit. Therefore, we can take the above estimate as an upper limit to the dipole magnetic field. For the present synchrotron model parameters, X-ray data at times later than a few $10^6$ s could help to constrain the presence of the pulsar emission. A sanity check of the model is that the energy injected into the ejecta is $\sim 10^{49}$ erg, of the same order as the rotational energy of the $\nu$NS for a moment of inertia of a few $10^{45}$ g cm$^2$.
\section{Conclusions}
\label{sec:6}
In this article, we have interpreted GRB 171205A within the BdHN model of long GRBs. This scenario proposes that the GRB originates in a binary system composed of a CO star and a NS companion. The core-collapse of the CO star forms a $\nu$NS at its center and produces a SN explosion. The expanding outermost stellar layers partly accrete onto the NS companion, while the innermost layers fall back onto the $\nu$NS. We can identify three subclasses of BdHNe depending on the nature of the above triggering process of the GRB event and its subsequent consequences. In BdHN I, the binary is characterized by a short orbital period of the order of a few minutes, which causes accretion rates onto the NS companion as high as $10^{-2} M_{\odot}$ s$^{-1}$ \citep{2019ApJ...871...14B}, bringing it to the critical mass for gravitational collapse with the consequent formation of a BH. These systems explain the most energetic GRBs, with energies $E_{\rm iso}\gtrsim 10^{52}$ erg. In BdHN II, the orbital period is of the order of tens of minutes, so the NS companion does not accrete sufficient mass to become a BH. These systems lead to less energetic GRBs with $E_{\rm iso}\lesssim 10^{52}$ erg. Along this line of reasoning, we must expect in the BdHN scenario systems with even longer orbital periods, perhaps of the order of hours, in which the NS companion does not play any role in the cataclysmic event. Most of these binaries are also expected to be disrupted by the SN explosion \citep{2015PhRvL.115w1102F, 2016ApJ...832..136R, 2018ApJ...859...30R}. Under these circumstances, the GRB event is explained by the sole activity of the $\nu$NS and its interaction with the SN ejecta. This scenario is equivalent to the core-collapse of a single CO star. These systems constitute the third subclass, BdHN III, which explains the low-luminous GRBs with $E_{\rm iso}\sim 10^{49}$--$10^{50}$ erg.
We here show that GRB 171205A is a BdHN III, a low-luminous GRB consistent with it being produced in the core-collapse of a single CO star that forms the $\nu$NS and the type Ic SN. There are several new results related to the sequence of physical phenomena occurring in this system and the related GRB observables:
\begin{enumerate}
\item
The fallback accretion rate is initially a few $10^{-3}M_\odot$~s$^{-1}$ and lasts tens of seconds \citep{2019ApJ...871...14B}. The accretion energy is $\sim 10^{52}$~erg, comparable to the kinetic energy of the SN ejecta. This energy is injected into the ejecta, propagates, and accelerates the outermost shell to the observed mildly relativistic velocity. The hydrodynamics is similar to the case of the expanding SN ejecta with the GRB energy injection presented in \citet{2018ApJ...852...53R}. The Lorentz factor of the shockwave is $\lesssim 5$ when it reaches transparency at $\sim 10^{12}$~cm and emits a thermal spectrum. This scenario explains the prompt emission of GRB 171205A. It is also similar to the cocoon scenario advanced for this source in \citet{2019Natur.565..324I}. Both pictures predict the heating of stellar shells (in one case by the fallback accretion power and in the other by a GRB jet) that get boosted to high velocities and emit a thermal spectrum. The associated blackbody emission has indeed been observed in GRB 171205A, and it has been inferred that $\approx 10^{-3} M_\odot$ of material expands at velocities above $10^5$~km~s$^{-1}$ (see \citealp{2019Natur.565..324I} and Fig. \ref{fig:spectrumXRTBAT}). The main difference between the two models is that in our picture there is no jet. This solution seems favoured, since the jet break expected in the afterglow of jetted GRB models is not observed in the data up to the last observations at $\sim 1000$~days \citep{2021MNRAS.503.1847L, 2021ApJ...907...60M}.
\item
Regarding the afterglow emission, we have first inferred from an energy conservation argument, that the $\nu$NS should have started to lose its rotational energy at $t=35$ s after the GRB trigger, i.e., from what we call the $\nu$NS-rise, with a rotation period of $58$ ms.
\item
We have shown that the afterglow of GRB 171205A cannot be explained by the sole pulsar emission of the $\nu$NS by magnetic braking, even including higher multipole fields (e.g., quadrupole).
\item
The multiwavelength afterglow is instead explained by synchrotron radiation emitted by electrons in the expanding SN ejecta, further powered by the energy injected by the $\nu$NS. We have calculated the synchrotron luminosity at X-ray, optical, and radio wavelengths with an analytic treatment of the above physical situation. We have shown that the X-ray and radio luminosities follow the expectation from the synchrotron model. However, the observed optical luminosity shows a flat behavior followed by the bump of the optical SN, powered by the energy released in the ejecta by the radioactive decay of nickel into cobalt. We have shown that the synchrotron luminosity at those optical wavelengths lies below the luminosity of the emergent SN optical emission. This implies that the observed optical emission contains the contribution of both the synchrotron radiation and the optical SN.
\item
Another remarkable fact to be highlighted is that SN 2017iuk, a SN associated with the low-luminous GRB 171205A, a BdHN III, shows similar properties (e.g., peak luminosity and peak time) to the SNe associated with high-luminous GRBs (BdHN I and II). This suggests that the pre-SN progenitor (i.e., the CO star) is similar for all long GRBs, irrespective of their energetics (Aimuratov et al., to be submitted).
\item
There is a corollary of the above result. In low-luminous GRBs, i.e., in BdHNe III like GRB 171205A, the relatively slow rotation ($58$ ms period) of the $\nu$NS implies that less energy is injected into the ejecta, hence the low energetics of the associated synchrotron emission. Only under these circumstances is the optical emission of the SN, powered by the nickel radioactive decay, able to outshine the optical synchrotron luminosity.
\item
We calculated the evolution of the $\nu$NS mass and angular momentum during the fallback accretion process leading to its spinning up to the $58$ ms rotation period. From this evolution, we have inferred that the SN explosion occurred at most $7.36$ h before the GRB trigger time. This sets a stringent delay time between the neutrino emission associated with the SN and the electromagnetic emission of the GRB event.
\end{enumerate}
\acknowledgements
L.M.B. is supported by the Vicerrector\'ia de Investigaci\'on y Extensi\'on - Universidad Industrial de Santander Postdoctoral Fellowship Program No. 2022000293.
\bibliographystyle{aasjournal}
\bibliography{171205A}
Title:
The physical properties of massive green valley galaxies as a function of environments at $0.5<z<2.5$ in 3D-\textit{HST}/CANDELS fields |
Abstract: To investigate the effects of environment in the quenching phase, we study
the empirical relations for green valley (GV) galaxies between overdensity and
other physical properties (i.e., effective radius $r_{\rm e}$, S\'{e}rsic
indices $n$, and specific star formation rate sSFR). Based on five 3D-{\it
HST}/CANDELS fields, we construct a large sample of 2126 massive ($M_{\star} >
10^{10} M_{\sun}$) GV galaxies at $0.5<z<2.5$ and split it into the higher
overdensity quarter and the lower overdensity quarter. The results show that
GV galaxies in denser environments have higher $n$ values and lower sSFR at
$0.5< z <1$, while there is no discernible distinction at $1 < z < 2.5$. No
significant enlarging or shrinking is found for GV galaxies in different
environments within the same redshift bin. It suggests that a dense environment
would promote the growth of bulge and suppress star formation activity of GV
galaxies at $0.5< z <1.5$, but would not affect the galaxy size. We also study
the dependence of the fraction of three populations (Blue Cloud, Green Valley,
and Red Sequence) on both environments and $M_{\star}$. At a given $M_{\star}$,
blue cloud fraction goes down with increasing environment density, while red
sequence fraction is opposite. For the most massive GV galaxies, a sharp drop
appears in the denser environment. Coupled with the mass dependence of three
fractions in different redshift bins, our result implies that stellar mass and
environments jointly promote the quenching process. Such a dual effect is also
confirmed by re-calculating the new effective GV fraction as the number of GV
galaxies over the number of non-quiescent galaxies.
https://export.arxiv.org/pdf/2208.10014
\title{The physical properties of massive green valley galaxies as a function of environments \\ at $0.5<z<2.5$ in 3D-\textit{HST}/CANDELS fields}
\author{Wenjun Chang}
\affil{Institute of Astronomy and Astrophysics, Anqing Normal University, Anqing 246133, China; [email protected]}
\affil{Department of Physics and Astronomy, University of California, Riverside, 900 University Avenue, Riverside, CA 92521, USA}
\affil{Department of Astronomy, University of Science and Technology of China, Hefei 230026, China; [email protected]}
\affil{School of Astronomy and Space Sciences, University of Science and Technology of China, Hefei 230026, China}
\author{Guanwen Fang}
\affil{Institute of Astronomy and Astrophysics, Anqing Normal University, Anqing 246133, China; [email protected]}
\author{Yizhou Gu}
\affil{Department of Astronomy, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\author{Zesen Lin}
\affil{Department of Astronomy, University of Science and Technology of China, Hefei 230026, China; [email protected]}
\affil{School of Astronomy and Space Sciences, University of Science and Technology of China, Hefei 230026, China}
\author{Shiying Lu}
\affil{School of Astronomy and Space Science, Nanjing University, Nanjing 210093, People's Republic of China}
\affil{Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, Nanjing 210093, China}
\author{Xu Kong}
\affil{Department of Astronomy, University of Science and Technology of China, Hefei 230026, China; [email protected]}
\affil{School of Astronomy and Space Sciences, University of Science and Technology of China, Hefei 230026, China}
\keywords{Green valley galaxies (683); Galaxy environments (2029); Star formation(1569); Galaxy quenching(2040); Stellar properties (1624)}
\section{Introduction}\label{sec1:intro}
A flood of observations of large samples of galaxies has found evident cessation of star formation activity during the evolution of galaxies. This cessation of star formation is widely known as ``quenching'' and is assumed to produce passive galaxies in which the star formation rate (SFR) is very low, commonly resulting in the bimodal distribution of galaxies. In the color--magnitude diagram, the narrow red peak, in which abundant quiescent galaxies (QGs) and a small number of dusty star-forming galaxies (SFGs) are distributed along a linear sequence (\citealt{Blanton+2009}), is normally called the ``red sequence'' (RS). The blue peak, occupying an extended region, mainly consists of SFGs (\citealt{Kauffmann+03}) and is similarly called the ``blue cloud'' (BC). An intermediate zone between the BC and RS is commonly known as the ``green valley'' (GV), initially proposed and described in several GALEX papers (e.g., \citealt{Martin+2007, Salim+07, Wyder+2007}).
GV galaxies (see the review of \citealt{Salim+14}) are often thought to be in a transitional phase. The existence of GV galaxies argues for a continuum of properties from SFGs to QGs (\citealt{Wyder+2007,Salim+14}). Over the past decades, the mechanisms of galaxy quenching have remained an unsolved problem. Many previous studies emphasized the roles of stellar mass ($M_{\star}$) and environment in the cessation of star formation, generally referred to as ``mass quenching'' (\citealt{PengYJ+10, Somerville+2015, Penny+2018}) and ``environment quenching'' (\citealt{PengYJ+10, Darvish+2015}). A plausible explanation of ``mass quenching'' is that feedback from the active galactic nucleus (AGN) plays an important role in regulating star formation and in quenching galaxies (\citealt{Hopkins+2005, Hopkins+2014, Somerville+2015, Penny+2018}). At the same time, a number of authors have considered that the external environment might be the key factor in suppressing star formation, via mechanisms that consume the gas reservoir, such as major/minor mergers (\citealt{Springel+05}), ram pressure stripping (\citealt{GG+72}), and harassment (\citealt{F&S+81}). A high environmental density might subsequently increase dust attenuation (e.g., \citealt{Koyama+2013, Sobral+2016}) and enrich the interstellar medium of SFGs (\citealt{Sobral+2015, Darvish+2015}).
It is noteworthy that galaxy morphology varies with the environment. Generally, galaxies in lower density environments (field) are bluer, more star-forming, and more disc-like, while galaxies in higher density environments (cluster) are older, redder, less star-forming, and more elliptical (\citealt{Dressler+1984, Kauffmann+04}). \citet{Dressler+1980} found that there is a definite relationship between local galaxy density and morphological type at $z < 0.06$, which is further confirmed in other studies (\citealt{Guzzo+1997, Goto+2003, Fasano+2015}). The dependence of galaxy morphology on the environment appears not only in the local Universe but also at intermediate and high redshifts ($z\sim1-2$, e.g., \citealt{Dressler+1997, Vanderwel+2007, Allen+2015, Allen+2016}). \citet{Allen+2015} compared the morphology of SFGs and QGs in different environments (cluster vs. field). They found that cluster SFGs have higher S\'{e}rsic indices ($n$) than field SFGs at $0 < z < 2$, while there is no difference in light profile and galaxy size for QGs between field and cluster environments. However, this is not consistent with other results finding that QGs in clusters have shallower profiles (lower $n$) and larger sizes (\citealt{Bassett+2013, Strazzullo+2013, Yoon+2017}).
Considering that the environment affects both star formation activity and morphology in galaxies at the transitional stage, GV galaxies are a suitable sample for studying environmental effects on physical properties. In our previous series of work, \citet{Gu+18} assembled a large sample of massive ($M_{\star} > 10^{10}~M_\odot$) RS, GV, and BC galaxies at $0.5\leqslant z \leqslant 2.5$ in five fields of 3D-{\it HST}/CANDELS, investigated their morphology, dust content, and environments, and revealed a mass-dependent ``downsizing'' quenching picture. Then, \citet{Gu+2019} analyzed the mass dependence of morphology and star formation activity for the three populations. We found that the structural properties of GV galaxies are intermediate between those of BC and RS galaxies at fixed stellar mass bins at $z<2$, and that both GV and BC galaxies have similar sizes and compactness at the high-mass end. This implies that GV galaxies could go through a morphological transformation of bulge buildup at $z<1.5$, which is consistent with our results in \citet{Lu+2021}. We found the effect of the morphological quenching mechanism on star formation activity at $0.5< z <2.5$, after eliminating any possible AGN candidates and considering the stellar mass influence. But we had not studied the effect of the environment on the physical properties of GV galaxies until \citet{Gu+21} defined a dimensionless overdensity (1+$\delta^{'}$) as the environmental indicator, adopting a Bayesian method to consider the contributions of all the $N$th nearest neighbors. Based on this improved environmental method, in this work we set out to explore empirical relations between galaxy environments and other physical properties, including parametric and non-parametric structure and star-forming parameters, for GV galaxies at $0.5< z <2.5$. To understand the quenching process for massive galaxies, we also analyze the transformation of the different fractions of RS, GV, and BC galaxies.
The structure of our paper is organized as follows.
Data and sample selection are described in Section~\ref{sec2:DS}.
We present the environmental dependence of structural parameters and sSFR in Section~\ref{sec3:EDP}. Section~\ref{sec4:FQG} contains our analysis of the fractions of the RS, GV, and BC populations and the distribution of effective fractions for GV galaxies. Finally, a summary is given in Section~\ref{sec5:Sum}. Throughout our paper, we adopt the following cosmological parameters: $H_0=70\,{\rm km~s}^{-1}\,{\rm Mpc}^{-1}$, $\rm \Omega_m=0.30$, and $\Omega_{\Lambda}=0.70$. All magnitudes adopted in this paper are in the AB system.
\section{Data and Sample selection} \label{sec2:DS}
\subsection{3D-{\it HST} and CANDELS} \label{sec2.1:C3D}
The 3D-{\it HST} and CANDELS programs cover $\sim$900 arcmin$^{2}$ in five different fields: AEGIS, COSMOS, GOODS-N, GOODS-S, and UDS, observed by a number of space-based and some ground-based telescopes. They provide abundant data from the ultraviolet (UV) to infrared (IR) bands, including high-quality WFC3 and ACS spectroscopy and photometry \citep{Grogin+11, Koekemoer+11, Skelton+14, Momchheva+16}. Based on these homogeneous multi-band data, many physical properties of galaxies in 3D-{\it HST}/CANDELS have been measured, including stellar population parameters (\citealt{Skelton+14, Whitaker+14, Momchheva+16}) and structural parameters (\citealt{vdW+14}).
In this work, redshifts and rest-frame colors are taken from \citet{Momchheva+16}, an updated version of the photometric catalog of \cite{Skelton+14} that incorporates grism redshifts from fits of the G141 grism spectroscopy; we refer to \citet{Momchheva+16} for full details. Following \citet{WangT+17}, stellar mass and dust attenuation are re-estimated with the stellar population synthesis models of \citet{Ma+05} via the FAST code \citep{Kriek+09}. The \citet{Ma+05} models are known to give a better description of the stellar populations of high-redshift SFGs by taking the contribution of thermally pulsating asymptotic giant branch stars into account. We assume an exponentially declining star formation history with an e-folding time $\tau \sim 0.1-10$ Gyr, a \cite{Kroupa+01} initial mass function (IMF), and solar metallicity. The dust attenuation ($A_V$) varies from 0 to 4 in steps of 0.1, following the \cite{Calzetti+00} law. It is worth mentioning that we prefer spectroscopic (or grism) redshifts when available; otherwise, we use the photometric redshifts from \citet{Skelton+14}.
\subsection{Structural Parameters} \label{sec2.2: Structure}
Quantitative morphological and structural analysis has been developed and complemented in the past decades. There are two main measurements of morphology: parametric modelling of surface brightness profile (\citealt{Sersic1968, PengCY+02, PengCY+10}) and non-parametric morphology (\citealt{Abraham+94, Lotz+04, Conselice+14}).
Parametric models are important for describing the light profile to estimate galaxy sizes and bulge-to-disk decompositions (\citealt{Buitrago+08, Wuyts+11a, Lang+14, vdW+14}). S\'{e}rsic indices ($n$) and effective radii ($r_{\rm e}$) in the CANDELS fields are taken from \cite{vdW+14}, measured using GALFIT (\citealt{PengCY+02}). The 3D-{\it HST} catalog has been matched to the CANDELS $J$- and $H$-band catalogs via the Rainbow Database\footnote{\url{https://arcoirix.cab.inta-csic.es/Rainbow_Database/Home.html}}. The rest-frame optical morphologies are traced by the $J$-band (F125W) imaging at $0.5< z<1.5$ and by the $H$-band (F160W) imaging at $1.5<z<2.5$ in this work.
Non-parametric methods are usually used to identify irregular structures in galaxies, especially high-redshift galaxies with high irregularity (\citealt{Bluck+12, Conselice+14}). We have measured the Gini coefficient ($G$; \citealt{Lotz+04}) and the second-order moment of the 20\% brightest pixels ($M_{20}$; \citealt{Lotz+04}) on the NIR images using the \textsc{Morpheus} software developed by \cite{Abraham+07}. We refer to \citet{Gu+18} for detailed definitions and calculation procedures of these non-parametric measurements. The effect of the signal-to-noise ratio (S/N) on the non-parametric measurements has been tested in previous work \citep{Kong+09, Wang+12, Fang+15, Gu+18, Gu+2019, Lu+2021}. In the sample selection (see Section~\ref{sec2.5: sample select}) we adopt the flag $\tt use\_phot = 1$, which ensures a reliable detection in the $H$ band (F160W) with S/N $>$ 3 and thus reliable non-parametric measurements.
\subsection{Star Formation Rates} \label{sec2.3:SFR}
Young and massive stars produce a large amount of UV photons, which are partly absorbed by interstellar dust and re-emitted in the IR. Thus, an accurate SFR estimate should include both the unobscured and obscured parts. The unobscured SFR can be derived from the observed rest-frame UV luminosity, while the obscured part can be estimated from a single mid-infrared measurement, e.g., MIPS 24 $\mu$m \citep{Battisti+2015,Lin+2016}.
In this work, we employ the total SFR by combining UV and IR emissions, provided by \citet{Whitaker+14}. Considering the \cite{Bell+05} conversion and the \cite{Kroupa+01} IMF, the $\rm SFR_{UV+IR}$ can be estimated by:
\begin{equation}
{\rm SFR_{UV+IR}}[M_\sun~{\rm yr}^{-1}]=9.8\times10^{-11}(L_{\rm IR}+2.2L_{\rm UV})/L_{\sun},
\end{equation}
where $L_{\rm UV}=1.5 \times L_{\rm 2800}$ is estimated from the rest-frame continuum luminosity at 2800\,\AA, and $L_{\rm IR}$ is the integrated 8--1000 $\mu$m luminosity converted from the Spitzer/MIPS 24 $\mu$m data using a single luminosity-independent template (\citealt{Franx+08, Wuyts+2008}).
If the MIPS 24 $\mu$m data are unavailable, we assume the dust attenuation curve from \cite{Calzetti+00} and correct the effect of dust attenuation on $\rm SFR_{UV}$. In this case, the corrected $\rm SFR_{UV}$ can be derived as follows:
\begin{equation}
{\rm SFR_{UV, corr}}[M_\sun~{\rm yr}^{-1}]={\rm SFR_{UV}}\times10^{0.4\times 1.8 \times A_V},
\end{equation}
where ${\rm SFR_{UV}}=3.24 \times 10^{-10} \times L_{2800}/L_{\sun}$. %
The factor of 1.8 converts $A_V$ to that at 2800\AA\ when adopting the \cite{Calzetti+00} attenuation curve.
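The two SFR estimators above can be sketched as follows. This is a minimal illustration of Equations (1) and (2), with function and variable names of our own choosing; luminosities are in units of $L_\sun$.

```python
# Hedged sketch of the UV+IR and dust-corrected UV SFR estimates
# (Eqs. 1-2 above). Function names are illustrative, not from the paper's code.

def sfr_uv_ir(l_ir, l_2800):
    """SFR [Msun/yr] from combined IR and UV luminosities (both in Lsun)."""
    l_uv = 1.5 * l_2800                      # L_UV = 1.5 x L_2800
    return 9.8e-11 * (l_ir + 2.2 * l_uv)

def sfr_uv_corrected(l_2800, a_v):
    """Dust-corrected UV SFR [Msun/yr], used when 24um data are unavailable."""
    sfr_uv = 3.24e-10 * l_2800               # uncorrected UV SFR
    return sfr_uv * 10 ** (0.4 * 1.8 * a_v)  # factor 1.8 converts A_V to 2800A
```

For example, a galaxy with $L_{\rm IR}=10^{10}\,L_\sun$ and negligible UV emission would have ${\rm SFR_{UV+IR}}\approx 0.98\,M_\sun\,{\rm yr}^{-1}$.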
\subsection{Measurements of Environmental Overdensities} \label{sec2.4:MED}
The different definitions of environmental density describe disparate physical properties on different physical scales (\citealt{Muldrew2012, Etherington+15}).
In this paper, we adopt the dimensionless overdensity as the indicator of the relative local environment of galaxies, described in detail in \citet{Gu+21}.
A magnitude-limited sample is selected for the measurements of environmental overdensities. This sample mixes galaxies with spectroscopic, grism, or photometric redshifts and is selected by the following criteria: (1) H-band apparent magnitudes
$\rm F160W < 25$, which guarantees a photometric-redshift uncertainty of $\sigma_z = 0.02$; (2) a redshift range of $0.5 < z < 2.5$; and (3) the flag {$\tt use\_phot = 1$}, which ensures reliable photometry (see Section \ref{sec2.5: sample select} for more details). About 30\% of the galaxies in this sample have spectroscopic or grism redshifts, providing a more reliable derivation of galaxy environments, although using purely photometric redshifts to construct the overdensities would not change our results in general. In the following, we summarise the relevant information on the environmental indicator.
The Bayesian metric in \citet{Cowan+2008} is adopted to estimate the local environmental density, defined as $\Sigma^{'}_{N}$ $\propto$ 1/($\Sigma^{N}_{i=1} d_i^2$), where $d_i$ is the projected distance of the $i$th nearest neighbor in the projected two-dimensional space within the individual redshift slice. This Bayesian environmental density considers the contributions of all $N$ nearest neighbors, which improves the accuracy of the recovered probability density distribution compared to the traditional method (\citealt{2005AJ....129.1096I}). Given that photometric redshifts have large uncertainties, we choose the redshift slice as $\left|\Delta z\right| = 2 \sigma_{z} (1+z)$, where $\sigma_{z} = 0.02$ is the typical photometric-redshift uncertainty of our sample and $z$ is the redshift of the target galaxy.
With increasing redshift, the comoving number densities of the observed galaxies are expected to decrease. Thus, we adopt a dimensionless overdensity of \citet{Gu+21}, 1+$\delta^{'}_{N}$, as the indicator of galaxy environment:
\begin{equation}
\label{eq: density}
1+\delta^{'}_{N} = \frac{\Sigma^{'}_{N}}{\langle \Sigma^{'}_{N} \rangle_{\rm uniform}} = \frac{\Sigma^{'}_{N}}{k^{'}_{N} \Sigma_{\rm surface}},
\end{equation}
where $\langle \Sigma^{'}_{N} \rangle_{\rm uniform}$ is the standard value of Bayesian density when galaxies are distributed in the uniform environment. At a given density $\Sigma_{\rm surface}$, $\langle \Sigma^{'}_{N} \rangle_{\rm uniform}$ can be calculated by a linear correction coefficient $k^{'}_{N}$
(see also in the Appendix of \citealt{Gu+21}). Thus, $\log(1+\delta^{'}_{N}) >$ 0 indicates that the environmental density of a galaxy exceeds the density standard in the uniform condition and vice versa.
Due to the Poisson noise and the possible contamination of foreground and background galaxies,
a small value of $N$ may cause fluctuation in the density values.
We therefore adopt the overdensity based on the distances to all 10 nearest neighbors ($\Sigma^{'}_{10}$) as the indicator of galaxy environments in this work.
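The density estimator and overdensity above can be sketched as follows. This is a minimal illustration under our own naming; the normalization $k^{'}_{N}\,\Sigma_{\rm surface}$ is taken as a given input rather than re-derived.

```python
import numpy as np

def bayesian_density(d):
    """Sigma'_N, proportional to 1 / sum(d_i^2) over the projected
    distances d to the N nearest neighbors within the redshift slice."""
    d = np.asarray(d, dtype=float)
    return 1.0 / np.sum(d ** 2)

def overdensity(sigma_n, k_n, sigma_surface):
    """1 + delta'_N = Sigma'_N / (k'_N * Sigma_surface)  (Eq. 3)."""
    return sigma_n / (k_n * sigma_surface)
```

A galaxy with $\log(1+\delta^{'}_{N})>0$ (i.e., `overdensity(...) > 1`) then lies above the uniform-density standard, as described above.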
\subsection{Sample Selection} \label{sec2.5: sample select}
Based on the magnitude-limited sample, we focus on massive galaxies with $\log (M_{\star}/M_\sun) \geqslant 10$.
The flag of $\tt use\_phot = 1$ is set to choose galaxies with a reliable detection
following these criteria: 1) not a star, or bright enough to be a reliable galaxy; 2) not close to a bright star; 3) well-exposed in the F125W and F160W bands; 4) a S/N of f$\_$F160W / e$\_$F160W $>$ 3; 5) a passable photometric-redshift fit; 6) a ``non-catastrophic'' stellar population fit, with $\log M_{\star}>$ 0 \citep{Skelton+14}. To ensure reliable measurements of the structural parameters, we also apply an additional cut in the {\it H} band, H$_{\rm F160W} <$ 24.5, which removes faint galaxies and guarantees more reliable photometric redshifts (\citealt{Momchheva+16}). Finally, the parent sample contains 7850 massive galaxies at $0.5 < z < 2.5$ from the five 3D-{\it HST}/CANDELS fields; about 58\% of them have spectroscopic or grism redshifts. Since roughly half of the sample relies on photometric redshifts, the bias toward brighter galaxies inherent in spectroscopic samples is alleviated.
Given that the intrinsic rest-frame colors depend on stellar mass and redshift, we use the extinction-corrected rest-frame U-V colors to divide galaxies into RS, GV, and BC populations, following \citet{WangT+17}. The separation criteria are as follows:
\begin{eqnarray} \nonumber
(U - V )_{\rm rest} - \Delta A_V = 0.126 \log (M_{\star}/M_{\sun}) + 0.58 - 0.286z; \,\, \nonumber \\
\nonumber
(U - V )_{\rm rest} - \Delta A_V = 0.126 \log (M_{\star}/M_{\sun}) - 0.24 - 0.136z, \,\, \nonumber
\end{eqnarray}
where $\Delta A_V=0.47 \times A_V$ is the extinction correction of rest-frame U-V color, the value of 0.47 is the correction factor from the \cite{Calzetti+00} attenuation law. Then we obtain 2566 RS galaxies, 2126 GV galaxies, and 3158 BC galaxies.
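Applied per galaxy, the color cuts above amount to the following sketch (the function name is ours; the two thresholds are the separation lines above, with $\Delta A_V = 0.47\,A_V$):

```python
def classify_uv(uv_rest, a_v, log_mstar, z):
    """Classify a galaxy as 'RS', 'GV', or 'BC' from its
    extinction-corrected rest-frame U-V color (criteria above)."""
    uv_corr = uv_rest - 0.47 * a_v                    # Delta A_V correction
    upper = 0.126 * log_mstar + 0.58 - 0.286 * z      # RS/GV separation line
    lower = 0.126 * log_mstar - 0.24 - 0.136 * z      # GV/BC separation line
    if uv_corr >= upper:
        return 'RS'
    if uv_corr <= lower:
        return 'BC'
    return 'GV'
```

For instance, at $\log(M_{\star}/M_\sun)=10.5$ and $z=1$ the two thresholds are $\approx 1.62$ and $\approx 0.95$, so a dust-free galaxy with $(U-V)_{\rm rest}=1.2$ falls in the green valley.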
In Figure~\ref{fig:f01_sample}, we show the extinction-corrected rest-frame U-V color of GV galaxies as a function of stellar mass in three redshift bins from 0.5 to 2.5. The criteria for each redshift bin from \citet{WangT+17} are separately shown as red and blue bands, where the width of the ribbon corresponds to the range of redshift. GVs are distributed below the upper limit of the first separation and above the lower limit of the second separation. The background grayscales represent the distributions for all massive galaxies in our parent sample, and also show the separation for BC and RS. The number of GV in each bin is also shown in each panel.
Due to the redshift-dependence of the GV definition, we can find that high-redshift galaxies tend to be bluer than low-redshift galaxies in Figure\ \ref{fig:f01_sample}.
\section{Environmental Effect on Physical Properties} \label{sec3:EDP}
In this section, we discuss the effect of the environment on different physical properties of GV galaxies, including parametric ($n$ and $r_{\rm e}$) and non-parametric ($G$ and $M_{20}$) structures, and sSFR. Considering that the dominant quenching mechanism might vary across cosmic time (\citealt{Iovino+2010}), we carry out the analysis in three redshift bins ($0.5 \leq z < 1.0$, $1.0 \leq z < 1.5$, and $1.5\leq z \leq 2.5$). In each redshift bin, GV galaxies are divided into four equal bins according to the local overdensity 1+$\delta^{'}_{\rm 10}$. In the following, we compare the physical properties of GV galaxies in the highest and lowest overdensity bins, namely the highest and lowest density quarters.
To quantify the correlation between stellar mass and the different physical properties, and the corresponding differences between the highest and lowest density quarters in the three redshift bins, we calculate Spearman correlation coefficients and perform two-dimensional Kolmogorov--Smirnov (2D-KS) tests (\citealt{Peacock1983, Fasano1987}), respectively. We consider the difference between two density quarters significant when the p-value of the 2D-KS test is smaller than 0.05. All results are listed in Table\ \ref{tab: corr_KS}.
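A minimal sketch of these two statistics follows: a rank-based Spearman coefficient (no tie correction) and a permutation-based 2D-KS test in the spirit of \citet{Peacock1983} and \citet{Fasano1987}. This is our own illustrative implementation, not the exact code used for Table\ \ref{tab: corr_KS}.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no tie correction)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx ** 2) * np.sum(ry ** 2)))

def _quadrant_fracs(x, y, x0, y0):
    # fractions of the sample in the four quadrants around (x0, y0)
    return np.array([np.mean((x <= x0) & (y <= y0)),
                     np.mean((x <= x0) & (y > y0)),
                     np.mean((x > x0) & (y <= y0)),
                     np.mean((x > x0) & (y > y0))])

def ks2d_stat(x1, y1, x2, y2):
    """2D-KS statistic: largest quadrant-fraction difference between the
    two samples, with quadrant centers taken at the data points."""
    d = 0.0
    for x0, y0 in zip(np.r_[x1, x2], np.r_[y1, y2]):
        d = max(d, float(np.max(np.abs(_quadrant_fracs(x1, y1, x0, y0)
                                       - _quadrant_fracs(x2, y2, x0, y0)))))
    return d

def ks2d_pvalue(x1, y1, x2, y2, n_perm=200, seed=0):
    """Permutation p-value for the 2D-KS statistic."""
    rng = np.random.default_rng(seed)
    d_obs = ks2d_stat(x1, y1, x2, y2)
    x, y, n1 = np.r_[x1, x2], np.r_[y1, y2], len(x1)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(x))
        if ks2d_stat(x[idx[:n1]], y[idx[:n1]],
                     x[idx[n1:]], y[idx[n1:]]) >= d_obs:
            hits += 1
    return hits / n_perm
```

Here the two samples would be, e.g., the ($M_\star$, $n$) pairs of the highest and lowest density quarters; a permutation p-value below 0.05 marks a significant difference, matching the criterion above.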
\subsection{Parametric Structures} \label{sec3.1: EDP-n}
For a galaxy fitted with a single S\'{e}rsic profile, the disc-like galaxies have a characteristic of $n \sim$ 1, whereas the S\'{e}rsic indices of bulge-dominated galaxies are considered to be $n >$ 2.5 (\citealt{Shen+03, Cebri+2014}). Figure~\ref{fig02-all} shows the dependence of S\'{e}rsic index $n$ and effective radius r$_e$ of GV galaxies on stellar mass in two extreme environments. Dark green squares represent the corresponding median values for GV galaxies in the highest density quarter, while light green squares represent those in the lowest density quarter. The error bars represent the 2$\sigma$ uncertainties drawn from the bootstrap method with the 1000 times resamples.
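The bootstrap error bars described above can be estimated as in this sketch (our own minimal implementation, assuming the quoted $2\sigma$ uncertainty is twice the standard deviation of the resampled medians):

```python
import numpy as np

def bootstrap_median_err(values, n_boot=1000, seed=0):
    """2-sigma uncertainty of the median from n_boot bootstrap resamples."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    medians = [np.median(rng.choice(values, size=len(values), replace=True))
               for _ in range(n_boot)]
    return 2.0 * np.std(medians)
```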
In the top panels of Figure\ \ref{fig02-all}, we find that the difference in $n$ between the highest and lowest density quarters is significant at $0.5< z <1$, with a 2D-KS p-value of $\sim$ 0.003, revealing that the S\'ersic indices ($n$) of GV galaxies are larger in denser environments. Disc-like GV galaxies are rarely seen in the densest environments at $0.5<z <1$, regardless of stellar mass. There are no clear distinctions between the two opposite environments at higher redshift ($z>1$), with the corresponding 2D-KS p-values $\gg$ 0.05.
Our result indicates that galaxy environments have a significant impact on galaxy structure at $0.5< z <1$.
This difference in S\'{e}rsic index between the two opposite environments was also reported by \citet{PA+2019}, who found an environmental dependence of $n$ in different stellar mass bins for the entire sample of both SFGs and QGs at $z \sim 0.84$, with galaxies in denser environments having higher $n$.
In addition, there is a correlation between $n$ and redshift: the overall indices of high-redshift galaxies are smaller than those of low-redshift galaxies. In our sample, disc-like GV galaxies are thus more commonly observed at high redshift.
The bottom panels of Figure~\ref{fig02-all} show the dependence of galaxy size on environmental density and stellar mass. The galaxy size increases mildly with stellar mass in each redshift bin, with Spearman coefficients $>$ 0.3, regardless of environment. This is consistent with the general mass--size relation: more massive galaxies tend to have larger radii than their less massive counterparts on average (\citealt{Shen+03, vdW+14}). Our result provides a specific mass--size relation for massive GV galaxies at $0.5<z<2.5$ in two extreme environments. We find no significant difference in size between the highest and lowest density quarters in any of the three redshift bins (2D-KS p-values $\gg$ 0.05), suggesting that a denser environment is not effective enough to suppress or promote galaxy size growth.
\subsection{Non-parametric Structures} \label{sec3.2:NM}
We show the $G$ and $M_{20}$ distributions as functions of stellar mass in different redshift and environment bins in Figure~\ref{fig03-G-M20}. For massive GV galaxies, the average $G$ increases with cosmic time, while $M_{20}$ generally decreases with decreasing redshift. A large $G$ value indicates that the light distribution of a galaxy tends to be concentrated rather than uniform. This could explain why $n$ shows a similar redshift evolution in Figure~\ref{fig02-all}: \citet{Peth+16} have shown that $G$ is sensitive to $n$. Different from $G$, the decrease of $M_{20}$ is associated with the disappearance of sub-structures in galaxies, such as bars, spiral arms, and bright cores \citep{Lotz+2004}.
In other words, these non-parametric measurements of massive GV galaxies show substantial variations with redshift, i.e., galaxies at higher redshift tend to have a more uniform light distribution globally but more sub-structures locally.
The 2D-KS tests between the highest and lowest density quarters yield p-values for $G$ larger than 0.05 in all redshift bins, indicating no environmental dependence. Figure~\ref{fig03-G-M20} also shows a dependence of $M_{20}$ on both stellar mass and environment at $0.5 < z < 1.0$, with a 2D-KS p-value $<$ 0.05. The corresponding Spearman coefficients for the highest and lowest density quarters are about $-0.476$ and $-0.497$, respectively, suggesting that $M_{20}$ depends negatively on stellar mass: GV galaxies with higher mass tend to have less prominent substructures.
\subsection{Specific Star Formation Rate} \label{sec3.3:SFR}
In Figure~\ref{fig04-sSFR}, we show the sSFR vs. $M_{\star}$ distributions in two extreme environments in three redshift bins. The negative Spearman correlation coefficients ($< -0.1$) in Table\ \ref{tab: corr_KS} reveal a steady drop of sSFR with increasing stellar mass at $0.5 < z < 1$ and $1 < z <1.5$. This is consistent with \cite{Bauer+2005}, who reported that sSFR decreases with increasing galaxy stellar mass at $0 < z < 1.5$. This mass dependence for massive GV galaxies weakens at higher redshift.
In each redshift bin, the dark green squares represent the sSFR--$M_\star$ relation in the highest density quarter, while the light ones represent the lowest density quarter. At $1.0 < z < 2.5$, there is no significant difference in sSFR for GV galaxies between the two opposite environments, with 2D-KS p-values $>$ 0.05. At $0.5 \leq z < 1$, however, a clear difference in sSFR between the highest and lowest density quarters is seen (p-value $\sim$ 0.003), especially for the most massive GV galaxies: the sSFR drops sharply at the high-mass end in the highest overdensity.
A possible explanation for this sharp decline is that a denser environment suppresses the star formation activity of more massive GV galaxies at lower redshift, hinting at the effect of ``environment quenching''. We discuss the quiescent fraction further in Section~\ref{sec4:FQG}.
\begin{table*}\centering
\ra{1.3}
\begin{tabular}{@{}rrrrcrrrcrrr@{}}\toprule
& \multicolumn{3}{c}{$ 0.5 < z < 1.0$} & \phantom{abc}& \multicolumn{3}{c}{$1.0 < z < 1.5$} &
\phantom{abc} & \multicolumn{3}{c}{$1.5 < z < 2.5$}\\
\cmidrule{2-4} \cmidrule{6-8} \cmidrule{10-12}
& Spearman & Spearman & 2D-KS && Spearman & Spearman & 2D-KS && Spearman & Spearman & 2D-KS\\
& (high) & (low) & $p$-value && (high) & (low) & $p$-value && (high) & (low) & $p$-value\\
\midrule
$n$ & 0.292 & 0.244 & 0.003 && 0.056 & 0.084 & 0.567 && -0.046 & -0.031 & 0.528\\
r$_{\rm e}$ & 0.498 & 0.584 & 0.417 && 0.429 & 0.447 & 0.274 && 0.373 & 0.322 & 0.266\\
$G$ & 0.109 & 0.174 & 0.123 && 0.155 & 0.087 & 0.528 && 0.049 & 0.071 & 0.251\\
M$_{\rm 20}$ & -0.476 & -0.497 & 0.015 && -0.214 & -0.155 & 0.407 && -0.137 & -0.081 & 0.755\\
sSFR & -0.258 & -0.275 & 0.003 && -0.327& -0.167 & 0.260 && -0.085 & -0.181 & 0.192\\
\bottomrule
\end{tabular}
\caption{Spearman correlation coefficients (for the highest- and lowest-density quarters) and 2D-KS test $p$-values comparing the two density quarters for each physical property at $0.5 <z <1.0$, $1.0 < z < 1.5$, and $1.5 <z <2.5$.
\label{tab: corr_KS}}
\end{table*}
\section{Discussion} \label{sec4:FQG}
During the sample selection in Section~\ref{sec2.5: sample select}, we divided galaxies into three populations (BC, GV, and RS) according to the corrected rest-frame U-V color. The fraction of each population, defined as its number over that of the whole sample, $f_{\rm i} = N_{\rm i}/(N_{\rm BC}+N_{\rm GV}+N_{\rm RS})$ with i = BC, GV, RS, gives insight into the quenching timescale. If the quenching timescale is long, BC galaxies pass through the GV phase slowly before becoming quiescent, resulting in a large $f_{\rm GV}$; otherwise, BC galaxies go through the transition and become quiescent quickly, without a significant rise of $f_{\rm GV}$ (\citealt{Jian+2020}). Therefore, comparing the distributions of $f_{\rm GV}$ as a function of environment with those of $f_{\rm BC}$ and $f_{\rm RS}$ helps us understand the quenching process.
\subsection{Fractions of Three Populations} \label{sec4.1: frac_three}
Figure~\ref{fig_frac_three} shows the fractions of galaxies belonging to the BC (top), GV (middle), and RS (bottom) populations as functions of overdensity in different stellar mass and redshift bins. The vertical dot-dashed lines separate the environment into low ($\log(1+\delta^{'}_{10})< 0.1$), medium ($0.1<\log(1+\delta^{'}_{10})< 0.6$), and high overdensity ($\log(1+\delta^{'}_{10})>0.6$) from left to right. Regardless of redshift, $f_{\rm BC}$ declines steadily with increasing local density, while the opposite holds for the RS fraction, which increases gradually.
At the low-mass end ($10.0 < \log M_{\star}/M_{\odot} < 10.4$), $f_{\rm BC}$ and $f_{\rm RS}$ are much more dependent on the overdensity than $f_{\rm GV}$. At the high-mass end ($\log M_{\star}/M_{\odot} >$ 10.8), $f_{\rm GV}$ and $f_{\rm RS}$ become sensitive to the overdensity, while $f_{\rm BC}$ is relatively stable. The comparison of $f_{\rm GV}$ in the high- and low-mass bins implies that the quenching timescale of massive GV galaxies is longer than that of less massive GV galaxies, because more massive GV galaxies are undergoing the quenching processes, especially in the field at the high-mass end.
This reveals the influence of stellar mass on galaxy quenching, known as ``mass quenching''.
In addition, when we consider the effects of both environment and stellar mass, more massive galaxies in high overdensities are expected to be quenched into the RS population more easily, resulting in a higher $f_{\rm RS}$ in denser environments at the high-mass end, as shown in panels f) and i) of Figure~\ref{fig_frac_three}. What is interesting in Figure~\ref{fig_frac_three} is the general decline of the GV fraction with increasing overdensity at $0.5<z<1$, even for less massive galaxies. Meanwhile, galaxies in denser environments have a higher $f_{\rm RS}$, regardless of $M_\star$ and redshift, which may imply that denser environments additionally accelerate quenching. In summary, we find that GV and BC galaxies are more likely to transform into RS galaxies in denser environments for all massive galaxies. Our results also display the effect of stellar mass on the quenching process, as in previous studies (\citealt{Iovino+2010, PengYJ+10}), thus suggesting that stellar mass and environment jointly promote the quenching process, as evidence of ``mass quenching'' and ``environment quenching'' (\citealt{Vulcani+2012, Darvish+2017}).
\subsection{Effective Fraction of GV Galaxies} \label{sec4.2:Feff}
In general, BC galaxies can be considered the progenitors of GV galaxies. With cosmic time, the abundance of RS galaxies and the shortage of BC galaxies naturally lead to the drop of the GV population in Figure~\ref{fig_frac_three}, especially at the high-mass end. Recently, \citet{Jian+2020} pointed out that the contamination of RS galaxies might bias the relative fraction of GV galaxies. Therefore, the effective fraction, defined as the number of GV galaxies over the number of non-quiescent (i.e., BC and GV) galaxies, $f_{\rm eff, GV} = N_{\rm GV}/(N_{\rm BC}+N_{\rm GV})$, is a better indicator of the galaxy transitional phase, since it removes the effect of the dominance of quiescent galaxies.
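The plain and effective fractions defined above reduce to simple counts; a sketch, using the population counts of our parent sample from Section~\ref{sec2.5: sample select} as a usage example:

```python
def population_fractions(n_bc, n_gv, n_rs):
    """f_i = N_i / (N_BC + N_GV + N_RS) for i in {BC, GV, RS}, plus the
    effective GV fraction f_eff,GV = N_GV / (N_BC + N_GV)."""
    total = n_bc + n_gv + n_rs
    return {'BC': n_bc / total,
            'GV': n_gv / total,
            'RS': n_rs / total,
            'eff_GV': n_gv / (n_bc + n_gv)}

# Usage with the counts of our parent sample (3158 BC, 2126 GV, 2566 RS):
fracs = population_fractions(3158, 2126, 2566)
```

In practice these counts would be tallied per overdensity, redshift, and stellar mass bin to reproduce Figures~\ref{fig_frac_three} and \ref{fig_f_eff_gv}.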
Based on this normalized fraction, Figure~\ref{fig_f_eff_gv} shows $f_{\rm eff, GV}$ as a function of overdensity in bins of redshift and stellar mass. After eliminating the RS galaxies, $f_{\rm eff, GV}$ at $0.5< z <1$ in the low-overdensity environment is lower than in higher-overdensity environments, and the relation between GV fraction and environment reverses compared to that in Figure~\ref{fig_frac_three}. This result suggests that the observed negative correlation between $f_{\rm GV}$ and overdensity is mostly due to the reduction of non-quiescent galaxies with increasing environmental density. The steadily increasing trend of $f_{\rm eff, GV}$ with $\log(1+\delta'_{10})$ in the lower redshift bin supports a scenario in which the environment indeed boosts the onset of quenching for SFGs in the intermediate-redshift Universe. At $1< z <2.5$, no such upward trend of $f_{\rm eff, GV}$ with local overdensity is seen, implying that the environment has no impact on the onset of quenching there, although it plays an important role in the complete cessation of star formation activity, as seen in the bottom panels of Figure~\ref{fig_frac_three}.
On the other hand, the overall $f_{\rm eff, GV}$ at fixed redshift and $\log(1+\delta'_{10})$ increases significantly with $M_\star$, revealing a strong mass dependence of $f_{\rm eff, GV}$. For more massive GV galaxies at $1<z <1.5$, a sharp drop of the effective GV fraction from middle to high overdensity is observed, which might be the consequence of the combined effects of stellar mass and environment, as explained in Section~\ref{sec4.1: frac_three}.
\section{Summary} \label{sec5:Sum}
By utilizing the multi-wavelength data from five 3D-{\it HST}/CANDELS fields, we construct a sample of 7850 massive ($M_{\star} > 10^{10} M_\sun$) galaxies at $0.5< z < 2.5$. Based on the extinction-corrected rest-frame U-V color, we separate the parent sample into BC, GV, and RS galaxies (\citealt{Gu+18, Gu+2019}), resulting in a total number of 2126 GV galaxies. We provide empirical relations for GV galaxies between environment and different physical properties, including $n$, $r_{\rm e}$, $G$, $M_{20}$, and sSFR. We also analyze the fractions of three populations as functions of environments and redshift. Our conclusions are summarized as follows:
1. At $0.5 < z < 1$, GV galaxies have larger S\'{e}rsic indices ($n$) in the denser environment.
The environment seems to have a significant impact on $n$ at $0.5 < z < 1$: disc-like GV galaxies are rarely observed in the densest environment, which suggests that a denser environment promotes bulge growth in low-redshift GV galaxies.
We find no significant difference in galaxy size (r$_{\rm e}$) between galaxies in environments with the highest and the lowest overdensity at fixed redshift, indicating that a denser environment is not effective enough to influence galaxy size.
2. Non-parametric measurements of massive GV galaxies show substantial variations with redshift: GV galaxies are more disc-like at higher redshift. Neither stellar mass nor environment has a significant impact on $G$ in any redshift bin, while $M_{20}$ depends on both stellar mass and environment at $0.5 < z < 1$.
3. At $0.5<z <1.0$, there is a decrease of sSFR from the lowest to the highest environmental density, especially at the high-mass end. A plausible explanation is that a denser environment would suppress the star formation activity of GV galaxies, especially for massive GV galaxies at low redshift.
4. We also discuss the effect of the environment on the fractions of the three populations (BC, GV, RS). At the low-mass end ($10.0<\log M_{\star}/M_{\odot} <10.4$), the BC and RS fractions are much more dependent on the overdensity than the GV fraction. At the high-mass end ($\log M_{\star}/M_{\odot} > 10.8$), the GV and RS fractions become sensitive to the overdensity, while the BC fraction is relatively stable. This implies that the quenching timescale of massive GV galaxies is longer than that of less massive GV galaxies, revealing the influence of ``mass quenching''. There is a general decline of the GV fraction and increase of the RS fraction with increasing environmental density at $0.5< z< 1$, suggesting that denser environments affect the quenching process. Considering the effect of mass on the three fractions, both denser environments and mass growth might promote the galaxy quenching process.
5. The $f_{\rm eff, GV}$ rises gradually with the increase of environmental density, suggesting that environment boosts the beginning of the quenching process. The overall $f_{\rm eff, GV}$ is found to have a positive correlation with $M_\star$, regardless of redshift and environments.
Both original fraction and new effective fraction suggest that stellar mass and environments jointly promote the quenching process.
\acknowledgments
This work is based on observations taken by the 3D-HST Treasury Program (GO 12177 and 12328) with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
This work is supported by the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB 41000000), the National Key R\&D Program of China (2017YFA0402600), the NSFC grant (No. 11973038), and the China Manned Space Project with No. CMS-CSST-2021-A07.
Z.S.L acknowledges the support from China Postdoctoral Science Foundation (2021M700137). Y.Z.G acknowledges support from China Postdoctoral Science Foundation funded project (2020M681281). G.W.F. acknowledges support from Yunnan Applied Basic Research Projects (2019FB007).
\bibliography{reference}{}
\bibliographystyle{aasjournal}
|
Title:
Searching for Spectroscopic Signatures of Ongoing Quenching in SDSS Galaxies |
Abstract: In this paper we estimate the "star formation change parameter", SFR$_{79}$,
which characterizes the current SFR relative to the average during the last 800
Myr, for $\sim$ 300'000 galaxies selected from the Sloan Digital Sky Survey
(SDSS). The goals are to examine, in a much larger and independent sample, the
trends previously reported in a sample of star-forming MaNGA galaxies, and also
to search for spectroscopic signatures of ongoing quenching in the so-called
"Green Valley", which is generally believed to contain galaxies that are
migrating from the star-forming (SF) population to the quenched population of
galaxies. Applying SFR$_{79}$ to our large sample of SDSS galaxies, we first
confirm the basic results of SF galaxies published by Wang & Lilly. We then
discuss in detail the calibration and meaning of SFR$_{79}$ for galaxies that
are well below the SFMS and establish what would be the expected signatures of
systematic ongoing quenching within the population. We conclude that it is not
possible at present to establish unambiguous observational evidence for
systematic ongoing quenching processes with the data at hand, due to
limitations of noise in the observational data, in particular in the
measurements of H$\delta$ absorption, and in the calibration of SFR$_{79}$, as
well as biases introduced by the necessity of selecting objects with
significant amounts of H$\alpha$ emission. We do however see plausible
indications of ongoing quenching, which are quantitatively consistent with
expectations from pertinent "growth+quenching" models of galaxy evolution and a
typical e-folding timescale for quenching of order $\sim500$ Myr.
| https://export.arxiv.org/pdf/2208.11668 |
\newcommand{\zgas}{$Z_{\rm gas}$}
\newcommand{\mgas}{$M_{\rm gas}$}
\newcommand{\mz}{$M_{\rm Z}$}
\newcommand{\re}{\Reff}
\newcommand{\msolar}{${\rm M}_\odot$}
\newcommand{\mstar}{$M_\ast$}
\newcommand{\lgmstar}{$\log (M_\ast/$\msolar)}
\newcommand{\lgmhalo}{$\log_{10}$($M_h$/$h^{-1}$\msolar)}
\newcommand{\dindex}{D$_n$(4000)}
\newcommand{\hd}{H$\delta$}
\newcommand{\hda}{\hd$_A$}
\newcommand{\ewhda}{EW(\hda)}
\newcommand{\ha}{H$\alpha$}
\newcommand{\hae}{\ha}
\newcommand{\ewhae}{EW(\hae)}
\newcommand{\lgewhae}{$\log_{10}$\ewhae}
\newcommand{\hb}{H$\beta$}
\newcommand{\mustar}{$\mu_\ast$}
\newcommand{\asec}{{^{\prime\prime}}}
\newcommand{\Reff}{{$R_{\rm e}$}}
\newcommand{\N}[1]{N$_{#1}$}
\newcommand{\myemail}{\email{[email protected], [email protected]}}
\newcommand{\rp}{r_{\rm p}}
\defcitealias{Wang-20a}{WL20}
\shorttitle{Signatures of Quenching}
\shortauthors{Weibel, Wang \& Lilly}
\graphicspath{{fig/}}
\begin{document}
\title {Searching for Spectroscopic Signatures of Ongoing Quenching in SDSS Galaxies}
\author {Andrea Weibel\altaffilmark{1,2},
Enci Wang\altaffilmark{1},
Simon J. Lilly\altaffilmark{1}
} \myemail
\altaffiltext{1}{Department of Physics, ETH Zurich, Wolfgang-Pauli-Strasse 27, CH-8093 Zurich, Switzerland}
\altaffiltext{2}{Departement d'Astronomie, UniversitГ© de GenГЁve, 51 Chemin Pegasi, CH-1290 Versoix, Switzerland}
\keywords{Galaxy quenching (2040), Galaxy evolution (594), Star formation (1569), Green valley galaxies (683), Galaxies (573), Galaxy spectroscopy (2171)}
\section{Introduction}
\label{intro_sec}
Galaxies over a broad range of cosmic epochs can broadly be divided into two distinct populations based on their distribution in the star formation rate-stellar mass (${\rm SFR}$-$M_*$) plane: active, star-forming (SF) galaxies and passive, quenched galaxies \citep[e.g.][]{ Strateva-01, Baldry-04, Bell-04, Blanton-05, Faber-07, Wetzel-12, Wang-18}. This bimodality is seen out to a redshift of at least 2.5 \citep[e.g.][]{Bundy-06, Martin-07, Muzzin-12}. SF galaxies form a relatively tight and slightly sub-linear sequence \citep[e.g.][]{Brinchmann-04, Daddi-07, Noeske-07, Pannella-09, Elbaz-11, Stark-13, Renzini-15, Boogaard-18, Wang-19}, which is often called the star-forming Main Sequence (SFMS). In contrast, the passive population is characterised by little or no star formation and by structural morphologies that are more dominated by spheroids \citep[e.g.][]{Baldry-04, Li-06, Muzzin-13, Barro-17, Wuyts-11}. The mass functions of these two populations are noticeably different, with passive galaxies dominating the galaxy mass function at high masses \citep[e.g.][]{Peng-10}.
This galaxy bimodality is generally interpreted in terms of a scenario in which, after formation, galaxies reside on the SFMS, forming stars and continually increasing their stellar mass as gas is accreted from the growing halo. At some point, however, a given galaxy ``quenches'', i.e. its SFR drops by a factor of 10 or more and it joins the passive population. This quenching process may be due to a number of different physical processes operating in and around both satellite galaxies and the central galaxies of dark matter haloes. \citet{Peng-10} \citep[see also][]{Peng-12} introduced a useful distinction between two quenching channels: ``mass-quenching'', which limits the mass of galaxies as they approach the characteristic Schechter $M^*$, and ``environment-'' or ``satellite-quenching'', which operates only on satellite galaxies and is more or less independent of their stellar mass.
The region between the dominant blue star-forming and the red quenched populations is often called the Green Valley (GV) \citep[e.g.][]{Wyder-07, Salim-07, Schawinski-14, Smethurst-15, Mahoro-17, Nogueira-Cavalcante-18, Belfiore-18, Wang-18}. The original definition of the GV in terms of a relative paucity of galaxies with intermediate photometric colours is easily translatable to an equivalent feature in the distribution of SFR at a given mass, where there is seen to be a paucity of galaxies with intermediate values of specific SFR (sSFR, defined as SFR/$M_*$) in logarithmic space.
It is therefore often assumed that galaxies in the GV are transitioning from the SFMS to the quenched population, i.e. that they are objects that are currently undergoing a quenching process. One of the goals of this paper is to search for direct evidence for such a one-way transition.
As an aside, it should be noted that, while widely accepted, this physically-motivated ``grow-and-quench" scenario has been questioned by e.g. \citet{Abramson-16}, who pointed out that the basic features of the galaxy population, such as the stellar mass function and the slope of the SFR-$M_*$ diagram can equally well be reproduced over a wide range of redshifts by an ad-hoc model in which all galaxies individually follow simple log-normal star formation histories (SFHs) with suitably chosen combinations of parameters.
While this smoothly evolving ``log-normal" scenario obviates any physical ``quenching process(es)", it should be appreciated that the adopted log-normal form of the SFR histories, the required correlations between the log-normal parameters for individual galaxies, and the {\it ab initio} distinction between future satellites and centrals, all lack a convincing physical basis.
Although it is clear that GV galaxies have, by definition, intermediate SFR with respect to the SF and quenched populations, the time-development of their individual SFRs is hard to determine from observations, as it requires measurements of both the current SFR and the SFR in the recent past. \citet{Wang-20a}, hereafter \citetalias{Wang-20a}, recently developed a parameter that characterizes the change of the star formation over Gyr timescales for SF galaxies. In this paper, we will explore the applicability of this approach to galaxies below the SFMS and search for direct spectral evidence of ongoing quenching processes.
In order to characterize quenching as a process, it is convenient to assume that the SFR in a quenching galaxy is declining exponentially with time. This then yields the e-folding timescale of the declining SFR, $\tau_{\rm Q}$ as a useful parameterization of the process \citep[e.g.][]{Wetzel-13, Hahn-17}.
Several authors have tried to statistically estimate or constrain $\tau_{\rm Q}$, but the results have been neither consistent nor convergent \citep{Wetzel-13, Schawinski-14, Yesuf-14, Peng-15, Hahn-17, Smethurst-18, Trussler-20}.
For instance, by constructing a model that statistically tracks SFHs and quenching of central galaxies, \citet{Hahn-17} find that the median quenching timescale decreases as a function of $M_*$ from $\tau_{\rm Q} = 1.2$ Gyr at $M_* = 5\cdot10^{10}M_{\odot}$ to $\tau_{\rm Q} = 400$ Myr at $M_*= 2\cdot10^{11}M_{\odot}$. A similar trend is found specifically for satellites by \citet{Wetzel-13} but with a much shorter quenching timescale, where the typical $\tau_{\rm Q}$ is claimed to decrease from $\tau_{\rm Q} \approx 800$ Myr at $M_*= 6\cdot10^9M_{\odot}$ to $\tau_{\rm Q}\approx200$ Myr at $M_*= 10^{11}M_{\odot}$. In contrast, with an analysis of the stellar metallicity in local galaxies, \citet{Peng-15} claimed that strangulation is the primary mechanism responsible for quenching, and found a typical {\it transitioning} timescale from the SF to the quenched population of as long as 4 Gyr, which is however difficult to compare to the timescale $\tau_{\rm Q}$ introduced above.
It is clear that further observational work is required to clarify the timescales of the suppression of star formation as well as the underlying physical mechanisms.
To directly see whether or not GV galaxies are quenching their star formation in real-time, and to determine the timescale of this suppression, it is necessary to constrain the time-evolution of their SFRs on extended timescales.
It is of course not possible to monitor any individual galaxy over any interesting timescale - each is seen at a single snapshot of its evolution.
It is however possible to characterise the SFR of a given galaxy \textit{averaged} over different timescales. \citetalias{Wang-20a} developed a framework for that based on the optical spectral features of H$\alpha$ emission, H$\delta$ absorption and the 4000 \AA\ break. In principle (although see the discussion in that paper) the measurement of these three sharp spectral features should be independent of the effects of reddening by dust.
The H$\alpha$ emission line from HII regions traces the recent SFR within the last 5 Myr (SFR$_{\rm 5Myr}$), while the H$\delta$ absorption feature roughly traces the SFR averaged over the last 800 Myr, SFR$_{\rm 800Myr}$ \citep[e.g.][]{Balogh-99, Kauffmann-03, Li-15, Wang-18}. A ``star formation change parameter" can then be defined as the ratio of the SFRs averaged on these two timescales, i.e. SFR$_{\rm 5Myr}$/SFR$_{\rm 800Myr}$.
Using this parameter, \citetalias{Wang-20a} investigated the variability in the SFR of SF galaxies. They found amongst other things that galaxies with a recent temporal enhancement (or suppression) in their overall SFR have enhanced (or suppressed) star formation at all galactic radii. In addition, galaxies, or regions of galaxies, with short gas depletion times, i.e. with high star formation efficiency, appear to undergo larger amplitude temporal variations in their SFRs. Exploring this further, \citet{Wang-20b} constrained the temporal power spectrum of the sSFR of SF galaxies. The results of both \citetalias{Wang-20a} and \citet{Wang-20b} are consistent with the dynamical response of a gas-regulator system \citep{Lilly-13} to a time-varying inflow, as previously proposed in \citet{Wang-19}.
In the present work, our first goal is to test those results using a much larger data set obtained from the Sloan Digital Sky Survey (SDSS). We then apply the star formation change parameter concept to galaxies that lie significantly below the SFMS and discuss in detail the limitations and caveats of this procedure. In order to search for direct evidence of systematic ongoing quenching, we carefully investigate the expected impact of a population of quenching galaxies on the observed distribution of the star formation change parameter. Finally, we try to address the question of whether the signature of a population of quenching galaxies can be identified, and if so, on what timescale(s) these galaxies are decreasing their (s)SFR.
The layout of the paper is as follows. In Section \ref{data_sec}, we present the data that is used in the present work, including the sample selection, an improved method to estimate the H$\delta$ absorption index and a refined calibration of the star formation change parameter SFR$_{79}$ as compared to \citetalias{Wang-20a}. We examine a number of issues associated with estimating SFR$_{79}$ for galaxies below the SFMS.
In Section \ref{sfms_sec} we define a sample of SFMS galaxies and establish broad consistency with the results published in \citetalias{Wang-20a} and \citet{Wang-20b}.
In Section \ref{q_sec}, we examine the SFR$_{79}$ for galaxies lying significantly below the SFMS. We derive the expected effect of systematic ongoing quenching on the distribution of SFR$_{79}$ and search for direct signatures of ongoing quenching. Finally, we provide indicative estimates of what the typical quenching timescales could be.
We summarise our main conclusions in Section \ref{conc_sec}.
Throughout this paper, we use the following shorthand notation: The SFR averaged over the last 5 Myr, SFR$_{\rm 5Myr}$ is denoted as SFR$_7$ (because 5 Myr $\approx10^7$ yr), that averaged over the last 800 Myr, SFR$_{\rm 800Myr}$, as SFR$_9$ (because 800 Myr $\approx10^9$ yr), and the ratio of the two, SFR$_{\rm 5Myr}$/SFR$_{\rm 800Myr}$, as SFR$_{79}$, consistent with \citetalias{Wang-20a}.
When computing distance-dependent quantities, we assume a flat cold dark matter cosmology with $\Omega_m=0.27$, $\Omega_\Lambda=0.73$ and $H_0=70 \ {\rm km \ s}^{-1}{\rm Mpc}^{-1}$.
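For distance-dependent quantities under this cosmology, a minimal pure-Python sketch is shown below (function names are ours; a real analysis would typically use a library such as {\tt astropy.cosmology}):

```python
import math

# Flat LCDM parameters adopted in the text.
OMEGA_M, OMEGA_L, H0 = 0.27, 0.73, 70.0    # H0 in km/s/Mpc
C_KMS = 299792.458                          # speed of light, km/s

def _E(z):
    """Dimensionless Hubble parameter E(z) = H(z)/H0 for a flat universe."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def luminosity_distance(z, n=1000):
    """Luminosity distance in Mpc: D_L = (1+z) * (c/H0) * int_0^z dz'/E(z'),
    evaluated by simple trapezoidal integration."""
    dz = z / n
    integral = 0.5 * (1.0 / _E(0.0) + 1.0 / _E(z))
    integral += sum(1.0 / _E(i * dz) for i in range(1, n))
    integral *= dz
    return (1.0 + z) * (C_KMS / H0) * integral

def distance_modulus(z):
    """mu = 5 log10(D_L / 10 pc), with D_L converted from Mpc to pc."""
    return 5.0 * math.log10(luminosity_distance(z) * 1e6 / 10.0)
```

At the median redshifts of the sample ($z\lesssim0.2$) the integral is smooth, so even this simple quadrature is accurate to well below the photometric uncertainties.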
\section{Data}
\label{data_sec}
\subsection{Selection of the SDSS sample}
\label{sample_sec}
We select our main galaxy sample from the MPA-JHU catalog\footnote{https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7} of the seventh data release of the Sloan Digital Sky Survey \citep[SDSS DR7,][]{Abazajian-09}, which lists 927'552 extragalactic objects. In the SDSS, spectra are obtained through 3-arcsec-diameter fibers centered on the galaxy centers. The wavelength coverage is 3800-9200 \AA.
We adopt the stellar masses (both integrated and just within the fiber aperture) from the MPA-JHU catalog, where they were determined from fits to the photometry. These mass estimates agree well with the stellar masses from \citet{Kauffmann-03} based on spectral indices. It should be noted that these mass estimates represent the mass of \textit{living} or \textit{shining} stars. However, in our calibrator, which we will introduce in Section \ref{meas_sfr79_sec}, the stellar mass is defined as the integral of the SFH. Therefore, for consistency, we correct the MPA-JHU mass to represent the integrated SFH by taking into account the fraction of the stellar mass that is returned to the interstellar medium through winds and supernova explosions. We adopt a return fraction of 0.4 for all galaxies, appropriate for the \citet{Chabrier-03} initial mass function \citep[IMF,][]{Vincenzo-16}.
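With $M_{\rm SFH} = M_{\rm living}/(1-R)$ and $R=0.4$, this correction is a constant shift in log mass; a minimal sketch (helper name is ours):

```python
import math

RETURN_FRACTION = 0.4  # for a Chabrier IMF (Vincenzo et al. 2016), as in the text

def logmass_integrated_sfh(logmass_living):
    """Convert a 'living stars' log stellar mass (MPA-JHU convention) to the
    integral of the SFH: M_SFH = M_living / (1 - R)."""
    return logmass_living + math.log10(1.0 / (1.0 - RETURN_FRACTION))

# The shift is log10(1/0.6), i.e. roughly +0.22 dex for every galaxy.
```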
Our basic pre-selection criteria for this study are as follows: (1) ${z <0.2}$, (2) the object is not a duplicate and (3) it can be matched with the UPenn PhotDec catalog \citep{Meert-15}.
The third criterion implies in turn that we adopt all the selection criteria of the UPenn PhotDec catalog. These are specified in \citet{Meert-15} to be (a) the extinction-corrected Petrosian magnitude in the $r$-band is between 14 and 17.77 mag and (b) the object is identified to be a galaxy based on the spectroscopy \textit{and} the photometry. The limit at the bright end is set to exclude very nearby and large objects, which are either too well resolved to be fitted with a standard light profile or are split into multiple objects in the SDSS catalog. The limit at the faint end is adopted from \citet{Strauss-02} to be the limit on the completeness of the sample. An additional $\approx$5000 objects are removed because they have $z<0.005$, very low surface brightness, or data quality issues \citep[see details in][]{Meert-15}.
Overall, this yields a sample consisting of 619'960 galaxies.
Of these, some 4'830 galaxies (less than 0.8\%) cannot be used for the subsequent analysis, because either the corresponding spectra are not available from SDSS DR12 (333 cases), or the data file is truncated (21 cases), or, most often, the fitting routine applied (see Section \ref{meas_spec_feat_sec}) fails to provide a reasonable fit to the continuum of the spectrum (4'476 cases). This leaves us with a final sample of 615'130 galaxies.
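The pre-selection described above amounts to a single boolean filter per object; the sketch below (illustrative names and flags, not the actual pipeline code) encodes criteria (1)-(3) together with the PhotDec magnitude and classification cuts:

```python
def passes_preselection(z, r_petro_mag, is_duplicate, in_photdec, is_galaxy):
    """Sketch of the pre-selection in the text:
    (1) z < 0.2, (2) not a duplicate, (3) matched to the UPenn PhotDec
    catalog, which itself requires 14 < r < 17.77 mag (extinction-corrected
    Petrosian) and a galaxy classification from spectroscopy and photometry."""
    return (z < 0.2
            and not is_duplicate
            and in_photdec
            and 14.0 < r_petro_mag < 17.77
            and is_galaxy)
```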
We cross-match this sample with the SDSS DR7 group catalog published by \citet{Yang-07} in order to distinguish between centrals and satellites. The group catalog consists of 639'359 objects, 555'594 of which are matched with the sample investigated here. We adopt the most massive galaxy in each group to be the central galaxy, as recommended.
\citet{Yang-07} tested their group finder on a mock galaxy redshift survey that mimics the SDSS DR4 and find that, for $M_{\rm halo}>10^{12}$M$_{\odot}$, the true central galaxy is correctly identified as the central of its group in around 90\% of cases. The vast majority of the remaining 10\% of central galaxies are wrongly classified as satellites.
We note that this effect alone would cause a $\sim$28\% contamination of the satellite population in our final sample as it contains 2.8 times more centrals than satellites. In reality, the contamination is likely to be even larger because the 90\% quoted above does not capture the full uncertainty of the group finder (see for example \citet{Knobel-09} for a more detailed discussion of impurity and contamination effects in group-finding algorithms).
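The quoted $\sim$28\% follows from simple bookkeeping, sketched here as our own illustration of the arithmetic:

```python
def satellite_contamination(f_central_misclassified, centrals_per_satellite):
    """Misclassified centrals expressed as a fraction of the true satellite
    count: with ~10% of centrals wrongly labelled as satellites and 2.8
    centrals per satellite in the sample, this gives 0.1 * 2.8 = 0.28,
    i.e. the ~28% contamination quoted in the text."""
    return f_central_misclassified * centrals_per_satellite
```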
On the left of Figure \ref{mz_sel}, we show the mass-redshift distribution of the full sample. The color-coding represents the number \textit{density} $\rho_{\rm N}$, defined as the number of objects in a given grid cell divided by the total number of objects \textit{and} the area of the cell in parameter space.
Since the sample is flux-limited,
it is biased towards objects with a low mass-to-light ratio, i.e. towards bright, SF objects \citep[e.g.][]{Faber-79, Girardi-00, Bell-03, Cappellari-06}.
Below, we will wish to quantitatively analyse the relative number of SF and currently quenching galaxies. Therefore, we assign a mass-dependent redshift-cut to the sample galaxies to
ensure that, at each mass, there is no bias against high mass-to-light objects, and further that the sample is statistically complete out to some limiting (mass-dependent) redshift.
To do this, we first select galaxies which are between 0.9 and 1.1 dex below the SFR7-based SFMS (see definition in Section \ref{def_sfms_sec}). These objects are just {\it below} the lower boundary in EW(H$\alpha$) (and thus sSFR$_7$) above which we argue the calibration of SFR$_{79}$ to be meaningful (see Section \ref{sfr79_unc_sec}).
We then split these galaxies into 20 mass bins with a width of 0.1 dex respectively. For each mass bin, we define a cut in redshift, below which the sample galaxies are complete for $r$-band magnitudes less than 17.77 mag. To avoid confusion, we note that the sub-sample considered here is only used to define reasonable mass-dependent redshift-cuts, and not to do any scientific analysis.
The resulting mass-dependent redshift-cuts are indicated by the stepped black lines in Figure \ref{mz_sel}.
To better illustrate and justify these cuts, we show the ${M_*}$-$z$ distribution of the mentioned subsample, overlapped with the redshift-cuts on the right of Figure \ref{mz_sel}.
Those galaxies are expected to have a higher mass-to-light ratio on average than the SF galaxies, which shifts the lower boundary of the $M_*$-$z$-distribution up and to the left. We argue that the redshift-cuts are conservative enough to avoid any bias for objects with star formation rates representative of those found as far as 1 dex below the SFMS.
It should be noted that galaxies with high stellar masses are inevitably ``over-represented" in our sample because of the greater accessible volumes for these. This is not a concern in the present work, since we will usually be interested in the properties of galaxies as a function of stellar mass and not in the relative numbers of high and low mass galaxies. We will often work in 4 mass bins with a width of 0.5 dex in ${\rm \log(M_*/ M_{\odot})}$ respectively. Those 4 bins are indicated by the solid horizontal lines in Figure \ref{mz_sel}.
After the application of the mass-redshift cuts, we are left with a final sample size of 274'794 galaxies. Of these, 192'195 (69.9\%) are identified as centrals, 68'528 (24.9\%) are identified as satellites and the remaining 14'071 (5.1\%) cannot be identified as either centrals or satellites because they are not listed in the group-catalog of \citet{Yang-07}.
\subsection{Measurements of the spectral features}
\label{meas_spec_feat_sec}
The star formation change parameter SFR$_{\rm 79}$ is based on three observational quantities: the equivalent width of the H$\alpha$ emission line (EW(H$\alpha$)), the Lick Index of H$\delta$ absorption (EW(H$\delta_{\rm A}$)) and the size of the 4000 \AA\ break ($D_{\rm n}4000$). The band-passes for the computation of ${D_{\rm n}4000}$ are defined in \citet{Balogh-99} as [3850, 3950] and [4000, 4100] \AA. The three band-passes used for the Lick Index EW(H$\delta_{\rm A}$) are [4083.50, 4122.25], [4041.60, 4079.75], and [4128.50, 4161.00] \AA\ \citep{Worthey-97}. These are consistent with the definitions used in the MPA-JHU catalog.
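Given a spectrum as wavelength and flux arrays, both indices can be measured directly from these band-passes. The sketch below is our own simplified illustration (it uses a plain pixel average and a rectangle-rule integral, rather than the exact $F_\nu$ weighting of the formal $D_{\rm n}4000$ definition):

```python
def _band_mean(wave, flux, lo, hi):
    """Mean flux in the wavelength band [lo, hi] (simple pixel average)."""
    vals = [f for w, f in zip(wave, flux) if lo <= w <= hi]
    return sum(vals) / len(vals)

def dn4000(wave, flux):
    """Narrow 4000 A break with the Balogh et al. (1999) band-passes.
    NOTE: the formal definition averages F_nu; we assume here that the
    input flux array is already in the appropriate units."""
    return (_band_mean(wave, flux, 4000.0, 4100.0)
            / _band_mean(wave, flux, 3850.0, 3950.0))

def ew_hdelta_a(wave, flux):
    """Lick H-delta_A index: pseudo-continuum interpolated linearly between
    the blue and red side-bands, then EW = integral of (1 - F/F_cont) over
    the central band (Worthey & Ottaviani 1997 band-passes)."""
    blue, red, cen = (4041.60, 4079.75), (4128.50, 4161.00), (4083.50, 4122.25)
    fb = _band_mean(wave, flux, *blue)
    fr = _band_mean(wave, flux, *red)
    wb, wr = 0.5 * (blue[0] + blue[1]), 0.5 * (red[0] + red[1])
    ew, prev_w = 0.0, None
    for w, f in zip(wave, flux):
        if cen[0] <= w <= cen[1]:
            fc = fb + (fr - fb) * (w - wb) / (wr - wb)  # linear pseudo-continuum
            if prev_w is not None:
                ew += (1.0 - f / fc) * (w - prev_w)     # rectangle-rule integral
            prev_w = w
    return ew
```

A flat spectrum gives $D_{\rm n}4000=1$ and EW(H$\delta_{\rm A}$)$=0$; absorption in the central band yields a positive index.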
Since the three parameters are each measured at essentially a single wavelength, they are in principle insensitive to effects of dust attenuation. However, as discussed in \citetalias{Wang-20a}, dust may still have an effect if different age components of the galaxy in question have different levels of dust extinction. After measuring the three spectral features, we therefore perform an empirical correction adopted from \citetalias{Wang-20a} to account for differential dust extinction between the emission lines and the continuum (see details in Section 2.6 of \citetalias{Wang-20a}).
One of the main difficulties in this work is to accurately measure the EW(H$\delta_{\rm A}$) for spectra with relatively low signal-to-noise ratio (SNR). This is because the H$\delta$ absorption line is usually contaminated by H$\delta$ emission that must be accurately subtracted. In principle, it would be possible to fit both absorption and emission simultaneously, provided that they did not have the same line profile.
We performed an initial spectral fitting with the penalized PiXel-Fitting (pPXF) code \citep{Cappellari-04, Cappellari-07} based on 150 stellar population template spectra from the MILES library \citep{Sanchez-Blazquez-06}. We correct for foreground galactic extinction by adopting the reddening map from \citet{Schlegel-98} and the dust extinction curve from \citet{ODonnell-94}. In pPXF, the spectral contribution of the emission lines can in principle be obtained by subtracting the stellar contribution from the observed spectra and simultaneously fitting all the emission lines with a single Gaussian line profile respectively.
Indeed, the EW(H$\alpha$) parameter is easily obtained directly from this fitting process. We note here however that 43'876 galaxies (16\%) show an unreasonable velocity dispersion in their H$\alpha$ emission lines (i.e. the dispersion of the fitted Gaussian is outside the range $\sigma\in(0.5,7.5)\ {\rm\mathring{A}}$ which roughly corresponds to $\sigma\in(50,800)\ {\rm km \ s^{-1}}$)
or extremely low H$\alpha$ fluxes (i.e. lower than their nominal uncertainties). Those galaxies, likely passive galaxies without significant H$\alpha$ emission, are ignored completely in the following analysis, unless otherwise stated.
We can then in principle also measure $D_{\rm n}4000$ and EW(H$\delta_{\rm A}$) based on the fitted absorption line spectra with the contribution of the emission lines subtracted. However, it became clear that the measurements of EW(H$\delta_{\rm A})$ obtained in this way were unsatisfactory, at least for the generally low SNR of the SDSS spectra. Unfortunately, accurate measurements of EW(H$\delta_{\rm A})$ are critical when deriving SFR$_{\rm 79}$, because its error almost always dominates the observational uncertainty of SFR$_{\rm 79}$.
To gain insight into the performance of the fitting procedure at low SNR, we first construct a set of representative spectra with extremely high SNR by stacking many individual SDSS-spectra that were matched in EW(H$\alpha$), stellar mass, and the SNR in the H$\delta$ continuum. We then randomly select 100 of these stacked spectra and measure their EW(H$\delta_{\rm A}$) with the spectral fitting method. We adopt these measured values to be the ``true" values of the representative spectra. For each of these 100 stacked spectra, we then produce 25 realizations at each of 15 different SNR levels by adding uncorrelated noise to the original high SNR stacked spectrum. We are therefore left with 37'500 noisy mock spectra, spanning a wide range of SNR. We then re-measure the EW(H$\delta_{\rm A}$) using the same spectral fitting method and compare the values to the corresponding ``true" values that have been obtained from the un-degraded spectra.
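The construction of the noisy realizations can be sketched as follows (function names are ours; we assume uncorrelated Gaussian noise with a per-pixel $\sigma$ set by the target SNR):

```python
import random

def degrade_to_snr(flux, target_snr, seed=None):
    """One noisy realization of a (near-noiseless) stacked spectrum:
    add uncorrelated Gaussian noise with sigma = <flux>/SNR per pixel."""
    rng = random.Random(seed)
    sigma = (sum(flux) / len(flux)) / target_snr
    return [f + rng.gauss(0.0, sigma) for f in flux]

def make_mock_library(stacks, snr_levels, n_real=25, seed=0):
    """Emulate the construction in the text: for each stacked spectrum,
    n_real realizations at each SNR level.  With 100 stacks, 25 realizations
    and 15 SNR levels this yields the 37'500 mock spectra quoted above."""
    rng = random.Random(seed)
    return [(i, snr, degrade_to_snr(flux, snr, rng.random()))
            for i, flux in enumerate(stacks)
            for snr in snr_levels
            for _ in range(n_real)]
```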
As shown by the orange curve in Figure \ref{hd_exp} there is a clear and rather large systematic deviation in the mean measured EW(H$\delta_{\rm A}$) at low SNR.
The spectral fitting method significantly overestimates the EW(H$\delta_{\rm A}$) at low SNR\footnote{For the same reason, we do not simply adopt the measurements of EW(H$\delta_{\rm A}$) from the MPA-JHU catalog.} (SNR $<$ 10) and this systematic deviation becomes more significant with decreasing SNR. For reference, the median SNR of our sample at the wavelength of ${\rm H\delta}$ is 8.5 with 90\% of the spectra between 4.3 and 17.2 (see the brown lines in Figure \ref{hd_exp}).
An alternative approach is to estimate the strength of the H$\delta$ emission from the observed emission line fluxes of H$\alpha$ and H$\beta$, since these lines are stronger and generally observed at much higher SNR. The intrinsic flux ratios between the Balmer lines are fixed, (H$\alpha$/H$\beta$)$_{\rm int}$=2.86 and (H$\delta$/H$\beta$)$_{\rm int}$=0.259, under the assumption of case-B recombination, a temperature of $T=10^4$ K and an electron density of $n_{\rm e}=100$ cm$^{-3}$ \citep{Osterbrock-89}. Adopting the extinction curve of \citet{ODonnell-94}, one can then establish the extinction based on the Balmer decrement \citep[e.g.][]{Dominguez-13} and thereby {\it predict} the flux of the H$\delta$ emission line. We can therefore subtract this predicted contribution from the EW(H$\delta_{\rm A}$) Lick Index that is obtained from the raw observed spectra. We note that no dust attenuation correction is performed for galaxies with (H$\alpha$/H$\beta$)$_{\rm obs}<$ 2.86. For galaxies with a SNR $<3$ in either the H$\alpha$ or the H$\beta$ emission line, we apply a dust correction based on the median E(B$-$V) of galaxies with low H$\alpha$ (and H$\beta$) emission but SNR $>3$ in both lines.
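A sketch of this prediction is given below. The extinction-curve coefficients $k(\lambda)$ are rough Milky-Way-like values chosen for illustration only, not the exact \citet{ODonnell-94} numbers used in the paper:

```python
import math

# Illustrative extinction coefficients k(lambda) = A(lambda)/E(B-V);
# approximate Milky-Way-like values, NOT the exact O'Donnell (1994) curve.
K_HA, K_HB, K_HD = 2.53, 3.61, 4.3
R_INT_HA_HB = 2.86    # case-B intrinsic (Halpha/Hbeta)
R_INT_HD_HB = 0.259   # case-B intrinsic (Hdelta/Hbeta)

def ebv_from_balmer(f_ha_obs, f_hb_obs):
    """E(B-V) from the Balmer decrement; clipped at 0 for observed ratios
    below 2.86 (no correction, as in the text)."""
    ratio = f_ha_obs / f_hb_obs
    if ratio <= R_INT_HA_HB:
        return 0.0
    return 2.5 / (K_HB - K_HA) * math.log10(ratio / R_INT_HA_HB)

def predicted_hdelta_emission(f_ha_obs, f_hb_obs):
    """Predicted *observed* Hdelta emission flux: de-redden Hbeta, apply the
    intrinsic Hdelta/Hbeta ratio, then re-redden at the Hdelta wavelength."""
    ebv = ebv_from_balmer(f_ha_obs, f_hb_obs)
    f_hb_int = f_hb_obs * 10 ** (0.4 * ebv * K_HB)
    return R_INT_HD_HB * f_hb_int * 10 ** (-0.4 * ebv * K_HD)
```

Since $k({\rm H}\delta)>k({\rm H}\beta)$, dust suppresses the observed H$\delta$/H$\beta$ ratio below 0.259, which the prediction reproduces.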
We tested this new approach in the same way as above. Specifically, we applied the new method to the same 37'500 mock spectra, and then compared the derived measurements to the ``true" values previously obtained from the spectral fitting method applied to the high SNR spectra.
As can be seen by the gray curve in Figure \ref{hd_exp}, this new method is both completely consistent with the standard fitting procedure at high SNR, and largely eliminates the systematic bias at low SNR. It improves the EW(H$\delta_{\rm A}$) measurements, and appears to be quite stable for spectra of low SNR. Therefore, in the following, all measurements of EW(H$\delta_{\rm A}$) are based on this new method.
In addition, the analysis outlined above provides a way to quantitatively estimate the observational uncertainties of the final EW(H$\delta_{\rm A}$) as a function of the SNR. This uncertainty is taken to be the scatter among the EW(H$\delta_{\rm A}$) measurements of the mock spectra at a given SNR, i.e. the width of the gray shaded region in Figure \ref{hd_exp}. We therefore assign this empirical uncertainty in EW(H$\delta_{\rm A}$) to each individual galaxy based on its nominal SNR in the H$\delta$ continuum.
\subsection{Estimation of the SFR-Parameters}
\label{meas_sfr_params_sec}
We can obtain the SFR$_7$ straightforwardly from the H$\alpha$ {\it luminosity} adopting the relation from \citet{Kennicutt-98} and using the \citet{Chabrier-03} IMF (for consistency, see below). We obtain the star formation change parameter, SFR$_{79}$ using a similar but improved method to that in \citetalias{Wang-20a}, as detailed below. Combining this with the estimate of SFR$_7$ we can then also obtain the SFR$_9$ for each individual galaxy.
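As an illustration, the chain SFR$_7$ $\rightarrow$ SFR$_9$ can be sketched as below; the Salpeter-to-Chabrier factor of $\approx$1.7 is a conventional approximation, not necessarily the exact conversion adopted in the paper:

```python
KENNICUTT_SALPETER = 7.9e-42   # SFR [Msun/yr] per L(Halpha) [erg/s], Kennicutt (1998)
SALPETER_TO_CHABRIER = 1.7     # conventional IMF scaling (approximate)

def sfr7_from_halpha(l_halpha_erg_s):
    """Current SFR (averaged over the last ~5 Myr) from the Halpha luminosity."""
    return KENNICUTT_SALPETER / SALPETER_TO_CHABRIER * l_halpha_erg_s

def sfr9_from_sfr79(sfr7, log_sfr79):
    """SFR averaged over the last 800 Myr, from SFR7 and the change
    parameter log(SFR79) = log(SFR7/SFR9)."""
    return sfr7 / 10 ** log_sfr79
```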
\subsubsection{Estimation of SFR$_{79}$}
\label{meas_sfr79_sec}
Here, we will briefly summarize the method, referring the readers to \citetalias{Wang-20a} for details, but highlight our further development of it.
We first construct millions of mock SFHs of galaxies that are designed to cover as comprehensively as possible the range of possible SFHs of galaxies in the Universe. As described in \citetalias{Wang-20a} these mock SFH consist of a smooth underlying SFH on which short term stochastic variations are superposed. We then generate synthetic spectra of these mock galaxies at the present epoch based on stellar population models of the corresponding stellar metallicity \citep[assuming the mass-metallicity relation from][]{Zahid-17}, using the Flexible Stellar Population Synthesis code \citep[{\tt FSPS};][]{Conroy-09}. In this process, we adopt the {\tt MILES} stellar library \citep{Sanchez-Blazquez-06, Falcon-Barroso-11}, a \citet{Chabrier-03} IMF, and the {\tt Padova} isochrones \citep[e.g.][]{Bertelli-94, Bertelli-08}. We then measure the three spectral features of interest following the method outlined in Section \ref{meas_spec_feat_sec} and also compute the actual SFR$_{79}$ for each of the mock SFHs.
Compared to the original method of \citetalias{Wang-20a}, we here employ an improved set of SFHs. Instead of using the smooth SFHs taken from the Illustris simulation as in \citetalias{Wang-20a}, we start with the SFHs for a wide range of stellar masses constructed from the ``observed'' evolution of the SFMS \citep{Stark-13, Speagle-14, Lilly-16}, which are likely to be more realistic.
In addition, since in this work we are interested in galaxies that are potentially undergoing quenching, we also include SFHs in which the SFR is irrevocably declining. A simple quenching model was imposed on half of the SFHs by multiplying by an exponential function:
\begin{equation}
\label{quenching_model}
Q(t)=
\begin{cases}
1 & t<\tau_{\rm S} \\
\exp(\frac{\tau_{\rm S}-t}{\tau_{\rm Q}}) & t>\tau_{\rm S},
\end{cases}
\end{equation}
where $\tau_{\rm S}$ represents the starting time of quenching, and $\tau_{\rm Q}$ is the e-folding time of quenching. A wide range of both $\tau_{\rm Q}$ and $\tau_{\rm S}$ is implemented in the model SFHs, i.e. $\tau_{\rm Q}\in[0.06,10]$ Gyr, uniformly distributed in logarithmic space and $\tau_{\rm S}\in[1.1,13.6]$ Gyr, uniformly distributed in linear space.
In this simple model, quenching goes on ``forever", i.e. a given quenching galaxy will keep decreasing its SFR indefinitely, approaching 0 for $t\rightarrow\infty$. We believe this to be both unrealistic and possibly dangerous when used in the calibration of SFR$_{\rm 79}$. In the final version of the calibrator that is used in this work, we therefore set a ``floor" to the quenching process so that quenching stops once a galaxy's SFR has decreased by 1.3 dex (i.e. a factor of $\approx$ 20), at which point the galaxy is well off the SFMS. From then on, the SFH remains ``flat", i.e. the ``quenched" galaxy keeps forming stars at a low and constant SFR. We discuss the effect of the choice of a prior distribution of quenching SFHs on our results in Section \ref{prior_expl_sec} and Appendix \ref{prior_sec}.
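Equation (\ref{quenching_model}) with this floor is a direct transcription (parameter names are ours):

```python
import math

def quenching_factor(t, tau_s, tau_q, floor_dex=1.3):
    """Multiplicative quenching factor Q(t) of Equation (1), with the
    'floor' described in the text: the exponential decline stops once the
    SFR has dropped by floor_dex (a factor of ~20) and stays flat after."""
    if t < tau_s:
        return 1.0
    q = math.exp((tau_s - t) / tau_q)
    return max(q, 10 ** (-floor_dex))
```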
We then construct the final set of mock SFHs by adding short-term stochastic variations to all of these smoothly varying SFHs. This follows the same procedure as in \citetalias{Wang-20a}, but with a slightly larger amplitude ($\sim$0.5 dex in standard deviation) to cover more possibilities.
Figure \ref{cal_distr} shows the distribution of the mock spectra on the log(EW(H$\alpha$))-EW(H$\delta_{\rm A})$-plane (left panels) and on the log(EW(H$\alpha$))-${ D_{\rm n}4000}$-plane (right panels), color-coded by their median log(sSFR$_7$) (top panels), log(sSFR$_9$) (middle panels) and
log(SFR$_{79})$ (bottom panels). For comparison, we show the distribution of our SDSS sample galaxies in black solid contours. As can be seen, the range of observed spectra is well covered by the mock spectra, except for the lowest values of EW(H$\delta_{\rm A}$) and the highest values of D$_{\rm n}4000$. The coverage of those regions would be slightly better, but still incomplete, if we adopted the model where quenching goes on ``forever", indicating that some galaxies do in fact decrease their SFR by substantially more than 1.3 dex. However, these objects would likely have quenched more than 1 Gyr ago and are therefore outside the scope of this paper (see Section \ref{sfr79_unc_sec}). Further, it should also be noted that the lowest measured values of EW(H$\delta_{\rm A}$) are likely the result of noise.
In all panels, the color-coding shows a strong and clear gradient. In the top panels this gradient is almost entirely horizontal, while in the middle panels it is mostly vertical, indicating that log(sSFR$_7$) is strongly correlated with log(EW(H$\alpha$)), as expected, while log(sSFR$_9$) is correlated with both EW(H$\delta_{\rm A}$) and ${D_{\rm n}4000}$. The clear gradients in the bottom panels illustrate that log(SFR$_{79})$ is indeed well-determined by the combination of log(EW(H$\alpha$)), EW(H$\delta_{\rm A}$) and ${\rm D_n4000}$ over a large volume in parameter space.
Instead of using an analytic formula to calibrate the SFR$_{79}$, as in \citetalias{Wang-20a}, we here construct a 3-dimensional lookup table based on all the mock spectra.
Specifically, for a given combination of EW(H$\alpha$), EW(H$\delta_{\rm A}$) and ${D_{\rm n}4000}$ observed in an SDSS galaxy, together with the corresponding observational uncertainties, we sample 1000 SFHs from the mock catalog such that their spectral features follow a three-dimensional Gaussian distribution in (EW(H$\alpha$), EW(H$\delta_{\rm A}$), ${D_{\rm n}4000}$) parameter space, with mean equal to the 3-tuple of observed spectral features and with the dispersion in each ``dimension" given by the corresponding observational uncertainty. We then take the median SFR$_{79}$ of these 1000 mock SFHs as our estimate, and the r.m.s. scatter of their SFR$_{\rm 79}$ values as the uncertainty in this quantity, ${u_{\rm SFR79, tot}}$.
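This sampling scheme can be sketched as follows. The snippet is an illustrative reimplementation, not the actual calibration code: `mock_features` and `mock_log_sfr79` are placeholder arrays holding the spectral features and log(SFR$_{79}$) values of the mock catalog, and drawing with Gaussian weights approximates sampling SFHs whose features follow the stated three-dimensional Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_sfr79(obs, err, mock_features, mock_log_sfr79, n_draw=1000):
    """Lookup-table estimate of log(SFR79) from the observed 3-tuple
    (EW(Ha), EW(HdA), Dn4000) and its uncertainties.

    Sketch of the scheme described in the text: draw n_draw mock SFHs with
    probability given by a 3D Gaussian centred on the observed features,
    then return the median and rms scatter of their log(SFR79) values.
    """
    # Gaussian weight of each mock galaxy given the observation
    chi2 = np.sum(((mock_features - obs) / err) ** 2, axis=1)
    w = np.exp(-0.5 * chi2)
    w /= w.sum()
    idx = rng.choice(len(mock_log_sfr79), size=n_draw, p=w)
    draws = mock_log_sfr79[idx]
    # median -> estimate of log(SFR79); rms scatter -> u_SFR79,tot
    return np.median(draws), draws.std()
```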
We note again that all of the spectra-based measurements only apply to the central region of each galaxy that falls within the 3-arcsec SDSS fibre. We therefore convert those measurements of SFR to \textit{specific} SFRs by dividing by the stellar mass \textit{in the fiber} given in the MPA-JHU catalog.
\subsubsection{The ad-hoc correction of the SFR$_{79}$-$\Sigma_*$-dependence}
\label{ad_hoc_corr_sec}
By studying the spaxels of spatially-resolved MaNGA \citep{Bundy-15} galaxies, \citetalias{Wang-20a} found that the derived SFR$_{79}$ weakly increases with stellar surface density ($\Sigma_*$), suggesting that SF galaxies have on average a slightly negative SFR$_{79}$ radial gradient. This is not likely to be real, because we would in fact expect the opposite trend in any ``inside-out" scenario of galaxy evolution (see the discussion in Section 3.3 of \citetalias{Wang-20a}).
We can examine the dependence of SFR$_{79}$ on the average $\Sigma_*$ within the fiber for SF galaxies. This is shown in Figure \ref{ad_hoc_corr}. The SF galaxies are here selected to be within $\pm$0.3 dex of the fitted SFR$_7$-based SFMS (see Section \ref{def_sfms_sec}).
We have corrected $\Sigma_*$ for inclination by multiplying by the minor-to-major axis ratio measured in the $r$-band image.
As can be seen, we find a similar but even stronger trend than that in \citetalias{Wang-20a}. This could be due to the use of the 3-arcsec fiber spectra, meaning that we only investigate the central regions of SDSS galaxies while the MaNGA sample used in \citetalias{Wang-20a} is dominated by the \textit{outer} regions of galaxies.
As in that previous work, the physical origin of the trend in Figure \ref{ad_hoc_corr} is not completely understood. It could be due to a dependence on $\Sigma_*$ of the metallicity, of the IMF or of a broadening of the stellar absorption (see also \citetalias{Wang-20a}), or some combination of these.
Following the same approach as in \citetalias{Wang-20a}, we apply an ad-hoc correction to the values of SFR$_{79}$, in order to eliminate the dependence of SFR$_{79}$ on $\Sigma_*$.
To do this, we first fit a straight line to the median relation of log(SFR$_{79}$) vs. log($\Sigma_*$) of the SF galaxies that are located within 0.3 dex of the fitted SFR$_7$-based SFMS (Section \ref{def_sfms_sec}, shown in light gray in Figure \ref{ad_hoc_corr}). We then use this line to correct for the dependence by assuming that the median log(SFR$_{79}$) of these SFMS galaxies at a given log($\Sigma_*$) equals zero. This assumption is equivalent to the statement that there are as many objects with a recently enhanced SFR as there are objects with a recently suppressed SFR, with respect to the SFR averaged over the previous $\sim$ 800 Myr. This is a reasonable assumption for galaxies close to the ridge line of the SFMS, as we will further discuss in the context of a possible quenching signature in Section \ref{stat_crit_sec}.
Moreover, a median log(SFR$_{79}$) significantly different from 0 for these galaxies would indicate a cosmic evolution of the SFMS which is inconsistent with observations (cf. \citetalias{Wang-20a}).
This ad-hoc correction, derived from the restricted set of SF galaxies near to the ridge-line of the SFMS, is then applied to all of our sample galaxies.
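A minimal sketch of this correction, assuming plain NumPy arrays of the fiber measurements (all variable and function names are ours, chosen for illustration):

```python
import numpy as np

def ad_hoc_correction(log_sfr79, log_sigma_star, on_sfms):
    """Remove the Sigma_*-dependence of log(SFR79).

    Sketch of the correction in the text: fit a straight line to the
    median log(SFR79) vs log(Sigma_*) relation of SFMS galaxies (boolean
    mask `on_sfms`), then subtract that line from all galaxies so that
    SFMS galaxies have median log(SFR79) = 0 at every Sigma_*.
    """
    x, y = log_sigma_star[on_sfms], log_sfr79[on_sfms]
    # median relation in bins of log(Sigma_*), then a straight-line fit
    bins = np.linspace(x.min(), x.max(), 11)
    centers = 0.5 * (bins[:-1] + bins[1:])
    med = np.array([np.median(y[(x >= lo) & (x < hi)])
                    for lo, hi in zip(bins[:-1], bins[1:])])
    good = np.isfinite(med)
    slope, intercept = np.polyfit(centers[good], med[good], 1)
    return log_sfr79 - (slope * log_sigma_star + intercept)
```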
Even though this correction is quite substantial, we emphasise that it will not change any of our basic conclusions since we are mainly interested in the \textit{scatter} in log(SFR$_{79}$) or its {\it relative} values with respect to the values found in typical SFMS galaxies. The effect of this correction on the presented results will be discussed subsequently whenever it is relevant.
\subsubsection{Uncertainty in SFR$_{79}$}
\label{sfr79_unc_sec}
The uncertainty ${u_{\rm SFR_{79}, tot}}$ of SFR$_{\rm 79}$ (introduced in Section \ref{meas_sfr79_sec}) consists of two parts: 1) the observational uncertainty in the input measurements of the spectral features ${u_{\rm SFR_{79}, obs}}$ and 2) the uncertainty that is intrinsic to the calibration of the estimator, ${u_{\rm SFR_{79}, cal}}$. The latter is due to the fact that there is not a unique match between SFR$_{79}$ and the three spectral features. Galaxies with the same SFR$_{79}$ can in principle have different spectral features because a range of SFHs can have the same value of SFR$_{79}$. The corollary of this is that a range of different SFR$_{79}$ can produce the same 3-tuple of spectral features.
If our mock SFHs constitute a good representation of the range of SFHs exhibited by galaxies in the real Universe, then the ${u_{\rm SFR_{79}, tot}}$ will be a reasonable estimate of the real uncertainty. The SFH variation among the mock galaxies is likely however, by construction, to be larger than in the real Universe. This means that ${u_{\rm SFR_{79}, cal}}$ and therefore ${u_{\rm SFR_{79}, tot}}$ likely overestimate the true uncertainty of SFR$_{79}$ for any real galaxy. In practice, the real uncertainty in SFR$_{79}$ should lie somewhere between ${u_{\rm SFR_{79}, obs}}$ and ${u_{\rm SFR_{79}, tot}}$.
Figure \ref{cal_acc} shows the RMS ${u_{\rm SFR_{79}, tot}}$ for our full calibrator as a function of log(EW(H$\alpha$)) as the gray solid line. The brown and orange solid lines show the same quantity for hypothetical calibrations based on only two of the three spectral features. These curves are calculated as the scatter of $\log({\rm SFR}_{79})$ among mock galaxies within grid-cells in parameter space whose width is $\pm1\sigma$, where $\sigma$ is the median observational uncertainty in the corresponding spectral feature. The gray dashed line is obtained by computing the scatter of log(SFR$_{79}$) in very small grid cells in parameter space and therefore provides an estimate of the uncertainty intrinsic to the calibrator, ${u_{\rm SFR_{79}, cal}}$. Based on the three observed spectral features and their uncertainties, $\log({\rm SFR}_{79})$ can be determined with an uncertainty of $\sim$0.15 dex or less for SF galaxies, i.e. those with EW(H$\alpha)>7$ \AA, shown here in the blue region (see Section \ref{def_sfms_sec}). For EW(H$\alpha)<4$ \AA\ (shown in the red region) the total uncertainty in SFR$_{79}$ increases markedly, while the uncertainty intrinsic to the calibrator increases only modestly. This indicates that the uncertainty in SFR$_{79}$ just below 4 \AA\ is dominated by the observational uncertainty in H$\alpha$, which, in logarithmic space, grows rapidly relative to the measured emission towards low EW(H$\alpha$) (see also Figure \ref{ewha_dist} and the corresponding discussion below). There are, however, other caveats concerning galaxies with low H$\alpha$ emission, as we will explore in the following.
Figure \ref{ewha_dist} shows the overall distribution of EW(H$\alpha$) for all galaxies in our SDSS sample, in logarithmic space in the left panel and in linear space in the right panel. The $\sim$16\% of the initial SDSS sample that were classified as having vanishingly small H$\alpha$ emission (Section \ref{meas_spec_feat_sec}) are represented by a floating box.
The well-known bi-modality of the galaxy population is clearly seen in the histogram of log(EW(H$\alpha$)), with the minimum ``GV feature" occurring at around EW(H$\alpha$) $\sim 7$ \AA. However, it should be noted that in the linear EW(H$\alpha$) histogram there is only one peak, at EW(H$\alpha$) $\sim 1.2$ \AA. Above this peak, the number density of objects (per linear increment in EW(H$\alpha$)) monotonically decreases with increasing EW(H$\alpha$). This already raises a caution about the meaningfulness of any definition of the GV as the minimum (``valley") between the two peaks in logarithmic space \citep[e.g.][]{Salim-14, Coenda-18}.
Galaxies around the peak at EW(H$\alpha$) $\sim 1.2$ \AA\ can be treated as quenched galaxies. Their apparent H$\alpha$ emission may be due to noise in the spectrum and/or to contamination from LINERs \citep[Low-Ionisation Nuclear Emission Regions;][]{Baldwin-81, Kewley-06}.
The typical observational uncertainty in our measurement of EW(H$\alpha$) around this peak is $\sim$0.6 \AA, which is broadly consistent with the width of the peak, as illustrated by the gray line in the right panel, which represents a Gaussian with a dispersion of 0.6 \AA. This is certainly a major contributor to the increase in uncertainty of SFR$_{79}$ below 4 \AA. For EW(H$\alpha)>4$ \AA, however, the measurements of EW(H$\alpha$) are reliable in the sense that above 4 \AA, the measured emission is highly unlikely to be just noise.
We have also examined the role of the two parameters EW(H$\delta_{\rm A}$) and $D_{\rm n}4000$ in determining SFR$_{79}$ by calculating the ${u_{\rm SFR_{79}, tot}}$ in a similar way as above but based on just a 2-dimensional parameter space. As shown in Figure \ref{cal_acc}, the SFR$_{79}$ can be well determined by EW(H$\alpha$) and EW(H$\delta_{\rm A}$) for galaxies with EW(H$\alpha)\gtrsim10$ \AA, while $D_{\rm n}4000$ significantly improves the calibration of SFR$_{\rm 79}$ for galaxies with lower EW(H$\alpha$).
If we attempt to calibrate SFR$_{79}$ based on only EW(H$\alpha$) and $D_{\rm n}4000$, ${u_{\rm SFR_{79}, tot}}$ increases by the equivalent of adding $\sim$0.15 dex in quadrature for 4 \AA\ $\lesssim$ EW(H$\alpha)\lesssim250$ \AA\ and by slightly less (or more) at higher (or lower) values of EW(H$\alpha$). This means that the uncertainty of the calibration roughly doubles for typical SF galaxies if only EW(H$\alpha$) and $D_{\rm n}4000$ are used. It should be noted that, because in this case the uncertainty mainly comes from the uncertainty intrinsic to the calibrator, more accurate measurements of EW(H$\alpha$) and $D_{\rm n}4000$ will not significantly help.
Consideration of the uncertainties in SFR$_{79}$ suggests that any sample of sub-SFMS galaxies should be limited to have EW(H$\alpha) > 4$ \AA\ and we will apply this cut to the analysis in Section \ref{q_sec}.
\subsubsection{Contamination of H$\alpha$ by LINER emission}
\label{liner_expl_sec}
Even with a cut of EW(H$\alpha)>4$ \AA, one cannot fully exclude the contribution of LINER emission to the EW(H$\alpha$). \citet{Belfiore-16} found that the LINER emission tightly follows the continuum due to the underlying Old Stellar Population (OSP), so it usually contributes only weakly to the equivalent width, i.e. EW(H$\alpha)_{\rm LINERs}<3$ \AA.
To gauge the effect of LINER emission on our results, we apply an empirical correction to the H$\alpha$ emission based on the classification of our sample galaxies on the BPT diagram \citep{Baldwin-81, Kauffmann-03, Kewley-06}. We refer the reader to Appendix \ref{liner_sec} for the details of this correction. We note that this LINER correction moves 15,525 (12.7\%) of the galaxies that originally had EW(H$\alpha)>4$ \AA\ below this selection boundary.
In Section \ref{q_sec}, we will therefore always show, either in the main text or in the Appendix, two versions of our main results, with and without this LINER correction and will discuss the implications of possible differences. The results in Section \ref{sfms_sec} regarding SFMS galaxies are almost entirely unaffected by this correction because the overwhelming bulk of these objects have much stronger H$\alpha$ emission.
\subsubsection{The effect of the quenching SFH prior}
\label{prior_expl_sec}
In Appendix \ref{prior_sec}, we examine whether the estimates of SFR$_{79}$ are significantly affected by the choice of the prior in the distribution of quenching SFHs that were used in the construction of the mock spectra. We examine two different representations of the quenching process. We find that the choice of prior indeed has a substantial and systematic effect on the inferred values of SFR$_{79}$ at low EW(H$\alpha$), but that for the sample above the proposed cut of EW(H$\alpha)>4$ \AA\ the effect is negligible.
\subsubsection{The effect of additional old stellar populations on the estimation of SFR$_{79}$}
\label{osp_expl_sec}
A final question is whether the addition of a substantial OSP will perturb the estimate of SFR$_{79}$. We here consider an OSP to be one in which all the stars are much older than 1 Gyr and which therefore contribute nothing to SFR$_{9}$ (or SFR$_{7}$). A composite system, consisting of a ``normal" SF component plus a substantial additional OSP, would have a decreased sSFR$_{9}$ (and sSFR$_{7}$) but the value of SFR$_{79}$ should not, at least in principle, be affected. Such a galaxy would not normally be considered to be ``quenching" as it would have a more or less constant SFR on Gyr timescales, and in particular, it would have SFR$_{79} \sim 1$.
An important question is whether the addition of such an OSP could nevertheless spuriously bias our estimate of SFR$_{79}$.
This is investigated in Appendix \ref{osp_sec}, in which we examine how a system moves on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram when we progressively add such an OSP to an otherwise normal SFMS system.
To summarize the result, we first note that for an OSP with an age of 2, 5 or 10 Gyr, a mass roughly 2 to 3 times the integrated mass of the original SF component is required to move a composite object out of the SF population into the GV region. In the case of older OSP ages, 5 Gyr and greater, application of our standard SFR$_{79}$ estimator (correctly) returns the SFR$_{79} \sim 1$ of the star-forming component. Such a composite system would therefore {\it not} mimic a galaxy with a \textit{currently} significantly declining sSFR.
If the added OSP is younger, i.e. 2-5 Gyr, then the estimator may well return a falsely low value of log(SFR$_{79}$) $\approx 0.4$, suggesting a declining SFR. However, a galaxy that consists of a continually forming population of a certain integrated mass plus an OSP that is 2 to 3 times as massive but formed only 2 to 5 Gyr previously would represent a rather odd SFH. We believe that such galaxies will be rare in the Universe. Even if the derived SFR$_{79}$ is biased, such objects could anyway more justifiably be considered to be ``quenching" since the SFR would have dramatically declined over the last few Gyr.
\section{The SFMS Population}
\label{sfms_sec}
In this Section, we will introduce the SFR-$M_*$ diagrams on two different averaging timescales and provide a definition of the SFMS in Section \ref{def_sfms_sec}. We will then present a consistency check for our model in Section \ref{cons_check_sec} before analysing the time-variability in the SFR of SF galaxies in Section \ref{sfms_var_sec}, comparing it to results published for MaNGA galaxies in \citetalias{Wang-20a} and \citet{Wang-20b}.
\subsection{The definition of the SFMS}
\label{def_sfms_sec}
After obtaining the SFR$_7$ and SFR$_9$ of each individual galaxy, we first investigate the recent change in the SFR (i.e. the SFR$_{79}$ value) for galaxies in different regions of the SFR-${M_*}$ plane. In principle, this directly tells us about the time variation of the SFR of SFMS galaxies.
Figure \ref{sfms_7_sfms_9} shows the two SFR-stellar mass relations based on the SFR averaged over different timescales and color-coded with the median log(SFR$_{79}$).
We again emphasize that all the star formation parameters (SFR$_7$, SFR$_9$, SFR$_{79}$) and here also the stellar mass (denoted as $M_{\rm *,fib}$ in Figure \ref{sfms_7_sfms_9}) are measured within the SDSS fiber aperture and therefore apply to those regions of the galaxies rather than the whole galaxies.
On the left, we plot the entire sample while on the right, we only plot SF galaxies, defined to be objects less than 0.7 dex below the fitted SFR$_7$-based SFMS (see below). This is because the calibration of SFR$_{79}$ (and thus sSFR$_9$) becomes unreliable for low values of EW(H$\alpha$) (or sSFR$_7$) and this can have misleading effects on the color-coding on the right of Figure \ref{sfms_7_sfms_9}.
In both panels, the gray contours indicate the number density of the sample galaxies on the SFR-stellar mass plane, enclosing 10, 30, 50, 70 and 90\% of the objects respectively. In the left panel of Figure \ref{sfms_7_sfms_9}, the bimodality of the galaxy distribution is clearly seen.
This enables us to separate the main SF population from the quenched population, and to define the ridge-line of the SFMS in the following way. First, we select by eye a straight line in logarithmic space dividing the two populations. Then, we fit a straight line to the objects {\it above} that line, shift it down by 0.6 dex and use this as the new dividing line. We iterate the above procedure 20 times, which is sufficient to reach convergence.
The resulting SFMS can then be described by the relation:
\begin{equation} \label{eq:sfms7}
\log\left(\frac{\rm SFR_7}{M_{\odot} \ {\rm yr}^{-1}}\right) = 0.92\cdot \log\left(\frac{{M_{\rm *, fib}}}{M_{\odot}}\right) - 9.39,
\end{equation}
and is shown as the black solid line in the left panel of Figure \ref{sfms_7_sfms_9}. In the following, we use this relation as the definition of the ridge-line of the SFR$_7$-based SFMS (subsequently referred to as the SFMS$_7$).
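The iterative ridge-line fit described above can be sketched as follows (illustrative only; `m0` and `b0` are placeholders standing in for the initial by-eye dividing line, and the synthetic slope and intercept in any application are not those of Equation \ref{eq:sfms7}):

```python
import numpy as np

def fit_sfms_ridge(log_mstar, log_sfr7, m0, b0, offset=0.6, n_iter=20):
    """Iterative SFMS ridge-line fit (sketch of the recipe in the text).

    (m0, b0) stand in for the initial by-eye dividing line between the SF
    and quenched populations. At each step we fit a straight line to the
    objects above the current dividing line, then shift the fit down by
    `offset` dex to obtain the new dividing line. Returns the slope and
    intercept of the converged ridge line.
    """
    m, b = m0, b0
    for _ in range(n_iter):
        above = log_sfr7 > m * log_mstar + b
        ridge = np.polyfit(log_mstar[above], log_sfr7[above], 1)
        m, b = ridge[0], ridge[1] - offset
    return ridge[0], ridge[1]
```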
Based on that, we define a parameter $\Delta$log(sSFR$_7$), to quantify the vertical deviation in dex of log(SFR$_7$) (or thus also log(sSFR$_7$)) from the fitted SFMS$_7$ ridge-line at a given stellar mass. This parameter measures the enhancement or suppression of the sSFR$_7$ relative to the SFMS.
Additional lines of constant $\Delta$log(sSFR$_7$) are drawn on the left panel of Figure \ref{sfms_7_sfms_9} for orientation.
We note that the fitted SFMS$_7$ also provides a reasonable fit to the SFR$_9$-based SFMS, as shown in the right panel of Figure \ref{sfms_7_sfms_9}. Strictly speaking this is a consequence of the ad-hoc correction (see Section \ref{ad_hoc_corr_sec}) setting log(SFR$_{79}$) to be zero, but it also holds well without the correction. This reflects the fact that the mean SFH of individual galaxies on the SFMS has changed only weakly over the last Gyr (see also \citetalias{Wang-20a}).
Therefore, we also use Equation \ref{eq:sfms7} to define the SFR$_9$-based SFMS, denoted as SFMS$_9$ for short. We then define the $\Delta$log(sSFR$_9$) as the vertical deviation in dex of log(SFR$_9$) (or log(sSFR$_9$)) from the SFMS$_9$ at a given stellar mass.
The left panel of Figure \ref{sfms_7_sfms_9} shows a smooth color-gradient, indicating a positive correlation between $\Delta$log(sSFR$_7$) and log(SFR$_{79}$) that extends through both the SF and the quenched population. In contrast, the right panel shows that the median log(SFR$_{79}$) is more or less constant and $\approx$0 around the SFMS$_9$, increasing only slightly towards the lowest $\Delta$log(sSFR$_9$). As will become clear below, this slight increase is primarily caused by our selection of objects with $\Delta$log(sSFR$_7)>-$0.7 dex, which introduces a bias towards \textit{higher} log(SFR$_{79}$) at the lowest $\Delta$log(sSFR$_9$).
\subsection{Consistency check: stability of the SFMS}
\label{cons_check_sec}
Observationally, the scatter of the SFMS, i.e. the scatter in sSFR at a given stellar mass on the SFMS, varies in the literature between 0.2 dex and 0.4 dex \citep[e.g.][]{Whitaker-12, Speagle-14, Schreiber-15, Boogaard-18}, depending on the detailed definition of the sample and on how the stellar mass and SFR are measured. There is no evidence for a significant evolution with redshift of the scatter of the SFMS \citep[e.g.][]{Speagle-14}. This stability of the SFMS can be interpreted as the quasi-steady-state interplay between cold gas inflow, star formation, and outflow under a gas-regulator system \citep[e.g.][]{Bouche-10, Schaye-10, Dave-11, Lilly-13, Wang-19, Wang-20b}.
As discussed further below in Section \ref{stat_crit_sec}, the stability of the SFMS requires that the change in the SFR of individual galaxies should not depend on the position of galaxies on the SFMS$_9$ \citep{Wang-20a, Wang-20b}. In other words, the log(SFR$_{79}$) should not be correlated with the $\Delta$log(sSFR$_9$) for SFMS galaxies. Any such correlation would produce a runaway effect, either broadening or narrowing the width of the SFMS over time.
We indeed find that the $\Delta$log(sSFR$_9$) and log(SFR$_{79}$) for SF galaxies are essentially uncorrelated, as shown in the right panel of Figure \ref{sfms_7_sfms_9}.
We have checked that this result does not come from the ad-hoc correction applied in Section \ref{ad_hoc_corr_sec}. The fact that we do not see a correlation is an important consistency check that our analysis is producing reasonable results.
It is then easy to see that log(SFR$_{79}$) must be correlated with $\Delta$log(sSFR$_7$) for these same SF objects, provided there is any time-variability in their sSFR. This is because SFR$_{79}$ tells us whether an object currently has an enhanced or suppressed SFR with respect to its SFR averaged over a longer timescale (see the left panel of Figure \ref{sfms_7_sfms_9}). The significant correlation that we observe indicates that there is a significant contribution of short timescale fluctuations in the SFR to the scatter of the SFMS$_7$. We will investigate the variability in the SFR of SF galaxies in more detail in the next subsection, where we present a quantitative comparison with the results given in \citetalias{Wang-20a}.
\subsection{The Variability in SFR on the SFMS}
\label{sfms_var_sec}
In this section, we will focus on the {\it population} of SFMS galaxies. We define this SF-population to consist of all galaxies with $\Delta$log(sSFR$_7) > -0.7$ dex, i.e. objects above the red dashed line in the left panel of Figure \ref{sfms_7_sfms_9}.
The distribution of log(SFR$_{79}$) of this SF-population is quite symmetric with a median of $-$0.006, a mean of $-$0.0006 and a dispersion of 0.32 dex, which is relatively large compared to the RMS uncertainties in log(SFR$_{79}$), RMS($u_{\rm log(SFR_{79}),tot})=$ 0.21 and RMS($u_{\rm log(SFR_{79}),obs})=$ 0.17 dex (see Section \ref{sfr79_unc_sec}). If we do not apply the ad-hoc correction (Section \ref{ad_hoc_corr_sec}), we find a slightly higher median of 0.07 (mean of 0.08) and a very slightly higher dispersion of 0.33 dex. We will discuss the distribution of SFR$_{79}$ of the SF galaxies in more detail in the context of quenching below.
\citet{Wang-20b} developed a method to constrain the Power Spectrum Distribution (PSD) of the time-variations in the SFR of galaxies on the SFMS, based on the dispersions of the SFMS$_7$ and the SFMS$_9$ ($\sigma_7$ and $\sigma_9$), as well as the dispersion in log(SFR$_{79}$) ($\sigma_{79}$).
They found that $\sigma_{79}$ is closely related to the overall amplitude of the variations (i.e. the normalisation of the PSD) while the ratio $\sigma_{\rm 7} / \sigma_{\rm 9}$ indicates the relative contribution of shorter and longer timescale variations to the overall dispersion of the SFMS (i.e. the slope of the PSD).
We can now investigate the dispersion in $\Delta$log(sSFR$_7$), $\Delta$log(sSFR$_9$) and log(SFR$_{79}$) with a much larger galaxy sample compared to that in \citet{Wang-20b}. We separate the SF galaxies into four bins of {\it total} stellar mass according to Figure \ref{mz_sel} as well as into four bins of stellar mass surface density {\it in the fiber}, $\Sigma_*$. The latter are defined by ${\rm log}(\Sigma_*/M_{\odot}\,{\rm kpc^{-2}})$ in the intervals $[7.5, 8]$, $[8, 8.5]$, $[8.5, 9]$ and $[9, 9.5]$. We show the three measured $\sigma$'s and the ratio $\sigma_{\rm 7} / \sigma_{\rm 9}$ as a function of stellar mass on the left and as a function of log($\Sigma_*$) on the right of Figure \ref{sigmas}. We note that these dispersions are completely unaffected by the ad-hoc correction (Section \ref{ad_hoc_corr_sec}).
The shaded regions in Figure \ref{sigmas} correspond to the upper and lower limits of the intrinsic dispersion of the galaxy population respectively. The lower limits are obtained by subtracting the typical uncertainties of the corresponding quantities in quadrature. For $\sigma_{\rm 9}$ and $\sigma_{\rm 79}$, we use the maximum estimate of the uncertainty in SFR$_{79}$, $u_{\rm SFR_{79},tot}$, as discussed in Section \ref{sfr79_unc_sec}. When inferring the uncertainty in sSFR$_9$ we do take into account that the uncertainties in SFR$_{79}$ and sSFR$_7$ are \textit{correlated} (see Figure \ref{err_ell} below). The upper limits on the intrinsic dispersions are obtained by ignoring any uncertainty, i.e. they are the raw measured values from the data.
For comparison, we show the values of these three dispersions taken from \citetalias{Wang-20a} as horizontal dashed lines in the left panel of Figure \ref{sigmas}. We note that their quantities (i.e. SFR$_7$, SFR$_9$ and SFR$_{79}$) were measured within the effective radius of each galaxy. Further, the dispersions were computed for their entire SF sample without any binning and ignoring any noise contribution (which was however probably negligible in their case).
Overall, the dispersions found here, with a much larger and completely independent sample, are very consistent with those previously found in \citetalias{Wang-20a}, particularly if we account for noise (i.e. use the lower limits).
Values of the $\sigma_7 / \sigma_9$ ratio between 1 and 1.6 indicate a significant contribution of short timescale SFR variations to the overall dispersion of the SFMS. While all the $\sigma$'s (and thus $\sigma_7 / \sigma_9$) show almost no trend with $M_*$, they do slightly increase with $\Sigma_*$ in the higher two bins, again consistent with findings in \citetalias{Wang-20a}, where this correlation was interpreted as the response of a gas regulator system to a time-varying inflow of gas. Assuming that $\Sigma_*$ traces the Star Formation Efficiency (SFE; following e.g. \citealt{Shi-11}), a higher $\Sigma_*$ will lead to a faster response to a given variation in the inflow and to more dispersion in the measured values of SFR$_{79}$. The trend found here is weaker than the trend found in \citetalias{Wang-20a}. This may follow from the fact that they were looking at a spatially-resolved sample, taken from MaNGA \citep{Bundy-15}, enabling them to investigate \textit{annuli} of individual galaxies, while we are bound to the 3-arcsec fibers of the SDSS, which always sample the \textit{inner} regions of galaxies. The fraction of a given galaxy which is covered by the fiber then depends both on its physical size and its distance from the observer. Therefore, we are dealing with a smaller overall range of $\Sigma_*$ as compared to \citetalias{Wang-20a} and it is hard to disentangle the different effects that determine the $\Sigma_*$ of a given object. For these reasons, we do not further investigate this trend and only point out the qualitative consistency between the results.
The lack of correlation between $\Delta$log(sSFR$_9$) and log(SFR$_{79}$) (see Section \ref{cons_check_sec}) further suggests that the dispersion of the SFMS$_7$ is a combination of the dispersion present on longer timescales (i.e. $\sigma_9$) plus the additional effect of (uncorrelated) short timescale fluctuations, as characterised by $\sigma_{79}$. We might therefore expect $\sigma_{7}^2 \approx \sigma_{9}^2 + \sigma_{79}^2$ and indeed this relation holds to within about 5\% in all the $M_*$- and $\Sigma_*$ bins if we adopt our maximum noise estimates (i.e. assume the lower limits of the intrinsic dispersion respectively).
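This expectation can be made explicit. Since ${\rm sSFR}_7 = {\rm sSFR}_9 \times {\rm SFR}_{79}$ by definition, and the same ridge line defines both the SFMS$_7$ and the SFMS$_9$, we have $\Delta\log({\rm sSFR}_7) = \Delta\log({\rm sSFR}_9) + \log({\rm SFR}_{79})$, and hence
\begin{equation}
\sigma_7^2 = \sigma_9^2 + \sigma_{79}^2 + 2\,{\rm Cov}\big[\Delta\log({\rm sSFR}_9),\,\log({\rm SFR}_{79})\big] \approx \sigma_9^2 + \sigma_{79}^2,
\end{equation}
where the covariance term vanishes because the two quantities are uncorrelated for SF galaxies (Section \ref{cons_check_sec}).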
We have also examined the SFR time-variability for centrals and satellites separately. We find that overall the differences between the two populations are very small. In the lower two stellar mass bins (i.e. for $M_*<10^{10.5}M_{\odot}$), satellite galaxies appear to have slightly larger $\sigma_{\rm 7}$, $\sigma_{\rm 9}$ and $\sigma_{\rm 79}$ than centrals.
In addition, the dependence of the $\sigma$'s on $\Sigma_*$ appears to be weaker for satellites than for centrals. This may be interpreted as being due to satellite-specific processes, perhaps associated with satellite quenching. We will not discuss this in more detail, since it is not the main focus of this work.
\section{Searching for signatures of ongoing quenching}
\label{q_sec}
The goal of this Section is to search for spectroscopic signatures of ongoing quenching in galaxies. The SFMS population considered in Section \ref{sfms_sec} will be overwhelmingly dominated by galaxies that are {\it not} quenching. This therefore necessitates extension of the study into the region below the SFMS, although it will be clear that the analysis must still consider all galaxies.
As discussed above in Section \ref{meas_sfr_params_sec}, a number of considerations force us to limit the analysis to galaxies with EW(H$\alpha) > 4$ \AA. We therefore adopt this cut, resulting in a sample of 122,092 galaxies.
\subsection{The log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram}
\label{diagram_sec}
We now introduce the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram as a key diagnostic of the change of SFR in galaxies. This is shown for all our sample galaxies with EW(H$\alpha)>4$ \AA\ in Figure \ref{sf79_dsf9_all}. The magenta contours enclose 10, 30, 50, 70 and 90\%, respectively, of the galaxies plotted.
A locus of constant $\Delta$log(sSFR$_7)$ is a diagonal line in Figure \ref{sf79_dsf9_all}. The region corresponding to $\Delta$log(sSFR$_7)<-$0.9 is shaded in red. This is closely (but not exactly) equivalent to the EW(H$\alpha)<4$ \AA\ cut, according to a linear fit between these two quantities. The blue dashed line indicates the adopted lower limit of $\Delta$log(sSFR$_7)=-0.7$ for the SFMS population examined in Section \ref{def_sfms_sec}. The diagonal strip between this line and the red shaded region may therefore be considered to be a ``Green Valley" (GV) sample.
It should be noted that the median log(SFR$_{79}$) and $\Delta$log(sSFR$_9$) of the ridgeline SFMS galaxies are both, largely by construction, zero (see Section \ref{ad_hoc_corr_sec}).
Note that the large population of already-quenched passive galaxies is expected to lie below the SF population in both $\Delta$log(sSFR$_7)$ and $\Delta$log(sSFR$_9)$. It is not clear what the SFR$_{79}$ of such galaxies should be, and, as discussed in Section \ref{meas_sfr_params_sec}, we anyway cannot reasonably constrain the SFR properties of this population with our methodology. It is excluded here, but we do show the full original sample in Appendix \ref{prior_sec}, Figure \ref{sf79_dsf9_prior}, in the context of our discussion of the effect of the assumed prior distribution of SFHs in the calibration of SFR$_{79}$.
It is instructive to consider the possible tracks of a ``quenching" galaxy on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram. For simplicity we assume that, prior to the onset of quenching, which occurs at time $\tau_{\rm S}$, the galaxy has had a more or less flat SFH \citep[e.g.][]{Peng-10} and thus resided in the middle of the SFMS cloud, i.e. at log(SFR$_{79})=\Delta$log(sSFR$_9)=0$.
Once it starts quenching at $\tau = \tau_{\rm S}$, the SFR of this galaxy subsequently declines exponentially with an e-folding timescale of $\tau_{\rm Q}$ (see Equation \ref{quenching_model}). It is then straightforward to compute the track of this galaxy on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram. Four quenching tracks with different quenching timescales ($\tau_{\rm Q}= $ 0.1, 0.3, 1.0 and 3.0 Gyr respectively) are shown as blue dotted lines. The quenching galaxies move to lower sSFR$_9$ and lower SFR$_{79}$. If the decline in SFR is a pure exponential then, as shown, the tracks become vertical after 1 Gyr, i.e. as soon as memory of the pre-quenching state has been lost.
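For illustration, the shape of these tracks can be derived analytically if we approximate SFR$_7$ by the instantaneous SFR and SFR$_9$ by a boxcar average over a window $T_9$ (with $T_9 \approx 0.8$ Gyr as an illustrative value; the actual response of the spectral indicators is more complex). For an exponential decline ${\rm SFR}(t) = {\rm SFR}_0\,e^{-t/\tau_{\rm Q}}$ with $t = \tau - \tau_{\rm S} > 0$, once $t \geq T_9$ we have
\begin{equation}
{\rm SFR_9}(t)=\frac{1}{T_9}\int_{t-T_9}^{t}{\rm SFR}(t')\,{\rm d}t'={\rm SFR}_0\,\frac{\tau_{\rm Q}}{T_9}\left(e^{T_9/\tau_{\rm Q}}-1\right)e^{-t/\tau_{\rm Q}},
\end{equation}
and therefore
\begin{equation}
{\rm SFR_{79}}=\frac{{\rm SFR_7}}{{\rm SFR_9}}=\frac{T_9/\tau_{\rm Q}}{e^{T_9/\tau_{\rm Q}}-1},
\end{equation}
which is independent of $t$. This is why the tracks become vertical once the memory of the pre-quenching state has been lost.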
Galaxies leaving the SFMS via quenching will therefore do so via the lower left quadrant relative to their starting point (assumed here to be the peak of the SFMS population). It can be seen that such galaxies will therefore cross our ``GV strip" at a location in log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) that depends on their quenching timescale.
It is evident in Figure \ref{sf79_dsf9_all} that there are indeed galaxies found in the expected location, i.e. in the GV strip just below the SFMS population and with negative values of log(SFR$_{79}$). This is indicated by the distortion of the magenta contours towards the lower left. We may already note that the bulk of the objects lie between the tracks corresponding to $\tau_{\rm Q}$ of 300 Myr and 1 Gyr respectively. Clearly, however, the tracks and the corresponding timescales will depend somewhat on the assumed starting point of the quenching tracks which may not be the midpoint of the SF population in the real Universe, at least not for all galaxies. This is illustrated in Appendix \ref{conv_func_sec} where we also show how we derived an analytic relation to compute the quenching tracks.
Furthermore, while the distribution of objects on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram is at least consistent with the existence of a subset of galaxies quenching along such tracks, the identification of these objects as ``quenching" is by no means trivial, as we will discuss below. Not least, as discussed previously and further below (see Section \ref{stat_crit_sec}), the stability of the SFMS sets a constraint, via a stationarity criterion, on the distribution of SFR$_{79}$ of SFMS galaxies at a given sSFR$_9$.
One could imagine that the true two-dimensional distribution of genuine SFMS objects (i.e. of a stable SFMS population without any quenching) in Figure \ref{sf79_dsf9_all} could be symmetrical in both log(SFR$_{79}$) and sSFR$_9$. It is then clear that if we slice off a ``GV population" (defined from sSFR$_7$) lying along a diagonal strip of the diagram, some of the objects with low SFR$_{79}$ could simply be the SFMS counterparts of other SFMS galaxies with the same sSFR$_{9}$ but higher SFR$_{79}$.
This makes clear that the overall distribution of SFR$_{79}$ of galaxies in the entire diagram, as well as how this may be modified by observational uncertainties, must be considered before any conclusions about the presence of a quenching population of galaxies can be drawn. An important conclusion is that it may be very misleading to look only at galaxies in the (SFR$_7$-defined) GV. We address this issue in the following sections of the paper.
\subsubsection{Effects of observational noise on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram}
\label{noise_sec}
To better understand the effect of noise, both arising from the observational measurements and from the uncertainties inherent in the SFR$_{79}$ calibrator (see Section \ref{sfr79_unc_sec}) on the distribution of objects in the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram, we show in Figure \ref{err_ell} representative error ellipses across this diagram. These are shown for illustration for galaxies in the mass bin 10.5 $<$ log$(M_*/M_{\odot})<11$.
As discussed earlier, the uncertainty in SFR$_{79}$ mainly comes from the observational uncertainty in EW(H$\delta_{\rm A}$). Since sSFR$_9$ is derived from SFR$_{79}$, the uncertainty in $\Delta$log(sSFR$_9$) is highly correlated with the uncertainty in log(SFR$_{79}$), as reflected by the diagonal orientation of the error ellipses. As an aside, this indicates that our sample selections in $\Delta$log(sSFR$_7$) and EW(H$\alpha$) should be relatively insensitive to noise.
At fixed $\Delta$log(sSFR$_7$) (i.e. along a diagonal locus in the diagram), there is therefore a substantial contribution of noise to the distribution of log(SFR$_{79}$) (and thus also $\Delta$log(sSFR$_9$)). In particular, at the lower boundary of the sample plotted in the Figure (i.e. around $\Delta$log(sSFR$_7)=-0.9$), the width of the distribution of log(SFR$_{79}$) is likely to be dominated by observational noise.
Furthermore, the orientation of the error ellipses tells us that noise will tend to disperse the main peak of SFMS objects in a diagonal direction, scattering them towards the upper left and lower right. One important consequence is that noise will amplify the lower-right extension of the SFMS cloud, which balances the lower-left extension that lies in the same location as any quenching galaxies.
This makes clear that in such a diagram, the galaxies in the GV strip cannot be considered in isolation, but only in the context and modelling of the entire SFMS population above it and it is to this that we now turn.
\subsubsection{Histograms of log(SFR$_{79}$)}
\label{hist_sec}
As another illustration of the data we show the distribution of log(SFR$_{79}$) in four bins of $M_*$ (see also Figure \ref{mz_sel}) as well as in 8 bins of $\Delta$log(sSFR$_9$) indicated on the left of each row of panels in Figure \ref{sf79_hists}. For better visual comparison, all the histograms are normalised such that the area covered by the histogram bars equals 1. The number of objects in each bin is indicated in the top right of each panel respectively. The vertical red dashed line in each panel shows the respective value of log(SFR$_{79}$) below which objects are excluded due to the cut in EW(H$\alpha$), translated to $\Delta$log(sSFR$_7)>-0.9$ (equivalent to the red shaded region in Figure \ref{sf79_dsf9_all}). This selection effect becomes more significant towards the left and towards the bottom of the panels in Figure \ref{sf79_hists}, i.e. towards higher mass and lower $\Delta$log(sSFR$_9$).
It is apparent that with decreasing $\Delta$log(sSFR$_9)$, the distribution of log(SFR$_{79}$) tends to be more and more skewed or biased towards the left, i.e. towards lower (negative) values. This effect is clearly stronger at higher $M_*$. It is very obvious in the highest mass bin, while it is hardly seen in the lowest mass bin. Such a dependence on mass is expected if this effect is due to quenching (\citealt{Peng-10}; see also Section \ref{peng_sec}). At the same time, the selection effect cuts off more and more of the left part of the distribution and the number of objects in the rightmost tail of the distribution at low sSFR$_9$ increases due to noise, as discussed in the previous Section.
These histograms are just another illustration of the distribution shown in Figure \ref{sf79_dsf9_all}, split in different mass bins, and again, they indicate a picture which is consistent with the expectation of ongoing quenching, but hidden and blurred by the selection effect and by the effects of noise in the data.
\subsection{The expected signature of a quenching population}
\label{qsig_sec}
We here try to derive quantitative expectations of quenching signals. We will start in Section \ref{stat_crit_sec} by deriving and discussing the general stationarity criterion for the SFMS population and then show how the addition of a subset of quenching galaxies will perturb this, using the analytic scheme outlined in \citet{Peng-10} and \citet{Peng-12} to estimate the current quenching rate of galaxies and thus the expected number of quenching galaxies as a function of mass. We then examine the effect of the H$\alpha$ selection and of observational scatter before comparing this expectation to the data.
\subsubsection{The stationarity criterion for a stable SFMS population}
\label{stat_crit_sec}
As noted above it is impossible to specify for any individual galaxy that has a negative log(SFR$_{79}$), whether it is actually quenching out of the SFMS on a ``one-way" trip, or whether its (s)SFR is only \textit{currently} suppressed and will increase again in the future, i.e. whether it is part of a stable SFMS population of individually varying galaxies that satisfies the stationarity criterion of the SFMS.
It is clear from Figure \ref{sf79_dsf9_all} that the low values of SFR$_{79}$ seen in the GV strip are not more extreme than those exhibited by galaxies on the ridge-line of the SFMS at $\Delta$log(sSFR$_9) \sim 0$. In a formal sense it is evidently therefore not possible (at least with the current methodology) to separate quenching galaxies from SFMS galaxies only on the basis of their individual past SFH. Such an analysis must be statistical.
Not least, the cut at $\Delta$log(sSFR$_7)=-0.7$ used to define the SFMS population in Section \ref{def_sfms_sec} is somewhat arbitrary and does not mean that there are no SFMS objects whatsoever at $\Delta$log(sSFR$_7$)$<-0.7$, i.e. in our GV strip. Given the relatively small number density of galaxies right below the SF population (see Figure \ref{sf79_dsf9_all}), even a small tail of SFMS galaxies at $\Delta$log(sSFR$_7)<-0.7$ could dominate the galaxy population in this part of the diagram (see the discussion in Section \ref{tauq_sec}).
We therefore need to examine whether the overall distribution of galaxies on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram is consistent with a SFMS population alone or whether there is evidence for an additional quenching population. To do so, we must first establish the expected signature of ongoing quenching, and then search for it in the data.
We begin by examining the stationarity criterion for a stable SFMS population. Stationarity means that the shape of the SFMS$_9$ distribution should be constant over time, i.e. ${\rm d}N/{\rm d}t = 0$, where $N$ represents the number distribution of galaxies in log(sSFR$_9$). Given that the value of SFR$_{79}$ tells us how fast a galaxy is changing its sSFR$_9$, it is straightforward to prove that this implies, at any and all sSFR$_9$, that the distribution of SFR$_{79}$ must satisfy the following condition:
\begin{equation}
\label{stationarity}
{\rm mean}({\rm SFR_{79}})-1 = 0.
\end{equation}
This is equivalent to saying that the net \textit{flux} of galaxies through a given value of sSFR$_9$ must vanish for any stable population, since any net flux would imply that the distribution of sSFR$_9$ of SFMS galaxies would be changing with time.
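A sketch of the argument: approximate SFR$_9$ as a boxcar average over a window $T_9$, and approximate the star formation leaving the trailing edge of that window by the current SFR$_9$ itself (a simplification). Then
\begin{equation}
\frac{{\rm d}\,{\rm SFR_9}}{{\rm d}t}=\frac{{\rm SFR}(t)-{\rm SFR}(t-T_9)}{T_9}\approx\frac{{\rm SFR_7}-{\rm SFR_9}}{T_9}=\frac{{\rm SFR_9}}{T_9}\left({\rm SFR_{79}}-1\right),
\end{equation}
so the drift rate of an individual galaxy in sSFR$_9$ is proportional to $({\rm SFR_{79}}-1)$, and a vanishing net flux of galaxies through every value of sSFR$_9$ requires ${\rm mean(SFR_{79})}=1$.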
\subsubsection{Expected effect of a quenching population on $\Delta$mean(log(SFR$_{79}$))}
\label{peng_sec}
As a corollary of the stationarity criterion, the systematic decrease in SFR for a set of galaxies that are quenching implies a mean(SFR$_{79})<1$ for those galaxies and a net flux towards lower sSFR$_{9}$.
This downward flux of quenching galaxies should be essentially independent of $\Delta$log(sSFR$_{9}$) (in the interval between the initial and final levels of $\Delta$log(sSFR$_{9}$)) and also independent of the quenching timescale $\tau_{\rm Q}$. It should be given only by the quenching rate of galaxies (i.e. the probability that a given SFMS galaxy quenches per unit time).
The perturbation of the mean(SFR$_{79}$) of the entire population will however increase towards lower sSFR$_{9}$, because the number of SFMS galaxies decreases relative to the quenching population.
The signature of any quenching sub-population in the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram will therefore be a progressive shift of the mean SFR$_{79}$ below unity as we go down in sSFR$_9$ towards the lower end of the SFMS.
In practice, this signature of quenching will inevitably be countered by two large observational effects: (i) the applied (and required) selection of objects with EW(H$\alpha)>4$ \AA\ introduces a strong bias towards \textit{higher} SFR$_{79}$ by progressively removing all objects with the lowest SFR$_{79}$ at low sSFR$_9$, as easily seen from the red shaded region in Figure \ref{sf79_dsf9_all} and (ii) the effect of observational noise (both observational in the spectroscopic measurements and from the calibration of SFR$_{79}$). This observational noise has two distinct effects that are both important here.
First, the error ellipses in Figure \ref{err_ell} are diagonal, i.e. lie \textit{parallel} to lines of constant $\Delta$log(sSFR$_7$). Below the peak of the SFMS (i.e. at $\Delta$log(sSFR$_9) < 0$) this will always bias the mean SFR$_{79}$ to higher values, since at a given $\Delta$log(sSFR$_9$) more objects have been scattered from above to higher SFR$_{79}$ than have been scattered from below to lower SFR$_{79}$. The converse is true above the peak.
Second, the observational noise is roughly symmetric in log(SFR$_{79}$). Noise of this form will always bias the mean (linear) SFR$_{79}$ towards higher values. In fact, this logarithmic noise dominates the distribution of log(SFR$_{79}$) at low sSFR$_7$.
This means that the best estimate of the underlying mean(SFR$_{79})$ is actually the anti-log of the mean(log(SFR$_{79}))$. We should therefore look for deviations from mean(log(SFR$_{79}$))$=0$.
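The size of this bias is easy to illustrate numerically (a sketch with an assumed, illustrative scatter of 0.3 dex around a true SFR$_{79}$ of unity):

```python
import numpy as np

rng = np.random.default_rng(2)

# Symmetric Gaussian noise in log(SFR_79) around a true value of 1
# (i.e. log = 0); sigma is an illustrative scatter in dex.
sigma = 0.3
log_sfr79 = rng.normal(0.0, sigma, 500_000)

linear_mean = (10.0 ** log_sfr79).mean()        # biased high (lognormal mean)
antilog_of_mean_log = 10.0 ** log_sfr79.mean()  # ~unbiased estimate of 1

print(linear_mean, antilog_of_mean_log)
```

For a lognormal, the linear mean exceeds the anti-log of the mean log by a factor $\exp[(\sigma \ln 10)^2/2]$, about 1.27 for 0.3 dex of scatter, which is why deviations are better sought in mean(log(SFR$_{79}$)).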
As an aside, it may be noted that the ad hoc correction to the \textit{values} of SFR$_{79}$ in Section \ref{ad_hoc_corr_sec} effectively normalizes the values of log(SFR$_{79}$) to the values found $\pm$0.3 dex around the fitted ridge-line of the SFMS$_7$. It thus effectively \textit{forces} the stationarity criterion to hold for galaxies within $\pm0.3$ dex of the SFMS$_7$ ridge-line. This is a reasonable requirement for the current purposes.
In fact, we will define below an empirical \textit{observed} midpoint of the SFMS population in each mass bin and compute a $\Delta $mean(log(SFR$_{79}$)), relative to the objects just above the SFMS$_9$ ridge-line, as a function of $\Delta$log(sSFR$_{9}$). The following results should therefore be largely independent of the ad-hoc correction.
We may proceed to estimate the expected size of the quenching signal as follows.
\citet{Peng-10} and \citet{Peng-12} developed an analytic framework for quenching based on a continuity approach to galaxy evolution. This yields the required quenching rates, i.e. the probability that a given SF galaxy quenches per unit time, or equivalently the number of galaxies that quench in unit time if we multiply by the number of star-forming galaxies. It is convenient to follow the distinction between so-called ``mass-quenching" and ``environment-" or ``satellite-quenching" introduced by \citet{Peng-10} and \citet{Peng-12}.
The strongly mass-dependent mass-quenching rate $\eta_m$ per galaxy follows directly from the negligible cosmic evolution that is seen in the value of the Schechter parameter ${\rm M}^*$ at $z \lesssim 2$ \citep{Ilbert-10} and is given \citep{Peng-10} by
\begin{equation}
\label{Peng mass}
\eta_m(m)= {\rm SFR}/{\rm M}^* = {\rm sSFR_{MS}}({\rm M}^*) \times (m/{\rm M}^*)^{1+\beta},
\end{equation}
where sSFR$_{\rm MS}({\rm M}^*)$ is the sSFR of the SFMS at the Schechter mass ${\rm M}^*$ (taken from \citealt{Peng-10}) and $\beta=-0.08$ is the logarithmic slope of the SFMS in terms of sSFR (see Section \ref{def_sfms_sec}). We here adopt ${\rm M}^*=10^{11}M_{\odot}$.
To quantify the rate of (mass-independent) satellite quenching, we may assume for simplicity that the satellite fraction of galaxies is constant with time and that the satellite quenching efficiency $\epsilon_{\rm sat}$, is also constant, and that it is independent of satellite mass, $\epsilon_{\rm sat}\sim0.5$ \citep{Peng-12}.
A straightforward calculation then leads to the required rate of (mass-independent) environmental quenching, $\eta_{\rho}$, averaged across the overall SFMS population of centrals and satellites:
\begin{equation}
\label{Peng env}
\eta_{\rho}=-(1+\alpha_s+\beta)\,\epsilon_{\rm sat} \times {\rm sSFR_{MS}}({\rm M}^*),
\end{equation}
where $\alpha_s=-1.3$ is the power-law slope of the Schechter mass function of star-forming galaxies. The total quenching rate of the galaxy population is then given by the sum of rates of the two quenching channels
\begin{equation}
\label{Peng tot}
\eta_{\rm tot}(m)=\eta_m(m)+\eta_{\rho}.
\end{equation}
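As a numerical illustration of Equations \ref{Peng mass}--\ref{Peng tot} (a sketch in which ${\rm sSFR_{MS}(M^*)}$ is set to an assumed round value of $0.1\,{\rm Gyr^{-1}}$ rather than the value actually adopted from the literature):

```python
# Illustrative evaluation of the quenching rates; sSFR_MS(M*) is an
# assumed round number, not the value adopted in the paper.
ssfr_ms_mstar = 0.1   # sSFR of the SFMS at M* [Gyr^-1] (assumed)
beta = -0.08          # logarithmic slope of the SFMS in terms of sSFR
alpha_s = -1.3        # Schechter slope of the SF mass function
eps_sat = 0.5         # satellite quenching efficiency
log_mstar = 11.0      # Schechter mass M* [log Msun]

def eta_mass(log_m):
    """Mass-quenching rate per galaxy [Gyr^-1] (Equation 'Peng mass')."""
    return ssfr_ms_mstar * 10.0 ** ((log_m - log_mstar) * (1.0 + beta))

# Environmental quenching rate averaged over the SF population
# (Equation 'Peng env'); note that -(1 + alpha_s + beta) > 0 here.
eta_env = -(1.0 + alpha_s + beta) * eps_sat * ssfr_ms_mstar

for log_m in (9.75, 10.25, 10.75, 11.25):
    print(f"log m = {log_m}: eta_m = {eta_mass(log_m):.4f} Gyr^-1, "
          f"eta_tot = {eta_mass(log_m) + eta_env:.4f} Gyr^-1")
```

The total rate rises steeply with mass through the $\eta_m$ term, while $\eta_{\rho}$ adds a mass-independent floor.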
With these expected quenching rates, it is then possible to calculate the predicted perturbation of the SFMS stationarity criterion due to the presence of a population of quenching galaxies.
Specifically, we model the SF population of galaxies as a 2D-Gaussian distribution centered at (0,0) on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram. In each mass-bin considered below, we infer the dispersion of that Gaussian distribution from the SF population defined in Section \ref{def_sfms_sec}, with the estimated observational noise subtracted in quadrature. Further, we use the mass-distribution (within each bin) of the sample galaxies to infer the overall mass quenching rate, and also the satellite fraction (from the \citet{Yang-07} group catalog) so as to compute the satellite quenching rate according to Equation \ref{Peng tot}. We multiply the number of mock SF galaxies by a factor of 100 with respect to the number of sample galaxies to largely eliminate stochastic variations in the results.
Using Equation \ref{Peng tot}, we can then calculate the expected number of quenching galaxies. We assume that the quenching rate is constant in time over the timescales of interest and, as previously, that the corresponding galaxies are quenching with an exponential timescale $\tau_{\rm Q}$ and, for illustrative purposes, that they start quenching from the midpoint of the SFMS. This then gives us the distribution of both SF and quenching galaxies on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram. We can then optionally apply a selection to mimic the cut at EW(H$\alpha$) of 4 \AA\ that was applied to the real data, and can also convolve the distribution on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) plane with typical observational noise so as to compare more directly to the real data.
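The mock-population procedure can be sketched as follows. This is a minimal, self-contained version with assumed dispersions, quenching rate, noise level and averaging window, not the actual pipeline: quenchers start exactly from the SFMS midpoint, their tracks use the analytic boxcar approximation for an exponentially declining SFH, and the observational scatter is applied to log(SFR$_{79}$) only, rather than as the full diagonal error ellipses.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Illustrative setup (all numbers are assumptions, not fitted values) ---
n_sf = 100_000                   # stationary SFMS galaxies
sigma_9, sigma_79 = 0.35, 0.25   # intrinsic dispersions [dex]
eta = 0.05                       # quenching rate per SF galaxy [Gyr^-1]
tau_q = 0.5                      # e-folding quenching timescale [Gyr]
T9 = 0.8                         # long averaging window [Gyr]
t_max = 3.0                      # span of sampled quenching onsets [Gyr]
noise = 0.25                     # scatter added to log(SFR_79) [dex]

# Stationary SF cloud: independent Gaussians centred on (0, 0).
sf_d9 = rng.normal(0, sigma_9, n_sf)
sf_l79 = rng.normal(0, sigma_79, n_sf)

# Quenching population: constant rate -> onset times uniform in [0, t_max].
n_q = rng.poisson(eta * n_sf * t_max)
t = rng.uniform(0, t_max, n_q)

# Exponential decline: analytic boxcar averages of the SFH.
l7 = -t / tau_q / np.log(10)                 # log(sSFR_7 / initial sSFR)
s9 = np.where(t < T9,
              ((T9 - t) + tau_q * (1 - np.exp(-t / tau_q))) / T9,
              tau_q / T9 * (np.exp(T9 / tau_q) - 1) * np.exp(-t / tau_q))
q_d9 = np.log10(s9)
q_l79 = l7 - q_d9

d9 = np.concatenate([sf_d9, q_d9])
l79 = np.concatenate([sf_l79, q_l79]) + rng.normal(0, noise, n_sf + n_q)

# Selection mimicking EW(Halpha) > 4 A: Delta log(sSFR_7) > -0.9.
keep = (d9 + l79) > -0.9
d9, l79 = d9[keep], l79[keep]

# Delta mean(log SFR_79) relative to the 0 < Dlog(sSFR_9) < 0.2 bin.
ref = l79[(d9 > 0) & (d9 < 0.2)].mean()
for lo in (-0.8, -0.6, -0.4, -0.2, 0.0):
    sel = (d9 > lo) & (d9 < lo + 0.2)
    print(f"{lo:+.1f}: dmean(log SFR79) = {l79[sel].mean() - ref:+.3f}")
```

Even this crude sketch reproduces the qualitative behaviour discussed below: with the sSFR$_7$ cut and log-symmetric noise applied, $\Delta$mean(log(SFR$_{79}$)) is pushed upwards at low sSFR$_9$ despite the presence of quenchers.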
The results of this exercise are shown in Figure \ref{delta_mean_log_sf79_peng}. As noted above, we measure the mean(log(SFR$_{79}$)) in each bin relative to the mean found in the bin $0<\Delta$log(sSFR$_9)<0.2$, for which the mean log(SFR$_{79}$) should be close to 0, since we do not expect a significant fraction of quenching galaxies above the SFMS$_9$. Moreover, we are primarily interested in the differential trend of this normalized $\Delta$mean(log(SFR$_{79}$)) with $\Delta$log(sSFR$_9$), rather than the absolute values.
In Figure \ref{delta_mean_log_sf79_peng} we show in the left panels this $\Delta$mean(log(SFR$_{79}$)) as a function of $\Delta$log(sSFR$_9$) for different $\tau_{\rm Q}$ in a representative mass-bin ($10.5<$ log($M_*/M_{\odot}) < 11$), and in the right panels for a range of masses (the four mass-bins used before) and thus quenching rates for a representative $\tau_{\rm Q}$ of 500 Myr. In the top panels, we show the expected effect for the idealized case without any selection effects and without noise, in the middle panels we remove all objects with $\Delta$log(sSFR$_9)<-0.9$ which is roughly equivalent to EW(H$\alpha)<4$ \AA\ and in the bottom panels we also add typical observational noise to the star formation parameters.
If there is no noise (i.e. in the top and middle panels) there is by construction zero $\Delta$mean(log(SFR$_{79}$)) at $\Delta$log(sSFR$_9)>0$ (since there are no quenching galaxies above the SFMS$_9$), but as we progress to lower sSFR$_9$ there is the expected perturbation towards negative log(SFR$_{79}$) for most mass-bins and quenching timescales. In the top left panel, it can be seen that the different curves are largely independent of $\tau_{\rm Q}$ just below $\Delta$log(sSFR$_9)=0$ (except for the very rapidly quenching red curve), but then progressively diverge towards lower sSFR$_9$. The first effect is because our $\Delta$mean(log(SFR$_{79}$)) is effectively measuring the {\it flux} of quenching objects: galaxies that are quenching faster (i.e. with shorter $\tau_{\rm Q}$) will have a more extreme SFR$_{79}$ but there will be fewer of them in an interval of $\Delta$log(sSFR$_9$) because they are changing their sSFR$_9$ faster. The net effect is therefore independent of $\tau_{\rm Q}$. This is however only true as long as the underlying stationary SF population is dominant in terms of numbers. As we move towards lower sSFR$_9$, the relative number of quenching galaxies increases and ultimately the $\Delta$mean(log(SFR$_{79}$)) converges to a value that is uniquely determined by the corresponding $\tau_{\rm Q}$. This one-to-one match between SFR$_{79}$ and $\tau_{\rm Q}$ at sufficiently low sSFR$_9$ can already be inferred from the quenching tracks in Figure \ref{sf79_dsf9_all}, which become vertical sufficiently far below the SFMS$_9$.
The difference between different mass-bins in the top right panel is a simple consequence of the increased mass quenching rate at higher masses, which dominates the total quenching rate at the masses considered here (Equation \ref{Peng tot}).
When the cut in sSFR$_7$ is introduced (the middle panels), the quenching signature is weakened substantially as it is overwhelmed by the bias towards higher SFR$_{79}$ with decreasing sSFR$_9$, as can be easily seen in e.g. Figure \ref{sf79_dsf9_all}. Furthermore, this introduces differentiation between different $\tau_{\rm Q}$, essentially inverting the previous dependence of the signature on $\tau_{\rm Q}$. At a given low sSFR$_9$, galaxies that are quenching on a shorter timescale will have a lower sSFR$_7$ and are therefore more quickly affected by the cut in sSFR$_7$. Therefore, for the shortest $\tau_{\rm Q}$ considered (100 Myr), there is hardly any quenching signature left after applying the cut at $\Delta$log(sSFR$_7)=-0.9$, while for the longest timescale (1.3 Gyr), we would still expect $\Delta$mean(log(SFR$_{79}))\approx-0.1$ at $\Delta$log(sSFR$_9)\approx$-0.7. For $\tau_{\rm Q} = 500$ Myr, a signature is only seen for $M_*\gtrsim10^{10.5}M_{\odot}$. At lower masses, the quenching rate is too small to produce a measurable signature for this quenching timescale.
If we then add typical observational noise, adopting our minimum (i.e. pure observational) noise estimates (see Section \ref{sfr79_unc_sec}), we find a net trend that is actually opposite to the expected quenching signature, i.e. $\Delta$mean(log(SFR$_{79}$)) is now \textit{increasing} towards lower sSFR$_9$ over the entire range considered. Despite this reversal in overall slope, there are still some differential effects with $\tau_{\rm Q}$ and/or mass and quenching rate that may be searched for in actual data, especially for different $\tau_{\rm Q}$. We will compare this to the data in the next Section.
\subsection{Comparison to the data}
\label{comp_data_sec}
In order to compare the expected $\Delta$mean(log(SFR$_{79}$)) shown in Figure \ref{delta_mean_log_sf79_peng} to the real data, we will only consider objects with $\Delta$log(sSFR$_7)>-0.9$ in the data. This is roughly but not entirely equivalent to the previously applied cut at EW(H$\alpha)=4$ \AA\ (as can be seen from the red shaded region in Figure \ref{sf79_dsf9_all}). This additional cut leaves us with 114'496 objects.
We show the mean value of log(SFR$_{79}$) relative to the mean found in the bin $0<\Delta$log(sSFR$_9)<0.2$ for each of our four mass-bins respectively in Figure \ref{delta_mean_log_sf79_data} with and without (light lines) the LINER correction (see Section \ref{liner_expl_sec} and Appendix \ref{liner_sec}).
As expected from the bottom panels in Figure \ref{delta_mean_log_sf79_peng}, we find that the $\Delta$mean(log(SFR$_{79}$)) \textit{increases} to positive values with decreasing sSFR$_9$. This increase is qualitatively consistent with the expected effect from Figure \ref{delta_mean_log_sf79_peng} once the effect of observational scatter is taken into account (lowest panels). The trend is less pronounced for higher masses in the data which is also qualitatively consistent with the trend with mass expected from Figure \ref{delta_mean_log_sf79_peng} (bottom left panel).
However, the trend with mass could also possibly reflect a mass-dependent $\tau_{\rm Q}$ (Figure \ref{delta_mean_log_sf79_peng} bottom left), which would then point to \textit{longer} timescales for higher masses, in tension with results from e.g. \citet{Hahn-17} who find \textit{shorter} $\tau_{\rm Q}$ for more massive central galaxies (see more discussion in Section \ref{tauq_sec}). However, disentangling these effects is clearly not possible with these data, given the effects of observational noise.
Not least, the fact that the trend of $\Delta$mean(log(SFR$_{79}$)) with $\Delta$log(sSFR$_9$) levels off at $\Delta$log(sSFR$_9)>0$ in the data, whereas in our modelled prediction it continues (Figure \ref{delta_mean_log_sf79_peng}), also suggests caution in any quantitative interpretation of trends in the data. This may indicate that the underlying true SF population is not a Gaussian distribution in logarithmic space, as we assumed in the previous Section, and/or that we are overestimating the noise in that regime.
In summary, the trends we observe in $\Delta$mean(log(SFR$_{79}$)) with $\Delta$log(sSFR$_9$) are at first sight the opposite of the searched-for quenching signal. However, the observed trends are qualitatively consistent with expectations once the substantial effects of the required cut in sSFR$_7$ (or in EW(H$\alpha$)) and of observational scatter are taken into account. We can only conclude that the observational SDSS data are {\it consistent} with the presence of a quenching population of the expected strength, but do not {\it demand} it, given the selection effect and the noise.
\subsection{Estimating quenching timescales}
\label{tauq_sec}
If we assume that there is indeed ongoing quenching in the galaxy population, can we get even a rough estimate of the relevant quenching timescales $\tau_{\rm Q}$ from SFR$_{79}$?
We have already seen that while, in principle, this could be done based on examining the trend of $\Delta$mean(log(SFR$_{79}$)) with $\Delta$log(sSFR$_9$) (Section \ref{peng_sec}), this is in practice not possible given the effect of the required selection-cut in sSFR$_7$ and the effect of the significant noise in the SFR$_{79}$ estimates of galaxies.
If we could isolate a representative ``quenching population'', we could use SFR$_{79}$ directly to estimate the quenching timescale $\tau_{\rm Q}$ using the tracks in Figure \ref{sf79_dsf9_all}. One possibility could be to use the galaxies in the GV located in the range of $-0.9<\Delta$log(sSFR$_7)<-0.7$ (corresponding to 4 \AA\ $\lesssim$ EW(H$\alpha)\lesssim7$ \AA), i.e. lying just below the SF population defined above this limit (see Figure \ref{sf79_dsf9_all}) and located at the minimum in the number density distribution of log(EW(H$\alpha$)) (see Figure \ref{ewha_dist}). Such a substantially sub-SFMS population could conventionally be viewed as ``quenching", although we will see that this is unlikely to be valid.
Looking at these GV galaxies, the median log(SFR$_{79}$) is found to be $-$0.38, which translates to a $\tau_{\rm Q}$ of $\sim500$ Myr assuming that the ``typical" galaxy follows the quenching tracks in Figure \ref{sf79_dsf9_all} (see also Appendix \ref{conv_func_sec} for details). Those tracks assume that the galaxies start somewhere close to the ridge of the SFMS population. If quenching instead starts from the top (or bottom) envelope of the SFMS, then the inferred quenching timescales would become slightly longer (or shorter), depending on the detailed assumptions (see Appendix \ref{conv_func_sec} for a more quantitative discussion of this effect).
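The conversion from the median log(SFR$_{79}$) to $\tau_{\rm Q}$ can be sketched numerically, assuming the simple exponential-decline track in its vertical regime ($t > T_9$) and an illustrative boxcar window $T_9 = 0.8$ Gyr; the actual conversion used here (Appendix \ref{conv_func_sec}) may differ in detail:

```python
import math

# In the vertical-track regime, SFR_79 = (T9/tau_Q) / (exp(T9/tau_Q) - 1),
# which is monotonically increasing in tau_Q; invert it by bisection.
# T9 and the boxcar treatment of SFR_9 are illustrative assumptions.
T9 = 0.8                   # long averaging window [Gyr]
target = 10.0 ** (-0.38)   # median SFR_79 of the GV sample

def sfr79(tau_q):
    x = T9 / tau_q
    return x / math.expm1(x)   # expm1(x) = exp(x) - 1, accurate for small x

lo, hi = 0.05, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if sfr79(mid) < target:
        lo = mid               # tau_q too small -> SFR_79 too low
    else:
        hi = mid
tau_q = 0.5 * (lo + hi)
print(f"tau_Q ~ {tau_q:.2f} Gyr")
```

Under these assumptions the recovered timescale is roughly 0.5 Gyr, consistent with the value quoted above.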
Furthermore, this direct conversion only provides an estimate of the quenching timescale of the galaxies that are \textit{currently seen} to be in the GV, i.e. within a narrow range of $\Delta$log(sSFR$_7$), which will be biased against short quenching timescales since these objects will spend little time crossing the SFR$_7$-defined GV. This could in principle be corrected by weighting an observed distribution of $\tau_{\rm Q}$ by 1/$\tau_{\rm Q}$, but this makes no sense in the present study because we can hardly infer anything about the underlying \textit{intrinsic} distribution of SFR$_{79}$.
If we have an a priori estimate of the quenching rate of galaxies, following \citet{Peng-10} and \citet{Peng-12} (Equation \ref{Peng tot}), then we can estimate the {\it number} of ``quenching galaxies'' that should lie in the GV, which also will depend on the typical quenching timescale because this will determine how quickly the galaxies are moving (declining) in SFR$_7$. This second approach is similar to that in e.g. \citet{Wetzel-13, Hahn-17}.
The total number of objects in our GV sample is 11'799, corresponding to 10\% of the entire sample of galaxies with $\Delta$log(sSFR$_7)>-0.9$. Given the quenching rates computed in Section \ref{peng_sec} and using the $\tau_{\rm Q} \sim$ 500 Myr from the previous paragraph, we find that the expected number of ``quenching galaxies'' that should be seen within our GV is about a factor of 4.5 lower than the observed number.
This discrepancy decreases to a factor of about 3 when we apply the LINER correction, which hardly affects the median log(SFR$_{79}$) of GV galaxies but does significantly reduce the number of objects in the GV strip relative to the SF population.
This discrepancy in number should not be surprising since we have already argued that the GV population may contain a significant number of SFMS galaxies, i.e. galaxies that should be considered to be members of the ``stationary" SFMS population (see the discussion in Section \ref{stat_crit_sec}). This is already evident in the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram (Figure \ref{sf79_dsf9_all}). The GV population (defined in terms of SFR$_{7}$) almost certainly contains a substantial number of low-SFR$_{7}$ SFMS galaxies which are the counterparts of the large number of galaxies with the same long-term SFR$_{9}$ but much higher SFR$_{79}$, and thus higher SFR$_7$, that are visible in that diagram, remembering that any stable SFMS population will satisfy the stationarity criterion of Equation \ref{stationarity}.
The conclusion that, even 0.8 dex below the ridge line of the SFMS, the SFR$_{\rm 7}$-defined GV population may still be dominated by galaxies that should be regarded as being part of the ``stationary" SFMS population has important implications for any study that tries to use such GV galaxies as ``quenching galaxies''.
It also implies that the direct estimate of $\tau_{\rm Q} \sim500$ Myr based on the median log(SFR$_{79}$) of GV galaxies should be used with caution, as the quenching population could be a minority within the GV. Against this, the distribution of log(SFR$_{79}$) at low $\Delta$log(sSFR$_7$) is dominated by the symmetric Gaussian noise in the data and the underlying intrinsic distribution of log(SFR$_{79}$) in the GV region may well be relatively narrow, with a dispersion of $\lesssim0.2$ dex, suggesting that any differences in the distribution of log(SFR$_{79}$) of the sub-dominant population of quenching objects and the dominant population of SFMS objects in the GV may be small.
We clearly need to try to subtract the contamination by SFMS galaxies before applying any number-argument to derive $\tau_{\rm Q}$. In principle, we could simply subtract from the GV population at a given value of SFR$_{\rm 9}$, the number of SFMS galaxies with the same SFR$_{\rm 9}$ but much higher SFR$_{\rm 79}$, i.e. effectively reflect the SFMS population around the locus of log(SFR$_{\rm 79}) = 0$ to remove it from the GV. Unfortunately, as noted in the previous section, the substantial noise in SFR$_{\rm 79}$ makes such an estimate of this contamination very difficult at low SFR$_{\rm 9}$. The diagonal error ellipses scatter the peak of the SFMS down to lower SFR$_{\rm 9}$ and higher SFR$_{\rm 79}$, spuriously increasing the number of SFMS counterparts of the GV galaxies without affecting the number of GV galaxies.
We could try to get around this problem with the following argument. Since the noise is symmetric in log(SFR$_{79}$) and is very small in log(sSFR$_{7}$), it has no effect on the mean log(SFR$_{79}$) of any sSFR$_{7}$-selected sample, including the overall sample of SF galaxies defined earlier to have
$\Delta$log(sSFR$_7)>-0.7$.
As noted above in Section \ref{sfms_var_sec} this sample has a very symmetric, roughly Gaussian distribution of log(SFR$_{79}$), with both the mean and median log(SFR$_{79}$) being $\sim0$.
The measured scatter is $\sim0.3$ dex while the typical 1$\sigma$ uncertainty is only $\sim0.2$ dex.
In this sSFR$_{7}$-selected sample of overall SF galaxies, the fact that the median log(SFR$_{79}$) remains close to zero suggests that the putative number of truly quenching galaxies within this sample (with log(SFR$_{79}$) $< 0$) is being compensated by the elimination of those SFMS galaxies below the sSFR$_{7}$-cut at $\Delta$log(sSFR$_{7}) = -0.7$.
We can get a rough estimate of the latter if we now add the number of galaxies in the GV population below this sSFR$_{7}$-cut, i.e. with $-0.9 < \Delta$log(sSFR$_{7}) < -0.7$. We have argued that this population is still likely to be dominated by SFMS galaxies. This argument suggests that the number of quenching galaxies present in the {\it overall} SF population at $\Delta$log(sSFR$_{7}) > -0.9$ is likely to be comparable to the number of galaxies observed {\it within} the GV.
Applying again the number argument from the quenching rate, but now considering the full range of SFR$_{7}$ rather than the 0.2 dex range of the GV, then yields a quenching timescale around $\sim$500 Myr, i.e. quite consistent with the value that was independently inferred above from the direct examination of the median log(SFR$_{79}$) of galaxies in the GV (but with the caveat noted above). In effect, the factor of 4.5 discrepancy that was noted above in the expected numbers of quenching galaxies {\it within} the GV strip is being compensated by the 4.5 times larger range in log(sSFR$_{7}$) of the overall SF population compared to the GV strip, i.e. 0.9 dex relative to 0.2 dex.
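The arithmetic behind this compensation is simple: for an exponential decline, the time spent crossing a strip of width $w$ dex in log(sSFR$_7$) is $\tau_{\rm Q}\,w\ln 10$, so the expected number of quenching galaxies caught in a strip scales linearly with its width. A minimal sketch:

```python
import numpy as np

TAU_Q = 500.0   # Myr, from the median log(SFR_79) of GV galaxies
LN10 = np.log(10.0)

# Crossing time of a strip of width w dex: dt = tau_Q * ln(10) * w
t_cross_gv = TAU_Q * LN10 * 0.2   # the 0.2 dex GV strip
t_cross_sf = TAU_Q * LN10 * 0.9   # the full SF range above -0.9 dex

# Expected counts scale linearly with the crossing time, hence with
# the strip width; this is the compensation factor discussed in the text:
ratio = t_cross_sf / t_cross_gv
print(t_cross_gv, t_cross_sf, ratio)
```

With $\tau_{\rm Q}=500$ Myr the GV crossing time is $\sim$230 Myr, and the 0.9/0.2 width ratio reproduces the factor of 4.5.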
The convergence of these timescale estimates is suggestive (and less striking if the LINER correction to the number of GV galaxies is taken into account). However, we can again only claim (self-)consistency of our data with this timescale, while noting that it is broadly consistent with those of \citet{Wetzel-13} and \citet{Hahn-17}, who find e-folding quenching timescales of 200-800 Myr for satellites and 500-1500 Myr for central galaxies, with generally \textit{decreasing} timescales at increasing $M_*$.
It is clear that the main difficulty remains that of reliably identifying a set of objects that can be considered to be quenching. We stress again that, based on our own analysis, location within the SFR$_7$-defined GV is not by itself a sufficient condition for a galaxy to be considered to be ``quenching". Such a GV galaxy may well have a counterpart (with the same long-term SFR$_9$) with much higher SFR$_7$ (and SFR$_{79}$), and together form part of the stable SFMS population.
\section{Summary and Conclusions}
\label{conc_sec}
Galaxies are separated into two populations on the color-magnitude (or SFR-$M_*$) diagram: SF galaxies and quenched galaxies. Galaxies in between these two populations are usually called Green Valley (GV) galaxies and are generally assumed to be transitioning from the SF to the quenched population, i.e. to be undergoing quenching. Observationally, it is quite challenging to tell whether this is actually the case, because one can usually only measure the current SFR of a given GV galaxy, rather than any trend in its SFR.
In this work, we search for direct evidence of ongoing quenching processes in the galaxy population based on the star formation change parameter introduced by \citetalias{Wang-20a}. The H$\alpha$ emission line traces the recent SFR within the last 5 Myr, and the H$\delta$ absorption feature roughly traces the SFR within the last 800 Myr. Therefore, \citetalias{Wang-20a} calibrated the star formation change parameter SFR$_{\rm 79}$, the ratio of the SFR on two different timescales SFR$_{\rm 5 Myr}$/SFR$_{\rm 800Myr}$, based on these two spectral features plus an additional feature, the 4000 \AA\ break. By definition, the SFR$_{\rm 79}$ directly tells us whether a given galaxy currently has an enhanced or a suppressed (s)SFR with respect to its (s)SFR averaged over the last $\sim1$ Gyr and therefore in principle provides a way to examine possible signatures of quenching in the galaxy population as well as quenching timescales.
Compared to the method in \citetalias{Wang-20a}, we make several improvements in calibrating the SFR$_{\rm 79}$ in the present work (see details in Section \ref{data_sec}). The uncertainty of SFR$_{\rm 79}$ mainly comes from the uncertainty in measuring the absorption index of H$\delta$, due to the difficulty of decomposing H$\delta$ emission and absorption. First, we therefore develop a new method to estimate the H$\delta$ emission line flux via the line fluxes of H$\alpha$ and H$\beta$ and show that this yields unbiased measurements of EW(H$\delta_{\rm A}$) even for spectra of very low SNR (see Figure \ref{hd_exp}). Second, in calibrating the SFR$_{\rm 79}$, we use more realistic SFHs as input, including the possibility of quenching in order to account for potential quenching galaxies in the sample. Third, instead of using an analytic formula to calibrate the SFR$_{\rm 79}$ as in \citetalias{Wang-20a}, in this work we construct and use a 3-dimensional lookup table based on all the mock spectra. Fourth, we carefully assign the uncertainty in SFR$_{\rm 79}$ for each individual galaxy, including the observational uncertainty in the spectral features and the uncertainty in the calibration.
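While the exact calibration steps are described in Section \ref{data_sec}, the physical idea behind predicting the H$\delta$ emission from the H$\alpha$ and H$\beta$ line fluxes can be illustrated with a rough sketch, assuming Case B recombination ratios and a Calzetti-type attenuation curve (these specific choices are illustrative assumptions, not necessarily the paper's prescription):

```python
import numpy as np

def calzetti_k(lam_um):
    """Calzetti (2000) attenuation curve k(lambda) for 0.12-0.63 micron."""
    x = 1.0 / lam_um
    return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + 4.05

# Intrinsic Case B recombination ratios (T ~ 1e4 K, n_e ~ 1e2 cm^-3).
HA_HB_INT = 2.86
HD_HB_INT = 0.259

def hdelta_emission(f_ha, f_hb):
    """Predict the observed H-delta emission flux from the observed
    H-alpha and H-beta fluxes (illustrative sketch only)."""
    k_ha, k_hb, k_hd = calzetti_k(0.6563), calzetti_k(0.4861), calzetti_k(0.4102)
    # Nebular colour excess from the observed Balmer decrement:
    ebv = 2.5 / (k_hb - k_ha) * np.log10((f_ha / f_hb) / HA_HB_INT)
    ebv = max(ebv, 0.0)   # no negative extinction
    # Redden the intrinsic H-delta/H-beta ratio to the observed frame:
    return HD_HB_INT * f_hb * 10.0 ** (-0.4 * ebv * (k_hd - k_hb))

# Dust-free case: the decrement equals 2.86, so F(Hd) = 0.259 * F(Hb).
print(hdelta_emission(2.86, 1.0))
```

Since H$\delta$ sits blueward of H$\beta$, dust suppresses the observed H$\delta$/H$\beta$ ratio below the Case B value, which the Balmer-decrement term accounts for.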
By applying the method described above to the 3-arcsec fiber spectra from SDSS \citep{Abazajian-09}, we obtain the SFR$_{\rm 79}$ for each individual galaxy. Using this large galaxy sample, we first confirm the basic results found in \citetalias{Wang-20a} for the SF population. First, the stability of the SFMS (the dispersion does not evolve with time) requires that the SFR$_{\rm 79}$ and the position of galaxies on the SFR$_{\rm 9}$-based SFMS are not correlated, which is indeed seen also in the present work (see the right panel of Figure \ref{sfms_7_sfms_9} and Figure \ref{sf79_dsf9_all}). Second, we calculate the dispersion of the SFR$_7$-based and the SFR$_9$-based SFMS, as well as of the log(SFR$_{\rm 79}$) of SF objects. These dispersions contain information about the variability in the SFR of SF galaxies \citep{Wang-20b}. The dispersions as well as the qualitative trend of increasing dispersions with the stellar mass surface density $\Sigma_*$ that we find in this work are overall very consistent with the results in \citetalias{Wang-20a} despite the substantially larger and different data set used here.
We then turn to look at objects significantly below the SFMS, in the traditionally defined Green Valley and search for direct evidence of quenching. We establish several new results in the present work:
\begin{itemize}
\item The calibration of SFR$_{79}$ for objects below the SFMS is limited by several factors. The noise in the measurement of EW(H$\alpha$) constitutes a fundamental limitation. In addition, the possible contribution to the H$\alpha$ emission from LINERs, which also becomes relatively more important for objects with low H$\alpha$ emission, further complicates the calibration of SFR$_{79}$ at low EW(H$\alpha$). Finally, the calibration is affected by the choice of the prior in the distribution of SFHs used in the construction of the calibrator, not least the inclusion of quenching SFHs. In summary, the calibration of SFR$_{79}$ becomes more uncertain with decreasing EW(H$\alpha$). Carefully examining all of the mentioned effects, we argue that the calibration should be reliable for EW(H$\alpha)>4$ \AA\ and therefore limit our subsequent analysis to galaxies above this limit.
\item We introduce the key diagram of log(SFR$_{\rm 79}$) vs. $\Delta$log(sSFR$_9$) for the analysis of the data, in particular to study any potential quenching signature (see Figure \ref{sf79_dsf9_all}). When moving down in sSFR$_9$ to galaxies that are below the SF population (including galaxies in the GV) on this diagram, we clearly see asymmetries in the log(SFR$_{\rm 79}$) distribution towards negative values. This is at least consistent with the presence of a genuine quenching population.
\item On this log(SFR$_{\rm 79}$)-$\Delta$log(sSFR$_9$) diagram, we show the tracks of model galaxies that are exponentially decreasing their SFRs with different characteristic e-folding timescales, assuming for simplicity that they start from the midpoint of the SF population. The observed galaxies below the SFMS are well covered by these quenching tracks, consistent with the idea that some fraction of these are quenching (see Figure \ref{sf79_dsf9_all}).
\item Starting from the general assumption that the SFMS population should be stationary, i.e. that the distribution of sSFR$_9$ of SFMS galaxies should not change with time, we derive the stationarity criterion for the distribution of SFR$_{79}$ for such a population to be mean(SFR$_{79})-1=0$ at any given sSFR$_9$. Since the noise in the data is symmetric in logarithmic space and will therefore bias the mean linear SFR$_{79}$ towards higher values, we instead approximate the stationarity criterion to be mean(log(SFR$_{79}))=0$ for our sample.
\item If there are genuinely quenching galaxies in the population, then we would expect to see deviations of the mean(log(SFR$_{79}))$ towards \textit{negative} values, as we move off the SFMS towards lower values of log(sSFR$_9$).
We use the quenching formalism introduced in \citet{Peng-10} and \citet{Peng-12} to estimate quantitatively the size of this effect on the mean(log(SFR$_{79}$)) as a function of $\Delta$log(sSFR$_9$). This depends somewhat on the average quenching timescales $\tau_{\rm Q}$, and on the mass of the galaxies, since different masses will have different overall quenching rates.
\item This prediction is however subject to two substantial observational effects that combine to reverse the predicted trend. First, the selection of objects with EW(H$\alpha)>4$ \AA\ introduces a strong bias towards higher log(SFR$_{79}$) at low log(sSFR$_9$). Second, the significant noise in the SFR$_{79}$ measurements scatters objects along lines of constant log(sSFR$_7$), as can be seen from the error ellipses on the log(SFR$_{\rm 79}$)-$\Delta$log(sSFR$_9$) diagram (see Figure \ref{err_ell}).
The scattering of objects in the peak of the SFMS population therefore artificially increases the number of objects with low log(sSFR$_9$) but high (i.e. positive) log(SFR$_{79}$), and thus counters the potential signature of quenching. These two effects combine to produce a predicted observational signal that is the reverse of what is expected for ongoing quenching, i.e. the mean(log(SFR$_{79}))$ actually {\it increases} as $\Delta$log(sSFR$_9$) decreases. Differential effects with quenching rate or quenching timescale should however be largely preserved, but are not easily distinguishable.
\item The observed mean(log(SFR$_{79}$)) of galaxies is quite consistent with this modified prediction. We conclude that the distribution of galaxies on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram is certainly quite consistent with the presence of galaxies currently undergoing quenching, but unfortunately cannot be used to establish this unequivocally.
\item If we naively assume that the galaxies below the SFMS, i.e. in the SFR$_7$-defined Green Valley (GV) with
$-0.7 >$ $\Delta$log(sSFR$_7) > -0.9$, are representative of the quenching population, then their median log(SFR$_{79}$) (relative to the typical SFMS value) gives a direct estimate of a quenching timescale of $\tau_{\rm Q}\sim 500$ Myr, largely independent of contamination from LINER emission.
\item However, the number of galaxies lying in the GV is a factor of 3-4.5 higher than expected from the standard \citet{Peng-10} and \citet{Peng-12} quenching rates for this same $\tau_{\rm Q}$. This discrepancy suggests that the GV population is still dominated by galaxies that should better be considered to be the tail of the stable SFMS, as also indicated by inspection of the log(SFR$_{79}$)-$\Delta$log(sSFR$_{9}$) plot. These SFMS galaxies lying in the GV are the counterparts of indisputable SFMS galaxies that have the same SFR$_9$ but much higher SFR$_7$ (and thus SFR$_{79}$). Their number likely exceeds those of true ``one-way" quenching galaxies and we therefore caution against the presumption that the GV consists predominantly of ``quenching" galaxies. This introduces a significant caveat to the quenching timescale derived from direct examination of the typical SFR$_{79}$ values of GV galaxies.
\item We can however try to estimate the total number of quenching objects in the {\it overall} SF population (integrated above the GV), and argue that this should in fact be quite similar to the total number of ``SFMS plus quenching" galaxies that lie {\it within} the GV. It is essentially a coincidence that the factor of 3-4.5 (depending on the LINER-correction) discrepancy within the GV in the previous bullet is being compensated by the 4.5$\times$ increase in the range of log(sSFR$_7$) if we consider the entire SF+GV instead of just the GV population. Reapplying the number argument, we now again get typical quenching timescales of $\sim500$ Myr,
consistent with the previous direct estimates from the values of SFR$_{79}$. Again, we conclude that we can only claim (self-)consistency within the data for quenching timescales of this order.
\end{itemize}
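The approximation of the stationarity criterion invoked above, replacing mean(SFR$_{79}) = 1$ by mean(log(SFR$_{79})) = 0$, rests on the fact that symmetric noise in log space leaves the mean logarithm unchanged but inflates the mean linear ratio. A minimal numerical check, taking the observed 0.3 dex scatter as the assumed noise level:

```python
import numpy as np

rng = np.random.default_rng(0)

# A perfectly stationary population: true log(SFR_79) = 0 for everyone.
true_log_ratio = np.zeros(500_000)

# Symmetric measurement noise of 0.3 dex in log space:
noisy_log_ratio = true_log_ratio + rng.normal(0.0, 0.3, true_log_ratio.size)

mean_log = noisy_log_ratio.mean()               # stays ~0: unbiased
mean_linear = (10.0 ** noisy_log_ratio).mean()  # biased above 1

# Analytic expectation for log-normal noise: exp((sigma * ln10)^2 / 2)
expected_bias = np.exp((0.3 * np.log(10.0)) ** 2 / 2.0)
print(mean_log, mean_linear, expected_bias)
```

With 0.3 dex of symmetric log-space noise, the mean linear SFR$_{79}$ is inflated by roughly 27\% even though the mean logarithm remains zero, which is why the criterion is applied in logarithmic form.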
We have tested that these conclusions are all robust to a maximum plausible LINER contribution to the H$\alpha$ emission (Section \ref{liner_expl_sec} and Appendix \ref{liner_sec}). Neither are they affected by the ad-hoc correction applied in Section \ref{ad_hoc_corr_sec}, since we are concerned only about the values of log(SFR$_{\rm 79}$) relative to those of typical SFMS galaxies.
The star formation change parameter, SFR$_{\rm 79}$, is a powerful tool to study both the variability in the SFR of SF galaxies {\it and} quenching processes. Compared to almost all previous studies, it provides a different and valuable perspective on galaxy evolution on Gyr timescales in ``real-time". Despite the theoretical strength of this framework, we nevertheless do not find unambiguous evidence of ongoing quenching processes in the galaxy population. This is partly due to the current limitations of the observational methodology, but also because, at least at the current cosmic epoch, the perturbation of the properties of the overall galaxy population by these ``currently" quenching objects is small. Even in the conventional SFR$_{7}$-defined GV, they are hard to disentangle from the much larger population of SFMS galaxies that may be undergoing strong but short-term fluctuations in their sSFR.
Nevertheless, if we \textit{assume} that quenching is an ongoing physical phenomenon in the local Universe, and use simple estimates of the expected rate, then the distributions of SFR$_{79}$ that we find and analyze in this work, consistently yield rather short e-folding quenching timescales of order 500 Myr.
\acknowledgements
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
\bibliography{rewritebib.bib}
\appendix
\section{A. Calculation of an analytic relation between SFR$_{\rm 79}$ and $\tau_{\rm Q}$}
\label{conv_func_sec}
Based on the quenching model introduced in Section \ref{meas_sfr79_sec} and specifically Equation \ref{quenching_model}, we can compute the log(SFR$_{79}$) and $\Delta$log(sSFR$_7$) for a given combination of $\tau_{\rm S}$ and $\tau_{\rm Q}$ as follows. Note that the same derivation is also valid for $\Delta$log(SFR$_{79})$, i.e. the log(SFR$_{79}$) relative to a (non-zero) reference value of the SF population.
We can express the SFH of a quenching object as
\begin{equation}
\label{quenching_sfh}
{\rm sSFR(\tau) = sSFR_0 \cdot}
\begin{cases}
{\rm 1} & {\rm \tau<\tau_S} \\
{\rm exp\left(\frac{\tau_S-\tau}{\tau_Q}\right)} & {\rm \tau>\tau_S}
\end{cases}
\end{equation}
where ${\rm sSFR_0}$ is the nominal ${\rm sSFR}$ of the SFMS, i.e. ${\rm \Delta \log(sSFR_7) = \log(sSFR_7) - \log(sSFR_0)}$. Assuming that $\tau_S < \tau_0 - 5$ Myr where $\tau_0 = 13700$ Myr is the age of the Universe, i.e. assuming that quenching started more than $5$ Myr ago (which is a minor constraint given the quenching timescales considered), we can express ${\rm sSFR_7}$ as the average ${\rm sSFR}$ over the past $5$ Myr as
\begin{multline}
\label{ssfr7}
{\rm sSFR_7 = \frac{1}{5\,Myr} \cdot \int_{\tau_0 - 5\,Myr}^{\tau_0} sSFR(\tau)\,d\tau = \frac{1}{5\,Myr} \cdot \int_{\tau_0 - 5\,Myr}^{\tau_0} sSFR_0 \cdot exp\left(\frac{\tau_S-\tau}{\tau_Q}\right)\,d\tau =} \\
... {\rm = \frac{sSFR_0\,\tau_Q}{5\,Myr} \cdot exp\left(\frac{\tau_S - \tau_0}{\tau_Q}\right)\cdot \left(exp\left(\frac{5\,Myr}{\tau_Q}\right) - 1\right)}
\end{multline}
In order to derive an analogous expression for ${\rm sSFR_9}$, we have to distinguish two cases. In the case that quenching starts more than $800$ Myr ago, i.e. $\tau_{\rm S} < \tau_0 - 800$ Myr, a calculation completely analogous to Equation \ref{ssfr7} yields
\begin{equation}
\label{ssfr9_1}
{\rm sSFR_9 = \frac{sSFR_0\,\tau_Q}{800\,Myr} \cdot exp\left(\frac{\tau_S - \tau_0}{\tau_Q}\right)\cdot \left(exp\left(\frac{800\,Myr}{\tau_Q}\right) - 1\right)}
\end{equation}
If however $\tau_S > \tau_0 - 800$ Myr, i.e. quenching started within the last 800 Myr, we need to compute sSFR$_9$ as
\begin{multline}
\label{ssfr9_2}
{\rm sSFR_9 = \frac{1}{800\,Myr} \cdot \left(\int_{\tau_0 - 800\,Myr}^{\tau_S} sSFR_0\,d\tau + \int_{\tau_S}^{\tau_0} sSFR_0 \cdot exp\left(\frac{\tau_S-\tau}{\tau_Q}\right)\,d\tau \right)} = \\
... = {\rm \frac{sSFR_0}{800\,Myr}\cdot\left(\tau_S - \tau_0 + 800\,Myr + \tau_Q \cdot \left(1 - exp\left(\frac{\tau_S - \tau_0}{\tau_Q}\right)\right)\right)}
\end{multline}
Combining Equations \ref{ssfr7} to \ref{ssfr9_2} we obtain
\begin{multline}
\label{sfr79}
{\rm log(SFR_{79}) = log(sSFR_7/sSFR_9) = log(160) +} \\ \\
\begin{cases}
{\rm log\left(\dfrac{exp\left(\dfrac{5\,Myr}{\tau_Q}\right) - 1}{exp\left(\dfrac{800\,Myr}{\tau_Q}\right) - 1}\right)} & {\rm \tau_S < \tau_0 - 800\,Myr} \\
{\rm log\left(\dfrac{exp\left(\dfrac{5\,Myr}{\tau_Q}\right) - 1}{\left(\dfrac{\tau_S - \tau_0 + 800\,Myr}{\tau_Q} +1 \right)\cdot exp\left(\dfrac{\tau_0 - \tau_S}{\tau_Q}\right) -1}\right)} & {\rm \tau_S > \tau_0 - 800\,Myr}
\end{cases}
\end{multline}
Since we did not find an analytic inverse of this function, we use a numeric approximation to get from a given combination of $\Delta$ log(sSFR$_7)$ and log(SFR$_{79})$ to a ($\tau_{\rm S}$, $\tau_{\rm Q}$). We start with a range of values of $\tau_{\rm Q}$, for each of which we compute a corresponding $\tau_{\rm S}$ at a given $\Delta$log(sSFR$_7)$ and using Equation \ref{ssfr7} which can be solved for $\tau_{\rm S}$. For each tuple ($\tau_{\rm S}$, $\tau_{\rm Q}$), we can then compute log(SFR$_{79})$ using Equation \ref{sfr79}. In this way, we can construct a lookup table, matching tuples ($\tau_{\rm S}$, $\tau_{\rm Q}$) to values of log(SFR$_{79}$) at fixed $\Delta$log(sSFR$_7)$. From the lookup table corresponding to the measured $\Delta$log(sSFR$_7)$, we finally find the ($\tau_{\rm S}$, $\tau_{\rm Q}$) that best matches the measured log(SFR$_{79})$ of a given galaxy or galaxy population.\\
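For concreteness, the forward computation of Equation \ref{sfr79} and this lookup-table inversion can be sketched in a few lines of Python (an illustrative re-implementation with an assumed $\tau_{\rm Q}$ grid, not the code used for the paper):

```python
import numpy as np

TAU_0 = 13700.0   # age of the Universe, Myr

def log_sfr79(tau_s, tau_q):
    """log(SFR_79) for the exponential quenching SFH, using the two
    branches of the analytic formula (tau_s, tau_q in Myr)."""
    num = np.expm1(5.0 / tau_q)
    if tau_s < TAU_0 - 800.0:
        den = np.expm1(800.0 / tau_q)
    else:
        den = ((tau_s - TAU_0 + 800.0) / tau_q + 1.0) \
              * np.exp((TAU_0 - tau_s) / tau_q) - 1.0
    return np.log10(160.0 * num / den)

def tau_s_from_dssfr7(dlog_ssfr7, tau_q):
    """Invert the sSFR_7 expression: the quenching start time that puts
    a galaxy at a given Delta log(sSFR_7) for a given tau_Q."""
    return TAU_0 + tau_q * np.log(
        5.0 * 10.0 ** dlog_ssfr7 / (tau_q * np.expm1(5.0 / tau_q)))

def fit_tau_q(dlog_ssfr7, log_sfr79_meas,
              grid=np.logspace(np.log10(50.0), np.log10(5000.0), 2000)):
    """Lookup-table inversion: scan tau_Q and match the measured
    log(SFR_79) at fixed Delta log(sSFR_7)."""
    model = np.array([log_sfr79(tau_s_from_dssfr7(dlog_ssfr7, tq), tq)
                      for tq in grid])
    return grid[np.argmin(np.abs(model - log_sfr79_meas))]

# Round trip: place a tau_Q = 500 Myr quencher at Delta log(sSFR_7) = -0.8
# and check that the inversion recovers the input timescale.
ts = tau_s_from_dssfr7(-0.8, 500.0)
print(log_sfr79(ts, 500.0), fit_tau_q(-0.8, log_sfr79(ts, 500.0)))
```

Note that a $\tau_{\rm Q}=500$ Myr track evaluated at $\Delta$log(sSFR$_7)=-0.8$ gives log(SFR$_{79})\approx-0.39$, close to the median value measured for the GV galaxies in the text.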
We show the conversion functions ($\Delta$)log(SFR$_{79})\rightarrow {\rm log(\tau_Q/Gyr)}$ assuming $\Delta$log(sSFR$_7$) = $-$1.2, $-$1, $-$0.8, $-$0.6, $-$0.4, $-$0.2 dex in different colors and line styles in Figure \ref{conv_func}.
So far, we have assumed that quenching starts from the midpoint of the SF population at $\Delta$log(sSFR$_7) =$ log(SFR$_{79}) = 0$. However, this may not be true in the real Universe, where objects might have their sSFR suppressed for some time before the actual quenching process starts, or may experience a phase of enhanced star formation (a starburst) prior to quenching. Any such scenario would likely produce a more complicated quenching track on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram. To give a rough idea of how the quenching timescales derived from the typical value of log(SFR$_{79}$) of GV galaxies are affected by a change of the starting point of the quenching tracks, we note that shifting the starting point of the tracks up (down) is equivalent to measuring $\tau_{\rm Q}$ at a lower (higher) value of $\Delta$log(sSFR$_7$). For example, for an object that has a measured $\Delta$log(sSFR$_7$) of $-0.8$, the conversion function ($\Delta$)log(SFR$_{79})\rightarrow\tau_{\rm Q}$ shown as the orange dotted line in Figure \ref{conv_func} is the one we would use if quenching starts from the midpoint of the SFMS. If however quenching started 0.4 dex above (below) the SFMS, the corresponding conversion function would be the brown dash-dotted (blue dashed) line for $\Delta$log(sSFR$_7)=-1.2$ ($\Delta$log(sSFR$_7)=-0.4$). Note that by construction, the minimal $\Delta$log(SFR$_{79}$) that can be measured at a fixed $\Delta$log(sSFR$_7$) is equal to the latter for an object quenching along a quenching track. Any direct comparison between inferred timescales in Figure \ref{conv_func} is therefore only meaningful vertically, i.e. at a fixed $\Delta$log(SFR$_{79}$) and between the conversion functions that actually cover that value.
This illustrates that for the $\Delta$log(SFR$_{79})=-0.38$ that we find for the GV galaxies in our sample, choosing the starting point 0.4 dex above the SFMS midpoint does not affect the inferred median quenching timescale of $\sim500$ Myr but it would {\it lengthen} inferred timescales if we had measured a lower $\Delta$log(SFR$_{79}$) of e.g. $-0.7$. On the other hand, the conversion function corresponding to the starting point below the SFMS would yield a median quenching timescale between 100 and 300 Myr in our sample, i.e. it would lead to {\it shorter} timescales. We emphasize again that this is not a realistic scenario but only serves to illustrate the potential effect of a different starting point of quenching on the inferred quenching timescales.\\
\section{B. Constraining the Effect of LINER Emission on our Results}
\label{liner_sec}
In order to investigate the effect of LINER emission on our results, we correct our measured and dust-corrected H$\alpha$ emission, assuming a \textit{maximum} plausible contribution of LINER emission. The idea behind this correction is therefore not to realistically correct for LINER emission, but to illustrate the maximum effect that LINERs might have on our results.\\
On the left panel of Figure \ref{classification}, we show the ${\rm \log(NII/H\alpha)}$-${\rm \log(OIII/H\beta)}$ distribution of our sample. We adopt the thresholds given in \citet{Kewley-06} to distinguish between star-forming objects, composites and AGN. The colored markers on the plot show the median line ratios in our four mass bins and in bins of $\Delta$log(sSFR$_7$) indicated by the color-coding. This already makes the point that, with decreasing $\Delta$log(sSFR$_7$) and more distinctly at higher ${\rm M_*}$, the typical object in our sample moves from the star-forming region through the composite region and towards the AGN region of the diagnostic diagram. Another illustration of the effect is shown on the right panel of Figure \ref{classification}, where the fraction of objects belonging to each of the three groups (star-forming, composite or AGN) is shown as a function of $\Delta$log(sSFR$_7$). While the fraction of objects classified as star-forming monotonically decreases, the fractions of composites and AGN both increase with decreasing $\Delta$log(sSFR$_7$). This indicates that there might indeed be a significant contribution of LINERs to our measured H$\alpha$ emission at low $\Delta$log(sSFR$_7$).\\
We now proceed as follows. Based on the work of \citet{Belfiore-16}, we assume that the maximum contribution of LINERs to the dust-free EW(H$\alpha$) of an individual galaxy is 3 \AA. For all composite objects we then reduce their measured and dust-corrected EW(H$\alpha$) by a factor of 2, but not by more than 3 \AA\ and for all objects classified as AGN, we directly reduce their EW(H$\alpha$) by the maximum LINER contribution of 3 \AA. We add dust back into the corrected EW(H$\alpha$) inverting our dust correction prescription (Section \ref{meas_spec_feat_sec}) and multiply the dusty EW(H$\alpha$) with the measured continuum of each spectrum to retrieve a LINER-corrected, dusty H$\alpha$-flux which we then again correct for dust to get an updated estimate of sSFR$_7$ adopting the star formation law of \citet{Kennicutt-98}. We use the LINER corrected estimate of the dust free EW(H$\alpha$) together with the other spectral features which we leave unaltered to derive a new SFR$_{79}$ using our calibrator. Figure \ref{sf79_dsf9_all_liners} shows the comparison of the overall log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram without (left panel, a duplicate of Figure \ref{sf79_dsf9_all}) and with (right panel) the LINER correction.
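The bookkeeping of this maximum-LINER correction to the dust-corrected EW(H$\alpha$), before the dust re-processing steps, can be sketched as follows (the BPT classification is taken as given; the zero floor for AGN-classified objects is our own safeguard in this sketch):

```python
MAX_LINER_EW = 3.0   # AA, maximum plausible LINER contribution (Belfiore et al.)

def liner_corrected_ew_ha(ew_ha, bpt_class):
    """Maximum plausible LINER correction to the dust-corrected EW(H-alpha);
    an illustrative sketch of the recipe described in the text."""
    if bpt_class == "star-forming":
        return ew_ha                                  # left unchanged
    if bpt_class == "composite":
        # halve the EW, but remove at most MAX_LINER_EW
        return ew_ha - min(0.5 * ew_ha, MAX_LINER_EW)
    if bpt_class == "agn":
        # remove the full maximum LINER contribution (clipped at zero,
        # an added safeguard not spelled out in the text)
        return max(ew_ha - MAX_LINER_EW, 0.0)
    raise ValueError(f"unknown BPT class: {bpt_class}")

print(liner_corrected_ew_ha(10.0, "composite"))  # -> 7.0 (3 AA cap applies)
print(liner_corrected_ew_ha(4.0, "composite"))   # -> 2.0 (factor-2 applies)
print(liner_corrected_ew_ha(10.0, "agn"))        # -> 7.0
```

The corrected EW(H$\alpha$) then re-enters the pipeline exactly as described above: dust is added back, the flux is rescaled by the continuum, and sSFR$_7$ and SFR$_{79}$ are re-derived.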
\section{C. Effects of the Prior in the distribution of SFHs}
\label{prior_sec}
Our method of calibrating SFR$_{79}$ involves a prior in the sense that we start with a pre-defined range of SFHs which we then use to produce spectra, extract the corresponding spectral features and match these to the observations. Our approach was to start with a very broad range of SFHs, intended to cover all possible SFHs in the Universe, by superimposing stochastic fluctuations on otherwise smooth SFHs that follow the cosmic evolution of the SFMS. While this may lead to an overestimate of the uncertainty in the derived SFR$_{79}$ intrinsic to the calibrator, as discussed in Section \ref{meas_sfr_params_sec}, we believe that it allows a relatively unbiased determination of the values of SFR$_{79}$. The situation is however different when it comes to quenching. In order to account for objects in the data that show relatively low H$\alpha$ emission \textit{and} H$\delta$ absorption, it is inevitable to include SFHs in which the SFR has been suppressed for a sufficiently long time, i.e. it decreases on some timescale and then either keeps decreasing or remains at a suppressed level. As described in Section \ref{meas_sfr_params_sec}, we experimented with two implementations of the quenching process which we superimpose on half of the SFHs. In the first implementation, the SFR declines exponentially ``forever" (subsequently referred to as quenching prior A) and in the second, we set a ``floor" to the quenching process, i.e. the SFR of a quenching galaxy decreases by 1.3 dex and then remains at the corresponding low and constant value (subsequently referred to as quenching prior B). We have also tested different values for the ``floor", such as 0.9 dex or 2 dex below the SFMS, and found only minor differences between them.
With the quenching prior A, sample objects with low H$\alpha$ emission and H$\delta$ absorption are predominantly matched with SFHs in which the SFR is still declining exponentially and therefore have a \textit{negative} log(SFR$_{79}$). In the calibration, we are selecting mock galaxies that constitute a 3-dimensional Gaussian distribution in the spectral features around the measured values with dispersions given by the measurement uncertainties (see Section \ref{meas_spec_feat_sec}). As we approach lower and lower values of EW(H$\alpha$), this will tend to include SFHs all the way down to EW(H$\alpha)=0$ \AA\ and therefore the retrieved value of SFR$_{79}$ will depend on the distribution of quenching timescales $\tau_{\rm Q}$ that we put in. Since we used a uniform distribution of $\tau_{\rm Q}$ in logarithmic space, this is dominated by \textit{short} timescales yielding very low values of SFR$_{79}$.
With the quenching prior B, ``quenched" mock galaxies are fluctuating around a constant SFR (due to the stochastic fluctuations which are still superimposed), similar to star-forming mock galaxies, but at a suppressed level of star formation. So, in principle their typical log(SFR$_{79}$) is expected to be $\sim0$. In any model that involves stochastic fluctuations in the SFR, the lowest values of EW(H$\alpha$) will however always be associated with local minima in the mock SFHs and thus with \textit{negative} log(SFR$_{79}$). Therefore, sample objects with low EW(H$\alpha$) and EW(H$\delta_{\rm A}$) still typically get a negative value of log(SFR$_{79}$) assigned, which is however closer to 0 than with the quenching prior A.
Note that by ``low" H$\alpha$ emission or H$\delta$ absorption we mean something like EW(H$\alpha)\lesssim4$ \AA\ which we introduced as a boundary on the reliability of the calibration. Most of those objects in the data also have a low EW(H$\delta_{\rm A})\approx-2$ \AA\ with the bulk of the objects between $-$4 \AA\ and 1 \AA. There is a small number of objects with very low or even vanishing EW(H$\alpha$) which do however show significant H$\delta$ absorption, e.g. there are 2'685 objects with EW(H$\alpha)<4$ \AA\ and EW(H$\delta_{\rm A})>2$ \AA, corresponding to $\sim1$\% of the entire sample. Those objects have likely quenched recently and quite rapidly and/or have quenched after a burst of star formation.
The bulk of the objects with EW(H$\alpha)<4$ \AA\ have likely quenched a relatively long time ago and it is therefore no surprise that our calibration does not work for those objects.
To illustrate the effects discussed in this section, we show the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram for all of our sample galaxies in the two top panels in Figure \ref{sf79_dsf9_prior}. All four panels are produced in analogy to Figure \ref{sf79_dsf9_all}. The magenta contours enclose 10, 30, 50, 70 and 90\% of the sample galaxies respectively. In the top left panel of Figure \ref{sf79_dsf9_prior}, we show our results as obtained using the quenching prior A and in the top right, we show the same objects but for the quenching prior B. Note that it is this latter version of the calibration that we eventually used throughout the paper. The two bottom panels then show analogous plots but only displaying objects with EW(H$\alpha)>4$ \AA. For better illustration of that sample selection, we color the region corresponding to $\Delta$log(sSFR$_7)<-$0.9 (roughly equivalent to EW(H$\alpha)>4$ \AA) in red.
Figure \ref{sf79_dsf9_prior} shows that the quenched objects in the sample form a diagonal sequence that extends from the bottom right to the upper left of the diagram and whose exact shape and location is extremely sensitive to the choice of a prior distribution of SFHs. For the quenching prior A, the quenched population is overall shifted up and to the left. Note that this is simply a consequence of this prior yielding lower values of SFR$_{79}$ for those objects as discussed above, which then also directly translate to higher values of $\Delta$log(sSFR$_9$) since $\Delta$log(sSFR$_7$) is obtained from the H$\alpha$ luminosity, independent of the SFR$_{79}$ calibration. Note also that 16\% of the sample objects are still missing from that plot because they are classified as having EW(H$\alpha)=0$ \AA\ (see Section \ref{meas_spec_feat_sec}).
The lower two panels of Figure \ref{sf79_dsf9_prior} then demonstrate that for EW(H$\alpha)>4$ \AA, the calibration is fairly robust to the choice of a prior distribution of quenching SFHs, justifying the adoption of this cut.
\section{D. Effects of an added Old Stellar Population}
\label{osp_sec}
In the following, we investigate the effect of adding a substantial OSP to a typical SF galaxy on its location on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram. This effectively simulates galaxies with very different SFHs compared to those considered above in the context of both SF as well as quenching galaxies. By adding a substantial OSP to a continually star-forming galaxy, we simulate a galaxy that formed a large fraction of its stars in a burst some significant time ago, but thereafter has maintained a more or less constant SFR at a substantially sub-SFMS level. Such a galaxy might lie below the SF population because of its low sSFR, but should probably not be considered ``quenching" as it does not have a currently declining SFR. Where would such an object lie in the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) plane, and could it be mis-identified to lie at the bottom left of the SF cloud, where we argued that there is some indication of ongoing quenching?
Using the Flexible Stellar Population Synthesis code \citep[{\tt FSPS};][]{Conroy-09}, we model the spectral features of a single stellar population at different ages adopting the {\tt MILES} stellar library \citep{Sanchez-Blazquez-06, Falcon-Barroso-11}, a \citet{Chabrier-03} IMF, and the {\tt Padova} isochrones \citep[e.g.][]{Bertelli-94, Bertelli-08} as in Section \ref{meas_sfr79_sec}.
We then start with the spectral features of a typical SF galaxy from our sample (i.e. a typical galaxy around $\Delta$log(sSFR$_7$) = log(SFR$_{79}$) = 0) and successively add a contribution of the modelled OSP adopting different ages and mass-fractions with respect to the initial mass of the SF population. We select the initial mass of the SF galaxy to be $10^{10.5}M_{\odot}$, roughly the typical mass of a SF galaxy in our sample. We then investigate how the estimates of the star formation parameters sSFR$_7$, SFR$_{79}$ and thus also sSFR$_9$ are affected by the increasing contribution of an OSP. In principle, we would expect sSFR$_7$ and sSFR$_9$ to scale exactly inversely with the added mass, while SFR$_{79}$ should remain unaltered. In other words, we would ideally see the composite galaxy move vertically downwards in the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram as more OSP is added.
The results of this exercise are shown in Figure \ref{old_stellar_pop}, where 15 different mass fractions (equally spaced in log, ranging from $-$1 to 2, illustrated by the color coding) are plotted for three different ages of the OSP (2, 5 and 10 Gyr, illustrated by different marker types) respectively.
As might be expected, the older the OSP population is, the better the composite galaxy follows the desired vertical track in the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram.
For all ages, an OSP with a mass roughly 2 to 3 times the mass of the SF component is required to move an object out of the SF population. For the older OSP ages, 5 Gyr and greater, such an object would {\it not} mimic a galaxy with a \textit{currently} significantly suppressed sSFR because it (correctly) still has log(SFR$_{79}$) $\sim 0$. For younger ages of the added OSP, the composite galaxy may well contaminate the region to the bottom left of the SF population with log(SFR$_{79}$) $\approx -0.4$.
However, a galaxy with an OSP that is 2 to 3 times as massive as the population of continually formed stars but with an age of only 2 to 5 Gyr would represent a rather odd SFH. It would have effectively required $65-75$\% of the stellar mass to have been formed in a short burst a few Gyr ago. While such galaxies may exist, we would not expect them to be very common, and they are therefore unlikely to contribute significantly to the population of galaxies to the bottom left of the SF population and to any (sub)set of galaxies on the log(SFR$_{79}$)-$\Delta$log(sSFR$_9$) diagram.
\label{lastpage} |
Title:
A note on the (non-)conservation of curvature perturbation |
Abstract: In this note, we compare two different definitions for the cosmological
perturbation $\zeta$ which is conserved on large scales and study their
non-conservation on small scales. We derive an equation for the time evolution
of the curvature perturbation on a uniform density slice through a calculation
solely in longitudinal (conformal-Newtonian) gauge. The result is concise and
compatible with that obtained via local conservation of energy-momentum tensor.
| https://export.arxiv.org/pdf/2208.07568 |
\title{A note on the (non-)conservation of curvature perturbation}
\author{Chia-Min Lin}
\affiliation{Fundamental General Education Center, National Chin-Yi University of Technology, Taichung 41170, Taiwan}
\large
\baselineskip 18pt
\section{Introduction}
Our universe appears to be homogeneous and isotropic on large enough scales in accordance with the cosmological principle and is described by an expanding Friedmann metric. On smaller scales, there are inhomogeneities that can be studied by using perturbation theory. These primordial perturbations generate the anisotropies of the cosmic microwave background (CMB) and are the seeds of subsequent structure formation, which eventually leads to galaxies and stars. But inhomogeneity of what? Naively we can say it is the inhomogeneity of energy density, but in cosmological perturbation theory, one has the freedom to choose a spatial hypersurface (or slice) where the energy density is constant (or uniform) and there is no density perturbation at all. Instead, the hypersurface thus chosen may have intrinsic curvature. There is an interplay between the primordial density perturbation and the primordial curvature perturbation: they can be transformed into each other. Therefore cosmological perturbation theory is complicated by the issue of coordinate (or gauge) transformations, and there are different theoretical representations of the same physics. The question of which gauge is better is largely a matter of personal taste. One popular approach is to choose the longitudinal (conformal-Newtonian) gauge and study the time evolution of a gauge-invariant quantity $\Phi$ which can be regarded as a generalized Newtonian potential. Another approach is to consider a quantity $\zeta$ which is conserved on large scales but not on small scales. The purpose of this note is to consider two definitions of $\zeta$ (we call them $\zeta_1$ and $\zeta_2$) and compare their non-conservation on small scales.
The note is organized as follows.
We introduce cosmological perturbation theory in Section \ref{sec1}.
We review some relevant equations which will be needed subsequently in order to work in longitudinal gauge in Section \ref{sec2}. We present the time evolution and matching conditions for $\zeta_1$ in order to compare with $\zeta_2$ in Section \ref{sec3}. We solve for the time evolution of $\zeta_2$ and obtain a concise equation in Section \ref{sec4}. The calculations are done solely in the framework of a longitudinal (conformal-Newtonian) gauge. The results can be compared with that of other approaches. In Section \ref{sec5}, we present our conclusions.
\section{cosmological perturbation}
\label{sec1}
We consider a spatially flat Friedmann metric $^{(0)}g_{\mu\nu}$ with first-order scalar perturbations $\delta g_{\mu\nu}$. The line element is given by
\begin{equation}
d s^2= a^2 \left[ (1+2A)d\eta^2 + 2B_{,i}d\eta dx^i -\left( (1-2\psi) \delta_{ij} -2E_{,ij}\right) dx^i dx^j \right].
\end{equation}
With the coordinate transformation
\begin{equation}
x^\alpha \rightarrow x^\alpha + \xi^\alpha,
\end{equation}
the variation of the metric perturbation $\delta g_{\mu\nu}$ (up to first-order) is nothing but the Lie derivative of the background metric $^{(0)}g_{\mu\nu}$ with respect to the vector $\xi^\alpha$, namely $\delta g_{\mu\nu} \rightarrow \delta g_{\mu\nu}+\mathsterling_\xi ^{(0)}g_{\mu\nu}$, where
\begin{equation}
\mathsterling_\xi ^{(0)}g_{\mu\nu}=\xi^\lambda {^{(0)}g_{\mu\nu,\lambda}}+^{(0)}g_{\lambda \nu}\xi^\lambda_{,\mu}+ ^{(0)}g_{\mu\lambda} \xi^\lambda_{,\nu}.
\end{equation}
If we define $\xi^\alpha \equiv (\xi^0, \xi^i)$ where $\xi^i$ is decomposed through Helmholtz's theorem as
\begin{equation}
\xi^i=\xi^i_{\perp}+\xi^{,i},
\end{equation}
with $\xi^i_{\perp,i}=0$, the gauge transformation of the metric perturbation is given by
\begin{equation}
A \rightarrow A-\frac{1}{a}(a\xi^0)^\prime, \;\;\; B \rightarrow B+\xi^\prime -\xi^0, \;\;\; \psi \rightarrow \psi+\frac{a^\prime}{a} \xi^0, \;\;\; E \rightarrow E+ \xi,
\label{gauge}
\end{equation}
where a prime denotes differentiation with respect to $\eta$.
For the unperturbed background, we have the continuity equation
\begin{equation}
\epsilon^\prime_0=-3\mathcal{H}(\epsilon_0+p_0),
\label{cl}
\end{equation}
and the Friedmann equation
\begin{equation}
\mathcal{H}^2=\frac{8\pi G}{3}a^2 \epsilon_0,
\label{e2}
\end{equation}
where $\mathcal{H}\equiv a^\prime/a$ is the conformal Hubble parameter\footnote{This is different from $H \equiv \frac{(da/dt)}{a}$ where $dt=ad\eta$.}, $\epsilon_0$ is the unperturbed energy density, and $p_0$ is the unperturbed pressure.
These equations will be used in the following sections.
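As an aside (not part of the original text), these background equations can be cross-checked symbolically for a constant equation of state $p_0=w\,\epsilon_0$, for which $a\propto \eta^{2/(1+3w)}$ in conformal time; the sketch below uses {\tt sympy}, and the variable names are ours:

```python
import sympy as sp

# Illustrative cross-check (variable names are ours, not the paper's).
eta, w, G = sp.symbols('eta w G', positive=True)

a = eta**(2/(1 + 3*w))            # power-law scale factor for constant w
H = sp.diff(a, eta)/a             # conformal Hubble parameter H = a'/a
eps0 = 3*H**2/(8*sp.pi*G*a**2)    # energy density from the Friedmann equation, Eq. (e2)
p0 = w*eps0                       # constant equation of state p0 = w*eps0

# Continuity equation, Eq. (cl): eps0' + 3H(eps0 + p0) should vanish identically.
residual = sp.simplify(sp.diff(eps0, eta) + 3*H*(eps0 + p0))
print(residual)  # -> 0
```

The vanishing residual confirms that the power-law solution satisfies both Eq.~(\ref{cl}) and Eq.~(\ref{e2}) simultaneously.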
\section{Longitudinal (conformal-Newtonian) gauge}
\label{sec2}
From Eq.~(\ref{gauge}), it can be seen that gauge transformation can be used to set $B=E=0$, and the gauge freedom is used up\footnote{Two scalar degrees of freedom $\xi^0$ and $\xi$ are used to cancel two scalar degrees of freedom $B$ and $E$.}. In this particular gauge (longitudinal or conformal-Newtonian gauge), we call $A=\Phi$ and $\psi=\Psi$\footnote{They are gauge-invariant quantities.}.
The line element is then simplified to
\begin{equation}
ds^2=a^2 \left[ (1+2\Phi)d \eta^2-(1-2\Psi)\delta_{ij}dx^i dx^j \right].
\end{equation}
The linearized Einstein equations are
\begin{equation}
\overline{\delta G}^\alpha_\beta=8\pi G \overline{\delta T}^\alpha_\beta,
\end{equation}
where the overline symbol denotes perturbations evaluated in this particular gauge; these combinations coincide with the corresponding gauge-invariant quantities. This gives \cite{Mukhanov:2005sc, Mukhanov:1990me}
\begin{equation}
\Delta \Psi - 3 \mathcal{H}(\Psi^\prime+\mathcal{H}\Phi)=4 \pi G a^2 \overline{\delta T}^0_0,
\label{ee11}
\end{equation}
\begin{equation}
(\Psi^\prime +\mathcal{H}\Phi)_{,i}=4\pi G a^2 \overline{\delta T}^0_i,
\label{e12}
\end{equation}
\begin{equation}
\left[ \Psi^{\prime\prime} +\mathcal{H}(2\Psi +\Phi)^\prime +(2\mathcal{H}^\prime+\mathcal{H}^2)\Phi + \frac{1}{2}\Delta (\Phi-\Psi) \right] \delta_{ij}-\frac{1}{2}(\Phi-\Psi)_{,ij}=-4 \pi G a^2 \overline{\delta T}^i_j.
\label{e13}
\end{equation}
Here the symbol $\Delta$ denotes the Laplacian with respect to the comoving spatial coordinates $x^i \equiv \mathbf{x}$. It corresponds to $-k^2$ in momentum space, where $k \equiv |\mathbf{k}|$ is the comoving wave number for a mode $\propto \exp(i \mathbf{k}\cdot\mathbf{x})$.
The energy-momentum tensor is written as
\begin{equation}
\overline{\delta T}^0_0=\overline{\delta \epsilon}, \;\;\; \overline{\delta T}^0_i=\frac{1}{a}(\epsilon_0+p_0)(\overline{\delta u}_{\| i}), \;\;\; \overline{\delta T}^i_j=-\overline{\delta p}\delta ^i_j.
\label{eq14}
\end{equation}
Here we define $\delta u_{\|i}\equiv \delta u_{\|,i}$ for a scalar function $\delta u_\|$ and \footnote{We ignore another component $\delta u_{\perp i}$ with the property $(\delta u_{\perp i})^{,i}=0$ because it does not affect scalar perturbation.}
\begin{equation}
\overline{\delta u}_{\|i}=\delta u_{\|i}-a(B-E^\prime)_{,i}.
\label{eqgt}
\end{equation}
From Eq.~(\ref{eq14}), $\overline{\delta T}^i_j=0$ for $i \neq j$ therefore $\Psi=\Phi$ from Eq.~(\ref{e13}). By using these results, Eqs.~(\ref{ee11}), (\ref{e12}), and (\ref{e13}) are simplified to
\begin{equation}
\Delta \Phi-3\mathcal{H}(\Phi^\prime+\mathcal{H}\Phi)=4\pi G a^2 \overline{\delta \epsilon},
\label{de}
\end{equation}
\begin{equation}
(\Phi^\prime+\mathcal{H}\Phi)_{,i}=4\pi G a (\epsilon_0+ p_0)\overline{\delta u}_{\| i},
\label{eq9}
\end{equation}
\begin{equation}
\Phi^{\prime\prime}+3 \mathcal{H}\Phi^\prime +(2 \mathcal{H}^\prime + \mathcal{H}^2)\Phi=4\pi G a^2 \overline{\delta p}.
\label{e11}
\end{equation}
The pressure is a function of energy density $\epsilon$ and entropy $S$, hence the perturbation is
\begin{equation}
\overline{\delta p}=c^2_s \overline{\delta \epsilon}+\tau \delta S,
\label{state}
\end{equation}
where $c^2_s \equiv (\partial p/\partial \epsilon)_S$ and $\tau \equiv (\partial p / \partial S)_\epsilon$. From Eqs.~(\ref{de}) and (\ref{e11}), we have
\begin{equation}
\Phi^{\prime\prime}+3(1+c^2_s)\mathcal{H}\Phi^\prime -c^2_s \Delta \Phi +(2\mathcal{H}^\prime +(1+3c_s^2)\mathcal{H}^2)\Phi=4\pi G a^2 \tau \delta S.
\label{main}
\end{equation}
We will consider adiabatic perturbations where $\delta S=0$ in the following discussion. The first derivative term in the above equation can be eliminated if we define
\begin{equation}
u \equiv \frac{\Phi}{(\epsilon_0+p_0)^{1/2}}
\label{u}
\end{equation}
and
\begin{equation}
\theta \equiv \frac{1}{a}\left( 1+\frac{p_0}{\epsilon_0} \right)^{-1/2}.
\label{theta}
\end{equation}
By using $u$ and $\theta$, Eq.~(\ref{main}) becomes
\begin{equation}
u^{\prime\prime}-c_s^2 \Delta u - \frac{\theta^{\prime\prime}}{\theta}u=0,
\label{eq2}
\end{equation}
which can be further rearranged into
\begin{equation}
\left[ \theta^2 \left( \frac{u}{\theta} \right)^\prime \right]^\prime = c^2_s \theta^2 \Delta \left( \frac{u}{\theta} \right).
\label{Del}
\end{equation}
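The step from Eq.~(\ref{eq2}) to Eq.~(\ref{Del}) rests on the identity $[\theta^2(u/\theta)^\prime]^\prime=\theta u^{\prime\prime}-\theta^{\prime\prime}u$, valid for arbitrary $u(\eta)$ and $\theta(\eta)$. A quick symbolic check (an illustrative sketch, not part of the original derivation):

```python
import sympy as sp

eta = sp.symbols('eta')
u = sp.Function('u')(eta)
theta = sp.Function('theta')(eta)

# Identity used to pass from Eq. (eq2) to Eq. (Del):
# [theta^2 (u/theta)']' = theta u'' - theta'' u, for arbitrary u(eta), theta(eta).
lhs = sp.diff(theta**2*sp.diff(u/theta, eta), eta)
rhs = theta*sp.diff(u, eta, 2) - sp.diff(theta, eta, 2)*u
residual = sp.simplify(lhs - rhs)
print(residual)  # -> 0
```

Substituting $u^{\prime\prime}=c_s^2\Delta u+(\theta^{\prime\prime}/\theta)u$ from Eq.~(\ref{eq2}) into the right-hand side then gives $c_s^2\theta\Delta u=c_s^2\theta^2\Delta(u/\theta)$, i.e. Eq.~(\ref{Del}).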
Let us define the quantity $\zeta_1$ as
\begin{equation}
\zeta_1 \equiv \frac{2}{3}\frac{\mathcal{H}^{-1}\Phi^\prime+\Phi}{1+w}+\Phi.
\label{zeta1}
\end{equation}
This definition is used in modern textbooks and reviews such as \cite{Mukhanov:2005sc, Lyth:2009zz, Brandenberger:1994ce, Mukhanov:1990me, Durrer:2004fx}.
By using Eqs.~(\ref{u}) and (\ref{theta}), we can obtain
\begin{equation}
\zeta_1 = \frac{2}{3}\left( \frac{8\pi G}{3} \right)^{-1/2}\theta^2 \left( \frac{u}{\theta} \right)^\prime.
\label{z12}
\end{equation}
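Equation~(\ref{z12}) can be verified symbolically on a concrete background, e.g. the power-law solution for constant $w>0$, with $\Phi(\eta)$ left arbitrary (an illustrative {\tt sympy} sketch with our own variable names, not part of the original):

```python
import sympy as sp

# Illustrative check on a concrete background (names are ours, not the paper's).
eta, w, G = sp.symbols('eta w G', positive=True)
Phi = sp.Function('Phi')(eta)       # arbitrary potential Phi(eta)

a = eta**(2/(1 + 3*w))              # power-law scale factor, constant w > 0
H = sp.diff(a, eta)/a
eps0 = 3*H**2/(8*sp.pi*G*a**2)      # Friedmann equation, Eq. (e2)
p0 = w*eps0

u = Phi/sp.sqrt(eps0 + p0)          # Eq. (u)
theta = sp.sqrt(eps0/(eps0 + p0))/a # Eq. (theta)

zeta1_def = sp.Rational(2, 3)*(sp.diff(Phi, eta)/H + Phi)/(1 + w) + Phi              # Eq. (zeta1)
zeta1_alt = sp.Rational(2, 3)*sp.sqrt(3/(8*sp.pi*G))*theta**2*sp.diff(u/theta, eta)  # Eq. (z12)

residual = sp.simplify(zeta1_def - zeta1_alt)
print(residual)  # -> 0
```

The two expressions agree identically for arbitrary $\Phi(\eta)$ on this background, as they must.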
From Eq.~(\ref{Del}), the time derivative of $\zeta_1$ is given by
\begin{equation}
\zeta_1^\prime = \frac{2}{3}\left( \frac{8\pi G}{3} \right)^{-1/2}c_s^2 \theta^2 \Delta \left( \frac{u}{\theta} \right).
\label{z12p}
\end{equation}
On large scales (i.e. for comoving wave number $k\rightarrow 0$, so that $\Delta \Phi \rightarrow 0$), $\zeta_1^\prime=0$ and $\zeta_1$ is a useful conserved quantity.
In the following, we study the effect of a non-zero $k$.
\section{(violation of) the conservation of $\zeta_1$}
\label{sec3}
For a mode with wave number $k$, the solution of Eq.~(\ref{eq2}) can be written as the integral equation
\begin{equation}
u_k(\eta)=C_1 \theta + C_2 \theta \int \frac{d\eta}{\theta^2}-k^2 \theta \int^\eta \left( \int^{\tilde{\eta}} c_s^2 \theta u_k d\bar{\eta} \right)\frac{1}{\theta^2(\tilde{\eta})}d\tilde{\eta}.
\end{equation}
From Eq.~(\ref{z12}), we have\footnote{This appears as an exercise in \cite{Mukhanov:2005sc}.}
\begin{equation}
\zeta_1=\frac{2}{3}\left( \frac{8\pi G}{3} \right)^{-1/2} C_2 - \frac{2}{3}\left( \frac{8\pi G}{3} \right)^{-1/2} k^2 \int^\eta c_s^2 \theta u_k d\bar{\eta}.
\label{vio}
\end{equation}
The first term is a constant, while the second term depends on $\eta$ and explicitly exhibits the violation of the otherwise conserved quantity $\zeta_1$.
The same expression can also be obtained by integrating Eq.~(\ref{z12p}).
If the pressure $p(\epsilon)$ is discontinuous on a hypersurface $\Sigma$, matching conditions \cite{Mukhanov:2005sc, Deruelle:1995kd} can be derived by integrating Eq.~(\ref{Del}) across $\Sigma$ as
\begin{equation}
\left[ \theta^2 \left( \frac{u}{\theta} \right)^\prime \right]_{\pm}=\int^{\Sigma+0}_{\Sigma-0} c^2_s \theta^2 \Delta \left( \frac{u}{\theta} \right) d \eta,
\end{equation}
where $[X]_{\pm} \equiv X_+ - X_-$.
By using the relation (which is derived in the appendix)
\begin{equation}
c_s^2 \theta^2 =\frac{\epsilon_0}{3 a^2 \mathcal{H}}\left( \frac{1}{\epsilon_0+p_0} \right)^\prime - \frac{\epsilon_0}{a^2 (\epsilon_0+p_0)},
\label{re}
\end{equation}
and continuity of $a$, $\epsilon$, and $u/\theta$ one obtains the matching conditions
\begin{equation}
[\Phi]_{\pm}=0, \;\;\; \left[ \zeta_1-\frac{2}{9\mathcal{H}^2}\frac{\Delta \Phi}{1+w} \right]_{\pm}=0,
\label{m2}
\end{equation}
where $w=p_0/\epsilon_0$.
Only for long-wavelength perturbations, when $\Delta \Phi$ can be neglected, do we have
\begin{equation}
[\zeta_1]_{\pm}=0.
\end{equation}
\section{(violation of) the conservation of $\zeta_2$}
\label{sec4}
The curvature perturbation on a uniform density slice is defined as\footnote{This quantity originated in \cite{Bardeen:1983qw}, defined in the uniform expansion gauge. There could be a minus sign difference in the definitions. A comparison of $\zeta_1$ and $\zeta_2$ is discussed in \cite{Martin:1997zd} where they are called $\zeta$ and $\zeta_{BST}$.}
\begin{equation}
\zeta_2 \equiv \mathcal{H} \frac{\delta \epsilon}{\epsilon_0^\prime}+ \psi,
\end{equation}
which can be calculated in any gauge due to its gauge invariance. Note that $\delta \epsilon$ is the energy density perturbation in an arbitrary gauge. The relation between $\delta \epsilon$ and $\overline{\delta \epsilon}$ is $\overline{\delta \epsilon}=\delta \epsilon-\epsilon^\prime_0(B-E^\prime)$, so $\overline{\delta \epsilon}=\delta \epsilon$ in the longitudinal gauge where $B=E=0$. If one chooses a gauge where $\delta \epsilon=0$ (a uniform density slice), $\zeta_2$ is given by $\psi$ (the curvature perturbation\footnote{It is called the curvature perturbation because $\psi$ determines the intrinsic spatial curvature on hypersurfaces of constant $\eta$.})
in this gauge; hence the name curvature perturbation on a uniform density slice. In particular, we can calculate $\zeta_2$ in the longitudinal (conformal-Newtonian) gauge as
\begin{equation}
\zeta_2= \mathcal{H} \frac{\overline{\delta \epsilon}}{\epsilon_0^\prime}+ \Phi.
\end{equation}
By using Eqs.~(\ref{cl}), (\ref{e2}), and (\ref{de}), we obtain\footnote{This appears as the definition of $\zeta$ in \cite{Brandenberger:1983tg, Brandenberger:1984cz}.}
\begin{equation}
\zeta_2 = \frac{2}{3}\frac{\mathcal{H}^{-1}\Phi^\prime+\Phi}{1+w}+\Phi-\frac{2}{9\mathcal{H}^2}\frac{\Delta \Phi}{1+w}=\zeta_1-\frac{2}{9\mathcal{H}^2}\frac{\Delta \Phi}{1+w}.
\end{equation}
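This relation between $\zeta_2$ and $\zeta_1$ follows purely from Eqs.~(\ref{cl}), (\ref{e2}), and (\ref{de}); it can be checked symbolically for a Fourier mode ($\Delta \rightarrow -k^2$) with $a(\eta)$ and $\Phi(\eta)$ left arbitrary (an illustrative sketch with our own variable names, not part of the original):

```python
import sympy as sp

# Illustrative check for a Fourier mode (Laplacian -> -k^2); names are ours.
eta, k, G = sp.symbols('eta k G', positive=True)
a = sp.Function('a', positive=True)(eta)   # arbitrary scale factor
Phi = sp.Function('Phi')(eta)              # arbitrary potential

H = sp.diff(a, eta)/a
eps0 = 3*H**2/(8*sp.pi*G*a**2)             # Friedmann equation, Eq. (e2)
p0 = -eps0 - sp.diff(eps0, eta)/(3*H)      # continuity equation, Eq. (cl)
w = p0/eps0

lapPhi = -k**2*Phi
# Eq. (de) solved for the density perturbation:
delta_eps = (lapPhi - 3*H*(sp.diff(Phi, eta) + H*Phi))/(4*sp.pi*G*a**2)

zeta2 = H*delta_eps/(-3*H*(eps0 + p0)) + Phi                          # definition of zeta_2
zeta1 = sp.Rational(2, 3)*(sp.diff(Phi, eta)/H + Phi)/(1 + w) + Phi   # Eq. (zeta1)

residual = sp.simplify(zeta2 - (zeta1 - sp.Rational(2, 9)*lapPhi/(H**2*(1 + w))))
print(residual)  # -> 0
```

No assumption on the background beyond Eqs.~(\ref{cl}) and (\ref{e2}) enters the check.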
We find that Eq.~(\ref{m2}) immediately simplifies to
\begin{equation}
[\Phi]_{\pm}=0, \;\;\; \left[ \zeta_2 \right]_{\pm}=0.
\label{m4}
\end{equation}
No long-wavelength condition is needed for $\zeta_2$. What about the violation of its conservation?
By using Eqs.~(\ref{u}), (\ref{theta}) and (\ref{re}), we write the integrand of the second term in Eq.~(\ref{vio}) as
\begin{equation}
c^2_s \theta^2 \Delta \left( \frac{u}{\theta} \right)=\frac{\epsilon_0}{3a^2 \mathcal{H}}\left( \frac{1}{\epsilon_0+p_0} \right)^\prime \frac{a \Delta \Phi}{\epsilon_0^{1/2}}-\frac{\epsilon_0}{a^2 (\epsilon_0+p_0)} \frac{a \Delta \Phi}{\epsilon_0^{1/2}} \equiv \frac{X}{3 \mathcal{H}}\left( \frac{1}{\epsilon_0+p_0} \right)^\prime -X\left( \frac{1}{\epsilon_0+p_0} \right),
\label{e29}
\end{equation}
where we have defined
\begin{equation}
X \equiv \frac{\Delta \Phi \epsilon_0^{1/2}}{a}=\sqrt{\frac{3}{8 \pi G}}\frac{\Delta \Phi \mathcal{H}}{a^2}
\label{xd}
\end{equation}
to simplify the calculation.
The second equality in Eq.~(\ref{xd}) follows from Eq.~(\ref{e2}).
Let us calculate
\begin{equation}
\zeta_2^\prime=\zeta_1^\prime -\left[ \frac{2}{9 \mathcal{H}^2}\frac{\Delta \Phi}{1+w} \right]^\prime=\frac{2}{3}\left( \frac{8 \pi G}{3} \right)^{-1/2}c_s^2 \theta^2 \Delta \left( \frac{u}{\theta} \right)-\left[ \frac{2}{9 \mathcal{H}^2}\frac{\Delta \Phi}{1+w} \right]^\prime.
\label{z2p}
\end{equation}
We can use the equality (obtained from Eq.~(\ref{e2}))
\begin{equation}
\frac{2}{9\mathcal{H}^2}\frac{\Delta \Phi}{1+w}= \frac{2}{3} \left( \frac{8 \pi G}{3} \right)^{-1/2} \frac{\epsilon_0^{1/2} \Delta \Phi}{3a \mathcal{H}}\left( \frac{1}{\epsilon_0+p_0} \right) \equiv \frac{2}{3} \left( \frac{8 \pi G}{3} \right)^{-1/2} \frac{X}{3 \mathcal{H}}\left( \frac{1}{\epsilon_0+p_0} \right),
\end{equation}
to obtain
\begin{equation}
\left[ \frac{2}{9 \mathcal{H}^2}\frac{\Delta \Phi}{1+w} \right]^\prime=\frac{2}{3} \left( \frac{8 \pi G}{3} \right)^{-1/2} \left\{ \left( \frac{X}{3 \mathcal{H}}\right)^\prime \left( \frac{1}{\epsilon_0+p_0} \right)+ \left( \frac{X}{3 \mathcal{H}}\right) \left( \frac{1}{\epsilon_0+p_0} \right)^\prime \right\},
\label{e33}
\end{equation}
where direct calculation shows
\begin{equation}
\left( \frac{X}{3 \mathcal{H}} \right)^\prime=X \left( \frac{\Delta \Phi^\prime}{3 \Delta \Phi \mathcal{H}}-\frac{2}{3} \right).
\end{equation}
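This intermediate identity holds for arbitrary $a(\eta)$ and $\Phi(\eta)$ and does not require the Friedmann equation; a quick symbolic check (illustrative, not part of the paper):

```python
import sympy as sp

eta, k, G = sp.symbols('eta k G', positive=True)
a = sp.Function('a', positive=True)(eta)   # arbitrary scale factor
Phi = sp.Function('Phi')(eta)              # arbitrary potential

H = sp.diff(a, eta)/a
lapPhi = -k**2*Phi                          # Fourier-space Laplacian of Phi
X = sp.sqrt(3/(8*sp.pi*G))*lapPhi*H/a**2    # Eq. (xd), second form

# Direct-calculation identity: (X/3H)' = X*(lapPhi'/(3*lapPhi*H) - 2/3).
lhs = sp.diff(X/(3*H), eta)
rhs = X*(sp.diff(lapPhi, eta)/(3*lapPhi*H) - sp.Rational(2, 3))
residual = sp.simplify(lhs - rhs)
print(residual)  # -> 0
```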
By substituting Eqs.~(\ref{e29}) and (\ref{e33}) into Eq.~(\ref{z2p}), we obtain
\begin{equation}
\zeta_2^\prime = \frac{2}{3} \left( \frac{8 \pi G}{3} \right)^{-1/2} \frac{X}{3\mathcal{H}\Delta \Phi\,(\epsilon_0+p_0)}(\Delta \Phi^\prime+\mathcal{H} \Delta \Phi).
\end{equation}
Finally, by using Eqs.~(\ref{eq9}) and (\ref{xd}), we obtain
\begin{equation}
\zeta_2^\prime = \frac{\overline{\delta T}^{0,i}_i}{3(\epsilon_0+p_0)} = \frac{1}{3a}\Delta \overline{\delta u}_{\|}= \frac{1}{3a}\Delta (\delta u_\|-a(B-E^\prime)),
\label{result}
\end{equation}
where the third equality is from Eq.~(\ref{eqgt}).
This is the main result of this note\footnote{There is a similar expression for the time derivative of $\zeta$ in \cite{Wands:2000dp} obtained via a different approach using only the local conservation of energy-momentum tensor without assumption of Einstein gravity.}.
This concise equation simply shows that in the case of adiabatic perturbations, $\zeta_2$ is conserved whenever we can neglect the right-hand side of the equation. The last equality allows us to calculate $\zeta^\prime_2$ in any gauge. For example, in comoving gauge where $\delta u_\|=B=0$, we have $\zeta_2^\prime=\Delta E^\prime /3$. If we define\footnote{Velocity divergence is considered for example in \cite{Lesgourgues:2013qba} where it is called $\theta$.} a velocity divergence $\Theta$ as $ \delta T^{0,i}_i \equiv(\epsilon_0+p_0)\Theta $, Eq.~(\ref{result}) can be written more succinctly as
\begin{equation}
\zeta_2^\prime = \frac{\overline{\Theta}}{3}.
\end{equation}
This shows the time evolution of $\zeta_2$ is governed by the velocity divergence in the longitudinal (conformal-Newtonian) gauge.
The equation can be applied to various models. For example, consider a single-field inflation model with a scalar field as the inflaton field $\phi$. The action is given by
\begin{equation}
S=\int \left( \frac{1}{2} g^{\mu\nu} \phi_\mu \phi_\nu -V \right)\sqrt{-g}d^4 x.
\end{equation}
The scalar field is a quantum field whose quantum fluctuations result in a small perturbation $\delta\phi$.
The corresponding perturbation of the relevant component of the energy-momentum tensor is
\begin{equation}
\overline{\delta T}^0_i=\frac{1}{a^2}(\phi^\prime_0 \overline{\delta \phi})_{,i}.
\end{equation}
By using Eq.~(\ref{result}), this immediately gives the equation of motion for $\zeta_2$ as
\begin{equation}
\zeta_2^\prime = \frac{\phi^\prime_0 \Delta \overline{\delta \phi}}{3 a^2 (\epsilon_0 + p_0)}.
\label{sfi}
\end{equation}
On large scales, the Laplacian of $\overline{\delta \phi}$ is small and $\zeta_2$ is a conserved quantity; in this case, Eq.~(\ref{sfi}) lets us estimate how large the scale needs to be in order to ignore the evolution of $\zeta_2$. On the other hand, it is also useful when we consider small-scale fluctuations, and it may find application in the study of reheating after inflation.
As another example, let us consider $k$-inflation \cite{Armendariz-Picon:1999hyi} with the action
\begin{equation}
S=\int p(X,\phi)\sqrt{-g}\, d^4 x,
\end{equation}
where $X=(1/2)g^{\mu\nu} \phi_\mu \phi_\nu$ and $\epsilon=2X p_{,X}-p$.
Similarly to the previous case, considering the perturbation of the inflaton field $\phi$ and the corresponding perturbation of the energy-momentum tensor, we have
\begin{equation}
\overline{\delta T}^0_i=(\epsilon +p)\left( \frac{\overline{\delta \phi}}{\phi_0^\prime} \right)_{,i}.
\end{equation}
Therefore, from Eq.~(\ref{result}),
\begin{equation}
\zeta_2^\prime = \frac{ \Delta \overline{\delta \phi}}{3 \phi_0^\prime}.
\end{equation}
This result makes explicit the condition under which the time evolution of $\zeta_2$ can be neglected.
\section{conclusion}
\label{sec5}
In this note, we compare the two definitions $\zeta_1$ and $\zeta_2$ of the curvature perturbation. We show that the matching condition for $\zeta_2$ is simpler than that for $\zeta_1$. In particular, we derive the time evolution equation for $\zeta_2$ solely in the framework of the longitudinal (conformal-Newtonian) gauge. The result is very concise and can be compared with those obtained through different approaches in the literature. We present two classes of inflation models as examples, but the application of this result is much broader, especially when one works in the longitudinal (conformal-Newtonian) gauge.
Of course, if we are only interested in large scales where the comoving wave number $k \rightarrow 0$, then $\zeta_1=\zeta_2$, we could just call them $\zeta$, and it is conserved (for adiabatic perturbations). However, when discussing situations where $k \neq 0$, it is not always clear which definition is used in the literature, and it is good to know when we can neglect the small-scale corrections. For $\zeta_1$, the condition is whether $\Delta \Phi$ can be neglected; for $\zeta_2$, the condition is whether $\Delta \overline{\delta u}_{\|}/3a$ can be neglected. We believe this note could help to unify the ideas from different approaches.
\appendix
\section{a useful relation}
We derive Eq.~(\ref{re}) here.
Let us start by calculating
\begin{eqnarray}
\left( \frac{1}{\epsilon_0+p_0} \right)^\prime \frac{\epsilon_0}{3a^2 \mathcal{H}}&=&-\frac{\epsilon_0^\prime + p_0^\prime}{(\epsilon_0+p_0)^2}\frac{\epsilon_0}{3a^2 \mathcal{H}}\\
&=&-\frac{\epsilon_0^\prime \left( 1+ \frac{p_0^\prime}{\epsilon_0^\prime} \right)}{(\epsilon_0+p_0)^2}\frac{\epsilon_0}{3a^2 \mathcal{H}}\\
&=&\frac{3\mathcal{H}(\epsilon_0+p_0)\left( 1+\frac{p_0^\prime}{\epsilon_0^\prime}\right)\epsilon_0}{3\mathcal{H}(\epsilon_0+p_0)^2a^2}\\
&=&\frac{\epsilon_0}{a^2 (\epsilon_0+p_0)}\left(\frac{p_0^\prime}{\epsilon_0^\prime} \right)+\frac{\epsilon_0}{a^2(\epsilon_0+p_0)}\\
&=&c_s^2 \theta^2+\frac{\epsilon_0}{a^2(\epsilon_0+p_0)}.
\end{eqnarray}
Here in the third equality, we have used Eq.~(\ref{cl}). In the last equality, we have used $c_s^2 \equiv (\partial p_0 /\partial \epsilon_0)=p_0^\prime/\epsilon_0^\prime$ and the definitions of $c_s^2$ and $\theta$ from Eqs.~(\ref{state}) and (\ref{theta}).
\begin{equation}
c^2_s \theta^2=\frac{\epsilon_0}{3a^2 \mathcal{H}}\left( \frac{1}{\epsilon_0+p_0} \right)^\prime-\frac{\epsilon_0}{a^2(\epsilon_0+p_0)}.
\end{equation}
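The identity can also be verified symbolically: with arbitrary background functions and the continuity equation Eq.~(\ref{cl}) imposed as a substitution, the difference between the two sides vanishes (an illustrative {\tt sympy} sketch with our own variable names, not part of the original appendix):

```python
import sympy as sp

eta = sp.symbols('eta')
a = sp.Function('a', positive=True)(eta)
eps = sp.Function('epsilon', positive=True)(eta)   # background energy density
p = sp.Function('p')(eta)                          # background pressure

H = sp.diff(a, eta)/a
cs2 = sp.diff(p, eta)/sp.diff(eps, eta)            # c_s^2 = p0'/eps0'
theta2 = eps/(a**2*(eps + p))                      # theta^2 from Eq. (theta)

lhs = cs2*theta2
rhs = eps/(3*a**2*H)*sp.diff(1/(eps + p), eta) - eps/(a**2*(eps + p))

# Impose the background continuity equation, Eq. (cl): eps0' = -3 H (eps0 + p0).
residual = sp.simplify((lhs - rhs).subs(sp.Derivative(eps, eta), -3*H*(eps + p)))
print(residual)  # -> 0
```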
\acknowledgments
This work is supported by the National Science and Technology Council (NSTC) of Taiwan under Grant No. NSTC 111-2112-M-167-002.
|
Title:
The Physical Content of Long Tensor Modes in Cosmology |
Abstract: We analyze the physical content of squeezed bispectra involving
long-wavelength tensor perturbations, showing that these modes cannot be gauged
away, except for the exact (unphysical) limit of infinite wavelength, $k = 0$.
This result has a direct implication on the validity of the Maldacena
consistency relation, respected by a subclass of inflationary models.
Consequently, in the squeezed limit, as in the case of the scalar-scalar-scalar
bispectrum, squeezed mixed correlators could be observed by future experiments,
remaining a key channel to study Early Universe physics and discriminate among
different models of inflation.
| https://export.arxiv.org/pdf/2208.00075 |
\hyphenrules{nohyphenation}
\thispagestyle{empty}
\vspace*{-2.5cm}
\vspace{2.5cm}
\begin{center}
{\huge\sffamily\bfseries The Physical Content of Long Tensor Modes\\ in Cosmology}
\end{center}
\vspace{0.5cm}
\begin{center}
{\sffamily\bfseries \large Nicola Bartolo}$^{a,b,c}$,
{\sffamily\bfseries \large Giovanni Battista Carollo}$^{a,d}$,
{\sffamily\bfseries \large Sabino Matarrese}$^{a,b,c,e}$,
{\sffamily\bfseries \large\\ Luigi Pilo}$^{f,g}$,
{\sffamily\bfseries \large Rocco Rollo$^{f, h}$}\\[2ex]
{\it $^a$ Dipartimento di Fisica e Astronomia ``G. Galilei",
Universit\`{a} degli Studi di Padova, via Marzolo 8, I-35131 Padova,
Italy\\\vspace{0.1cm}
$^b$ INFN, Sezione di Padova, via Marzolo 8, I-35131 Padova, Italy\\\vspace{0.1cm}
$^c$ INAF-Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, I-35122 Padova, Italy\\\vspace{0.1cm}
$^d$ Dipartimento di Fisica ``M. Merlin", Universit\`{a} degli Studi di Bari, Via Giovanni Amendola, 173, 70125 Bari, Italy\\\vspace{0.1cm}
$^e$ GSSI-Gran Sasso Science Institute, Viale Francesco Crispi, 7, 67100 L'Aquila, Italy\\\vspace{0.1cm}
$^f$ Dipartimento di Scienze Fisiche e Chimiche, Universit\`a degli Studi dell'Aquila, I-67010 L'Aquila, Italy\\\vspace{0.1cm}
$^g$ INFN, Laboratori Nazionali del Gran Sasso, I-67010 Assergi, Italy\\\vspace{0.3cm}
$^h$ Centro Nazionale INFN di Studi Avanzati GGI, Largo Enrico Fermi 2, I-50125 Firenze, Italy\\\vspace{0.1cm}
{\tt [email protected]},
{\tt [email protected]}, {\tt [email protected]}, {\tt [email protected]}, {\tt [email protected]}}
\end{center}
\vspace{0.7cm}
\begin{center}
{\small \today}
\end{center}
\vspace{0.7cm}
\begin{center}
{\bf \Large Abstract}\\
\end{center}
We analyze the physical content of squeezed bispectra involving long-wavelength tensor perturbations, showing that these modes cannot be gauged away, except for the exact (unphysical) limit of infinite wavelength, $k=0$. This result has a direct implication on the validity of the Maldacena consistency relation, respected by a subclass of inflationary models. Consequently, in the squeezed limit, as in the case of the scalar-scalar-scalar bispectrum, squeezed mixed correlators could be observed by future experiments, remaining a key channel to study Early Universe physics and discriminate among different models of inflation.
\newpage
\section{Introduction}
The study of primordial non-Gaussianity (PNG) is one of the most important avenues to probe inflation, trying to resolve the large degeneracy among different models still present after analyzing Cosmic Microwave Background (CMB) data. As is well known, the amount of non-Gaussianity in standard single-field slow-roll inflation is very tiny, being of the order of the slow-roll parameters (\cite{Gangui:1993tt,Gangui:1999vg,Wang:1999vf,Acquaviva:2002ud,Maldacena:2002vr,Lyth:2005du}), yet non-vanishing; on the other hand, a large class of multi-field theories leads to quite different predictions (\cite{Seery:2005gb,Gao_2008,Byrnes_2010,Garcia_Saenz_2020}), as do more general single-field models of inflation~\cite{Seery_2005,Chen:2006nt,Senatore:2009gt,planckcollaboration2019planck}. \\
The strength of non-Gaussianity $f_{\rm NL}$ is the key parameter to quantify the phenomenon (\cite{planckcollaboration2019planck}), being related to the bispectrum, which vanishes for a perfectly Gaussian field.
In the case of single-field ``standard inflation", $f_{\rm NL}$ contains in particular a {\it local} contribution, which is maximal for {\it squeezed} bispectrum triangles, where one wave-number is much smaller than the other two.
In this context, one of the main results is the so-called {\it Maldacena consistency relation} (\cite{Maldacena:2002vr,Creminelli_2004,Creminelli:2004pv,Cheung:2007sv,Creminelli:2011rh,Bartolo:2011wb,Senatore:2012wy}), stating that in the squeezed limit the bispectrum (for {\it any} single-field model of inflation) becomes simply a product of two power-spectra
\begin{equation}
\begin{aligned}
\lim _{k_1\rightarrow 0}\langle\zeta(\vec {k}_{1}) \zeta(\vec {k}_{2}) \zeta(\vec {k}_{3})\rangle =- (2 \pi)^{3} \delta^{(3)}(\vec{k}_{1}+\vec{k}_{2}+\vec{k}_{3}) (n_s-1) P_\zeta(k_1)P_\zeta(k_2) \ ,
\end{aligned}
\label{consistencyrelation}
\end{equation}
where $\zeta$ is the comoving curvature perturbation\footnote{In our convention, $\zeta$ is defined from the Ricci scalar curvature $\mathcal R_S$ of a hypersurface of equation $S(x)=$const., where $S$ is a four-dimensional scalar, according to
$$
\mathcal R_S=-\frac{4}{a^2}\nabla^2 \zeta \ ,
$$
at first order in perturbations (where $a$ is the scale factor defined in (\ref{FLRW})).\label{footnote1}}, $P_\zeta$ its power spectrum and $n_s$ the scalar spectral index. This consistency relation has been derived, using different approaches, such as path-integration (\cite{Goldberger_2013}), exploiting the residual symmetries of the gauge-fixed action for $\zeta$ (\cite{Creminelli:2012ed,Hinterbichler:2012nm,Hinterbichler:2013dpa,Hui:2018cag}), BRST symmetry (\cite{Binosi,Berezhiani_2014}) and holography (\cite{Schalm,Bzowski_2013}).\\
A similar consistency condition is valid for any type of bispectrum in (single-field) inflation, including both the curvature perturbation and the tensor modes (\cite{Maldacena:2002vr,Hinterbichler:2013dpa,Bordin:2016ruc}). There are, however, models for which the consistency relation is violated. For example, it has been explicitly shown that inflationary models involving more than one scalar field (\cite{GordonWands}), a non-attractor phase (\cite{Lindefast,Hossein,Kinney}), an unstable background (\cite{Brahma,KhouryPiazza}) or breaking of space-time diffeomorphism invariance (\cite{SolidInflation,Bartolo:2015qvr,Celoria_2021, celoria2021primordial}) violate the consistency condition. This implies that, from the phenomenological point of view, the consistency relation is a very interesting channel to study Early Universe physics, given that it links observable quantities: any deviation from it would rule out all single-field models of inflation and could indicate that one of the above mechanisms is at work. The {\it Planck} analysis of the CMB temperature and E-mode polarization provided, among the various results, the following limit on the non-Gaussianity strength for the local shape (\cite{planckcollaboration2019planck}) of the curvature bispectrum: $f^\text{local}_\text{NL}=-0.9 \pm 5.1$ at $68 \%$ C.L. When compared to the spectral index, $n_s = 0.9652 \pm 0.0042$ ($68 \%$ C.L.), it is clear that the consistency relation is far from being tested experimentally.\\
In the last decade a debate has emerged (\cite{Tanaka_2011, Pajer:2013ana, Dai:2015jaa, Dai:2015rda, Bravo:2017gct}) on the observability of squeezed bispectra: various groups have claimed that the consistency relations can be gauged away by a suitable rescaling of the spatial coordinates and, as a result, cannot be considered physical observables. In particular, the key ingredient to cancel squeezed bispectra is the passage to the so-called Conformal Fermi Coordinates (CFC) frame (\cite{Pajer:2013ana,Dai:2015rda}). The very same technique was used to cancel any squeezed $\zeta$-related quantity and, as a consequence, also the halo bias scale-dependence, as far as the so-called ``GR-contribution" (see~\cite{Bartolo:2005xa}) is concerned (\cite{Dai:2015jaa,Baldauf_2011,dePutter:2015vga,Cabass:2018roz}).
Moreover, tensor fossils (\cite{Giddings:2010nc,Masui:2010cz,Giddings:2011zd,Jeong:2012df,Dai:2013ikl,Dai:2013kra,Dimastrogiovanni:2014ina, Dimastrogiovanni_2016,Dimastrogiovanni:2019bfl}) in single-field inflationary models have been claimed not to be genuine physical quantities~\cite{Pajer:2013ana,Brahma}.\\
In this paper we argue that the gauge freedom used to cancel squeezed correlation functions is only valid if the squeezed momentum is {\it exactly zero}. As shown in \cite{Matarrese:2020why}, when the squeezed momentum is finite, the gradient expansion restores the consistency relations. In \cite{Matarrese:2020why} the analysis was limited only to the scalar sector, but here we argue that the same result applies to the tensor sector.\\
This paper is organized as follows. In Section \ref{deformeddilation} we discuss the transformation of the metric components under a gauge transformation involving long-wavelength modes, more generic than the one discussed in \cite{Matarrese:2020why}, which in particular accounts for tensor modes. In Section \ref{regSVT} we show that under these deformed space dilations the tensor perturbation of the metric is unaffected and no shift is present for any finite value of the wave-number $k$. In Section \ref{Bispectrum} we discuss the transformation properties of a generic bispectrum under such a transformation. We conclude by summarizing our main results in Section \ref{conclusion}.
\section{Deformed Dilation}
\label{deformeddilation}
Let us consider a perturbed Friedmann-Lemaitre-Robertson-Walker (FLRW) spacetime\footnote{We take the spatial curvature $\kappa$ to be zero for simplicity; the results can easily be extended to the case $\kappa \neq 0$.}
\be
ds^2=-dt^2 +a^2\,\delta_{ij}\,dx^i\,dx^j +h_{\mu\nu}\,dx^\mu\,dx^\nu \,.
\label{FLRW}
\ee
Under an infinitesimal coordinate transformation of the following type,
\begin{equation*}
x^\mu \to \tilde{x}^\mu=x^\mu+\epsilon^\mu\,,
\end{equation*}
the induced change $\Delta h=\tilde{h}(x)-h(x)$ in the metric perturbation (gauge transformation) is given by (\cite{Weinberg:2008zzc})
\begin{equation}
\begin{aligned}
\Delta h_{00}&=2\dot\epsilon^0\, ;\\
\Delta h_{0i}&=\partial_i\epsilon^0 -a^2\dot \epsilon^i\, ;\\
\Delta h_{ij}&=-2a\dot a\epsilon^0\delta_{ij} -a^2\partial_j\epsilon^i-a^2\partial_i\epsilon^j\ .
\end{aligned}
\label{hijtransfrules}
\end{equation}
Exploiting rotational invariance, it is convenient to decompose the metric perturbation into scalars, vectors and tensors (SVT) under $SO(3)$; namely, we decompose $\epsilon^\mu$ according to
\be
\epsilon^\mu=\left(\epsilon^0, \partial^i
\epsilon+\epsilon^i_V\right) \qquad \qquad \text{where}\quad \de_i \epsilon^i_V=0
\ee
and $h_{\mu \nu}$ as
\be
\begin{aligned}
h_{00}&=-2\phi \ ,\\
h_{0i}&=a(\partial _i F+G_i) \qquad \qquad \de_i G_i=0 \ ,\\
h_{ij}&=a^2\left (-2\, \psi\, \delta_{ij} +\partial_i\partial_j B
+\partial_j C_i +\partial_i C_j +D_{ij}\right ) \, \qquad \de_i
C_i=\de_j D_{ij} = \delta _{ij} D_{ij} =0 \ .
\label{so(3)decomposition}
\end{aligned}
\ee
In this way, we get the following standard transformation rules for linear perturbations (\cite{Weinberg:2008zzc})
\be
\Delta \phi =\dot \epsilon^0 \, , \;\;\; \Delta F=\frac{1}{a}\, \epsilon^0- a \, \dot{\epsilon}\, ,\;\;\;\Delta G_i=-a\,\dot \epsilon^i_V\, ,
\ee
\begin{equation}
\Delta\psi= H\epsilon^0 \ ,\quad \Delta B=-2\epsilon \ , \quad\Delta C_i=-\epsilon^V_i \ , \quad\Delta D_{ij}=0 \, .
\label{ijgaugetransfrollo}
\end{equation}
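For instance, the rule $\Delta \psi=H\,\epsilon^0$ can be read off directly: for a purely temporal shift $\epsilon^\mu=(\epsilon^0,0)$, comparing the trace part of (\ref{hijtransfrules}) with the decomposition (\ref{so(3)decomposition}) gives
\be
\Delta h_{ij}=-2\,a\,\dot a\,\epsilon^0\,\delta_{ij}=-2\,a^2\,\Delta\psi\,\delta_{ij}
\quad \Longrightarrow \quad
\Delta\psi=\frac{\dot a}{a}\,\epsilon^0=H\,\epsilon^0 \, .
\ee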
Consider now the following transformation
\begin{equation}
\epsilon^{ i}=\lambda\, x^i+\omega^i_j \,x^j \, , \qquad \qquad
\omega^i_{i} =0 \, ,
\label{Weinbergtransformationeps}
\end{equation}
where $\lambda$ is a constant and $\omega$ a constant $3\times3$ matrix, traceless by definition\footnote{The trace part of $\omega$ can always be absorbed in $\lambda$. Indeed
$$
\omega^i_{j}x^j=(\omega^i_{j}-\omega_k^k\delta^i_{j})x^j+\omega^k_kx^i \, ;
$$
the first term gives a traceless $\omega$ and what is left can be reabsorbed in $\lambda$.}. This can be interpreted as the leading contribution in a derivative expansion of $\epsilon^i$.
Using rules (\ref{hijtransfrules}), the change of the spatial metric perturbation is given by
\begin{equation}
\Delta h_{ij}=- a^2\left (2\,\lambda
\,\delta_{ij}+\omega^i_{j}+\omega^j_{i}\right)\,.
\label{exactklimit}
\end{equation}
Thus, only the symmetric part $\omega^{S}$ of $\omega$ contributes to the transformed metric. Moreover, one can easily realize that the transformation (\ref{exactklimit}) can be reproduced by the following 3-parameter family of transformations of the scalar, vector and tensor parts defined in (\ref{so(3)decomposition}):
\begin{equation}
\Delta \psi=\alpha\,\lambda \ , \quad\Delta
B=\lambda \, (\alpha-1)\,x^jx^j+\gamma \, \omega_{ij}^S\, x^i\,x^j \ , \quad
\Delta C_i=(\beta-1)\,\omega_{ij}^S\,x^j \ , \quad\Delta D_{ij}=-2 \,
(\beta+\gamma) \, \omega_{ij}^S \, ,
\label{degeneracy1}
\end{equation}
with $\alpha, \,\beta, \,\gamma\in \mathbb{R}$. One should stress that the degeneracy in the above transformation rule is due to the ambiguity of the decomposition (\ref{so(3)decomposition}) for the transformed metric (\ref{exactklimit}), and it is removed as soon as the coordinate transformation (\ref{Weinbergtransformationeps}) contains terms quadratic in $x^i$, or equivalently as soon as $\lambda$ and $\omega_{ij}$ become space-dependent (in the general case $\lambda$ and $\omega$ are functions of $x^i$). A popular choice (\cite{ Pajer:2013ana, Dai:2015rda, Bravo:2017gct}) is to argue that a scalar perturbation $\psi_L$ and a tensor perturbation $D_L$ with a very long wavelength can always be gauged away by setting $\alpha=\beta=1$ and $\gamma=0$, thus
\be
\Delta \psi_L=\lambda \, , \qquad \Delta D_{ij} = 2 \,
\omega_{ij}^{S} \, , \qquad \Delta
B=\Delta C_i =0 \, .
\label{canc}
\ee
Besides the fact that such a choice is only one among infinitely many possible ones, it works only to gauge away a genuinely {\it constant} mode, which is not physical\footnote{In the Fourier basis this is equivalent to having a scalar or a tensor perturbation proportional to $\delta^{(3)}(\vec{k})$.}.\\
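One can check explicitly that every member of the family (\ref{degeneracy1}) reproduces (\ref{exactklimit}): using $\partial_i\partial_j\left(x^k x^k\right)=2\,\delta_{ij}$ and $\partial_i\partial_j\left(\omega^S_{kl}\,x^k x^l\right)=2\,\omega^S_{ij}$, the decomposition (\ref{so(3)decomposition}) gives
\be
\Delta h_{ij}=a^2\left[-2\,\alpha\,\lambda\,\delta_{ij}+2\,(\alpha-1)\,\lambda\,\delta_{ij}+2\,\gamma\,\omega^S_{ij}+2\,(\beta-1)\,\omega^S_{ij}-2\,(\beta+\gamma)\,\omega^S_{ij}\right]=-a^2\left(2\,\lambda\,\delta_{ij}+2\,\omega^S_{ij}\right) \, ,
\ee
independently of $\alpha$, $\beta$ and $\gamma$.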
\noindent Consider now splitting the scalar and tensor parts of the metric perturbation into their long and short parts
\be
\psi = \psi_L + \psi_S \, , \qquad D_{ij}=
D_{ij}^{(L)} +D_{ij}^{(S)} \, ,
\label{split1}
\ee
by using a suitable window function $W(k)=W_k$ such that
\be
\psi_{L}(x)=\frac{1}{(2\,\pi)^3} \int d^3 k\, e^{i\, \textbf{k}\cdot\textbf{x}} \,W_k \,\psi(k) \,;
\label{split}
\ee
and similarly for the tensor part. In \cite{Pajer:2013ana} it was claimed that under a class of coordinate transformations\footnote{This class of transformations is similar to the ones used in the transition from comoving to conformal Fermi coordinates (\cite{Pajer:2013ana,Dai:2015rda}). See the Appendix.}
\be
x^i \to (1- \psi_L )\, x^i+\frac{1}{2}\,D_{j}^i{}_L\, x^j\,,
\label{gaugetransformation}
\ee
(basically the same as (\ref{Weinbergtransformationeps}) when $\psi_L$ and $D_{ij}$ are constant) the ``long wavelength'' part in (\ref{split}) can be removed by using the transformation rules (\ref{canc}). The
problem is that
\begin{itemize}
\item
the choice that leads to (\ref{canc}) is by no means unique; for instance, by taking $\beta=-\gamma$ and $\alpha=0$ in (\ref{degeneracy1}), then (\ref{canc}) is no longer valid: the transformations of the scalar $B$ and of the transverse vector $C_i$ reproduce (\ref{exactklimit}) with $\psi$ unchanged;
\item the cancellation can take place only in the peculiar case of purely constant $\psi_L$ and $D_{j}^i{}_L$ and this is not the case in any reasonable coordinate transformation.
\end{itemize}
As will be shown in Section \ref{regSVT}, when the splitting (\ref{split1}) between long and short parts is performed with a physical window function, the ambiguity (\ref{degeneracy1}) disappears and the standard transformation rules (\ref{ijgaugetransfrollo}) are recovered; thus no shift is present when a proper gradient expansion is considered. As already discussed in~\cite{Matarrese:2020why}, the transformation rules (\ref{ijgaugetransfrollo}) are precisely the ones that guarantee the gauge-invariant character of scalars related to $\psi$-like fields, the comoving curvature perturbation $\zeta$ and $D_{ij}$ itself.\\
We conclude this section by underlining that the ambiguity just described was used by Weinberg to show that in the large-scale limit, under a number of technical assumptions, there is at least one conserved adiabatic mode~\cite{Weinberg:2008zzc,Weinberg:2003sw}, by exploiting the residual gauge invariance of the perturbed FLRW metric in the Newtonian gauge. But we remark that this is valid only in the exact $k=0$ limit, when $\lambda$ and $\omega$ are pure constants and the ambiguity (\ref{degeneracy1}) is still present.
\section{Restoring the SVT Decomposition: a discontinuity in the gradient expansion}
\label{regSVT}
Let us consider a deformation of (\ref{Weinbergtransformationeps}) in the sense that now both $\lambda$ and $\omega_{ij}$ can depend on the space point $\vec{x}$, namely
\begin{equation}
\label{def_dil}
\epsilon^i= \lambda(x)\, x^i+\omega^i_{j}(x)\, x^j\,, \qquad
\omega^i_{i}=0 \, , \qquad \partial_i \omega^i_j=0\,.
\end{equation}
To be as general as possible, we consider $\omega$ to be transverse and traceless, but not symmetric. By introducing a suitable window function $W_k$, in Fourier space $\lambda$ and $\omega$ are written as
\be
\lambda=\frac{1}{(2\,\pi)^3}\,\int d^3 k\,e^{i\,
\textbf{k}\cdot\textbf{x}}\, W_k\, \lambda_k\,, \qquad
\omega^{ij} = \frac{1}{(2\,\pi)^3}\, \int d^3 k\,e^{i\,
\textbf{k}\cdot\textbf{x}}\, W_k\, \omega^{ij} _{\vec k} \, .
\ee
A very common choice for the window function $W_k$ is
\be
W_k=\theta \left[\frac{1}{H}\left(k_c-k\right)\right] \ ,
\ee
where $k_c>0$ is a reference scale for the long-short mode splitting: modes with wavenumber $k<k_c$ (i.e. long-wavelength modes) are selected, while shorter-wavelength modes do not contribute. However, keep in mind that the rest of this section is independent of the particular choice of $W_k$. Notice also that we have taken the function $\lambda_k$ such that it depends only on $k=|\vec{k}|$, being (for our purposes) related to the Fourier transform of $\zeta$ on super-horizon scales (\cite{Matarrese:2020why}).\\
Using the transformation (\ref{def_dil}) in (\ref{hijtransfrules}), the variation of $h_{ij}$ results in
\be
\Delta h_{ij}= -a^2 \left( \de_i \epsilon^j+ \de_j \epsilon^i
\right)= -a^2 \left[2 \, \delta_{ij} \, \lambda + 2 \, \omega_{ij}^S
+x^i \, \de_j \lambda +x^j \, \de_i \lambda+ x^\ell\,\left(\de_i
\omega^j_{ \ell}+ \de_j \omega^i_{\ell} \right)
\right] \, .
\label{htrans}
\ee
The Fourier transform of $x^i\,\partial_j \lambda$ entering (\ref{htrans}) can be written as follows (\cite{Matarrese:2020why}), using integration by parts:
\begin{equation}
x^i \, \partial_j \lambda= -\frac{1}{(2\,\pi)^3} \,\int d^3 k\,e^{i\, \vec{k}\cdot \vec{x}} \partial_{k^i} \left(k^j\, \lambda_k\right)+\text{BT}\,.
\end{equation}
The boundary term BT is evaluated at very large $k$, where the window function vanishes: thus, terms of such a type do not contribute. Similar considerations apply to the Fourier transform of $x^\ell \de_i \omega_{j \ell}$. As a result, in Fourier space eq. (\ref{htrans}) reads
\be
\begin{aligned}
\Delta h_{ij}(k)
&=a^2\left[2\,\frac{k^i \, k^j}{k}\, \lambda_k'+k^i\; \partial_{k^l} \omega^{j}_l{}_{\vec k}+k^j\;\partial_{k^l} \omega^{i}_l{}_{\vec k}\right] \,,
\end{aligned}
\ee
where $\lambda_k' = \frac{d \lambda_k }{ d k}$. Thus, as claimed, no shift is present in either $\psi_k$ or $D_{ij}(\vec k)$ and, by comparison with the decomposition (\ref{so(3)decomposition}), one gets the following gauge variations
\be
\Delta \psi(k)=0 \ ,\quad \Delta B(k)=-\frac{2}{k}\,\lambda_k' \ , \quad \Delta C_i(k)=\partial_{k^l} \omega^{i}_l{}_{\vec k} \ , \quad \Delta D_{ij}(k)=0\ .
\label{gaugevariationk}
\ee
The SVT decomposition is restored and no ambiguity is present: the would-be shift of $\psi$ is actually a gradient term involving the transformation of $B$, while the would-be shift of $D_{ij}$ is turned into a gradient of $\omega_{ij}$
by using
\be
k^i\,\partial_{k^l} \omega^{i}_l{}_{\vec k}=\partial_{k^l}\left (k^i \,\omega^{i}_l{}_{\vec k}\right)=0 \,;
\ee
the first equality is obtained by using the traceless condition $\delta_{il}\,\omega^{i}_l=0$, while the vanishing follows from the transversality condition $k^i\,\omega^{i}_l{}_{\vec k}=0$. As a result, the shifts in (\ref{degeneracy1}) are just an artifact of the very special form (\ref{Weinbergtransformationeps}), where $\lambda$ and $\omega_{ij}$ are taken to be constant. Whenever $\lambda$ and $\omega$ acquire a space dependence, the shifts disappear and the gauge transformation cannot be used to cancel a physical long mode (i.e. one not proportional to $\delta^{(3)}(\vec{k})$).
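As a cross-check, inserting the variations (\ref{gaugevariationk}) back into the Fourier-space decomposition (\ref{so(3)decomposition}), with $\partial_i \to i\,k^i$, the scalar part of $\Delta h_{ij}(k)$ is reproduced entirely by the gradient term $\partial_i\partial_j\,\Delta B$:
\be
a^2\,\partial_i\partial_j\,\Delta B \;\longrightarrow\; -a^2\,k^i k^j\,\Delta B(k)=-a^2\,k^i k^j\left(-\frac{2}{k}\,\lambda_k'\right)=2\,a^2\,\frac{k^i k^j}{k}\,\lambda_k' \, ,
\ee
with no contribution from $\psi$ or $D_{ij}$.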
\section{Gauge Variation of a Correlator}
\label{Bispectrum}
We are interested in the correlation function of an operator $ {\cal O}(\vec{x}_1,...\vec{x}_N)$ built out of $\zeta$ and the tensor mode $D_{ij}$ taken as quantum field operators and evaluated by using the in-in formalism, see for instance \cite{Weinberg:2005vy}; namely
\be
{\cal O}(\vec{x}_1,...\vec{x}_N)= \zeta(\vec x_1)...\,\zeta(\vec x_M)\,D(\vec x_{M+1})... \,D(\vec x_N) \,.
\ee
In Fourier space, the case $N=3$ gives various types of bispectra.~\footnote{In Fourier space it is convenient to strip out the overall delta function according to
\be
\langle {\cal O}(\vec{k}_1,...\vec{k}_N) \rangle=(2\pi)^3 \delta^{(3)} (\vec{k}_1+...+\vec{k}_N)B_{\cal{O}}({k}_1,...,{k}_N) \ .
\nb
\ee
} The infinitesimal coordinate transformation (\ref{def_dil}) can be generalized at the non-linear level by
\be
\label{Nonlinear_dil}
\tilde x^i= e^{\lambda}\, x^i+(1-e^{\omega})|_{ij}\, x^j\,, \qquad g_{ij}=a^2 \, e^{2 \, \zeta}\,\delta_{ij}+h_{ij}+\frac{1}{2}\,h_{il}\,h_{lj}+\frac{1}{6}\,h_{il}\,h_{lm}\,h_{mj}\, ,
\ee
with $\zeta$, $\lambda$ and $\omega$ dependent on the spacetime point. Such a transformation represents the non-linear extension of the linear deformed dilatation used to connect the comoving gauge\footnote{In our convention the comoving gauge is defined as the one in which $B$ and the peculiar velocity are set to zero.} with a CFC-like reference frame.
Let us consider the gauge variation of a correlator as the difference between the expectation value of the operator $ {\cal O'}(\vec{x}_1,...\vec{x}_N)$ in the new coordinates defined by (\ref{Nonlinear_dil}) and that of the original operator $ {\cal O}(\vec{x}_1,...\vec{x}_N)$. %
\be
\Delta O_\text{gauge}= \left\langle {\cal O}'(\vec{x}_1,...\vec{x}_N)\right\rangle -\left\langle {\cal O}(\vec{x}_1,...\vec{x}_N)\right\rangle \, .
\label{correlatorshift}
\ee
The action describing gravity and the inflaton field is invariant under a coordinate transformation and, up to a boundary term, it can be written in the ADM form as~\cite{PhysRev.116.1322,Maldacena:2002vr}
\be
\label{Action}
S=\int \, d^4x \, \sqrt{h} \, N\,\left[ R^{(3)}+K_{ij} \,
K^{ij}-K^2+{\cal L}_m\right] \equiv \int \, d^4x \, \sqrt{h(x)} \,
{\cal S}(x)\,,
\ee
where $h$ is the spatial metric determinant and $K_{ij}$ is the extrinsic curvature tensor of the hypersurface (see footnote \ref{footnote1}) of equation $t=\text{constant}$, while ${\cal L}_m$ is the Lagrangian for the inflaton field $\phi$. The 3-scalar ${\cal S}$ in (\ref{Action}) can be expanded as
\be
\label{tr_prop}
\tilde {\cal S}(\tilde x) \equiv {\cal S}(x)= \bar {\cal S}(t)+{\cal S}^{(1)}(x)+{\cal S}^{(2)}(x)+\dots\,,
\ee
where $\bar {\cal S}(t)$ contains only background quantities, while ${\cal S}^{(n)}(x)$ is of order $n$ in perturbations. It is convenient to define the following gauge variation
\be
\Delta_{\cal S}=\sqrt{\tilde h (x)}\, \tilde {\cal S}(x)-\sqrt{h(x)}\,\, {\cal S}(x)\,,
\ee
which gives the change of the action as $\Delta_{\text{action}}=\int \, d^4x \, \Delta_{\cal S}$. Splitting also $\Delta_{\cal S}$ into background and $n$-th order perturbations, as done for $\cal S$ in (\ref{tr_prop}), the change of the spatial coordinates induces the following variation $\Delta_{\cal S}$ up to third order
\be
\label{dletaS_res_back_1_2}
\begin{cases}
&\bar \Delta_{\cal S}=0\,,\\
& \\
&\Delta_{\cal S}^{(1)}=
a^3 \,\partial_i \left(\bar{{\cal S}}\, \lambda\, x^i \right)\,,\\
& \\
&\Delta_{\cal S}^{(2)}= -a^3 \, \partial_i \left[\left({\cal S}^{(1)}+3\,\bar{{\cal S}}\zeta \right)\,\left(\lambda\, x^i+\omega^i_{j}\,x^j \right)\right]\,,\\
& \\
& \Delta_{\cal S}^{(3)}=- a^3 \, \partial_i \left[\left(\frac{9}{2}\, \bar{{\cal S}}\, \zeta^2+3\, {\cal S}^{(1)}\,\zeta+{\cal S}^{(2)} \right)\,\left(\lambda\, x^i+\omega^i_{j}\,x^j \right)\right]\, ,
\end{cases}
\ee
where for simplicity we have omitted\footnote{Being $\lambda$-$\omega$ defined by long modes only, $\lambda^n$-$\omega^n$ ($n>1$) vertices would imply correlators with two or three squeezed momenta, which are not relevant for the consistency relation.} all the quadratic and cubic terms in $\lambda$-$\omega$. The bottom line is that all the new terms in the cubic action introduced by the deformed dilation can be written as total spatial derivatives. As shown in~\cite{Matarrese:2020why}, the gauge variation $\Delta O_\text{gauge}$ can be written as the commutator of ${\cal O}(\vec{x}_1,...\vec{x}_N)$ with $\Delta_\text{action}$, which vanishes since $\Delta_{\cal S}$ is a total spatial derivative.
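For instance, at first order the variation of the action reduces to a pure boundary term,
\be
\Delta^{(1)}_\text{action}=\int d^4x\; a^3\,\partial_i\left(\bar{\cal S}\,\lambda\,x^i\right)=\int dt\;a^3\oint_{\partial V} dS_i\;\bar{\cal S}\,\lambda\,x^i \, ,
\ee
whose commutator with any local operator ${\cal O}(\vec{x}_1,...\vec{x}_N)$ vanishes; the same holds for the higher-order terms in (\ref{dletaS_res_back_1_2}).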
In conclusion, one cannot gauge away the long modes of the fields at least at first order in perturbation theory, so the squeezed bispectrum cannot be canceled.
\section{Conclusion}
\label{conclusion}
Measuring primordial non-Gaussianity remains one of the most important goals in the study of the physics of the Early Universe. In single-field inflationary models the squeezed limit is completely fixed in a model-independent way thanks to the consistency relation, but its physical observability has been criticized, limiting the contributions to observed correlations to projection effects such as gravitational lensing and redshift perturbations (\cite{Pajer:2013ana}). As discussed in \cite{Matarrese:2020why}, in this debate it is crucial to determine how a very long perturbation affects the quantities of physical interest. In this paper we have carefully analysed the transformation properties of cosmological observables such as the curvature perturbation $\zeta$, the tensor perturbation $D$ and their correlation functions, thereby generalizing the results of \cite{Matarrese:2020why}, where the analysis was done for correlators involving only $\zeta$'s. In this case the infinitesimal diffeomorphism is generalized to (\ref{Weinbergtransformationeps}) and, in the same way, the result is that no shift is found in either $\zeta$ or $D$, independently of the filter used to select long modes, excluding the case of infinitely long-wavelength (hence non-physical) perturbations. The latter is the main ingredient usually invoked to cancel the tensor-scalar $f_{\text{NL}}$, but we have seen that this is not consistent with a CFC-like transformation. We think that the problem is
the role played by a constant spatial dilatation in single-field inflation\footnote{It is well-known that single-field cubic interactions are conformally symmetric. In \cite{Hinterbichler:2013dpa}, it is shown how to extract the scalar consistency relations by using dilatation symmetry itself and the related Ward identities. }.\\
The transformation rules for the SVT elements in Fourier space presented in \cite{Matarrese:2020why} have been generalized in eq. (\ref{gaugevariationk}), showing once again that no shift is present in either $\psi$ or $D$. It has also been shown explicitly that the equations of motion are not affected by a gauge transformation of the type (\ref{Weinbergtransformationeps}), implying that the correlator is unaffected, according to eq. (\ref{correlatorshift}). Indeed, to cancel the bispectrum it has been assumed that this difference gives $-\left\langle {\cal O}(\vec{x}_1,...\vec{x}_N)\right\rangle$ (\cite{Tanaka_2011, Pajer:2013ana, Dai:2015rda, Bravo:2017gct}), but we have shown that this is not the case.\\
The outcome of our study is that all the squeezed bispectra, involving both $\zeta$ and $D$, cannot be gauged away and remain physical observables, analogously to what was obtained in \cite{Matarrese:2020why} for $B_{\zeta\zeta\zeta}$. This has a remarkable impact on future observations of a primordial gravitational-wave background. Consistency relations remain a very important tool to study Early Universe physics.\\
\textbf{Acknowledgements}: N.B. and S.M. acknowledge support from the COSMOS network\\ (www.cosmosnet.it) through the ASI (Italian Space Agency) Grants 2016-24-H.0, 2016-24-H.1-2018 and 2019-9-HH.0.
\appendix
\section{Appendix: CFC transformation}
\noindent In this appendix we give the structure of $\lambda$ and $\omega$ related to the CFC expansion. At first order in the CFC transformation, $\lambda$ and $\omega$ reduce exactly to $\zeta$ and $\frac{D_{ij}}{2}$, as in eq. (\ref{gaugetransformation}). However, the first-order analysis gives rise to two main issues:
\begin{enumerate}
\item as just demonstrated, we get a discontinuity in the gradient expansion;
\item the typical structure of the CFC metric ($g_{ij}^F \sim O(x_F^2)$) is not reproduced.
\end{enumerate}
For this reason, we are forced to consider the transformation to the CFC frame up %
to third order in the CFC series. The scalar part has already been discussed in \cite{Matarrese:2020why}, so we can consider only the tensor perturbation part of the transformation:
\be
\begin{aligned}
\Delta x^k_F=&\Delta x^k+\frac{1}{2} D^k_i\bar\Delta x^i\bigg|_p +\frac{1}{4}\bar\Delta x^i \bar\Delta x^ j ( \partial_i D^k_{j}+\partial_j D^k_{i}-\partial^k D_{ij} ) \bigg|_p+\\
&+\frac{1}{12}\bar\Delta x^i \bar\Delta x^ j \bar\Delta x^ l ( \partial_l\partial_i D^k_{j}+\partial_l\partial_j D^k_{i}-\partial_l\partial^k D_{ij} ) \bigg|_p \ ,
\end{aligned}
\label{tensorCFC}
\ee
where $\Delta x=x(\tau)-p(\tau)$ is the deviation from a central world-line and $\bar\Delta$ is its background value. The transformation (\ref{tensorCFC}) can be SVT decomposed in Fourier space as follows,
\be
\epsilon_k=-\frac{1}{12}\,\frac{1}{k^2}\, \sum_{s=\pm2} \left(D_k^{(s)}-k \, D_{k}^{(s)}{}'\right)\,,
\ee
where we considered $D_{ij}{}_{\vec{k}}=\sum_{s=\pm2} \varepsilon^{(s)}_{ij}\,D_k^{(s)}$ in the standard convention for spin-2 polarization tensors, $\varepsilon^{(s)}_{ij} \,\varepsilon^{(s')\,*}_{ij}= 2 \, \delta^{ss'}$ and
\be
\epsilon^i_V=-\frac{i}{12}\left (10 \partial_{k_l} D_l^i+2k_m \partial_{k_m}\partial_{k_j}D_{ij}\right) \ .
\label{epsilonVk}
\ee
This allows the extension of the results presented in \cite{Matarrese:2020why}, where only the scalar sector was considered. Notice that, since $\partial_i \epsilon^i_V=0$, (\ref{epsilonVk}) can be always rewritten as $\epsilon^i_V=A^i_k x^k$ with $A$ depending on the space-time point but transverse and traceless, so playing the role of $\omega$ in (\ref{def_dil}). Thus, to reproduce the proper local structure of the metric tensor $g_{ij}^F \sim O(x_F^2)$ we find an interesting mixing: the tensor degrees of freedom start to influence both the scalar and vector sectors.
\printbibliography
|
Title:
A reanalysis of the latest SH0ES data for $H_0$: Effects of new degrees of freedom on the Hubble tension |
Abstract: We reanalyze the recently released SH0ES data for the determination of $H_0$.
We focus on testing the homogeneity of the Cepheid+SnIa sample and the
robustness of the results in the presence of new degrees of freedom in the
modeling of Cepheids and SnIa. We thus focus on the four modeling parameters of
the analysis: the fiducial luminosity of SnIa $M_B$ and Cepheids $M_W$ and the
two parameters ($b_W$ and $Z_W$) standardizing Cepheid luminosities with period
and metallicity. After reproducing the SH0ES baseline model results, we allow
for a transition of the value of any one of these parameters at a given
distance $D_c$ or cosmic time $t_c$ thus adding a single degree of freedom in
the analysis. When the SnIa absolute magnitude $M_B$ is allowed to have a
transition at $D_c\simeq 50Mpc$ (about $160Myrs$ ago), the best fit value of
the Hubble parameter drops from $H_{0}=73.04\pm1.04\,km\,s^{-1}\,Mpc^{-1}$ to
$H_0=67.32\pm 4.64\, km\,s^{-1}\,Mpc^{-1}$ in full consistency with the Planck
value. Also, the best fit SnIa absolute magnitude $M_B^>$ for $D>D_c$ drops to
the Planck inverse distance ladder value $M_{B}^>=-19.43\pm 0.15$ while the low
distance best fit $M_B^<$ parameter remains close to the original distance
ladder calibrated value $M_{B}^<=-19.25\pm 0.03$. Similar hints for a
transition behavior is found for the other three main parameters of the
analysis ($b_W$, $M_W$ and $Z_W$) at the same critical distance $D_c\simeq
50\,Mpc$ even though in that case the best fit value of $H_0$ is not
significantly affected. When the inverse distance ladder constraint on $M_B^>$
is included in the analysis, the uncertainties for $H_0$ reduce dramatically
($H_0= 68.2\pm 0.8\, km\,s^{-1}\,Mpc^{-1}$) and the $M_B$ transition model is
strongly preferred over the baseline SH0ES model ($\Delta \chi^2 \simeq -15$,
$\Delta AIC \simeq -13$) according to AIC and BIC model selection criteria.
| https://export.arxiv.org/pdf/2208.11169 |
\section{Introduction}
\subsection{The current status of the Hubble tension and its four assumptions}
Measurements of the Hubble constant using observations of type Ia supernovae (SnIa) with Cepheid calibrators by the SH0ES Team have led to a best fit value $H_{0}^{R21}=73.04\pm1.04$~km~s$^{-1}$~Mpc$^{-1}$ \cite{Riess:2021jrx} (hereafter R21). This highly precise but not necessarily accurate measurement is consistent with a wide range of other less precise local measurements of $H_0$ using alternative SnIa calibrators \cite{Freedman:2021ahq,Gomez-Valent:2018hwc,Pesce:2020xfe,Freedman:2020dne}, gravitational lensing \cite{Wong:2019kwg,Chen:2019ejq,Birrer:2020tax,Birrer:2018vtm}, gravitational waves \cite{LIGOScientific:2018gmd,Hotokezaka:2018dfi,LIGOScientific:2017adf,DES:2020nay,DES:2019ccw}, gamma-ray bursts as standardizable candles \cite{Cao:2022wlg,Cao:2022yvi,Dainotti:2022rea,Dainotti:2022wli,Dainotti:2013cta}, quasars as distant standard candles \cite{Risaliti:2018reu}, type II supernovae \cite{deJaeger:2022lit,deJaeger:2020zpb}, $\gamma-$ray attenuation \cite{Dominguez:2019jqc}, etc. (for recent reviews see Refs. \cite{DiValentino:2021izs, Perivolaropoulos:2021jda}). This measurement is based on two simple assumptions:
\begin{itemize}
\item There are no significant systematic errors in the measurements of the properties (period, metallicity) and luminosities of Cepheid calibrators and SnIa.
\item The physical laws involved and the calibration properties of Cepheids and SnIa in all the rungs of the distance ladder are well understood and modelled properly.
\end{itemize}
This measurement, however, is in $5\sigma$ tension (the Hubble tension) with the corresponding measurement from {\em Planck} observations of the CMB angular power spectrum (early time inverse distance ladder measurement), $H_0^{P18}=67.36\pm0.54$~km~s$^{-1}$~Mpc$^{-1}$ \cite{Planck:2018vyg} (see also Refs. \cite{Perivolaropoulos:2021jda,Abdalla:2022yfr,DiValentino:2021izs,Shah:2021onj,Knox:2019rjx,Vagnozzi:2019ezj,Ishak:2018his,Mortsell:2018mfj,Huterer:2017buf,Bernal:2016gxb} for relevant recent reviews). This inverse distance ladder measurement is also based on two basic assumptions:
\begin{itemize}
\item The scale of the sound horizon at the last scattering surface is the one calculated in the context of the standard cosmological model with the known degrees of freedom (cold dark matter, baryons and radiation) and thus it is a reliable distance calibrator.
\item The evolution of the Hubble free expansion rate $E(z)\equiv H(z)/H_0$ from the time of recombination (redshift $z=z_{rec}$) until the present time ($z=0$) is the one predicted by the standard \lcdm model as defined by the best fit Planck parameters (Planck18$/\Lambda$CDM) \cite{Planck:2018vyg,eBOSS:2020yzd}.
\end{itemize}
A wide range of approaches have been implemented in efforts to explain this Hubble tension (for reviews see Refs. \cite{Bernal:2016gxb,Perivolaropoulos:2021jda,Verde:2019ivm,DiValentino:2021izs,Schoneberg:2021qvd,Abdalla:2022yfr,Krishnan:2020obg,Jedamzik:2020zmd}). These approaches introduce new degrees of freedom that violate at least one of the above four assumptions and may be classified in accordance with the particular assumption they are designed to violate.
Thus, early time sound horizon models introduce new degrees of freedom at the time just before recombination (e.g. early dark energy \cite{Poulin:2018cxd,Niedermann:2019olb,Verde:2019ivm,Smith:2022hwi,Smith:2020rxx,Chudaykin:2020acu,Fondi:2022tfp,Sabla:2022xzj,Herold:2021ksg,McDonough:2021pdg,Hill:2020osr,Sakstein:2019fmf,Niedermann:2020qbw,Rezazadeh:2022lsf},
radiation \cite{Green:2019glg,Schoneberg:2022grr,Seto:2021xua,CarrilloGonzalez:2020oac} or modified gravity \cite{Braglia:2020auw,Abadi:2020hbr,Renk:2017rzu,Nojiri:2022ski,Lin:2018nxe,CANTATA:2021ktz}) to change the expansion rate at that time and thus decrease the sound horizon scale $r_s$ (early time distance calibrator) to increase $H_0$, which is degenerate with $r_s$, to a value consistent with local measurements.
The mechanism proposed by these models attempts to decrease the scale of the sound horizon at recombination which can be calculated as
\be
r_s =\int_0^{t_*} \frac{c_s(a)}{a(t)}dt=\int_{z_{*}}^\infty \frac{c_s(z)}{H(z;\rho_b,\rho_{\gamma},\rho_c)}dz
=\int_0^{a_*}\frac{c_s(a)}{a^2H(a;\rho_b,\rho_{\gamma},\rho_c)}da
\label{rsdef}
\ee
where the recombination redshift $z_*$ corresponds to time $t_*$, $\rho_b$, $\rho_c$ and $\rho_\gamma$ denote the densities for baryon, cold dark matter and radiation (photons) respectively and $c_s$ is the sound speed in the photon-baryon fluid.
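For illustration, Eq. (\ref{rsdef}) can be integrated numerically using the standard sound speed of the photon-baryon fluid, $c_s(z)=c/\sqrt{3(1+R)}$ with $R\equiv 3\rho_b/4\rho_\gamma$. The minimal Python sketch below assumes illustrative Planck18-like density parameters and a simplified radiation content; it is a consistency check only, not the Boltzmann-code computation used in actual CMB analyses.

```python
import math

c = 299792.458            # speed of light in km/s
H0 = 67.36                # km/s/Mpc (Planck18-like, assumed)
h = H0 / 100.0
Om, Ob = 0.315, 0.0493    # total matter and baryon density parameters (assumed)
Og = 2.47e-5 / h**2       # photon density parameter (Omega_gamma h^2 = 2.47e-5)
Orad = 1.68 * Og          # photons + three neutrino species (approximate)

def H(z):
    # Hubble rate in a flat universe; the Lambda term is negligible before recombination
    return H0 * math.sqrt(Om * (1 + z)**3 + Orad * (1 + z)**4 + 1.0 - Om - Orad)

def c_s(z):
    # Sound speed of the photon-baryon fluid with R = 3 rho_b / (4 rho_gamma)
    R = 0.75 * (Ob / Og) / (1 + z)
    return c / math.sqrt(3.0 * (1.0 + R))

def r_s(z_star=1089.9, z_max=1e8, n=20000):
    # Trapezoidal integration of Eq. (rsdef) in the variable ln(1+z)
    lo, hi = math.log(1.0 + z_star), math.log(1.0 + z_max)
    step = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        z = math.exp(lo + i * step) - 1.0
        w = 0.5 if i in (0, n) else 1.0
        total += w * c_s(z) * (1.0 + z) / H(z)   # dz = (1+z) d ln(1+z)
    return total * step                           # result in Mpc
```

With these assumed parameters the integral evaluates to roughly $144$--$147\,Mpc$, in the ballpark of the sound horizon scale used as the early time calibrator.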
The angular scale of the sound horizon is measured from the peak locations of the CMB angular power spectrum and may be expressed in terms of $r_s$ as
\be
\theta_s=\frac{r_s}{d_A}=\frac{H_0 r_s}{c\int_0^{z_{*}} \frac{dz'}{E(z')}}
\label{thetas}
\ee
where $d_A$ is the comoving angular diameter distance to last scattering (at redshift $z\approx 1100$) and $E(z)$ is the dimensionless normalized Hubble parameter which for a flat $\Lambda$CDM model is given by
\be
E(z)\equiv \frac{H(z)}{H_0}=\left[\Omega_{0m}(1+z)^3 +(1-\Omega_{0m})\right]^{1/2}
\ee
Eq. (\ref{thetas}) indicates that there is a degeneracy between $r_s$, $H_0$ and $E(z)$ given the measured value of $\theta_s$. A decrease of $r_s$ would lead to an increase of the predicted value of $H_0$ (early time models) and a late time deformation of $E(z)$ could lead to an increase of the denominator of Eq. (\ref{thetas}) leading also to an increase of $H_0$ (late time models).
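This degeneracy can be verified with a short numerical sketch (assuming a flat $\Lambda$CDM $E(z)$ with an illustrative $\Omega_{0m}=0.315$ and neglecting radiation for simplicity): the same $\theta_s$ in Eq. (\ref{thetas}) is recovered when $r_s$ is reduced by the same factor by which $H_0$ is raised.

```python
import math

c = 299792.458   # speed of light in km/s

def E(z, Om=0.315):
    # Dimensionless Hubble rate for flat LCDM; Om is an assumed Planck18-like value
    return math.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)

def comoving_integral(z_max=1090.0, n=100000):
    # Trapezoidal estimate of the integral of dz/E(z) appearing in Eq. (thetas)
    step = z_max / n
    total = 0.5 * (1.0 / E(0.0) + 1.0 / E(z_max))
    for i in range(1, n):
        total += 1.0 / E(i * step)
    return total * step

I = comoving_integral()

def theta_s(H0, r_s):
    # Eq. (thetas): theta_s = H0 r_s / (c * int_0^{z*} dz'/E(z'))
    return H0 * r_s / (c * I)

# The same theta_s results when r_s shrinks by the factor by which H0 grows
t_planck = theta_s(67.36, 147.1)
t_early  = theta_s(73.04, 147.1 * 67.36 / 73.04)
assert abs(t_planck - t_early) < 1e-12
```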
Early dark energy models have the problem of predicting stronger growth of perturbations than implied by dynamical probes like redshift space distortion (RSD) and weak lensing (WL) data and thus may worsen the $\Omega_m$-$\sigma_8$ growth tension \cite{Benisty:2020kdt,Heymans:2020gsg,Kazantzidis:2018rnb,Joudaki:2017zdt,Kazantzidis:2019nuh,Skara:2019usd,Avila:2022xad, Kohlinger:2017sxk,Nunes:2021ipq,Clark:2021hlo} and reduce consistency with growth data and with other cosmological probes and conjectures \cite{Ivanov:2020ril,Hill:2021yec,Hill:2020osr,Clark:2021hlo,Jedamzik:2020zmd,Herold:2021ksg,Vagnozzi:2021gjh,Krishnan:2020obg,Philcox:2022sgj,McDonough:2021pdg}. Thus, a compelling and full resolution of the Hubble tension may require multiple (or other) modifications beyond the scale of the sound horizon predicted by $\Lambda$CDM cosmology. Even though these models are severely constrained by various cosmological observables, they currently constitute the most widely studied class of models \cite{Smith:2020rxx,Chudaykin:2020igl,Sakstein:2019fmf,Reeves:2022aoi,Chudaykin:2020acu,Smith:2022hwi}.
Late time $H(z)$ deformation models introduce new degrees of freedom (e.g. modified gravity \cite{SolaPeracaula:2020vpg,Braglia:2020iik,Pogosian:2021mcs,Bahamonde:2021gfp} dynamical late dark energy \cite{DiValentino:2020naf,Alestas:2020mvb,DiValentino:2019jae,Pan:2019hac,Li:2019yem,Zhao:2017cud,Keeley:2019esp,SolaPeracaula:2018wwm,Yang:2018qmz,Krishnan:2020vaf, Dainotti:2021pqg,Colgain:2022nlb,Colgain:2022rxy,Zhou:2021xov} or interacting dark energy with matter \cite{DiValentino:2019ffd,Yang:2018euj,Vattis:2019efj,DiValentino:2019jae,Yang:2018uae,Ghosh:2019tab}) to deform $E(z)$ at redshifts $z\sim O(1)$ so that the present time value of $H(z=0)=H_0$ increases and becomes consistent with the local measurements. This class of models is even more severely constrained \cite{Brieden:2022lsd,Alestas:2020mvb,Alestas:2021xes,Keeley:2022ojz,Clark:2020miy,DES:2020mpv,Anchordoqui:2022gmw,DES:2022doi,Cai:2022dkh,Heisenberg:2022gqk,Vagnozzi:2021tjv,Davari:2022uwd}, by other cosmological observables (SnIa, BAO and growth of perturbations probes) which tightly constrain any deformation \cite{Alam:2020sor} from the \plcdm shape of $E(z)\equiv H(z)/H_0$.
The third approach to the resolution of the Hubble tension is based on a search for possible unaccounted systematic effects including possible issues in modelling
the Cepheid data such as non-standard dust induced color corrections \cite{Mortsell:2021nzg}, the impact of outliers \cite{Efstathiou:2013via,Efstathiou:2020wxn,Efstathiou:2021ocp}, blending effects, SnIa color properties \cite{Wojtak:2022bct}, the robustness of the constancy of the SnIa absolute magnitude in the Hubble flow \cite{Benisty:2022psx,Martinelli:2019krf,1968ApJ...151..547T,Kang:2019azh,Rose:2019ncv,Jones:2018vbn,Rigault:2018ffm,2018ApJ...854...24K,Colgain:2019pck,Kazantzidis:2019nuh,Kazantzidis:2020tko,Sapone:2020wwz,Koo:2020ssl,Kazantzidis:2020xta,Lukovic:2019ryg,Tutusaus:2018ulu,Tutusaus:2017ibk,Drell:1999dx} etc. There is currently a debate about the importance of these potential systematic effects \cite{Kenworthy:2019qwq,Riess:2022mme,Yuan:2022kxa,Riess:2018byc}. The possibility of a redshift evolution of the Hubble constant was also studied in Ref. \cite{Dainotti:2022bzg}, where the Hubble tension was analyzed with a binned, multidimensional MCMC analysis of the Pantheon sample of SnIa. Finally, the need for new standardizable candles with redshift values far beyond those of SnIa ($3\lesssim z\lesssim 9$) has been studied (see Refs. \cite{Cao:2022wlg,Cao:2022yvi,Dainotti:2022rea,Dainotti:2022wli,Dainotti:2013cta} for gamma-ray bursts and Refs. \cite{Bargiacchi:2021hdp,Dainotti:2022rfz} for quasars).
A fourth approach related to the previous one is based on a possible change of the physical laws (e.g. a gravitational transition \cite{Khosravi:2021csn,Perivolaropoulos:2022txg}) during the past $200\,Myrs$ ($z\lesssim 0.01$) when the light of the Cepheid calibrator hosts was emitted \cite{Alestas:2021luu,Desmond:2019ygn,Perivolaropoulos:2021bds,Odintsov:2022eqm,Alestas:2020zol,Marra:2021fvf,Alestas:2021nmi,Perivolaropoulos:2022vql,Perivolaropoulos:2022txg,Odintsov:2022umu}. In this context, new degrees of freedom should be allowed in the modeling of either Cepheid calibrators and/or SnIa to allow for the possibility of this physics change. If these degrees of freedom are shown not to be excited by the data then this approach would also be severely constrained. It is possible however that nontrivial values of these new parameters are favored by the data while at the same time the best fit value of $H_0$ shifts to a value consistent with the inverse distance ladder measurements of $H_0$ using the sound horizon at recombination as calibrator. In this case \cite{Perivolaropoulos:2021bds}, this class of models would be favored, especially in view of the severe constraints that have been imposed on the other three approaches.
The possible new degrees of freedom that could be allowed in the Cepheid+SnIa modeling analysis are clearly infinite but the actual choices to be implemented in a generalized analysis may be guided by three principles: {\it simplicity, physical motivation and improvement of the quality of fit to the data}.
In a recent analysis \cite{Perivolaropoulos:2021bds} using a previous release of the SH0ES data \cite{Riess:2016jrr,Riess:2019cxk,Riess:2020fzl} we showed that a physically motivated new degree of freedom in the Cepheid calibrator analysis allowing for a transition in one of the Cepheid modelling parameters $R_W$ or $M_W$, is mildly favored by the data and can lead to a reduced best fit value of $H_0$. Here we extend that analysis in a detailed and comprehensive manner, to a wider range of transition degrees of freedom using the latest publicly available SH0ES data described in R21.
\subsection{The new SH0ES Cepheid+SnIa data}
\begin{table}
\caption{A comparison of the latest SH0ES data release (R21) with previous data updates.}
\label{tab:sh0es}
\vspace{1.3mm}
\setlength{\tabcolsep}{0.5em}
\begin{adjustwidth}{0.cm}{1cm}
{\footnotesize\begin{tabular}{ccrcc}
\hhline{=====}
&&&& \\
SH0ES & Cepheid + SnIa & Cepheids\qquad \qquad & Calibrator & Hubble flow \\
Year/Ref. & host galaxies & & SnIa & SnIa\\ &&&& \\
\hhline{=====}
&&&& \\
& &MW\qquad \quad \quad 15\, & & \\
& &LMC$^a$\qquad \quad 785\, & & \\
2016& 19 &N4258\quad \quad \quad 139\, &19 & 217 \\
R16 \cite{Riess:2016jrr}& $ z < 0.01$ & M31\quad \quad \quad\, 372\, & $ z < 0.01$& $0.0233< z < 0.15$ \\
\cline{3-3}
& &Total\qquad \quad1311\, & & \\
& &In SnIa hosts\quad\, 975\, & & \\
\cline{3-3}
& &Total All\quad \quad2286\,& & \\
&&&& \\
\hline
&&&& \\
& &MW\qquad \quad \quad 15\, & & \\
& &LMC$^b$\quad\, 785+70\, & & \\
2019&19&N4258 \quad \quad \quad 139\, &19&217 \\
R19 \cite{Riess:2019cxk}& $ z < 0.01$ & M31\quad \quad \quad\, 372\, &$ z < 0.01$ & $0.0233< z < 0.15$ \\
\cline{3-3}
& &Total\qquad \quad 1381\, & & \\
& &In SnIa hosts\quad\, 975\,& & \\
\cline{3-3}
& &Total All\quad \quad 2356\,& & \\
&&&& \\
\hline
&&&& \\
& &MW\qquad \quad \quad 75\, & & \\
& &LMC$^b$ \quad\,785+70\, & & \\
2020& 19 &N4258\quad \quad \quad 139\, &19 &217 \\
R20 \cite{Riess:2020fzl}& $ z < 0.01$ & M31\quad \quad \quad\, 372\, &$ z < 0.01$ & $0.0233< z < 0.15$ \\
\cline{3-3}
& &Total\qquad \quad 1441\, & & \\
& &In SnIa hosts\quad\, 975\, & & \\
\cline{3-3}
& &Total All \quad \quad2416\,& & \\
&&&& \\
\hline
&&&& \\
& &LMC$^b$\quad\, 270+69\,&& \\
& &SMC$^a$\qquad \quad 143\, & & \\
2021 &37&N4258\quad \quad \quad 443\,& 42 & 277\\
R21 \cite{Riess:2021jrx}&$0.0015\lesssim z < 0.011 $ &M31\quad \quad \quad\,\,\,\, 55\,&$0.0015\lesssim z < 0.011 $& $0.023< z < 0.15$\\
\cline{3-3}
&&Total\qquad \quad\,\, 980\,&& \\
&&In SnIa hosts\quad\,2150\,&(77 lightcurve meas.)& \\
\cline{3-3}
&&Total All\quad \quad 3130\,&& \\
&&&& \\
\hhline{=====}
&&&& \\
\end{tabular} }
\end{adjustwidth}
{\footnotesize NOTE - (a) From the ground. (b) From the ground+HST.}
\end{table}
The new Cepheid+SnIa data release and analysis from the SH0ES collaboration in R21 includes a significant increase of the sample of SnIa calibrators from 19 in Ref. \cite{Riess:2016jrr} to 42. These SnIa reside in 37 hosts observed between 1980 and 2021 in a redshift range $0.0015\lesssim z<0.011$ (see Table \ref{tab:sh0es} for a more detailed comparison of the latest SH0ES data release with previous updates). These SnIa are calibrated using Cepheids in the SnIa host galaxies. In turn, Cepheid luminosities are calibrated using geometric methods in nearby calibrator galaxies (anchors). These anchor galaxies include the megamaser host NGC$\,$4258\footnote{At a distance $D=7.6\,Mpc$ \cite{Reid:2019tiq} NGC$\,$4258 is the closest galaxy, beyond the
Local Group, with a geometric distance measurement.}, the Milky Way (MW) where distances are measured with several parallaxes, and the Large Magellanic Cloud (LMC) where distances are measured via detached eclipsing binaries \cite{Riess:2019cxk}, as well as two supporting galaxies ($M31$ \cite{Li:2021qkc} and the Small Magellanic Cloud (SMC)). These supporting galaxies are pure Cepheid hosts and do not host SnIa but host large and well measured Cepheid samples. However, geometric measurements of their distances are not so reliable and thus are not directly used in the analysis\footnote{A differential distance measurement of the SMC with respect to the LMC is used and thus LMC+SMC are considered in a unified manner in the released data.}. The calibrated SnIa in the Hubble flow ($z\gtrsim 0.01$) are used to measure $H_0$ due to their high precision (5\% in distance per source) and high luminosity which allows deep reach and thus reduces the impact of local velocity flows.
The new SH0ES data release includes a factor of 3 increase in the sample of Cepheids within NGC$\,$4258. In total it has 2150 Cepheids in SnIa hosts\footnote{45 Cepheids in N1365 are mentioned in R21 but there are 46 in the fits files of the released dataset at the GitHub repository: \href{https://github.com/PantheonPlusSH0ES/DataRelease}{PantheonPlusSH0ES/DataRelease}.}, 980 Cepheids in anchors or supporting galaxies\footnote{A total of 3130 Cepheids have been released in the data fits files but 3129 are mentioned in R21 (see Table \ref{tab:props}). These data are also shown concisely in Table \ref{tab:hoscep} of Appendix \ref{AppendixE} and may be downloaded in electronic form.}, 42 SnIa (with a total of 77 lightcurve measurements) in 37 Cepheid+SnIa hosts with redshifts $0.0015\lesssim z<0.011$ and 277 SnIa in the Hubble flow in the redshift range $0.023<z<0.15$. In addition, 8 anchor-based constraints (with uncertainties included) constrain the following Cepheid modeling parameters: $M_W$ (the Cepheid absolute magnitude zeropoint), $b_W$ (the slope of the Cepheid Period-Luminosity P-L relation), $Z_W$ (the slope of the Cepheid Metallicity-Luminosity M-L relation), a zeropoint parameter $zp$ used to refine the Cepheid P-L relation by describing the difference between the ground and HST zeropoints in LMC Cepheids ($zp$ is set to 0 for HST observations), the distance moduli of the anchors NGC$\,$4258 and LMC and a dummy parameter we call $X$ which has been included in the R21 data release and is set to 0 with uncertainty $10^{-9}$ \footnote{This parameter is not defined in R21 but is included in the data release fits files. We thank A. Riess for clarifying this point.}.
The parameters fit with these data include the four modeling parameters $M_W$, $b_W$, $Z_W$, $M_B$ (the SnIa absolute magnitude), the 37 distance moduli of SnIa/Cepheid hosts, the distance moduli to the 2 anchors (NGC$\,$4258, LMC) and to the supporting Cepheid host M31, the zeropoint $zp$ of the Cepheid P-L relation in the LMC ground observations, the Hubble parameter and the dummy parameter mentioned above (tightly constrained to 0). This is a total of 47 parameters (46 if the dummy parameter $X$ is ignored).
In addition to these parameters, there are other modeling parameters like the color and shape correction slopes of SnIa (usually denoted as $\beta$ and $\alpha$) as well as the Wesenheit dust extinction parameter $R_W$ which have been incorporated in the released SnIa and Cepheid apparent magnitudes and thus cannot be used as independent parameters in the analysis, in contrast to the previous data release.
The Cepheid Wesenheit dust-corrected (dereddened) apparent magnitudes $m_H^W$ provided in R21 are connected with the Wesenheit dust extinction parameter $R_W$ as \cite{1982ApJ...253..575M} (see also Refs. \cite{Riess:2016jrr,Riess:2019cxk})
\be
m_H^W\equiv m_H-R_W(V-I)
\label{wesmag}
\ee
where $m_H$ is the observed apparent magnitude in the near-infrared $H$ (F160W) band, while $V$ (F555W) and $I$ (F814W) are optical mean apparent magnitudes in the corresponding bands. The empirical parameter $R_W$ is also called ``the reddening-free `Wesenheit' color ratio'' and is different from $R_H$ which can be derived from a dust law (e.g. the Fitzpatrick law \cite{Fitzpatrick:1998pb}). The parameter $R_W$ corrects for both dust and intrinsic variations via the observed blackbody colors $V-I$.
The SnIa apparent magnitudes $m_B^0$ provided in R21, standardized using light curve color $c$ and shape $x_1$ corrections, are defined as
\be
m_B^0 \equiv m_B-\alpha\; x_{1}-\beta\; c = \mu+M_{B}
\label{mhwdef}
\ee
where $m_B$ is the peak apparent magnitude, $\mu$ is the SnIa distance modulus, while the B-band absolute magnitude $M_{B}$ and the correction coefficients $\alpha$ and $\beta$ are fit directly using the SnIa data. The latest SH0ES data release provides the measured values of $m_H^W$ and $m_B^0$ for Cepheids and SnIa respectively, which are also used in the corresponding analysis, while the parameters $R_W$, $\alpha$ and $\beta$ have been fit previously and independently by the SH0ES team in the construction of $m_H^W$ and $m_B^0$.
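The two standardization relations Eqs. (\ref{wesmag}) and (\ref{mhwdef}) can be sketched as follows; the values of $R_W$, $\alpha$ and $\beta$ below are typical illustrative numbers, not the values fit by the SH0ES team.

```python
def wesenheit_mag(m_H, V, I, R_W=0.386):
    # Eq. (wesmag): m_H^W = m_H - R_W (V - I); R_W = 0.386 is an illustrative value
    return m_H - R_W * (V - I)

def standardized_mB(m_B, x1, color, alpha=0.14, beta=3.1):
    # Eq. (mhwdef): m_B^0 = m_B - alpha x1 - beta c; slopes are illustrative values
    return m_B - alpha * x1 - beta * color

# A SnIa with peak magnitude 19.0, stretch x1 = 0.5 and color c = 0.05:
m0 = standardized_mB(19.0, 0.5, 0.05)
# its distance modulus then follows as mu = m0 - M_B for a given absolute magnitude
```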
\subsection{The prospect of new degrees of freedom in the SH0ES data analysis}
The homogeneity of the SH0ES data with respect to the parameters $R_W$, $\alpha$ and $\beta$ has been analysed in previous studies with some interesting results. In particular, using the data of the previous SH0ES data release \cite{Riess:2016jrr,Riess:2019cxk,Riess:2020fzl}, it was shown \cite{Perivolaropoulos:2021bds} (see also \cite{Mortsell:2021nzg} for a relevant study) that if the parameter $R_W$ is allowed to vary among Cepheid and SnIa hosts then the fit quality is significantly improved and the best fit value of $H_0$ is lowered to a level consistent with the inverse distance ladder best fit. In addition, a more recent analysis has allowed the parameter $\beta$ to have a different value in Hubble flow SnIa ($\beta=\beta_{HF}$ for $z>0.02$) compared to calibrating SnIa ($\beta=\beta_{cal}$ for $z<0.01$). A reanalysis allowing for this new degree of freedom has indicated a tension between the two best fit values ($\beta_{HF}$ and $\beta_{cal}$) at a level of up to $3.8\sigma$ \cite{Wojtak:2022bct}.
Motivated by these hints for inhomogeneities in the SH0ES data, in what follows we introduce new degrees of freedom in the analysis that are designed to probe the origin of these inhomogeneities. We thus accept three of the four above mentioned assumptions that have led to the Hubble tension and test the validity of the fourth assumption. In particular we keep the following assumptions:
\begin{enumerate}
\item
There are no significant systematics in the SH0ES data and thus they are reliable.
\item
The CMB sound horizon scale used as a calibrator in the inverse distance ladder approach is correctly obtained in the standard model using the known particles.
\item
The Hubble expansion history from the time of recombination up to $z=0.01$ (or even $z=0$) used in the inverse distance ladder measurement of $H_0$ is provided correctly by the standard Planck18$/\Lambda$CDM cosmological model.
\end{enumerate}
As discussed above there are several studies in the literature that support the validity of these assumptions (e.g. \cite{Fondi:2022tfp,Keeley:2022ojz}). If these assumptions are valid then the most probable source of the Hubble tension is the violation of the fourth assumption stated above namely {\it 'the physical laws involved and the calibration properties of Cepheids and SnIa in all the rungs of the distance ladder are well understood and modelled properly'}.
If this assumption is violated then the modeling of Cepheids+SnIa should be modified to take into account possible changes of physics by introducing new degrees of freedom that were suppressed in the original (baseline) SH0ES analysis. In the context of this approach, if these degrees of freedom are properly introduced in the analysis then the best fit value of $H_0$ will become consistent with the corresponding inverse distance ladder value of $H_{0}=67.36\pm0.54$~km~s$^{-1}$~Mpc$^{-1}$.
In an effort to pursue this approach for the resolution of the Hubble tension we address the following questions:
\begin{itemize}
\item How can new degrees of freedom (new parameters) be included in the SH0ES data analysis for the determination of $H_0$?
\item What are the new degrees of freedom that can expose internal tensions and inhomogeneities in the Cepheid/SnIa data?
\item What new degrees of freedom can lead to a best fit value of $H_0$ that is consistent with Planck?
\end{itemize}
The main goal of the present analysis is to address these questions. The new degree of freedom we allow and investigate is a transition of any one of the four Cepheid/SnIa modeling parameters at a specific distance $D_c$ or equivalently (in the context of the cosmological principle) at a given cosmic time $t_c$ such that $t_0-t_c=D_c/c$ where $t_0$ is the present cosmic time (age of the Universe). In the context of this new degree of freedom we reanalyse the SH0ES data to find how the quality of fit to the data and the best fit value of $H_0$ change when the new degree of freedom is excited. The possible introduction of new constraints included in the analysis is also considered.
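A transition degree of freedom of this type can be implemented by replacing a single modeling parameter (e.g. $M_B$) with a pair of parameters (e.g. $M_B^<$, $M_B^>$) selected by host distance. The minimal Python sketch below, with a hypothetical critical distance $D_c=50\,Mpc$ and a toy linear-model column, illustrates the idea only; it is not the actual implementation used in the analysis.

```python
def split_by_distance(col, distances, D_c=50.0):
    # Split one linear-model column into two: hosts with D < D_c keep their entry
    # in the "near" column (parameter value below the transition, e.g. M_B^<),
    # the rest in the "far" column (e.g. M_B^>). D_c = 50 Mpc is illustrative.
    near = [x if d < D_c else 0.0 for x, d in zip(col, distances)]
    far = [x if d >= D_c else 0.0 for x, d in zip(col, distances)]
    return near, far

# Three hosts at 10, 60 and 30 Mpc: the first and third constrain the "near"
# parameter value, the second constrains the "far" value
near, far = split_by_distance([1.0, 1.0, 1.0], [10.0, 60.0, 30.0])
assert near == [1.0, 0.0, 1.0]
assert far == [0.0, 1.0, 0.0]
```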
The structure of this paper is the following: In the next section \ref{sec:standard analysis} we describe the standard analysis of the SH0ES Cepheid+SnIa data in a detailed and comprehensive manner stressing some details of the released dataset that are not described in R21. We also describe some tension between the values of the best fit Cepheid modeling parameters $b_W$ and $Z_W$ obtained in anchor or pure Cepheid host galaxies and the corresponding mean values obtained in SnIa host galaxies. In section \ref{sec:Generalized analysis} we present our generalized analysis with new degrees of freedom that allow a transition of the main modeling parameters at specific distances (cosmic times of radiation emission). We also investigate the effect of the inverse distance ladder constraint on $M_B$ \cite{Camarena:2021jlr, Marra:2021fvf,Gomez-Valent:2021hda} for both the baseline SH0ES analysis and for our analysis involving the $M_B$ transition degree of freedom. Finally in section \ref{sec:Conclusion} we conclude, discuss the implications of our results and the possible extensions of our analysis.
\section{The new SH0ES data and their standard analysis: Hints for intrinsic tensions}
\label{sec:standard analysis}
\subsection{The original baseline SH0ES analysis: a comprehensive presentation}
\label{sub:baseline}
The main equations used to model the Cepheid SnIa measured apparent magnitudes with parameters that include $H_0$ are described as follows:
\begin{itemize}
\item The equation that connects the measured Wesenheit magnitude of the $j$th Cepheid in the $i$th galaxy, with the host distance moduli $\mu_i$ and the modeling parameters $M_W$, $b_W$ and $Z_W$ is of the form\footnote{For Cepheids in the LMC/SMC anchor observed from the ground the zeropoint parameter $zp$ is added on the RHS and thus Eq. (\ref{wesmagcep}) becomes $m_{H,i,j}^W
=\mu_i+M_{H,i}^W+b_W[P]_{i,j}+Z_W[O/H]_{i,j}+zp$ to allow for a different P-L zeropoint between ground and HST observations.}
\be
m_{H,i,j}^W
=\mu_i+M_{H}^W+b_W[P]_{i,j}+Z_W[O/H]_{i,j}
\label{wesmagcep}
\ee
where $\mu_i$ is the inferred distance modulus to the galaxy, $M_H^W$ is the zeropoint Cepheid absolute magnitude of a period $P = 10\,d$ Cepheid ($d$ for days), and $b_W$, $Z_W$ are the slope parameters that represent the dependence of magnitude on period and metallicity respectively. $[O/H]$ is a measure of the metallicity of the Cepheid. The usual bracket shorthand notation for the metallicity $[O/H]$ represents the Cepheid metal abundance compared to that of the Sun
\be
[O/H]\equiv \log(O/H)-\log(O/H)_{\odot}=\Delta \log(O/H)
\ee
Here O and H are the numbers of oxygen and hydrogen atoms per unit volume respectively. The unit often used for metallicity is the dex (decimal exponent) defined as $n\, dex \equiv 10^n$. Also, the bracket shorthand notation for the period $[P]$ is used as ($P$ in units of days)
\be
[P]\equiv \log P-1
\ee
\item
The color and shape corrected SnIa B-band peak magnitude in the $i$th host is connected with the distance modulus $\mu_i$ of the $i$th host and with the SnIa absolute magnitude $M_B$ as shown in Eq. (\ref{mhwdef}) i.e.
\be
m_{B,i}^0=\mu_i+M_B
\label{magsnia}
\ee
The distance modulus is connected with the luminosity distance $d_L$ in $Mpc$ as
\be
\mu= 5 \log (d_L/Mpc) + 25
\label{mudef}
\ee
where in a flat universe
\be
d_L(z)=c (1+z) \int_0^z \frac{dz'}{H(z')}=c H_0^{-1} (1+z)\int_0^z \frac{H_0 \; dz'}{H(z')} \equiv H_0^{-1} \;D_L(z)
\label{dlhz}
\ee
where $D_L(z)$ is the Hubble free luminosity distance which is independent of $H_0$.
\item
Using Eqs. (\ref{magsnia})-(\ref{dlhz}) it is easy to show that $H_0$ is connected with the SnIa absolute magnitude and the Hubble free luminosity distance as
\be
5 \log H_0=M_B + 5 \log D_L(z) - m_B^0(z) +25
\label{logh0}
\ee
In the context of a cosmographic expansion of $H(z)$ valid for $z\ll 1$ we have
\be
\log D_L(z)_c \simeq \log \left[cz\left(1+\frac{1}{2}(1-q_0)z
-\frac{1}{6}(1-q_0-3q_0^2+j_0)z^2+\mathcal{O}(z^3)\right) \right]
\label{dlcosmogr}
\ee
where $q_0\equiv -\frac{1}{H_0^2}\frac{d^2a(t)}{dt^2}\Big|_{t=t_0}$ and $j_0\equiv \frac{1}{H_0^3}\frac{d^3a(t)}{dt^3}\Big|_{t=t_0}$ are the deceleration and jerk parameters respectively.
Thus Eqs. (\ref{logh0}) and (\ref{dlcosmogr}) lead to the equation that connects $H_0$ with the SnIa absolute magnitude $M_B$ which may be expressed as
\be
5 \log H_0=M_B + 5 \log D_L(z) - m_B^0(z) +25 \equiv M_B +5\; a_B +25
\label{abdef}
\ee
where we have introduced the parameter $a_B\equiv \log D_L(z) - 0.2 m_B^0(z)$ as defined in the SH0ES analysis \cite{Riess:2016jrr}.
\end{itemize}
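The modeling relations above can be combined into a toy numerical pipeline. All parameter values below (the Cepheid slopes and zeropoint, $q_0$, $j_0$, and the sample numbers) are illustrative assumptions, not best fit results of the analysis.

```python
import math

c = 299792.458   # speed of light in km/s

def cepheid_wesenheit(mu, P_days, OH_dex, M_W=-5.9, b_W=-3.3, Z_W=-0.2):
    # Eq. (wesmagcep): m_H^W = mu + M_H^W + b_W [P] + Z_W [O/H], with [P] = log10 P - 1
    return mu + M_W + b_W * (math.log10(P_days) - 1.0) + Z_W * OH_dex

def a_B_single(z, m_B0, q0=-0.55, j0=1.0):
    # a_B = log10 D_L(z) - 0.2 m_B^0, with the cosmographic D_L of Eq. (dlcosmogr)
    D_L = c * z * (1.0 + 0.5 * (1.0 - q0) * z
                   - (1.0 - q0 - 3.0 * q0**2 + j0) * z**2 / 6.0)
    return math.log10(D_L) - 0.2 * m_B0

def H0_from_MB(M_B, a_B):
    # Eq. (abdef): 5 log10 H0 = M_B + 5 a_B + 25
    return 10.0 ** (0.2 * M_B + a_B + 5.0)

# A period-10d, solar-metallicity Cepheid recovers m_H^W = mu + M_W
assert abs(cepheid_wesenheit(29.0, 10.0, 0.0) - 23.1) < 1e-12
# A single Hubble-flow SnIa at z = 0.05 with m_B^0 = 17.485 and M_B = -19.25
# yields H0 close to 70 km/s/Mpc (numbers chosen for illustration)
```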
Thus the basic modeling equations used in the SH0ES analysis for the measurement of $H_0$ are Eqs. (\ref{wesmagcep}), (\ref{magsnia}) and (\ref{abdef}). In these equations the input data are the measured apparent magnitudes (luminosities) of Cepheids $ m_{H,i,j}^W$ and the SnIa apparent magnitudes $m_{B,i}^0$ (in Cepheid+SnIa hosts and in the Hubble flow). The parameters to be fit using a maximum likelihood method are the distance moduli $\mu_i$ (of the anchors and supporting hosts, the Cepheid+SnIa hosts and Hubble flow SnIa), the four modeling parameters ($M_H^W$, $b_W$, $Z_W$ and $M_B$), the Hubble constant $H_0$, the zeropoint $zp$ of the Cepheid P-L relation in the LMC ground measurements and the dummy parameter $X$. This is a total of 47 parameters. The actual data have been released by the SH0ES team as a .fits file in the form of a column vector $Y$ with 3492 entries which includes 8 constraints on the parameters obtained from measurements in anchor galaxies where the distance moduli are measured directly with geometric methods.
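Since the model is linear in the parameters, the maximum likelihood fit reduces to a generalized least-squares solution ${\bf q}_{best}=(L^T C^{-1} L)^{-1} L^T C^{-1} Y$, where $C$ is the covariance matrix of the measurements. The Python sketch below illustrates this for a toy two-parameter problem with a diagonal covariance; the actual analysis uses the full covariance of all 3492 entries.

```python
def gls_fit_2param(L, Y, sigma):
    # Weighted least squares for two parameters: solve the normal equations
    # (L^T C^-1 L) q = L^T C^-1 Y with diagonal C = diag(sigma_i^2), via Cramer's rule
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for Li, Yi, si in zip(L, Y, sigma):
        w = 1.0 / si**2
        for j in range(2):
            b[j] += w * Li[j] * Yi
            for k in range(2):
                A[j][k] += w * Li[j] * Li[k]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return ((b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det)

# Toy data generated exactly by q = (5, -1), with Y_i = q0 + q1 * x_i
L_toy = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]]
Y_toy = [5.0, 4.0, 3.0]
q = gls_fit_2param(L_toy, Y_toy, [0.1, 0.1, 0.1])
assert abs(q[0] - 5.0) < 1e-9 and abs(q[1] + 1.0) < 1e-9
```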
The entries of the provided $Y$ data column vector do not include the pure measured apparent magnitudes. Instead its entries are residuals defined by subtracting specific quantities. In particular:
\begin{itemize}
\item The Cepheid Wesenheit magnitudes are presented as residuals with respect to a fiducial P-L term as
\be
{\bar m}_{H,i,j}^W \equiv m_{H,i,j}^W - b_W^0 [P]
\label{residmw}
\ee
where $b_W^0=-3.286$ is a fiducial Cepheid P-L slope. As a result of this definition the derived best fit slope is actually a residual slope $\Delta b_W \equiv b_W - b_W^0$.
\item The residual Cepheid Wesenheit magnitudes of the Cepheids in the anchors $N4258$, $LMC$ and the supporting pure Cepheid host $SMC$ (non SnIa hosts), are presented after subtracting a corresponding fiducial distance modulus obtained with geometric methods.\footnote{In the case of SMC a differential distance with respect to LMC is used.}
\item The SnIa standardized apparent magnitudes in the Hubble flow are presented as residuals after subtracting the quantity $5\log D_L(z)_c+25$ based on the cosmographic expansion of the Hubble free luminosity distance (see Eq. (\ref{dlcosmogr})).
\end{itemize}
Thus the released data vector $Y$ has the following form
\be
\nonumber
\begin{tabular}{ccc}
\(\bf {Y}=
\begin{pmatrix}
{\bar m}_{H,1}^W\\
\ldots\\
{\bar m}_{H,2150}^W\\
\hline
{\bar m}_{H,N4258,1}^W-\mu_{0,N4258}\\
\ldots\\
{\bar m}_{H,N4258,443}^W-\mu_{0,N4258}\\
{\bar m}_{H,M31,1}^W\\
\ldots\\
{\bar m}_{H,M31,55}^W\\
{\bar m}_{H,LMC,ground,1}^W-\mu_{0,LMC}\\
\ldots\\
{\bar m}_{H,LMC,ground,270}^W-\mu_{0,LMC}\\
{\bar m}_{H,SMC,ground,1}^W-\mu_{0,SMC}\\
\ldots\\
{\bar m}_{H,SMC,ground,143}^W-\mu_{0,SMC}\\
{\bar m}_{H,LMC,HST,1}^W-\mu_{0,LMC}\\
\ldots\\
{\bar m}_{H,LMC,HST,69}^W-\mu_{0,LMC}\\
\hline
{\bar m}_{B,1}^0\\
\ldots\\
{\bar m}_{B,77}^0\\
\hline
-5.803 \; (M_{H,HST}^W)\\
-5.903 \; (M_{H,Gaia}^W)\\
-0.21 \; (Z_{W,Gaia})\\
0 \; (X) \\
0 \; (\Delta zp)\\
0 \; (\Delta b_W)\\
0 \; (\Delta \mu_{N4258}) \\
0 \; (\Delta \mu_{LMC}) \\
\hline
m_{B,1}^0-5\log [cz_1(...)]-25\\
\ldots\\
m_{B,277}^0-5\log [cz_{277}(...)]-25\\
\end{pmatrix}\)
& &
$\begin{matrix}
\left.\begin{matrix}
\\
\\
\\
\end{matrix}\right\}\,2150\,Cepheids\, in \,37 \,SnIa\, hosts\\
\left.\begin{matrix}
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\end{matrix}\right\}\,980\,Cepheids\, in\,non\, SnIa\, hosts \qquad\\
\left.\begin{matrix}
\\
\\
\\
\end{matrix}\right\} \,77\,SnIa\, in \,Cepheid\, hosts\; \quad \; \; \;\;\;\;\\
\left.\begin{matrix}
\\
\\
\\
\\
\\
\\
\\
\\
\end{matrix}\right\}\,8\, External\, constraints \quad\;\; \; \; \;\; \;\;\;\;\\
\left.\begin{matrix}
\\
\\
\\
\end{matrix}\right\}\,277\, SnIa\, in\, Hubble\, flow \;\quad\;\; \; \; \; \\
\end{matrix}$
\\
\end{tabular}
\ee
The 8 external anchor constraints on the parameters that appear in this vector are the following:
\ba
M_H^W&=&-5.803\pm0.082 \nn \\
M_H^W&=&-5.903\pm 0.025 \nn \\
Z_W&=&-0.21\pm 0.12 \nn \\
X&=&0\pm0.00003 \label{constr} \\
\Delta zp &=&0\pm 0.1 \nn \\
\Delta b_W&=&0\pm 10 \nn \\
\Delta \mu_{N4258}&=&0\pm 0.03 \nn \\
\Delta \mu_{LMC}&=&0\pm 0.026 \nn
\ea
The parameters to be fit using the $Y$ vector data may also be expressed as a vector $q$ with 47 entries of the following form
\begin{center}
\begin{tabular}{ccc}
\bf{q}=
$\begin{pmatrix}
\mu_1\\
\ldots\\
\mu_{37}\\
\Delta\mu_{N4258}\\
M_H^W\\
\Delta\mu_{LMC}\\
\mu_{M31}\\
\Delta b_W\\
M_B\\
Z_W\\
X\\
\Delta zp\\
5\log H_0
\end{pmatrix}$& &$\left.\begin{matrix}
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\end{matrix}\right\}$\,47\, parameters \;\quad \;\; \;\;\;\; \;\;\; \\
\end{tabular}
\end{center}
Using the column vectors {\bf Y} and {\bf q}, Eqs. (\ref{wesmagcep}), (\ref{magsnia}) and (\ref{abdef}) and the constraints stated above can be expressed in matrix multiplication form as
\be
\bf{Y} = \bf{Lq }
\label{syst1}
\ee
with $\bf{Y}$ the matrix of measurements (data vector), $\bf{q}$ the matrix of parameters and $\bf{L}$ a model (or design) matrix which has 3492 rows corresponding to the entries of the $\bf{Y}$ data vector and 47 columns corresponding to the entries of the parameter vector $\bf{q}$. The model matrix $\bf{L}$ also includes some data (Cepheid periods and metallicities) and in the context of this baseline modeling of the data has the form
\begin{adjustwidth}{-4.3cm}{1cm}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{ccc}
\(\bf {L}={\footnotesize
\left( \begin{array}[c]{ccccccccccccc}
1&\ldots&0&0&1&0&0& [P]_1&0&[O/H]_1&0&0&0\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
0&\ldots&1&0&1&0&0&[P]_{2150}&0&[O/H]_{2150}&0&0&0\\
\hline
0&\ldots&0&1&1&0&0&[P]_{N4258,1}&0&[O/H]_{N4258,1}&0&0&0\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
0&\ldots&0&1&1&0&0&[P]_{N4258,443}&0&[O/H]_{N4258,443}&0&0&0\\
0&\ldots&0&0&1&0&1&[P]_{M31,1}&0&[O/H]_{M31,1}&0&0&0\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
0&\ldots&0&0&1&0&1&[P]_{M31,55}&0&[O/H]_{M31,55}&0&0&0\\
0&\ldots&0&0&1&1&0&[P]_{LMC,ground,1}&0&[O/H]_{LMC,ground,1}&0&1&0\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
0&\ldots&0&0&1&1&0&[P]_{LMC,ground,270}&0&[O/H]_{LMC,ground,270}&0&1&0\\
0&\ldots&0&0&1&1&0&[P]_{SMC,ground,1}&0&[O/H]_{SMC,ground,1}&0&1&0\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
0&\ldots&0&0&1&1&0&[P]_{SMC,ground,143}&0&[O/H]_{SMC,ground,143}&0&1&0\\
0&\ldots&0&0&1&1&0&[P]_{LMC,HST,1}&0&[O/H]_{LMC,HST,1}&0&0&0\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
0&\ldots&0&0&1&1&0&[P]_{LMC,HST,69}&0&[O/H]_{LMC,HST,69}&0&0&0\\
\hline
1&\ldots&0&0&0&0&0&0&1&0&0&0&0\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
0&\ldots&1&0&0&0&0&0&1&0&0&0&0\\
\hline
0&\ldots&0&0&1&0&0&0&0&0&0&0&0\\
0&\ldots&0&0&1&0&0&0&0&0&0&0&0\\
0&\ldots&0&0&0&0&0&0&0&1&0&0&0\\
0&\ldots&0&0&0&0&0&0&0&0&1&0&0\\
0&\ldots&0&0&0&0&0&0&0&0&0&1&0\\
0&\ldots&0&0&0&0&0&1&0&0&0&0&0\\
0&\ldots&0&1&0&0&0&0&0&0&0&0&0\\
0&\ldots&0&0&0&1&0&0&0&0&0&0&0\\
\hline
0&\ldots&0&0&0&0&0&0&1&0&0&0&-1\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
0&\ldots&0&0&0&0&0&0&1&0&0&0&-1\\
\end{array} \right) }
$ & {\footnotesize} &
{\footnotesize $\begin{matrix}
\left.\begin{matrix}
\\
\\
\\
\end{matrix}\right\} \,2150\,Cepheids\, in \,37 \,SnIa\, hosts\\
\left.\begin{matrix}
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\\
\end{matrix}\right\}\,980\,Cepheids\, in\,non\,SnIa\,hosts \qquad\\
\left.\begin{matrix}
\\
\\
\\
\end{matrix}\right\} \,77\,SnIa\, in \,Cepheid\, hosts\,\, \quad \; \; \;\;\;\; \\
\left.\begin{matrix}
\\
\\
\\
\\
\\
\\
\\
\\
\end{matrix}\right\}\,8\, External\, constraints \;\quad \;\; \;\;\;\; \;\;\; \\
\left.\begin{matrix}
\\
\\
\\
\end{matrix}\right\}\,277\, SnIa\, in\, Hubble\, flow \;\quad\;\; \; \; \; \\
\end{matrix}$}
\\
\end{tabular}
\end{adjustwidth}
The system (\ref{syst1}) has 3492 equations and 47 unknown parameter values. Thus it is overdetermined and can be used in the context of a maximum likelihood analysis to find the best fit parameter values, i.e. those with maximum likelihood and thus minimum $\chi^2$. For the definition of $\chi^2$ the measurement error matrix (covariance matrix) $\bf{C}$ is needed and is provided in the data release as a square matrix of dimension $3492\times 3492$\footnote{The $\bf{Y}$, $\bf{L}$ and $\bf{C}$ matrices are publicly available as fits files by SH0ES team at Github repository: \href{https://github.com/PantheonPlusSH0ES/DataRelease}{PantheonPlusSH0ES/DataRelease}.}. In Appendix \ref{AppendixA} we present the schematic form of the matrix $\bf{C}$, which also includes the standard uncertainties of the constraints as diagonal elements. Using the covariance matrix, which quantifies the data uncertainties and their correlations, the $\chi^2$ statistic may be constructed as
\be
\chi^2=(\bf{Y}-\bf{Lq})^T\bf{C}^{-1}(\bf{Y}-\bf{Lq})
\label{chi21}
\ee
The numerical minimization of $\chi^2$ in the presence of 47 parameters that need to be fit would be very demanding computationally even with the use of Markov chain Monte Carlo (MCMC) methods. Fortunately, the linear form of the system (\ref{syst1}) allows the analytical minimization of $\chi^2$ and the simultaneous analytic evaluation of the uncertainty of each parameter. In Appendix \ref{AppendixB} we show that the analytic minimization of $\chi^2$ of Eq. (\ref{chi21}) leads to the best fit parameter maximum likelihood vector\footnote{The results for the parameters in $\bf{q_{best}}$ are the same as the results obtained using numerical minimization of $\chi^2$ (see the absolute matching of the results in the numerical analysis file "Baseline1 structure of system" in the \href{https://github.com/FOTEINISKARA/A-reanalysis-of-the-SH0ES-data-for-H_0}{A reanalysis of the SH0ES data for $H_0$} GitHub repository).}
\be
\bf{q_{best}}=(\bf{L}^T\bf{C}^{-1}\bf{L})^{-1}\bf{L}^T\bf{C}^{-1}\bf{Y}
\label{bfpar1}
\ee
The $1\sigma$ standard errors for the parameters in $\bf{q_{best}}$ are obtained as the square roots of the 47 diagonal elements of the transformed error matrix
\be
\bf{\varSigma}=(\bf{L}^T\bf{C}^{-1}\bf{L})^{-1}
\label{errmat}
\ee
For example the best fit of the parameter\footnote{We use the standard notation $\log\equiv \log_{10}$ which is important in the error propagation to $H_0$.} $5\log H_0$ is obtained as the 47th entry of the best fit parameter vector $\bf{q_{best}}$ and the corresponding $1\sigma$ standard error is the $\sqrt{\bf{\varSigma_{47,47}}}$ element of the error matrix. Using Eqs. (\ref{bfpar1}) and (\ref{errmat}) and the latest released data of the SH0ES team presented in R21 we find full agreement with all values of the best fit parameters. For example for $H_0$ we find (after implementing error propagation) $H_0=73.04\pm 1.04 \; km\; s^{-1}\; Mpc^{-1}$, in full agreement with the published result of R21.
\begin{table}
\caption{ Best fit parameter values in the absence and in the presence of the inverse distance ladder constraint for baseline model.}
\label{tab:resall}
\vspace{2.5mm}
\setlength{\tabcolsep}{1.8em}
\begin{adjustwidth}{0.3cm}{1.2cm}
{\footnotesize\begin{tabular}{cc ccc}
\hhline{=====}
& \\
Parameter & Best fit value&$\sigma$ & Best fit value$^{a}$& $\sigma\,^{a}$\\
& \\
\hhline{=====}
& \\
$\mu_{M101}$& 29.16& 0.04& 29.20 & 0.04\\
$\mu_{M1337}$& 32.92& 0.08 & 32.97 & 0.08 \\
$\mu_{N0691}$& 32.82& 0.09& 32.87 & 0.09\\
$\mu_{N1015}$& 32.62& 0.069& 32.67 & 0.06\\
$\mu_{N0105}$& 34.49& 0.12& 34.56 & 0.12\\
$\mu_{N1309}$& 32.51& 0.05& 32.56& 0.05\\
$\mu_{N1365}$& 31.33& 0.05& 31.37& 0.05\\
$\mu_{N1448}$& 31.3& 0.04& 31.33& 0.03\\
$\mu_{N1559}$& 31.46& 0.05& 31.51& 0.05\\
$\mu_{N2442}$& 31.47& 0.05& 31.51& 0.05\\
$\mu_{N2525}$& 32.01& 0.06& 32.08& 0.06\\
$\mu_{N2608}$& 32.63& 0.11& 32.69& 0.11\\
$\mu_{N3021}$& 32.39& 0.1& 32.45& 0.1\\
$\mu_{N3147}$& 33.09& 0.09& 33.16& 0.08\\
$\mu_{N3254}$& 32.4& 0.06& 32.46& 0.05\\
$\mu_{N3370}$& 32.14& 0.05& 32.19& 0.04\\
$\mu_{N3447}$& 31.94& 0.03& 31.98& 0.03\\
$\mu_{N3583}$& 32.79& 0.06& 32.84& 0.06\\
$\mu_{N3972}$& 31.71& 0.07& 31.76& 0.07\\
$\mu_{N3982}$& 31.64& 0.06& 31.69& 0.05\\
$\mu_{N4038}$& 31.63& 0.08& 31.69& 0.08\\
$\mu_{N4424}$& 30.82& 0.11& 30.87& 0.11\\
$\mu_{N4536}$& 30.84& 0.05& 30.87& 0.05\\
$\mu_{N4639}$& 31.79& 0.07& 31.83& 0.07\\
$\mu_{N4680}$& 32.55& 0.15& 32.61& 0.15\\
$\mu_{N5468}$& 33.19& 0.05& 33.25& 0.05\\
$\mu_{N5584}$& 31.87& 0.05& 31.91& 0.04\\
$\mu_{N5643}$& 30.51& 0.04& 30.56& 0.04\\
$\mu_{N5728}$& 32.92& 0.1& 32.98& 0.1\\
$\mu_{N5861}$& 32.21& 0.08& 32.26& 0.07\\
$\mu_{N5917}$& 32.34& 0.08& 32.4& 0.08\\
$\mu_{N7250}$& 31.61& 0.1& 31.65& 0.1\\
$\mu_{N7329}$& 33.27& 0.07& 33.33& 0.07\\
$\mu_{N7541}$& 32.58& 0.08& 32.64& 0.08\\
$\mu_{N7678}$& 33.27& 0.08& 33.33& 0.08\\
$\mu_{N0976}$& 33.54& 0.08& 33.61& 0.08\\
$\mu_{U9391}$& 32.82& 0.05& 32.86& 0.05\\
$\Delta \mu_{N4258}$& -0.01& 0.02& 0.01& 0.02\\
$M_H^W$& -5.89& 0.02& -5.92& 0.02\\
$\Delta \mu_{LMC}$& 0.01& 0.02& 0.03& 0.02\\
$\mu_{M31}$& 24.37& 0.07& 24.4& 0.07\\
$\Delta b_W$& -0.013& 0.015& -0.026& 0.015\\
$M_B$& -19.25& 0.03& -19.33& 0.02\\
$Z_W$& -0.22& 0.05& -0.22& 0.05\\
$ X$& 0.& 0.& 0.& 0.\\
$ \Delta zp$& -0.07& 0.01& -0.07& 0.01\\
$ 5\log H_0$& 9.32& 0.03& 9.24& 0.02\\
$H_0$& 73.04& 1.04& 70.5& 0.7\\
& \\
\hhline{=====}
\\
\end{tabular} }
{\footnotesize NOTE - (a) With constraint $M_B=-19.401 \pm 0.027 $ included in the data vector $\bf{Y}$ and in the model matrix $\bf{L}$ with included uncertainty in the extended covariance matrix $\bf{C}$.}
\end{adjustwidth}
\end{table}
In Table \ref{tab:resall} we show the best fit values and uncertainties for all 47 parameters of the vector $\bf{q}$ including the four Cepheid modeling parameters ($b_W=b_W^0+\Delta b_W\equiv -3.286+\Delta b_W$, $M_H^W$, $Z_W$ and $M_B$) along with the best fit value of $H_0$ for the baseline SH0ES model. The corresponding best fit values and uncertainties with an additional constraint on $M_B$ from the inverse distance ladder are also included and discussed in detail in the next section. The agreement with the corresponding results of R21 (left three columns) is excellent.
Before closing this subsection we review a few points that are not mentioned in R21 but are useful for the reproduction of the results and the analysis:
\begin{itemize}
\item The number of entries of the parameter vector ${\bf q}$ is 47 and not 46 as implied in R21, due to the extra dummy parameter $X$ which is included in the released data $\bf{Y}$, $\bf{L}$ and $\bf{C}$ but not mentioned in R21.
\item The entry referred to as $b_W$ in R21 in the parameter vector ${\bf q}$ should be $\Delta b_W$ because it is actually the residual $b_W$ ($\Delta b_W \equiv b_W - b_W^0$) as stated above and not the original slope of the P-L relation (its best fit value is $\Delta b_W=-0.014\pm 0.015$).
\item The number of constraints shown in the definition of the ${\bf Y}$ vector in R21 is 5 while in the actual released data $\bf{Y}$, $\bf{L}$ and $\bf{C}$ we have found the 8 constraints defined above.
\end{itemize}
\subsection{Homogeneity of the Cepheid modeling parameters} \label{seccephomog}
Before generalizing the model matrix $\bf{L}$ with new degrees of freedom and/or constraints, it is interesting to test the self-consistency of the assumed universal modeling parameters $b_W$ and $Z_W$ of the Cepheids. These parameters can be obtained separately for each one of the $i=1,...,40$ Cepheid hosts\footnote{37 SnIa/Cepheid hosts and 3 pure Cepheid hosts of which 2 are anchors (N4258 and LMC).} by fitting linear relations.
In particular, in order to obtain the best fit P-L slope $b_{W,i}$ of the $i$th Cepheid host we fit the $m_{H,i,j}^W-\log P_{i,j}$ data (where $P_{i,j}$ is the period in units of days of the $j$th Cepheid in the $i$th host) with a linear relation of the form
\be
m_{H,i,j}^W=s_i+b_{W,i} \log P_{i,j}
\label{bwi}
\ee
with two parameters to be fit in each host: the intercept $s_i$ and the slope $b_{W,i}$.
These equations may be expressed as a matrix equation of the form
\be
\bf{Y_i} = \bf{A_iX_i }
\label{syst}
\ee
with $\bf{Y_i}$ the vector of measurements, $\bf{X_i}$ the vector of parameters and $\bf{A_i}$ the model (or design) matrix. These are defined as
\be
\begin{tabular}{ccccc}
\(\bf {Y_i}=
\begin{pmatrix}
m_{H,i,1}^W\\
m_{H,i,2}^W\\
\vdots\\
m_{H,i,N}^W
\end{pmatrix}\)
&, &
\(
\bf{X_i}=
\begin{pmatrix}
s_i\\
b_{W,i}
\end{pmatrix}\)
&, &
\(
\bf{A_i}=
\begin{pmatrix}
1& \log P_{i,1}\\
1& \log P_{i,2}\\
\vdots&\vdots\\
1& \log P_{i,N}
\end{pmatrix}\) \\
&&&&
\end{tabular}
\ee
The analytic minimization (along the lines of the previous subsection and of Appendix \ref{AppendixB}) of
\be
\chi_i^2=(\bf{Y_i}-\bf{A_iX_i})^T\bf{C_i}^{-1}(\bf{Y_i}-\bf{A_iX_i})
\label{chi2i}
\ee
with respect to the slope $b_{W,i}$ and intercept $s_i$ leads to the best fit values and standard errors for these parameters. For the $N$ Cepheids of each host we adopt the $\mathrm{N\times N}$ covariance matrix $\bf{C_i}$ of standard errors of the magnitude measurements from R21.
Thus, the analytic minimization of $\chi^2$ of Eq. (\ref{chi2i}) leads to the best fit parameter maximum likelihood vector
\be
\bf{X_{i,best}}=(\bf{A_i}^T\bf{C_i}^{-1}\bf{A_i})^{-1}\bf{A_i}^T\bf{C_i}^{-1}\bf{Y_i}
\label{bfpari}
\ee
The $1\sigma$ standard errors for $b_{W,i}$ slope and intercept in $\bf{X_{i,best}}$ are obtained as the square roots of the 2 diagonal elements of the error matrix
\be
\bf{\varSigma}_i=(\bf{A_i}^T\bf{C_i}^{-1}\bf{A_i})^{-1}
\label{errmati}
\ee
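As a cross-check, the per-host two-parameter fit of Eqs. (\ref{bwi})-(\ref{errmati}) is simple enough to sketch directly. The periods, magnitudes and errors below are mock numbers, not the R21 Cepheid data.

```python
import numpy as np

# Mock Cepheid data for one host: periods P (days) and Wesenheit magnitudes m.
rng = np.random.default_rng(1)
P = rng.uniform(5.0, 60.0, size=40)
s_true, b_true = 26.0, -3.3
sigma_m = rng.uniform(0.2, 0.4, size=P.size)          # magnitude errors
m = s_true + b_true * np.log10(P) + rng.normal(0.0, sigma_m)

A = np.column_stack([np.ones_like(P), np.log10(P)])   # design matrix A_i
Cinv = np.diag(1.0 / sigma_m**2)                      # diagonal C_i^{-1}
Sigma_i = np.linalg.inv(A.T @ Cinv @ A)               # Eq. (errmati)
s_fit, b_fit = Sigma_i @ A.T @ Cinv @ m               # Eq. (bfpari)
s_err, b_err = np.sqrt(np.diag(Sigma_i))              # intercept/slope errors
```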
A similar analysis was implemented for the other Cepheid modeling parameter, the metallicity slopes $Z_{W,i}$. In this case the linear fit considered was of the form
\be
m_{H,i,j}^W=s_i+Z_{W,i}[O/H]_{i,j}
\label{zwi}
\ee
The best fit values of the slopes $b_{W,i}$ and $Z_{W,i}$ for each one of the 40 Cepheid hosts are shown in Figs. \ref{figb10dnocov} and \ref{figz10dnocov} in terms of the host distance respectively. The actual Cepheid data in each host with the best fit $\log P_{i,j}-m_{H,i,j}^W$ and $[O/H]_{i,j}-m_{H,i,j}^W$ straight lines are shown in Figs.
\ref{figballnocov} and \ref{figzallnocov} in Appendix \ref{AppendixC} for each one of the 40 Cepheid hosts $i$. The $b_{W,i}$ slopes shown in Fig. \ref{figb102bnocov} for each host in sequence of increasing uncertainties are in excellent agreement with the corresponding Figure 10 of R21\footnote{As discussed in Appendix \ref{AppendixC}, 2 slopes corresponding to the hosts N4038 and N1365 are slightly shifted in our analysis compared to R21 due to a small disagreement in the best fit slope and a typo of R21 in transferring the correct slope to Fig. 10, while the slope corresponding to the host N4424 is missing in Fig. 10 of R21.}. The corresponding numerical values of the best fit $b_{W,i}$ and $Z_{W,i}$ are shown in Table \ref{tab:slopes}.
Some inhomogeneities are visible in the distance distribution of both the individual Cepheid slopes in Figs. \ref{figb10dnocov} and \ref{figz10dnocov} as well as in Fig. \ref{figb102bnocov}. For example, in Figs. \ref{figb10dnocov} and \ref{figb102bnocov} there is a consistent trend of most non-anchor hosts to have an absolute best fit slope $b_W$ that is smaller than the corresponding best fit $b_W$ of the anchor hosts (above the dotted line). Similarly, in Fig. \ref{figz10dnocov} there seems to be a trend of most $Z_{W,i}$ absolute slopes to be larger than the corresponding slope of the Milky Way (MW) (below the dotted line).
A careful inspection of Figs. \ref{figb10dnocov} and \ref{figz10dnocov} indicates that the scatter is significantly larger than the standard uncertainties of the slopes. Indeed a $\chi^2$ fit to a universal slope for $Z_W$ of the form
\be
\chi^2(Z_{W})=\sum_{i=1}^{N}\frac{(Z_{W,i}-Z_{W})^2}{\sigma_{Z_{W,i}}^2}
\label{chizw1}
\ee
leads to a minimum $\chi_{min}^2$ per degree of freedom ($dof$)\footnote{The number of degrees of freedom is $dof=N-M=42-1=41$ for $b_W$ and $dof=N-M=40-1=39$ for $Z_W$. The smaller number of $N$ for $Z_W$ is due to the fact that metallicities were not provided for individual Cepheids in the LMC and SMC in R21 and thus we evaluated a smaller number of slopes for $Z_W$. See also Table \ref{tab:slopes}. The $N=42$ points shown in Fig. \ref{figb10dnocov} include the two additional points corresponding to MW and SMC (which is degenerate with LMC).} $\frac{\chi^2_{ZW,min}}{dof}=22$ (with a best fit $Z_{W,bf}\simeq -1$) while $\frac{\chi^2_{min}}{dof}=O(1)$ would be expected for an acceptable fit. Also for $b_W$ we find $\frac{\chi^2_{bW,min}}{dof}=1.55$ ($b_{W,bf}=-3.3$) which is more reasonable but still larger than 1, indicating a relatively poor quality of fit to a universal slope.
There are two possible causes for this poor quality of fit to universal slopes: either many of the uncertainties of the individual host slopes have been underestimated or the universal slope model is not appropriate. In view of the fact that the uncertainties of the Cepheid periods and metallicities have not been included in the $\chi^2$ fit, because they were not provided in the R21 released data, we make the working hypothesis that the uncertainties have been underestimated and thus we add a scatter error adjusted so that $\frac{\chi^2_{bW,min}}{dof}\simeq \frac{\chi^2_{ZW,min}}{dof} \simeq 1$. Thus for $\chi^2(Z_{W})$ we have
\be
\chi^2(Z_{W})=\sum_{i=1}^{N}\frac{(Z_{W,i}-Z_{W})^2}{\sigma_{Z_{W,i}}^2+\sigma_{Z,scat}^2}
\label{chizw2}
\ee
For $\frac{\chi^2_{ZW,min}}{dof} \simeq 1$ we must set $\sigma_{Z,scat}\simeq 3.2$ which is significantly larger than most uncertainties of individual host $Z_W$ slopes. Similarly, for $\frac{\chi^2_{bW,min}}{dof} \simeq 1$ we must set $\sigma_{b,scat}\simeq 0.18$ which is comparable or smaller than most uncertainties of individual host $b_W$ slopes as shown in Table \ref{tab:slopes}.
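Choosing the scatter error so that $\chi^2_{min}/dof\simeq 1$ amounts to a one-dimensional root-finding problem, since $\chi^2_{min}$ decreases monotonically with $\sigma_{scat}$. A bisection sketch on mock slope data (the numbers below are invented, not those of Table \ref{tab:slopes}):

```python
import numpy as np

def chi2min_per_dof(z, sz, sigma_scat):
    """chi^2 at the best fit universal slope, per degree of freedom (dof = N-1)."""
    w = 1.0 / (sz**2 + sigma_scat**2)
    z_best = np.sum(w * z) / np.sum(w)       # weighted mean = best fit slope
    return np.sum(w * (z - z_best)**2) / (z.size - 1)

def solve_scatter(z, sz, hi=50.0, tol=1e-8):
    """Bisect on sigma_scat in [0, hi] so that chi^2_min/dof = 1."""
    if chi2min_per_dof(z, sz, 0.0) <= 1.0:
        return 0.0                            # no extra scatter needed
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if chi2min_per_dof(z, sz, mid) > 1.0:
            lo = mid                          # chi^2 still too large: more scatter
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(2)
sz = rng.uniform(0.5, 2.0, 40)                # mock slope uncertainties
z = rng.normal(-1.0, 3.0, 40)                 # mock slopes with intrinsic scatter ~ 3
s_scat = solve_scatter(z, sz)
```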
In order to quantify possible inhomogeneities of Cepheid hosts $b_{W,i}$ and $Z_{W,i}$ slopes, we have split each sample of slopes in two bins: a low distance bin with Cepheid host distances $D$ smaller than a critical distance $D_c$ and a high distance bin with $D>D_c$. Given $D_c$, for each bin we find the best fit slope and its standard $1\sigma$ error using the maximum likelihood method. For example for a low distance $b_W$ bin we minimize
\be
\chi^2(b_{W}^<)=\sum_{i=1}^{N^<}\frac{(b_{W,i}-b_{W}^<)^2}{\sigma_{b_{W,i}}^2+\sigma_{scat}^{2\,<}}
\label{chibw}
\ee
where $N^<$ is the number of hosts in the low distance bin and $\sigma_{scat}$ is the additional uncertainty (scatter error) chosen in each bin so that $\chi^2_{min}/dof\simeq 1$, making the fit consistent with a constant $b_W$ in each bin. Thus we find the best fit low distance bin slope $b_{W}^<$ and its standard error, and similarly the high distance bin ($D>D_c$) slope $b_W^>$ as well as the metallicity slopes $Z_W^<$ and $Z_W^>$. Thus for each $D_c$ we obtain two best fit slopes (one for each bin) and their standard errors. The level of consistency between the two binned slopes at each $D_c$ determines the level of homogeneity and self consistency of the full sample. The results for the best fit binned slopes for $b_W$ and $Z_W$ are shown in Figs. \ref{figbfsets} and \ref{figwslnocovz} respectively as functions of the dividing critical distance $D_c$.
Interestingly, for a range of $D_c$ there is a mild tension between the best fit values of the high and low distance bins which reaches levels of $2-3\sigma$ especially for the metallicity slopes $Z_W$. For both $b_W$ and $Z_W$, the absolute value of the difference between the high and low distance bin slopes is maximized for $D_c>47Mpc$. For the case of $Z_W$ this difference is significant statistically as it exceeds the level of $2\sigma$. The level of statistical consistency between high and low distance bins for both slopes $b_W$ and $Z_W$ is shown in Fig. \ref{figsdistnocovzb}. In the range of $D_c$ between $40Mpc$ and $50Mpc$ and also between $10Mpc$ and $20Mpc$,
it is shown that in the case of $Z_W$ the $\sigma$-distance
\be
\sigma_d \equiv \frac{\vert Z_W^>-Z_W^< \vert}{\sqrt{\sigma_{Z_W^>}^2+\sigma_{Z_W^<}^2}}
\label{zsigdist}
\ee
between the best fit binned $Z_W$ metallicity slopes can reach a level beyond $2\sigma$ (see Figs \ref{figwslnocovz} and \ref{figsdistnocovzb}).
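The binning procedure and the $\sigma$-distance of Eq. (\ref{zsigdist}) can be sketched as follows. The per-host slopes, errors and distances are mock values, and for simplicity any scatter error is assumed to be already folded into the quoted uncertainties.

```python
import numpy as np

def binned_slope(values, errors):
    """Best fit constant slope in a bin (inverse-variance weighted mean) and its error."""
    w = 1.0 / errors**2
    return np.sum(w * values) / np.sum(w), np.sqrt(1.0 / np.sum(w))

def sigma_distance(D, values, errors, Dc):
    """sigma-distance of Eq. (zsigdist) between the D < Dc and D >= Dc bins."""
    low = D < Dc
    b_lo, s_lo = binned_slope(values[low], errors[low])
    b_hi, s_hi = binned_slope(values[~low], errors[~low])
    return abs(b_hi - b_lo) / np.hypot(s_lo, s_hi)

rng = np.random.default_rng(3)
D = rng.uniform(1.0, 75.0, 40)                 # mock host distances [Mpc]
err = rng.uniform(0.05, 0.3, 40)               # mock slope uncertainties
bw = rng.normal(-3.3, 0.1, 40)                 # mock b_W slopes
sd = sigma_distance(D, bw, err, Dc=47.0)
```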
An additional test of possible intrinsic tensions of the Cepheid properties in the SH0ES Cepheid sample is obtained by comparing the probability distributions of the Cepheid period and metallicity for the full sample of 3130 Cepheids with its high and low distance subsamples. In Figs. \ref{fighisp} and \ref{fighism} we show histograms of the probability distributions of the Cepheid period and metallicity for the whole Cepheid sample and for the high ($D>D_c$) and low distance ($D<D_c$) subsamples for $D_c=50Mpc$. The two subsamples for each observable are clearly inconsistent with each other and with the full sample. This is demonstrated visually and also through the Kolmogorov-Smirnov consistency test which quantifies the inconsistency and gives a p-value very close to 0 for the three sample pairs. However, as communicated to us by members of the SH0ES team (private communication), this inconsistency can be justified by observational selection effects and does not necessarily indicate a physics change at $D_c=50 Mpc$. For example, bright Cepheids have longer periods and they are more easily observed at high distances. Thus it is expected that there will be higher period (brighter) Cepheids observed at higher distances. Other variables such as the timespan of the observations also play a role. For more distant galaxies there is a trend to allow a longer window of observations so that longer period Cepheids can be found. Also the star formation history of the galaxies dictates whether one will have very long period Cepheids, which come from massive, short lived stars. However, even though the observed inconsistency in the Cepheid properties probability distributions may be explained using observational biases and anticipated galactic properties, it may also be a hint of interesting new physics and/or systematic effects.
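A two-sample Kolmogorov-Smirnov comparison of the kind described above can be sketched directly with NumPy. The period samples below are mock lognormal draws, not the SH0ES Cepheid periods, and the p-value uses the standard asymptotic series (accurate for large samples and well-separated distributions).

```python
import numpy as np

def ks_2samp(a, b):
    """Two-sample KS statistic D and asymptotic p-value."""
    a, b = np.sort(a), np.sort(b)
    allv = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, allv, side="right") / a.size   # empirical CDFs
    cdf_b = np.searchsorted(b, allv, side="right") / b.size
    d = np.max(np.abs(cdf_a - cdf_b))
    lam = d * np.sqrt(a.size * b.size / (a.size + b.size))
    k = np.arange(1, 101)                                     # truncated series
    p = 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * k**2 * lam**2))
    return d, min(max(p, 0.0), 1.0)

rng = np.random.default_rng(4)
low = rng.lognormal(np.log(20.0), 0.4, 1000)    # mock short-period sample
high = rng.lognormal(np.log(40.0), 0.4, 1000)   # mock long-period sample
d, p = ks_2samp(low, high)                      # p very close to 0, as in the text
```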
In view of the mild inhomogeneities identified above in the SH0ES data, it would be interesting to extend the SH0ES modeling of the Cepheids and SnIa with new degrees of freedom that can model the data taking these inhomogeneities into account. This is the goal of the next section.
\section{Generalized analysis: New degrees of freedom allowing for a physics transition.}
\label{sec:Generalized analysis}
An obvious generalization of the SH0ES analysis described in subsection \ref{sub:baseline}, which models the Cepheid-SnIa luminosities using four main parameters, is to allow for a transition of any one of these parameters at a particular distance or, equivalently, cosmic time of emission. Such a transition could be the result of a sudden change of a physics constant (e.g. the gravitational constant) during the last $200Myrs$, in the context e.g. of a first order phase transition, or the result of the presence of an unknown systematic.
Another modification of the standard SH0ES analysis in R21 would be to extend the number of constraints imposed on the data vector $\bf{Y}$ taking into account other cosmological data like the inverse distance ladder estimate of $M_B$ \cite{Camarena:2021jlr,Marra:2021fvf,Gomez-Valent:2021hda}. Both of these generalizations will be implemented in the present section.
\subsection{Allowing for a transition of a Cepheid calibration parameter}
A transition of one of the Cepheid calibration parameters can be implemented by replacing one of the four modeling parameters in the $\bf{q}$ vector by two corresponding parameters: one for high and one for low distances (recent cosmic times). Thus, in this approach the number of parameters and entries in the parameter vector $\bf{q}$ increases by one from 47 to 48. Since one of the entries of $\bf{q}$ is replaced by two entries, the corresponding column of the modeling matrix $\bf{L}$ should be replaced by two columns. The high distance parameter column should be similar to the original column it replaced but with zeros in entries corresponding to low distance data (or constraints) and the reverse should happen for the low distance parameter column. This process is demonstrated in the following schematic diagram where the $q_j$ parameter is replaced by the two parameters $q_j$ (or $q_j^<$) and $q_{j+1}$ (or $q_j^>$) and the $j$ column of $\bf{L}$ is replaced by the $j$ and $j+1$ corresponding columns which have zeros in the low or high distance data (or constraint) entries.
\begin{adjustwidth}{-3.9cm}{0.5cm}
\be
\setlength{\tabcolsep}{0.2em}
\begin{tabular}{ccccc}
\(\bf {L_{(3492\times47)}}=
\left( \arraycolsep=1.6 pt\def\arraystretch{1.8}\begin{array}[c]{ccc}
\ldots&L_{1,j}&\ldots\\
\ldots&L_{2,j}&\ldots\\
\ldots&L_{3,j}&\ldots\\
\ldots&\ldots&\ldots\\
\ldots&\ldots&\ldots\\
\ldots&L_{3491,j}&\ldots\\
\ldots&L_{3492,j}&\ldots\\
\end{array}\right)$
&\arraycolsep=1.6 pt\def\arraystretch{1.8}$\left.\begin{matrix}
&\\
&\\
&\\
&\\
&\\
&\\
&\\
\end{matrix}\right\}$ &\arraycolsep=1.6 pt\def\arraystretch{1.8} $\begin{array}[c]{c}
D_{Y_1}<D_c\\
D_{Y_2}>D_c\\
D_{Y_3}>D_c\\
\ldots\\
\ldots\\
D_{Y_{3491}}<D_c\\
D_{Y_{3492}}>D_c\\
\end{array}$ & $\implies$ &$
{\bf L_{(3492\times48)}}=\arraycolsep=1.6 pt\def\arraystretch{1.8}\left( \begin{array}[c]{cccc}
\ldots&L_{1,j}&0&\ldots\\
\ldots&0&L_{2,j+1}&\ldots\\
\ldots&0&L_{3,j+1}&\ldots\\
\ldots&\ldots&\ldots&\ldots\\
\ldots&\ldots&\ldots&\ldots\\
\ldots&L_{3491,j} &0&\ldots\\
\ldots&0&L_{3492,j+1} &\ldots\\
\end{array}\right)\)
\end{tabular}
\ee
\end{adjustwidth}
\begin{adjustwidth}{0cm}{0.6cm}
\be
\begin{tabular}{ccc}
\({\bf q_{(47\times 1)}}=\arraycolsep=1.6 pt\def\arraystretch{1.8}
\begin{pmatrix}
\ldots\\
\ldots\\
q_j\\
\ldots\\
\ldots\\
\end{pmatrix}
\rightarrow
\bf q_{(48\times 1)}= $\arraycolsep=1.6 pt\def\arraystretch{1.8}$\begin{pmatrix}
\ldots\\
\ldots\\
q_j\\
q_{j+1}\\
\ldots\\
\ldots\\
\end{pmatrix}\)
\end{tabular}
\ee
\end{adjustwidth}
In this manner, if the parameter $b_W$ was to be split to $b_{W}^>$ and $b_{W}^<$ for example, Eq. (\ref{wesmagcep}) would be replaced by
\be
m_{H,i,j}^W (D)
=\mu_i+M_{H}^W+b_{W}^>\Theta(D-D_c) [P]_{i,j}+b_{W}^<\Theta(D_c-D) [P]_{i,j} +Z_W[O/H]_{i,j}
\label{wesmagcep1}
\ee
and similarly for splittings of each one of the other three parameters $M_H^W$, $Z_W$ and $M_B$. In (\ref{wesmagcep1}) $D$ is a distance that may be assigned to every entry of the data-vector $\bf{Y}$ (Cepheids, SnIa and constraints). Notice that the splitting of any parameter to a high and low distance version does not affect the form of the $\bf{Y}$ data vector and the covariance matrix $\bf{C}$ of the data. In order to properly place the 0 entries of each one of the new columns, a distance must be assigned to every entry of $\bf{Y}$. We have specified this distance for each host using literature resources or the best fit distance moduli of each host. These distances along with other useful properties of the Cepheids used in our analysis are shown in Table \ref{tab:props}.
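The column-splitting step above can be sketched as a small matrix operation: given a distance assigned to each row of $\bf{Y}$, column $j$ of $\bf{L}$ is masked into a low- and a high-distance copy. The toy matrix below stands in for the $3492\times 47$ release matrix.

```python
import numpy as np

def split_column(L, D_rows, j, Dc):
    """Replace column j of L by a low-distance and a high-distance copy,
    masked by the distance D assigned to each row; shape (n, m) -> (n, m+1)."""
    low = (D_rows < Dc).astype(float)
    col_low = L[:, j] * low              # q_j^< : rows with D < D_c
    col_high = L[:, j] * (1.0 - low)     # q_j^> : rows with D >= D_c
    return np.column_stack([L[:, :j], col_low, col_high, L[:, j + 1:]])

L = np.arange(12.0).reshape(4, 3)        # toy 4 x 3 model matrix
D_rows = np.array([10.0, 60.0, 30.0, 50.0])
L2 = split_column(L, D_rows, j=1, Dc=47.0)
# Rows 0 and 2 keep their old column-1 entry in the low-distance column,
# rows 1 and 3 in the high-distance column; the two columns sum to the original.
```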
\begin{table}
\caption{Properties of the Cepheid hosts considered in the analysis.}
\label{tab:props}
\vspace{2mm}
\setlength{\tabcolsep}{0.2em}
\begin{adjustwidth}{-2. cm}{1.5cm}
{\footnotesize\begin{tabular}{cc ccccc cc}
\hhline{=========}
& \\
Galaxy &SnIa &Ranking in&Ranking in& Ranking in& $D$ $^{a}$ & Number of fitted &Initial point&Final point \\
& &Fig. \ref{figb102bnocov} & Table 3 in R21& Data Vector Y &$[Mpc]$ &Cepheids&in Vector Y&in Vector Y\\
& \\
\hhline{=========}
& \\
M101& 2011fe &8 &1 &1 &6.71 &259 &1 &259\\
M1337& 2006D &34 &2 &2 &38.53 &15 &260 &274\\
N0691&2005W &28 &4 &3 &35.4 &28 &275 &302\\
N1015&2009ig &24 &6 &4 &35.6 &18 &303 &320\\
N0105&2007A &42 &3 &5 &73.8 &8 &321 &328\\
N1309&2002fk &25 &7 &6 &29.4 &53 &329 &381\\
N1365&2012fr &20 &8 &7 &22.8 &46 &382 &427\\
N1448&2001el,2021pit &7 &9 &8 &16.3 &73 &428 &500\\
N1559&2005df &11 &10 &9 &19.3 &110 &501 &610\\
N2442&2015F &13 &11 &10 &20.1 &177 &611 &787\\
N2525&2018gv &21 &12 &11 &27.2 &73 &788 &860\\
N2608&2001bg &31 &13 &12 &35.4 &22 &861 &882\\
N3021&1995al &35 &14 &13 &26.6 &16 &883 &898\\
N3147&1997bq,2008fv,2021hpr &32 &15 &14 &42.5 &27 &899 &925\\
N3254&2019np &16 &16 &15 &24.4 &48 &926 &973\\
N3370&1994ae &14 &17 &16 &24 &73 &974 &1046\\
N3447&2012ht &9 &18 &17 &20.8 &101 &1047 &1147\\
N3583&2015so &15 &19 &18 &34.4 &54 &1148 &1201\\
N3972&2011by &17 &20 &19 &15.1 &52 &1202 &1253\\
N3982&1998aq &19 &21 &20 &19 &27 &1254 &1280\\
N4038&2007sr &29 &22 &21 &29.3 &29 &1281 &1309\\
N4424&2012cg &40 &23 &22 &16.4 &9 &1310 &1318\\
N4536&1981B &12 &24 &23 &31.9 &40 &1319 &1358\\
N4639&1990N &27 &25 &24 &19.8 &30 &1359 &1388\\
N4680&1997bp &38 &26 &25 &42.1 &11 &1389 &1399\\
N5468&1999cp,2002cr&18 &27 &26 &46.3 &93 &1400 &1492\\
N5584&2007af &10 &28 &27 &28 &165 &1493 &1657\\
N5643&2013aa,2017cbv &6 &29 &28 &20.7 &251 &1658 &1908\\
N5728&2009Y &41 &30 &29 &44.9 &20 &1909 &1928\\
N5861&2017erp &26 &31 &30 &30.3 &41 &1929 &1969\\
N5917&2005cf &37 &32 &31 &31 &14 &1970 &1983\\
N7250&2013dy &22 &33 &32 &12.8 &21 &1984 &2004\\
N7329&2006bh &33 &34 &33 &46.8 &31 &2005 &2035\\
N7541&1998dh &30 &35 &34 &34.4 &33 &2036 &2068\\
N7678&2002dp &36 &36 &35 &46.6 &16 &2069 &2084\\
N0976&1999dq &39 &5 &36 &60.5 &33 &2085 &2117\\
U9391&2003du &23 &37 &37 &29.4 &33 &2118 &2150\\
\hline
Total& & & & & &2150&\;\;\;\;\;1 &2150\\
\hline
N4258&Anchor &4 &38 &38 &7.4 &443 &2151 &2593\\
M31&Supporting &5 &39 &39 &0.86 &55 &2594 &2648\\
LMC$^b$&Anchor &2 &40 &40 &0.05 &270&2649 &2918\\
SMC$^b$& Supporting &3 &41 &41 &0.06 &143 &2919 &3061\\
LMC$^c$& Anchor &2 &40 &42 &0.05 &69 &3062&3130\\
\hline
Total& & & & & &980&2151 &3130\\
\hline
\hline
Total All& & & & & &3130&\;\;\;\;\;1 &3130\\
\hline
\hhline{=========}
\\
\end{tabular} }
\\
{\footnotesize NOTE - (a) Distances from \href{https://ned.ipac.caltech.edu/}{NASA/IPAC Extragalactic Database}. (b) From the ground. (c) From HST.}
\end{adjustwidth}
\end{table}
We have thus considered four generalizations of the baseline SH0ES analysis, each one corresponding to a high-low distance split of one of the four modeling parameters. For each generalization we obtained the best fit values and uncertainties of all 48 parameters of the generalized parameter vector $\bf{q}$ for several values of the critical distance $D_c\in[1,60]\,Mpc$ which defined in each case the high-low distance data bins. The best fit values with uncertainties for the high-low $D$ split parameter (green and blue points) and for $H_0$ (red points), for each generalization considered, are shown in Figs \ref{figmb}, \ref{figbw}, \ref{figmw} and \ref{figzw} in terms of $D_c$. Dotted lines correspond to the SH0ES R21 $H_0$ best fit and to the \plcdm best fit for $H_0$. The following comments can be made on the results of Figs \ref{figmb}, \ref{figbw}, \ref{figmw} and \ref{figzw}.
\begin{itemize}
\item When the SnIa absolute magnitude $M_B$ is allowed to change at $D_c$ ($M_B$ transition) and for $D_c > 47Mpc$, the best fit value of $H_0$ drops spontaneously to the best fit \plcdm value, albeit with a larger uncertainty: $67.33\pm 4.65\; km\; s^{-1}\; Mpc^{-1}$ (see Fig. \ref{figmb} and the second row of Table \ref{tab:res}). This remarkable result appears with no prior or other information from the inverse distance ladder results. Clearly, there are increased uncertainties of the best fit parameter values for this range of $D_c$ because the most distant usable Cepheid hosts are at distances $46.8Mpc$ (N7329), $60.5Mpc$ (N0976) and $73.8Mpc$ (N0105) and only two of them are at distances beyond $47Mpc$. These two hosts (N0976 and N0105) have a total of 41 Cepheids and 4 SnIa. Due to the large uncertainties involved there is a neutral model preference for this transition degree of freedom at $D_c>47Mpc$ (the small drop of $\chi^2$ by $\Delta \chi^2 \simeq -1.5$ is balanced by the additional parameter of the model). This however changes dramatically if the inverse distance ladder constraint on $M_B$ is included in the vector $\bf{Y}$ as discussed below.
\item For all four modeling parameters considered, there is a sharp increase in the absolute difference between the high-low distance best fit parameter values for $D_c\gtrsim 47Mpc$. The statistical significance of this split however is low due to the relatively small number of available Cepheids at $D>47Mpc$.
\item The best fit value of $H_0$ changes significantly when the SnIa absolute magnitude $M_B$ is allowed to make a transition at $D_c>47Mpc$ but is not significantly affected if the three Cepheid modeling parameters $M_H^W$, $b_W$ and $Z_W$ are allowed to make a transition at any distance. This is probably due to the large uncertainties involved and to the fact that $H_0$ is only indirectly connected with the three Cepheid modeling parameters.
\item Each one of the transition degrees of freedom mildly reduces $\chi^2$ thus improving the fit to the data but these transition models are not strongly preferred by model selection criteria which penalize the extra parameter implied by these models. This is demonstrated in Figs. \ref{figchiminall}, \ref{figdaicdbicall} which show the values of $\Delta \chi^2_{min}$, $\Delta AIC$ and $\Delta BIC$ of the four transition models with respect to the baseline SH0ES model for various $D_c$ transition distances. As discussed below this changes dramatically if the inverse distance ladder constraint on $M_B$ is introduced in the analysis.
\end{itemize}
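The transition degree of freedom discussed above amounts to splitting the single design-matrix column that multiplies a modeling parameter into a low-distance and a high-distance column. The following minimal sketch (function name and interface are ours, not part of the released SH0ES code) illustrates this construction for a generic parameter:

```python
import numpy as np

# Sketch of the transition degree of freedom: the single column of the model
# matrix L multiplying a parameter (e.g. M_B) is replaced by two columns,
# one active for hosts at D <= D_c (parameter p^<) and one for D > D_c
# (parameter p^>). The helper name split_column is hypothetical.
def split_column(col, distances, Dc):
    """col: (N,) original column of L; distances: (N,) distance assigned to
    each row of L. Returns the two columns multiplying p^< and p^>."""
    low = np.where(distances <= Dc, col, 0.0)
    high = np.where(distances > Dc, col, 0.0)
    return low, high
```

By construction the two new columns sum to the original one, so setting $p^< = p^>$ recovers the baseline model exactly.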
\subsection{The effect of the inverse distance ladder constraint}
In view of the spontaneous transition of the SnIa absolute magnitude $M_B$ that appears to lead to a Hubble tension resolution, while being mildly favored by the SH0ES data ($\Delta \chi^2 \simeq -1.5$) without any inverse distance ladder information included in the fit, the following question arises:
\begin{center}
{\it How does the level of preference for the $M_B$ transition model (compared to the SH0ES baseline model) change if an additional constraint is included in the analysis obtained from the inverse distance ladder best fit for $M_B$?}
\end{center}
The inverse distance ladder constraint on $M_B$ is \cite{Camarena:2021jlr, Marra:2021fvf,Gomez-Valent:2021hda}
\be
M_B^{P18} = -19.401 \pm 0.027
\label{mbconstr}
\ee
In order to address this question we modify the analysis by adding one more constraint to the $\bf{Y}$ data vector: after entry 3215, which corresponds to the 8th constraint, we add the value $-19.401$ for the $M_B$ constraint. We also add a row to the model matrix $\bf{L}$ after row 3215, with all entries set to zero except the single entry corresponding to the constrained absolute magnitude parameter. A column is added to $\bf{L}$ after column 43 (the column of $M_{B}^<$) to accommodate the high distance parameter $M_B^>$, as described above. The new constraint is assigned a large distance (larger than the distance of the most distant SnIa of the sample) so that it only affects the high distance parameter $M_{B}^>$: the entry of $\bf{L}$ at row 3216, column 43 is set to 0 for all $D_c$, while the entry at column 44 of the same row is set to 1 for all $D_c$. To accommodate the uncertainty of the new constraint we also add a row and column to the covariance matrix after row/column 3215, with a single nonzero entry on the diagonal equal to $\sigma_{M_B}^2=0.027^2=0.000729$. Thus, after the implementation of the constraint in the $M_B$ transition model, the $\bf{Y}$ vector has 3493 entries, the $\bf{L}$ model matrix has dimensions $3493\times 48$, the $\bf{q}$ parameter vector has $48$ entries and the covariance matrix $\bf{C}$ has dimensions $3493\times 3493$. In a similar way we may implement the constraint (\ref{mbconstr}) in the SH0ES model without allowing for the additional transition degree of freedom, and use model selection criteria to compare the baseline with the transition model in the presence of the inverse distance ladder constraint.
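The bookkeeping described above can be summarized in a few lines of linear algebra. The following is a minimal illustration, with hypothetical small dimensions rather than the actual SH0ES vectors, of how the inverse distance ladder prior is appended as one extra "measurement" row; the function name and interface are ours:

```python
import numpy as np

# Minimal sketch: append the inverse distance ladder prior on M_B^> as one
# extra measurement row of the data vector Y, model matrix L and covariance
# matrix C. The helper add_mb_constraint is hypothetical.
def add_mb_constraint(Y, L, C, col_mb_high, mb_value=-19.401, mb_sigma=0.027):
    """Y: (N,) data vector; L: (N,M) model matrix; C: (N,N) covariance;
    col_mb_high: column index of the high-distance parameter M_B^>."""
    N, M = L.shape
    Y_new = np.append(Y, mb_value)
    row = np.zeros(M)
    row[col_mb_high] = 1.0            # the new row constrains only M_B^>
    L_new = np.vstack([L, row])
    C_new = np.zeros((N + 1, N + 1))
    C_new[:N, :N] = C
    C_new[N, N] = mb_sigma**2         # uncorrelated prior uncertainty
    return Y_new, L_new, C_new
```

Because the prior is uncorrelated with the other measurements, only the new diagonal element of the covariance matrix is nonzero.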
The new constraints on $H_0$ and on the parameters $M_{B}^<$, $M_{B}^>$ emerging from this modified analysis are shown in Fig. \ref{figmb2} in terms of $D_c$. The corresponding quality of fit, expressed via the value of $\chi_{min}^2$, and model selection \cite{Kerscher:2019pzk,Liddle:2007fy}, expressed via the AIC \cite{akaike1974new} and BIC criteria, are shown in Fig. \ref{figmbcon}. The definitions and properties of the AIC and BIC criteria are described in Appendix \ref{AppendixD}.
Clearly, the transition degree of freedom at $D_c\simeq 50Mpc$ which is mildly preferred by the data even in the absence of the inverse distance ladder constraint, is strongly preferred by the data in the presence of the constraint while the $\sigma$-distance between the low distance best fit parameter $M_{B}^<$ and the high distance $M_B^>$ reaches a level close to $4\sigma$. These results are described in more detail in Table \ref{tab:res}. The full list of the best fit values of all the parameters of the vector $\bf{q}$ for the SH0ES baseline model with and without the inverse distance ladder constraint in the data vector $\bf{Y}$ is shown in Table \ref{tab:resall} along with the corresponding uncertainties.
\begin{table}
\caption{A model comparison (baseline vs transitions) and best fit parameter values in the absence (top five rows) and in the presence (last two rows) of the inverse distance ladder constraint. Notice that in the presence of the inverse distance ladder constraint and the transition degree of freedom, the best fit value of $H_0$ is identical with the inverse distance ladder result of the completed SDSS-IV extended BAO survey \cite{eBOSS:2020yzd}.}
\label{tab:res}
\vspace{2mm}
\begin{adjustwidth}{-3.8cm}{0.3cm}
\setlength{\tabcolsep}{0.25em}
{\footnotesize\begin{tabular}{cccccccccc}
\hhline{==========}
& \\
Model & $\chi^2_{min}$ & $\chi_{red}^2$ $^a$ & $\Delta AIC $& $\Delta BIC$ & $H_0$ &$M_B$&$M_H^W$&$\Delta b_W$&$Z_W$\\
& &&& &$[Km\,s^{-1}\,Mpc^{-1}]$&$[mag]$&$[mag]$&$[mag/dex]$&$ [mag/dex]$\\
& \\
\hhline{==========}
& \\
Baseline&3552.76&1.031&0 &0 &$73.043\pm 1.007$ &$-19.253\pm0.029 $&$-5.894\pm 0.018$ & $-0.013\pm0.015$ &$ -0.217\pm 0.045$\\
\\
\hline
\\
Transition$^b$ $M_B$&3551.31 &1.031&0.55 & 6.71&$67.326\pm 4.647$ & $-19.250\pm 0.029$& $-5.894\pm 0.018$&$-0.013\pm 0.015$&$-0.217 \pm 0.045$ \\
& & & && &$ -19.430\pm 0.150$& & & \\
& & & && &$1.2\sigma$& & & \\
\hline
\\
Transition$^b$ $M_H^W$&3551.31 & 1.031 &0.55&6.71&$73.162\pm 1.014$ & $-19.250\pm 0.029$&$-5.894\pm 0.018$ &$-0.013\pm 0.015$ &$-0.217\pm 0.045$ \\
& & &&& &&$-5.713 \pm 0.151$ & & \\
& & & && & &$1.2\sigma$& & \\
\hline
\\
Transition$^b$ $Z_W$&3549.99 &1.030 &-0.77&5.39&$72.981\pm 1.007 $ & $-19.255 \pm 0.029 $ & $-5.894\pm 0.018 $ & $-0.014 \pm 0.015 $ & $-0.217 \pm 0.045 $\\
& & &&& && & & $\;\;2.588\pm 1.686 $ \\
& & & && && & & $1.7\sigma$ \\
\hline
\\
Transition$^b$ $b_W$&3550.86 &1.030 &0.10& 6.26 & $73.173 \pm 1.013$ & $-19.249 \pm 0.029 $ & $-5.894 \pm 0.018$ & $-0.013\pm 0.015$ & $-0.217\pm 0.045$ \\
& & & && && & $\;\;\:0.315\pm 0.239$ & \\
& & & && && &$1.4\sigma$ & \\
\hline
\\
{\bf Baseline+Constraint$^c$}&3566.78&1.035&0 &0 &$70.457\pm 0.696$ &$-19.332\pm0.020 $&$-5.920\pm 0.017$ & $-0.026\pm0.015$ &$ -0.220\pm 0.045$\\
\\
\hline
\\
{\bf Transition$^{b,c}$ $M_B$+Constraint}&3551.34 &1.031&-13.44 &-7.27&$68.202\pm 0.879$ & $-19.249\pm 0.029$& $-5.893\pm 0.018$&$-0.013\pm 0.015$&$-0.217 \pm 0.045$ \\
& & & && &$ -19.402\pm 0.027$& & & \\
& & & && &$3.9\sigma$& & & \\
\hhline{==========}
\\
\end{tabular} }
{\footnotesize NOTE - (a) $\chi_{red}^2=\chi_{min}^2/dof$, where $dof=N-M$ is typically the number of degrees of freedom (with $N$ the number of datapoints used in the fit and $M$ the number of free parameters) for each model. (b) At critical distance $D_c\simeq 50Mpc$. (c) With constraint $M_B=-19.401 \pm 0.027 $.}
\end{adjustwidth}
\end{table}
The following comments can be made on the results shown in Table \ref{tab:res}.
\begin{itemize}
\item The $M_B$ transition degree of freedom resolves the $H_0$ tension both in the absence of the inverse distance ladder constraint (second row of Table \ref{tab:res}) and in the presence of it (last row of Table \ref{tab:res}).
\item In the presence of the inverse distance ladder constraint, the model with the $M_B$ transition degree of freedom at $D_c=50Mpc$ is strongly preferred over the baseline $SH0ES$ model as indicated by the model selection criteria despite the additional parameter it involves (comparison of the last two rows of Table \ref{tab:res}).
\item The transition degree of freedom, when allowed in each one of the other three modeling parameters, does not lead to a spontaneous resolution of the Hubble tension since the best fit value of $H_0$ is not significantly affected. However, it does induce an increased absolute difference between the best fit values of the high distance and low distance parameters, which is not statistically significant due to the large uncertainties of the bin with $D>D_c\simeq 50Mpc$ (see also Figs. \ref{figbw}, \ref{figmw} and \ref{figzw}).
\end{itemize}
The above comments indicate that interesting physical and/or systematic effects may be taking place at distances at or beyond $50Mpc$ in the SH0ES data and therefore more and better quality Cepheid/SnIa data are needed at these distances to clarify the issue. This point is further supported by the recent study of Ref. \cite{Wojtak:2022bct} indicating that SnIa in the Hubble flow (at distances $D>90Mpc$) appear to have different color calibration properties than SnIa in Cepheid hosts (at distances $D<75Mpc$).
The hints for a transition in the SnIa absolute luminosity and magnitude $M_B$ are also demonstrated in Fig. \ref{figMB2} where we show the mean\footnote{Some hosts have more than one SnIa and more than one light curve and thus averaging with proper uncertainties was implemented in these cases.} SnIa absolute magnitude $M_{Bi}$ for each Cepheid+SnIa host $i$ obtained from the equation
\be
M_{Bi}=m_{Bi}^0-\mu_i
\label{mbi}
\ee
where $m_{Bi}^0$ is the measured apparent magnitude of the SnIa and $\mu_i$ are the best fit host distance moduli obtained using the SH0ES baseline model (left panel of Fig. \ref{figMB2}) and the $M_B$ transition model (right panel) which allows (but does not enforce) an $M_B$ transition at $D_c=50 Mpc$. Notice that when the $M_B$ transition degree of freedom is allowed in the analysis, the best fit values of $M_{Bi}$ for the more distant hosts N0976 ($D=60.5Mpc$) and N0105 ($D=73.8 Mpc$) spontaneously drop to the inverse distance ladder calibrated value range.
The inverse distance ladder calibrated values of the absolute magnitudes $M_{Bi}$ of SnIa in the Hubble flow are obtained by assuming $H_0=H_0^{P18}=67.36\pm0.54$~km~s$^{-1}$~Mpc$^{-1}$ and using the following equation
\be
M_{Bi}=m(z_{Bi}^0)+5\log_{10}\left[H_0^{P18}\cdot Mpc/c \right]-5\log_{10}\left[D_L(z_i) \right]-25
\ee
where $D_L(z_i)$ is the Hubble free luminosity distance in the context of \plcdm and $m(z_{Bi}^0)$ are the binned corrected SnIa apparent magnitudes of the Pantheon sample. The corresponding binned Cepheid+SnIa host values of $M_{B}$ obtained assuming the baseline SH0ES model (red points) and the $M_B$ transition model ($D_c=50Mpc$, green points) are shown in Fig. \ref{figMbbin} along with the inverse distance ladder calibrated binned $M_{B}$ of the Hubble flow SnIa of the Pantheon dataset (blue points). When the transition degree of freedom is allowed, the data excite it and a hint for a transition appears (the green data point of the transition model is clearly below the red point corresponding to the constant $M_B$ SH0ES baseline model), even though the statistical significance of the indicated transition is low due to the small number of Cepheids (41) included in the last bin with $D\in [50Mpc,75Mpc]$.
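The two calibration routes for $M_{Bi}$ can be checked against each other numerically. The sketch below (illustrative numbers, not Pantheon data; the function name is ours) implements the inverse-distance-ladder formula above, taking $D_L$ as the dimensionless Hubble-free luminosity distance in units of $c/H_0$, and verifies that for an object in a pure Hubble flow it agrees with $M_B = m_B^0 - \mu$:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

# M_B = m + 5 log10(H0 * Mpc / c) - 5 log10(D_L(z)) - 25, with D_L the
# dimensionless Hubble-free luminosity distance (in units of c/H0) and H0
# in km/s/Mpc, so that H0/c carries units of 1/Mpc.
def absolute_magnitude(m_app, DL_hubble_free, H0=67.36):
    return m_app + 5*np.log10(H0 / C_KMS) - 5*np.log10(DL_hubble_free) - 25
```

For a source at physical luminosity distance $D_L^{\rm Mpc}$, the Hubble-free distance is $D_L^{\rm Mpc}\,H_0/c$, and the formula reduces to $m - 5\log_{10}D_L^{\rm Mpc} - 25 = m - \mu$, i.e. Eq. (\ref{mbi}).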
\section{Conclusion}
\label{sec:Conclusion}
We have described a general framework for the generalization of the Cepheid+SnIa modeling in the new SH0ES data. Such a modeling generalization is motivated by the increased statistical significance of the Hubble tension and may hint towards new degrees of freedom favored by the data. Such degrees of freedom may be attributed either to new physics or to unknown systematics hidden in the data.
In the present analysis we have focused on a particular type of new modeling degree of freedom allowing for a transition of any one of the four Cepheid/SnIa modeling parameters at a distance $D_c$. However, our analysis can be easily extended to different, physically motivated degrees of freedom. Examples include possible modeling parameter dependence on properties other than distance (e.g. dust extinction, color and stretch calibration properties etc.). Other degrees of freedom could involve the modeling parameters that have been absorbed in the R21 SH0ES data, like the Cepheid dust extinction parameter $R_W$ and the color and stretch SnIa calibration parameters $\beta$ and $\alpha$. These degrees of freedom may also be probed and excited by the fit to the data if allowed by the modeling \cite{Wojtak:2022bct}.
We have demonstrated that our proposed transition degrees of freedom are mildly excited by the SH0ES data and that, in the case of the SnIa absolute magnitude $M_B$ transition degree of freedom at $D_c\simeq 50Mpc$, the best fit value of $H_0$ shifts spontaneously to a value almost identical with the \plcdm best fit value, thus eliminating the Hubble tension. However, the high distance bin in this case involves only 41 Cepheids and 4 SnIa and therefore the best fit value of $M_B^>$, which effectively also fixes $H_0$, involves significant uncertainties.
In the presence of the inverse distance ladder constraint on the high distance bin parameter $M_B^>$ of the $M_B$ transition model and on the single $M_B$ parameter of the SH0ES model, the uncertainties reduce dramatically. The Hubble tension is fully resolved only in the presence of the $M_B$ transition degree of freedom at $D_c\simeq 50Mpc$. This transition model is also strongly preferred over the baseline SH0ES model with the constraint by both model selection criteria AIC and BIC, despite the penalty imposed by AIC and especially BIC on the additional parameter involved in the new modeling. This behavior, along with other recent studies \cite{Wojtak:2022bct}, hints towards the need for more detailed Cepheid+SnIa calibrating data at distances $D\gtrsim 50Mpc$, i.e. at the high end of rung 2 of the distance ladder.
Our comprehensively described generalized modeling approach opens a new direction in the understanding of the Hubble tension and may be extended by the introduction of a wide range of new degrees of freedom in the SH0ES data analysis and also by the introduction of new constraints motivated by other cosmological data. Such extensions could for example involve more distance bins in the distance dependence of the four main modeling parameters. In that case the relevant column of the modeling matrix $\bf{L}$ would have to be replaced by more than two columns (one for each distance bin). Such bins could also be defined not in terms of distance but in terms of other Cepheid/SnIa properties (e.g. Cepheid metallicity or period and/or SnIa light curve color or stretch). In addition, other parameters beyond the four basic modeling parameters may be considered including the dust extinction parameter $R_W$ and/or the SnIa light curve color and stretch parameters. Also, similar modeling generalizations may be implemented on different distance calibrator data used for the measurement of $H_0$ such as the TRGB data \cite{Freedman:2019jwv}.
Physical motivation is an important factor for the evaluation of any new modeling degree of freedom especially if it is favored by the data. A transition of the SnIa luminosity at a particular recent cosmic time could be induced by a sudden change of the value of a fundamental physics constant e.g. the gravitational constant in the context of a recent first order phase transition \cite{Coleman:1977py,Callan:1977pt,Patwardhan:2014iha} to a new vacuum of a scalar-tensor theory or in the context of a generalization of the symmetron screening mechanism \cite{Perivolaropoulos:2022txg}. A similar first order transition is implemented in early dark energy models \cite{Niedermann:2020dwg} attempting to change the last scattering sound horizon scale without affecting other well constrained cosmological observables. Thus, even though no relevant detailed analysis has been performed so far, there are physical mechanisms that could potentially induce the SnIa luminosity transition degree of freedom.
The emerging new puzzles challenging our standard models may soon pave the way to exciting discoveries of new physical laws. The path to these discoveries goes through the deep and objective understanding of the true information that is hidden in the cosmological data. The present analysis may be a step in that direction.
\funding{This work was supported by the Hellenic Foundation for Research and
Innovation (HFRI - Project No: 789). }
\dataavailability{The numerical analysis files for the reproduction of the figures can be found in the \href{https://github.com/FOTEINISKARA/A-reanalysis-of-the-SH0ES-data-for-H_0}{A reanalysis of the SH0ES data for $H_0$} GitHub repository under the MIT license.}
\acknowledgments{We thank Adam Riess, Dan Scolnic, Eoin Colgain, and Radoslaw Wojtak for useful comments.}
\conflictsofinterest{The authors declare no conflict of interest.}
\appendixtitles{yes} %
\newpage
\appendixstart
\appendix
\section[\appendixname~\thesection]{Covariance Matrix}
\label{AppendixA}
A schematic form of the non-diagonal covariance matrix used in our analysis is shown below. The symbols $\sigma_{tot,i}^2$ indicate the (in general non-diagonal) covariance submatrix within host $i$, while the symbols $Z_{cov}$ indicate submatrices that correlate the uncertainties between different hosts.
\begin{adjustwidth}{0.cm}{1cm}
\be
\nonumber
\begin{turn}{90}
{\bf {C}=\setcounter{MaxMatrixCols}{50}{\footnotesize
$\left(\arraycolsep=1.2pt\def\arraystretch{1.8}\begin{array}[c]{ccccccccccccccccccccc}
\sigma_{tot,1}^2&\ldots&Z_{cov}&Z_{cov}&0&0&0&\ldots&0&0&0&0&0&0&0&0&0&0&\ldots&0\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
Z_{cov}&\ldots&\sigma_{tot,37}^2&Z_{cov}&0&0&0&\ldots&0&0&0&0&0&0&0&0&0&0&\ldots&0\\
\hline
Z_{cov}&\ldots&Z_{cov}&\sigma_{tot,N4258}^2&0&0&0&\ldots&0&0&0&0&0&0&0&0&0&0&\ldots&0\\
0&\ldots&0&0&\sigma_{tot,M31}^2&0&0&\ldots&0&0&0&0&0&0&0&0&0&0&\ldots&0\\
0&\ldots&0&0&0&\sigma_{tot,LMC}^2&0&\ldots&0&0&0&0&0&0&0&0&0&0&\ldots&0\\
\hline
0&\ldots&0&0&0&0&\sigma_{M_B,1}^2&\ldots&Sn_{cov}&0&0&0&0&0&0&0&0&Sn_{cov}&\ldots&Sn_{cov}\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
0&\ldots&0&0&0&0&Sn_{cov}&\ldots&\sigma_{M_B,77}^2&0&0&0&0&0&0&0&0&Sn_{cov}&\ldots&Sn_{cov}\\
\hline
0&\ldots&0&0&0&0&0&\ldots&0&\sigma_{M_H^W,HST}^2&0&0&0&0&0&0&0&0&\ldots&0\\
0&\ldots&0&0&0&0&0&\ldots&0&0&\sigma_{M_H^W,Gaia}^2&0&0&0&0&0&0&0&\ldots&0\\
0&\ldots&0&0&0&0&0&\ldots&0&0&0&\sigma_{Z_W,Gaia}^2&0&0&0&0&0&0&\ldots&0\\
0&\ldots&0&0&0&0&0&\ldots&0&0&0&0&\sigma_x^2&0&0&0&0&0&\ldots&0\\
0&\ldots&0&0&0&0&0&\ldots&0&0&0&0&0&\sigma_{ground,zp}^2&0&0&0&0&\ldots&0\\
0&\ldots&0&0&0&0&0&\ldots&0&0&0&0&0&0&\sigma_{b_W}^2&0&0&0&\ldots&0\\
0&\ldots&0&0&0&0&0&\ldots&0&0&0&0&0&0&0&\sigma_{\mu,N4258}^2&0&0&\ldots&0\\
0&\ldots&0&0&0&0&0&\ldots&0&0&0&0&0&0&0&0&\sigma_{\mu,LMC}^2&0&\ldots&0\\
\hline
0&\ldots&0&0&0&0&Sn_{cov}&\ldots&Sn_{cov}&0&0&0&0&0&0&0&0&\sigma_{M_B,z,1}^2&\ldots&Sn_{cov}\\
\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots&\ldots\\
0&\ldots&0&0&0&0&Sn_{cov}&\ldots&Sn_{cov}&0&0&0&0&0&0&0&0&Sn_{cov}&\ldots&\sigma_{M_B,z,277}^2\\
\end{array}
\right)$ }}
\end{turn}
\ee
\end{adjustwidth}
\renewcommand\thefigure{\thesection.\arabic{figure}}
\section[\appendixname~\thesection]{Analytic minimization of $\chi^2$}
\label{AppendixB}
\setcounter{figure}{0}
The proof of Eqs. (\ref{bfpar1}) and (\ref{errmat}) that lead to the best fit parameter values and their uncertainties through the analytic minimization of $\chi^2$ may be sketched as follows\footnote{For more details see \url{https://people.duke.edu/~hpgavin/SystemID/CourseNotes/linear-least-squres.pdf}}:\\
Using the matrix of measurements (data vector) $\bf{Y}$, the matrix of parameters $\bf{q}$ and the equation (or design) matrix $\bf{L}$ with the measurement error matrix (covariance matrix) $\bf{C}$ the $\chi^2$ statistic is expressed as
\be
\chi^2=(\bf{Y}-\bf{Lq})^T\bf{C}^{-1}(\bf{Y}-\bf{Lq})=\bf{q}^T\bf{L}^T\bf{C}^{-1}\bf{L}\bf{q}-2\bf{q}^T\bf{L}^T\bf{C}^{-1}\bf{Y}+\bf{Y}^T\bf{C}^{-1}\bf{Y}
\label{chi2p}
\ee
The $\chi^2$ is minimized with respect to the parameters $\bf{q}$, by solving the equation
\be
\frac{\partial\chi^2}{\partial \bf{q}}\Big|_{{\bf q}_{best}}=0 \;\Rightarrow\; 2\bf{L}^T\bf{C}^{-1}\bf{L}{\bf q}_{best}-2\bf{L}^T\bf{C}^{-1}\bf{Y}=0
\label{chi2m}
\ee
Thus the maximum-likelihood parameters are given as
\be
{\bf q}_{best}=(\bf{L}^T\bf{C}^{-1}\bf{L})^{-1}\bf{L}^T\bf{C}^{-1}\bf{Y}
\label{qbest}
\ee
We have tested the validity of this equation numerically by calculating $\chi^2$ for the SH0ES baseline model and showing that indeed it has a minimum at the analytically predicted parameter values provided by (\ref{qbest}). This is demonstrated in Fig. \ref{plchi2h02} where we show $\chi^2(H_0)$ with the rest of the parameters fixed at their analytically predicted best fit values. As expected the minimum is obtained at the analytically predicted value of $H_0$.
The standard errors squared of the parameters in $\bf{q_{best}}$ are given as the diagonal elements of the transformed covariance matrix
\be
\bf{\varSigma_{kl}}=\sum_{i}\sum_{j}\left[\frac{\partial {\bf q}_{best,k}}{\partial Y_i}\right] \bf{C_{ij}}\left[\frac{\partial {\bf q}_{best,l}}{\partial Y_j}\right]
\ee
or
\be
\bf{\varSigma}=\left[\frac{\partial {\bf q}_{best}}{\partial Y}\right] \bf{C}\left[\frac{\partial {\bf q}_{best}}{\partial Y}\right]^T
\ee
Thus
\begin{eqnarray}
\bf{\varSigma}&=&(\bf{L}^T\bf{C}^{-1}\bf{L})^{-1}\bf{L}^T\bf{C}^{-1}\bf{C}\left[(\bf{L}^T\bf{C}^{-1}\bf{L})^{-1}\bf{L}^T\bf{C}^{-1}\right]^T \nonumber \\
&=&(\bf{L}^T\bf{C}^{-1}\bf{L})^{-1}\bf{L}^T\bf{C}^{-1}\bf{C}\bf{C}^{-1}\bf{L}(\bf{L}^T\bf{C}^{-1}\bf{L})^{-1} \nonumber \\
&=&(\bf{L}^T\bf{C}^{-1}\bf{L})^{-1}
\end{eqnarray}
The standard errors provided by this equation for the SH0ES baseline model are consistent and almost identical with the published results of R21.
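The analytic minimization above is standard generalized least squares and is easy to verify numerically on a toy problem. The sketch below (a small straight-line fit with correlated errors, not the SH0ES matrices; the function name is ours) implements ${\bf q}_{best}=(\bf{L}^T\bf{C}^{-1}\bf{L})^{-1}\bf{L}^T\bf{C}^{-1}\bf{Y}$ and $\bf{\varSigma}=(\bf{L}^T\bf{C}^{-1}\bf{L})^{-1}$:

```python
import numpy as np

# Generalized least squares: best fit parameters and their covariance from
# the data vector Y, design matrix L and measurement covariance C.
def gls_fit(Y, L, C):
    Cinv = np.linalg.inv(C)
    F = L.T @ Cinv @ L                # Fisher-like normal matrix
    Sigma = np.linalg.inv(F)          # parameter covariance (L^T C^-1 L)^-1
    q_best = Sigma @ L.T @ Cinv @ Y   # analytic chi^2 minimum
    return q_best, Sigma
```

For noiseless data $\bf{Y}=\bf{L}\bf{q}_{true}$ the estimator recovers $\bf{q}_{true}$ exactly for any positive definite $\bf{C}$, which is a convenient sanity check of an implementation.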
\renewcommand\thefigure{\thesection.\arabic{figure}}
\renewcommand\thetable{\thesection.\arabic{table}}
\section[\appendixname~\thesection]{Reanalysis of Individual Cepheid $m_H^W$ slope data}
\label{AppendixC}
\setcounter{figure}{0}
\setcounter{table}{0}
We use the released Cepheid P-L data ($m_H^W-\log P$) to find the best fit slope $b_W$ in each one of the 40 Cepheid hosts among which the 3130 Cepheid magnitudes are distributed. We thus reproduce Figs. 9 and 10 of R21. Since the released Cepheid magnitude data do not include outliers, we focus on the reproduction of the black points of Fig. 10 of R21. Our motivation for this attempt includes the following:
\begin{itemize}
\item It is evident by inspection of Fig. 10 of R21 (see also Fig. \ref{figb102bnocov} below) that most of the better measured $b_W$ slopes have a smaller absolute slope than the corresponding slopes obtained from anchors+MW hosts. This makes it interesting to further test the homogeneity of these measurements and their consistency with the assumption of a global value of $b_W$ for both anchors and SnIa hosts.
\item Most of the measurements that include outliers (red points of Fig. 10 of R21) tend to amplify the above-mentioned effect, i.e. they have smaller absolute slopes $b_W$ than the slopes obtained from the anchors.
\item The slope corresponding to the host N4424 is missing from Fig. 10 of R21 while the host M1337 appears to show extreme behavior of the outliers.
\end{itemize}
Based on the above motivation we have recalculated all the $b_W$ slopes of the Cepheid hosts and also extended the analysis to metallicity slopes for the Cepheids of each host. The individual metallicity slopes $Z_{W,i}$ for each host have been obtained for the first time in our analysis and thus no direct comparison can be made with R21 for these slopes.
The Cepheid magnitude period and magnitude metallicity data used to calculate the corresponding best fit slopes $b_W$ and $Z_W$ are shown in Figs. \ref{figballnocov} and \ref{figzallnocov} for each host along with the best fit straight lines from which the individual best fit $b_{W,i}$ and $Z_{W,i}$ were obtained. The numerical values of the derived best fit slopes along with the corresponding uncertainties are shown in Table \ref{tab:slopes}.
The derived best fit slopes $b_{W,i}$ for each host along with their standard errors (obtained using Eqs. (\ref{bfpari}) and (\ref{errmati}) ) are shown in Fig. \ref{figb102bnocov} (purple points) along with the corresponding points of Fig. 10 of R21 (green points).
The best fit values of the slopes $b_{W,i}$ we have obtained using the raw Cepheid data are consistent with Fig. 10 of R21, as shown in Fig. \ref{figb102bnocov}. The agreement with the results of R21 is excellent except for three points. Two slopes, corresponding to the hosts N4038 and N1365, are slightly shifted in our analysis compared to R21 due to a small disagreement in the best fit slope and a typo of R21 in transferring the correct slope to Fig. 10.
In addition, the slope corresponding to the host N4424 is missing in Fig. 10 of R21. In the $m_H^W-\log P$ plot of R21 (their Fig. 9 where the magnitudes decrease upwards on the vertical axis) corresponding to N4424, the indicated best fit line is the green line shown in the upper right inset plot of Fig. \ref{figb102bnocov} corresponding to the green point. The correct best fit however corresponds to the purple line of the upper right inset plot with slope given by the purple point pointed by the arrow.
Thus our Fig. \ref{figb102bnocov} is in excellent agreement with Fig. 10 of R21 with the exception of three points where our plot corrects the corresponding plot of R21. In any case we stress that these three points do not play a significant role in our conclusion about the inhomogeneities of $b_W$ and $Z_W$ discussed in section \ref{seccephomog}.
\begin{table}
\caption{The best fit individual slopes $b_W$ and $Z_W$ with their standard errors $\sigma$ and scatter uncertainties $\sigma_{scat}$. The scatter uncertainty ensures that the model of a universal best fit slope becomes an acceptable model with $\chi_{min}^2/dof \simeq 1$, as described in subsection \ref{seccephomog}. The number of hosts in the Table is $N=42$ including the Milky Way (MW). For the LMC and SMC the metallicities were not provided individually for each Cepheid in R21 and thus no $Z_W$ slope could be estimated in our analysis.}
\label{tab:slopes}
\vspace{2.5mm}
\setlength{\tabcolsep}{0.6em}
\begin{adjustwidth}{0cm}{1.5cm}
{\footnotesize\begin{tabular}{ccccccccc}
\hhline{=========}
& & & & & & & & \\
Galaxy & $\qquad D^{a}\quad $ & $b_W$&$\sigma$ &$\sigma_{scat}$ &$Z_W$&$\sigma$ &$\sigma_{scat}$\\
&[Mpc] & [mag/dex] & [mag/dex] & [mag/dex] & [mag/dex]&[mag/dex] & [mag/dex] \\
& & & & & & & & \\
\hhline{=========}
& & & & & & & & \\
M101& 6.71& -2.99& 0.14& 0.18& -1.14& 0.86& 3.2\\
M1337& 38.53& -4.& 0.85& 0.18& -19.44& 6.27& 3.2\\
N0691& 35.4& -2.64& 0.57& 0.18& -0.73& 2.54& 3.2\\
N1015& 35.6& -3.63& 0.43& 0.18& 2.57& 1.3& 3.2\\
N0105& 73.8& -5.13& 2.13& 0.18& 6.21& 3.8& 3.2\\
N1309& 29.4& -3.23& 0.44& 0.18& -0.77& 0.59& 3.2\\
N1365& 22.8& -2.9& 0.37& 0.18& -1.45& 0.39& 3.2\\
N1448& 16.3& -3.21& 0.12& 0.18& -1.3& 0.39& 3.2\\
N1559& 19.3& -3.27& 0.18& 0.18& -2.55& 1.62& 3.2\\
N2442& 20.1& -3.05& 0.2& 0.18& 3.94& 0.58& 3.2\\
N2525& 27.2& -3.2& 0.37& 0.18& -1.75& 2.74& 3.2\\
N2608& 35.4& -3.24& 0.73& 0.18& 1.82& 2.33& 3.2\\
N3021& 26.6& -3.3& 0.89& 0.18& -1.03& 2.85& 3.2\\
N3147& 42.5& -4.72& 0.73& 0.18& -11.43& 9.13& 3.2\\
N3254& 24.4& -3.05& 0.31& 0.18& -0.22& 0.57& 3.2\\
N3370& 24& -3.55& 0.23& 0.18& -1.46& 0.4& 3.2\\
N3447& 20.8& -3.48& 0.13& 0.18& -0.62& 0.66& 3.2\\
N3583& 34.4& -2.57& 0.31& 0.18& -0.06& 0.62& 3.2\\
N3972& 15.1& -3.58& 0.32& 0.18& 4.9& 1.58& 3.2\\
N3982& 19& -2.47& 0.36& 0.18& 0.46& 0.47& 3.2\\
N4038& 29.3& -2.63& 0.47& 0.18& 0.54& 2.07& 3.2\\
N4424& 16.4& 1.01& 1.26& 0.18& 0.29& 4.33& 3.2\\
N4536& 31.9& -3.25& 0.19& 0.18& -2.82& 0.56& 3.2\\
N4639& 19.8& -3.08& 0.51& 0.18& -2.12& 0.64& 3.2\\
N4680& 42.1& -3.12& 1.22& 0.18& 1.48& 3.27& 3.2\\
N5468& 46.3& -2.62& 0.35& 0.18& 1.12& 0.74& 3.2\\
N5584& 28& -3.26& 0.16& 0.18& 0.4& 0.36& 3.2\\
N5643& 20.7& -3.1& 0.12& 0.18& 2.9& 1.17& 3.2\\
N5728& 44.9& -5.09& 1.34& 0.18& -15.19& 5.71& 3.2\\
N5861& 30.3& -2.5& 0.45& 0.18& 1.65& 0.72& 3.2\\
N5917& 31& -4.54& 1.17& 0.18& -17.82& 6.56& 3.2\\
N7250& 12.8& -2.75& 0.39& 0.18& -8.33& 4.44& 3.2\\
N7329& 46.8& -3.04& 0.76& 0.18& 6.74& 1.86& 3.2\\
N7541& 34.4& -3.54& 0.71& 0.18& -0.04& 1.08& 3.2\\
N7678& 46.6& -2.73& 1.03& 0.18& 2.16& 2.27& 3.2\\
N0976& 60.5& -1.79& 1.24& 0.18& 3.71& 2.73& 3.2\\
U9391& 29.4& -3.33& 0.43& 0.18& -4.2& 1.51& 3.2\\
N4258& 7.4& -3.32& 0.06& 0.18& -7.67& 0.31& 3.2\\
M31& 0.86& -3.29& 0.07& 0.18& -4.85& 0.35& 3.2\\
LMC$^b$& 0.05& -3.31&0.017& 0.18& -& -& 3.2\\
SMC$^b$& 0.06& -3.31&0.017& 0.18& -& -& 3.2\\
MW$^c$& 0& -3.26& 0.05& 0.18& -0.2& 0.12& 3.2\\
&&&&&&&\\
\hhline{=========}
&&&&&&&\\
\end{tabular} }
\end{adjustwidth}
{\footnotesize NOTE - (a) Distances from \href{https://ned.ipac.caltech.edu/}{NASA/IPAC Extragalactic Database}. (b) No individual Cepheid metallicities were provided for LMC and SMC in R21 and thus we could not estimate the slope $Z_W$ for these hosts. (c) For the Milky Way (MW) we have used directly the values provided in R21 obtained from Gaia EDR3+HST.}
\end{table}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\renewcommand{\thetable}{\thesection.\arabic{table}}
\section[\appendixname~\thesection]{Model selection criteria}
\label{AppendixD}
\setcounter{equation}{0}
\setcounter{table}{0}
Various model selection and model comparison techniques have been developed and used \cite{Liddle:2004nh,Liddle:2007fy,Arevalo:2016epc,Kerscher:2019pzk}. The reduced $\chi^2$ is a very popular method for model comparison. It is defined by
\be
\chi_{red}^2=\frac{\chi_{min}^2}{dof}
\ee
where $\chi_{min}^2$ is the minimum $\chi^2$ and $dof=N-M$ is typically the number of degrees of freedom (with $N$ the number of datapoints used in the fit and $M$ the number of free parameters) for each model.
Model selection methods that penalize models with additional parameters, such as the Akaike Information Criterion (AIC) \cite{akaike1974new} and the Bayesian Information Criterion (BIC) \cite{Schwarz:1978tpv}, are also used. For a model with $M$ parameters and a dataset with $N$ total observations these are defined through the relations \cite{Liddle:2004nh,Liddle:2007fy,Arevalo:2016epc}
\be
AIC=-2\ln \mathcal{L}_{max}+2M=\chi_{min}^2+2M
\label{aic}
\ee
\be
BIC=-2\ln \mathcal{L}_{max}+M\ln N=\chi_{min}^2+M\ln N
\label{bic}
\ee
where $\mathcal{L}_{max}\equiv e^{-\chi_{min}^2/2}$ (e.g. \cite{John:2002gg,Nesseris:2012cq}) is the maximum likelihood of the model under consideration.
The "preferred model" is the one which minimizes AIC and BIC. The absolute values of the AIC and BIC are not informative. Only the relative values between different competing models are relevant. Hence when comparing one model versus the baseline-SH$0$ES we can use the model differences $\Delta$AIC and $\Delta$BIC.
The differences $\Delta$AIC and $\Delta$BIC with respect to the baseline-SH$0$ES model are defined as
\be
\Delta AIC=AIC_i-AIC_s=\Delta \chi_{min}^2+2 \Delta M
\label{daic}
\ee
\be
\Delta BIC=BIC_i-BIC_s=\Delta \chi_{min}^2+\Delta M \ln N
\label{dbic}
\ee
where the subindex $i$ refers to the value of AIC (BIC) for model $i$ and $AIC_s$ ($BIC_s$) is the value of AIC (BIC) for the baseline-SH$0$ES model. Note that a positive value of $\Delta$AIC or $\Delta$BIC indicates a preference for the baseline-SH$0$ES model.
According to the calibrated Jeffreys' scale \cite{Jeffreys:1961} shown in Table \ref{jefsc} (see also Refs. \cite{Liddle:2004nh,Nesseris:2012cq,BonillaRivera:2016use,Perez-Romero:2017njc,Camarena:2018nbr}), a range $0<|\Delta AIC|<2$ means that the two comparable models have about the same support from the data, a range $4<|\Delta AIC|<7$ means this support is considerably less for the model with the larger $AIC$, while for $|\Delta AIC|>10$ the model with the larger $AIC$ has essentially no support, i.e. the model is practically irrelevant. Similarly, for two competing models a range $0<|\Delta BIC|<2$ is regarded as weak evidence, a range $2<|\Delta BIC|<6$ as positive evidence, while $|\Delta BIC|>6$ is strong evidence against the model with the larger value.
We attribute the difference between $\Delta$AIC and $\Delta$BIC for the models considered to the fact that the BIC penalizes additional parameters more strongly than the AIC, as inferred from Eqs. (\ref{aic}) and (\ref{bic}) for a dataset with $\ln N>2$ (see Refs. \cite{Liddle:2004nh,Arevalo:2016epc,Rezaei:2019xwo}).
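The $\Delta$AIC and $\Delta$BIC values quoted in Table \ref{tab:res} follow directly from Eqs. (\ref{daic}) and (\ref{dbic}). A minimal sketch (the function name is ours) using the last two rows of the table as an illustration:

```python
import numpy as np

# Delta AIC / Delta BIC of model i (chi2_model, M_model parameters) versus
# the baseline (chi2_base, M_base parameters) for N datapoints, following
# Delta AIC = Delta chi2_min + 2 Delta M and
# Delta BIC = Delta chi2_min + Delta M * ln N.
def delta_aic_bic(chi2_model, M_model, chi2_base, M_base, N):
    d_chi2 = chi2_model - chi2_base
    d_M = M_model - M_base
    return d_chi2 + 2*d_M, d_chi2 + d_M*np.log(N)
```

With $\chi^2_{min}=3551.34$ ($M=48$) for the constrained transition model versus $\chi^2_{min}=3566.78$ ($M=47$) for the constrained baseline and $N=3493$, this reproduces $\Delta AIC\simeq -13.44$ and $\Delta BIC\simeq -7.3$ (small differences at the second decimal come from rounding of the quoted $\chi^2_{min}$ values).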
\begin{table}
\caption{The interpretation of the differences $\Delta AIC$ and $\Delta BIC$ according to the calibrated Jeffreys' scale \cite{Jeffreys:1961} (see also Refs. \cite{Liddle:2004nh,Nesseris:2012cq,BonillaRivera:2016use,Perez-Romero:2017njc,Camarena:2018nbr}). However, it should be noted that the Jeffreys' scale has to be interpreted with care \cite{Nesseris:2012cq} because it has been shown to lead to diverse qualitative conclusions.}
\label{jefsc}
\vspace{2mm}
\setlength{\tabcolsep}{0.1em}
\begin{adjustwidth}{0 cm}{1.cm}
\begin{tabular}{cccc|cccc}
\hhline{========}
&& && && & \\
\multicolumn{3}{c}{$\Delta AIC$ } && \multicolumn{4}{c}{$\Delta BIC$ } \\
&& && && & \\
\hline
&& && && & \\
\multicolumn{3}{c}{\footnotesize Level of empirical support for
the model with the smaller $AIC$} && \multicolumn{4}{c}{\footnotesize Evidence against the model with the larger $BIC$} \\
&& && && & \\
\hline
&& && && & \\
$\;$0-2 &$\qquad$ 4-7 &$ >10$ &&$\quad$ 0-2 &$\quad$2-6& $\quad$6-10 &$ >10$ \\
$\,\,$ \footnotesize Substantial$\;$ & $\qquad$ \footnotesize Strong & \footnotesize Very strong&& $\quad$ \footnotesize Weak &$\quad$ \footnotesize Positive& $\quad$ \footnotesize Strong &$\quad$ \footnotesize Very strong \\
&& && && & \\
\hhline{========}
\end{tabular}
\end{adjustwidth}
\end{table}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\renewcommand{\thetable}{\thesection.\arabic{table}}
\section[\appendixname~\thesection]{}
\label{AppendixE}
\setcounter{equation}{0}
\setcounter{table}{0}
Here we present in a comprehensive and concise manner the data for the 3130 Cepheids that appear in the vector $\bf{Y}$ and in the modeling matrix $\bf{L}$. The sequence is the same as that of the top 3130 entries of the vector $\bf{Y}$ released in the form of fits files by R21. Even though Table \ref{tab:hoscep} contains no new information compared to the released fits files of R21, the concise presentation of these data may make them more useful in various applications and studies.\\
\vspace{1.8mm}
\setlength{\tabcolsep}{1.7em}
\begin{adjustwidth}{-2. cm}{2.5cm}
{\footnotesize \begin{longtable}[c]{ c c c c c c }
\caption{Data for 3130 Cepheids in the SnIa host galaxies and in the anchor or supporting galaxies N4258, M31, LMC and SMC, from the fits files of R21. An electronic version of the complete table is available at the \href{https://github.com/FOTEINISKARA/A-reanalysis-of-the-SH0ES-data-for-H_0}{A reanalysis of the SH0ES data for $H_0$} GitHub repository under the MIT license.} \\
\label{tab:hoscep}\\
\hhline{======}
& & & & & \\
Galaxy & Ranking & ${\bar m}_H^W$&$\sigma$ &[P] &[O/H]\\
&in Vector Y& [mag] & [mag] & [dex] & [dex] \\
& & & & & \\
\hhline{======}
& & & & & \\
M1337& 1& 27.52& 0.39& 0.45& -0.2\\
M1337& 2& 28.05& 0.47& 0.65& -0.19\\
M1337& 3& 26.56& 0.35& 0.6& -0.2\\
M1337& 4& 26.97& 0.27& 0.78& -0.2\\
M1337& 5& 27.01& 0.47& 0.72& -0.18\\
M1337& 6& 26.29& 0.5& 0.9& -0.17\\
M1337& 7& 26.12& 0.6& 0.89& -0.17\\
M1337& 8& 26.94& 0.45& 0.75& -0.17\\
M1337& 9& 27.78& 0.61& 0.69& -0.16\\
M1337& 10& 26.17& 0.55& 0.7& -0.14\\
M1337& 11& 26.63& 0.65& 0.49& -0.22\\
M1337& 12& 27.4& 0.39& 0.85& -0.19\\
M1337& 13& 27.26& 0.53& 0.52& -0.22\\
M1337& 14& 27.34& 0.34& 0.85& -0.17\\
M1337& 15& 27& 0.54& 0.7& -0.19\\
N0691& 1& 26.59& 0.57& 0.4& 0.09\\
N0691& 2& 27.1& 0.46& 0.61& 0.06\\
N0691& 3& 26.79& 0.47& 0.71& 0.04\\
N0691& 4& 26.83& 0.5& 0.78& 0.14\\
N0691& 5& 26.23& 0.44& 0.51& 0.05\\
N0691& 6& 27.16& 0.48& 0.63& 0.1\\
N0691& 7& 26.89& 0.46& 0.7& 0.03\\
N0691& 8& 26.75& 0.59& 0.43& 0.03\\
N0691& 9& 27.11& 0.28& 0.88& 0.04\\
N0691& 10& 26.99& 0.68& 0.62& 0.1\\
N0691& 11& 27.6& 0.35& 0.81& 0.07\\
N0691& 12& 27.33& 0.38& 0.75& 0.08\\
N0691& 13& 26.6& 0.53& 0.54& 0.14\\
N0691& 14& 26.34& 0.45& 0.74& 0.09\\
N0691& 15& 26.91& 0.3& 0.96& 0.11\\
N0691& 16& 26.79& 0.49& 0.84& 0.13\\
N0691& 17& 27.08& 0.48& 0.63& 0.1\\
N0691& 18& 27.07& 0.59& 0.63& 0.1\\
N0691& 19& 26.69& 0.63& 0.53& 0.07\\
N0691& 20& 27.94& 0.58& 0.49& 0.11\\
N0691& 21& 26.15& 0.62& 0.48& 0.12\\
N0691& 22& 26.46& 0.43& 0.74& 0.11\\
N0691& 23& 27.16& 0.6& 0.42& 0.14\\
N0691& 24& 26.96& 0.38& 0.8& 0.14\\
N0691& 25& 26.59& 0.44& 0.65& 0.04\\
N0691& 26& 26.61& 0.52& 0.5& 0.09\\
N0691& 27& 26.21& 0.6& 0.62& 0.11\\
N0691& 28& 27.42& 0.65& 0.65& 0.1\\
...& ...& ...& ...& ...& ...\\
\hhline{======}
\end{longtable} }
\end{adjustwidth}
\begin{adjustwidth}{-\extralength}{0cm}
\printendnotes[custom]
\reftitle{References}
\externalbibliography{yes}
\bibliography{Bibliography.bib}
\end{adjustwidth}
|
Title:
Surface brightness-colour relations of dwarf stars from detached eclipsing binaries -- I. Calibrating sample |
Abstract: Surface brightness -- colour relations (SBCRs) are very useful tools for
predicting the angular diameters of stars. They offer the possibility to
calculate very precise spectrophotometric distances by the eclipsing binary
method or the Baade-Wesselink method. Double-lined Detached Eclipsing Binary
stars (SB2 DEBs) with precisely known trigonometric parallaxes allow for a
calibration of SBCRs with unprecedented precision. In order to improve such
calibrations, it is important to enlarge the calibration sample of suitable
eclipsing binaries with very precisely determined physical parameters.
We carefully chose a sample of ten SB2 DEBs in the solar neighbourhood which
contain inactive main-sequence components. The components have spectral types
from early A to early K. All systems have high-precision parallaxes from the
Gaia mission. We analysed high precision ground- and space-based photometry
simultaneously with the radial velocity curves derived from HARPS spectra. We
used spectral disentangling to obtain the individual spectra of the components
and used these to derive precise atmospheric parameters and chemical
abundances. For almost all components, we derived precise surface temperatures
and metallicities.
We derived absolute dimensions for 20 stars with an average precision of 0.2%
and 0.5% for masses and radii, respectively. Three systems show slow apsidal
motion. One system, HD 32129, is most likely a triple system with a much
fainter K6V companion. Three systems also contain metallic-line components and
show strong enhancements of barium and yttrium. The components of all systems
compare well to the SBCR derived before from the detached eclipsing binary
stars. With a possible exception of HD 32129, they can be used to calibrate
SBCRs with a precision better than 1% with available Gaia DR3 parallaxes.
| https://export.arxiv.org/pdf/2208.07257 |
\newcommand\mr[1]{{{\color{mmr} #1}}}\newcommand\mrstart{\color{mmr} }\newcommand\mrstop{ \rm \normalsize \color{black} }
\definecolor{mmr}{rgb}{0.7,0.0,0.2}
\def\pre{\mr}
\newcommand\mg[1]{{{\color{mmg} #1}}}\newcommand\mgstart{\color{mmg} }\newcommand\mgstop{ \rm \normalsize \color{black} }
\definecolor{mmg}{rgb}{0.0,0.7,0.3}
\def\pre{\mg}
\newcommand\mb[1]{{{\color{mmb} #1}}}\newcommand\mbstart{\color{mmb} }\newcommand\mbstop{ \rm \normalsize \color{black} }
\definecolor{mmb}{rgb}{0.2,0.0,0.7}
\def\pre{\mb}
\title{Surface brightness-colour relations of dwarf stars from detached eclipsing binaries - I. Calibrating sample}
\author{D.~Graczyk\inst{1},
G.~Pietrzy\'nski\inst{2},
C.~Galan\inst{2},
J.~Southworth\inst{3},
W.~Gieren\inst{4},
M.~Ka{\l}uszy{\'n}ski\inst{2},
B.~Zgirski\inst{2},\\
A.~Gallenne\inst{4,5},
M.~G{\'o}rski\inst{2},
G.~Hajdu\inst{2},
P.~Karczmarek\inst{4},
P.~Kervella\inst{6},
P.F.L.~Maxted\inst{3},
N.~Nardetto\inst{7},\\
W.~Narloch\inst{4},
B.~Pilecki\inst{2},
W.~Pych\inst{2},
G.~Rojas Garcia\inst{2},
J.~Storm\inst{8},
K.~Suchomska\inst{2},
M.~Taormina\inst{2},
\and
P.~Wielg{\'o}rski\inst{2}
}
\authorrunning{D. Graczyk et al.}
\institute{Centrum Astronomiczne im. Miko{\l}aja Kopernika, Polish Academy of Sciences, Rabia{\'n}ska 8, 87-100, Toru{\'n}, Poland
\and Centrum Astronomiczne im. Miko{\l}aja Kopernika, Polish Academy of Sciences, Bartycka 18, 00-716, Warsaw, Poland
\and Astrophysics Group, Keele University, Staffordshire, ST5 5BG, UK
\and Departamento de Astronom{\'i}a, Universidad de Concepci{\'o}n, Casilla 160-C, Concepci{\'o}n, Chile
\and Unidad Mixta International Franco-Chilena de Astronom{\'i}a (CNRS UMI 3386), Departamento de Astronom{\'i}a, Universidad de Chile, Camino El Observatorio 1515, Las Condes, Santiago, Chile
\and LESIA, Observatoire de Paris, Universit\'e PSL, CNRS, Sorbonne Universit\'e, Universit\'e de Paris, 5 place Jules Janssen, 92195 Meudon, France
\and Universit\'e C\^ote d'Azur, Observatoire de la C\^ote d'Azur, CNRS, Laboratoire Lagrange, Nice, France
\and Leibniz-Institut f\"{u}r Astrophysik Potsdam, An der Sternwarte 16, 14482 Potsdam, Germany
}
\abstract{}
{Surface brightness -- colour relations (SBCRs) are very useful tools for predicting the angular diameters of stars. They offer the possibility to calculate very precise spectrophotometric distances by the eclipsing binary method or the Baade-Wesselink method. Double-lined Detached Eclipsing Binary stars (SB2 DEBs) with precisely known trigonometric parallaxes allow for a calibration of SBCRs with unprecedented precision. In order to improve such calibrations, it is important to enlarge the calibration sample of suitable eclipsing binaries with very precisely determined physical parameters.}
{We carefully chose a sample of ten SB2 DEBs in the solar neighbourhood which contain inactive main-sequence components. The components have spectral types from early A to early K. All systems have high-precision parallaxes from the {\it Gaia} mission. We analysed high precision ground- and space-based photometry simultaneously with the radial velocity curves derived from HARPS spectra. We used spectral disentangling to obtain the individual spectra of the components and used these to derive precise atmospheric parameters and chemical abundances. For almost all components, we derived precise surface temperatures and metallicities. }
{We derived absolute dimensions for 20 stars with an average precision of 0.2\% and 0.5\% for masses and radii, respectively. Three systems show slow apsidal motion. One system, HD\,32129, is most likely a triple system with a much fainter K6V companion. Three systems also contain metallic-line components and show strong enhancements of barium and yttrium.}
{The components of all systems compare well to the SBCR derived before from the detached eclipsing binary stars. With a possible exception of HD\,32129, they can be used to calibrate SBCRs with a precision better than 1\% with available {\it Gaia} DR3 parallaxes.}
\keywords{binaries: spectroscopic, eclipsing -- stars: fundamental parameters, distances}
\titlerunning{Analysis of ten detached eclipsing binary stars}
\authorrunning{Graczyk et al.}
\section{Introduction}
The purpose of this work is to increase the number of Double-lined Detached Eclipsing Binary stars (SB2 DEBs)\footnote{For the purposes of this paper, and at the request of an anonymous referee, we refer to eclipsing binaries for which the spectroscopic orbits of both components have been measured as SB2 DEBs even if no lines of a secondary component could be identified in spectra.} with very precise measurements of their geometrical, dynamical, and radiative properties. Gradually expanding compilations of such eclipsing binaries have been published over the last three decades \citep{and91,tor10,sou15} as they are a very useful tool in many areas of astrophysics. The well-known mass--luminosity relation for stars is calibrated with visual and eclipsing binary stars \citep[e.g.][]{mal07,eke15}. Empirical relations for the estimation of radii and masses of stars are usually derived from samples of stars based mostly on SB2 DEBs \citep[e.g.][]{tor10,eke18,moy18}. Detached eclipsing binaries provide near model-independent masses and radii of stars, and because of this they serve as a prime source for calibrating and testing stellar evolutionary models. Specific subsamples of eclipsing binaries allow one to test and calibrate the amount of core overshooting in intermediate-mass stars, albeit with conflicting results \citep[e.g.][]{Con18,val18,cla19,cos19}, and to predict stellar masses and ages \citep[e.g.][]{dBu18}. In some cases even a single eclipsing binary provides a stringent test of evolutionary models \citep[e.g.\ TZ For;][]{gal16,val17}.
Other applications of DEBs include the age determination of globular clusters \citep[e.g.][]{tho01,kal15} and open clusters \citep[e.g.][]{mei09,bav16}, and the determination of the helium content of a stellar cluster \citep{bro21}. They can also be used to establish bench stars with precise and accurate effective temperatures measured directly from the stars' angular diameters and bolometric fluxes \citep{Max20,Max22}. Recently another important application was presented: the calibration of precise surface brightness -- colour relations (SBCRs) for main-sequence stars based solely on DEBs \citep{gra17,gra21}.
The concept of the stellar surface brightness parameter $S\!$ is useful in astrophysics because it connects the stellar absolute magnitude with the stellar radius $R$ by a very simple relation \citep{wes69}. It is very convenient to express the $S\!$ parameter as a function of an intrinsic stellar colour -- this is an SBCR -- giving a powerful tool for predicting the angular diameters of stars \citep[e.g.][]{bar76,VBe99,ker04}. When the distance (or the trigonometric parallax) to a particular star is known, the application of an SBCR immediately gives its radius \citep{lac77a}. Alternatively, when the radius of a star is known, an application of an SBCR gives a robust distance \citep{lac77b}. The latter approach, in particular, has resulted in very precise distance determinations to the Magellanic Clouds \citep[e.g.][]{pie19,gra20}, setting the zero-point of the extragalactic distance ladder with a precision of $\sim$1\%.
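To make the chain from an SBCR to a radius or a distance concrete, the sketch below assumes a generic linear relation $S_V = a\,(V-K)_0 + b$ with made-up coefficients (the calibrated coefficients of the cited relations are not reproduced here) and the definition $S_V = V_0 + 5\log_{10}\theta_{\rm LD}$, with $\theta_{\rm LD}$ the limb-darkened angular diameter in mas:

```python
def theta_from_sbcr(V0, VK0, a=1.33, b=2.57):
    """Angular diameter theta_LD (mas) predicted by a linear SBCR
    S_V = a*(V-K)_0 + b; the coefficients here are illustrative only."""
    S_V = a * VK0 + b
    return 10 ** ((S_V - V0) / 5.0)

def radius_from_theta(theta_mas, parallax_mas):
    """Linear radius (solar units) from an angular diameter and a parallax;
    a star of radius 1 R_sun at 1 pc subtends ~9.301 mas in diameter."""
    d_pc = 1000.0 / parallax_mas
    return theta_mas * d_pc / 9.301

def distance_from_radius(radius_rsun, theta_mas):
    """Inverse use: a known radius plus an SBCR-predicted angular
    diameter yields a distance in pc (the eclipsing-binary method)."""
    return 9.301 * radius_rsun / theta_mas
```

The two inverse functions express the two applications named in the text: parallax plus SBCR gives a radius, while an eclipsing-binary radius plus SBCR gives a spectrophotometric distance.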
Here we present a detailed analysis of ten new SB2 DEBs which can be used as additional calibrators of SBCRs. The sample was based on a list of eclipsing binary stars identified in data from the {\it Hipparcos} mission \citep{kru99}. This paper is one in a series of papers devoted to analysis of southern and equatorial DEBs useful in the calibration of SBCRs \citep{gra15,gra16,gra17,gra21}.
\section{Observations}\label{observ}
\subsection{Sample of stars}
Table~\ref{tab:basic} contains the names and basic parameters of the ten eclipsing binary stars selected for the present study. All systems are classified as double-lined spectroscopic binaries (SB2), with the possible exception of V362 Pav, for which no lines of a secondary component could be directly detected and a sophisticated method was needed in order to derive its spectroscopic orbit. Because the systems are well detached, close to the Sun and have no significant spot activity (with the exception of V963 Cen and QR Hya, which both have small stellar spots), we included them in our sample. The magnitudes given are averages from catalogues listed in the SIMBAD/Vizier database, after removing outliers, and represent the out-of-eclipse brightness of the systems.
GW Eri (=HR 1300), UW LMi, QR Hya, V963 Cen, LX Mus and V362 Pav were discovered as variable stars during the \textit{Hipparcos} space mission \citep{per97}, classified as eclipsing binaries and given names in the General Catalogue of Variable Stars (GCVS) by \cite{kaz99}. HD~32129 was identified as an eclipsing binary by our team while inspecting photometry from the K2 mission campaigns \citep{how14}. V788 Cen (=HR 4624) was discovered to be an eclipsing binary by \cite{Cou71} and its name was given by \cite{kuk77}. V338 Vir was identified as an eclipsing binary by \cite{Kaz07} while CQ Ind was identified as an eclipsing binary by \cite{Ote04}; both systems were given variable star designations by \cite{Kaz08}.
Six of the objects in our sample have not previously been studied in detail, but four systems have been the subject of analysis in the past. GW Eri was reported to be a double-lined spectroscopic binary by \cite{BuM61} and a first spectroscopic orbit was given by \cite{AbL77}. The only combined analysis of spectroscopy and photometry of GW Eri before the current work was performed by \cite{Ver06}, but only an abstract has been published. A $V$-band light curve of V788 Cen was presented by \cite{Cou74}, showing two shallow and almost equal eclipses. \cite{And77} reported that this is an Am-type star and a double-lined spectroscopic binary. A preliminary analysis of V963 Cen and UW LMi based on Str{\"o}mgren $uvby$ photometry was presented by \cite{cla01}. Low quality light and radial velocity curves were used in an analysis of UW LMi \citep{mar04} as a case study of the expected performance of \textit{Gaia}. A higher-quality spectroscopic orbit based on CORAVEL spectrophotometric observations was published by \cite{Gri01}. For V963~Cen a study of its spin-axis orbital alignment and spectroscopic orbit was presented by \cite{syb18}.
\begin{table*}
\begin{center}
\caption{Basic data on the eclipsing binary stars studied in the current work.}
\label{tab:basic}
\begin{tabular}{lccccccc}
\hline \hline
ID & R.A. (2000) & Dec (2000) & $\varpi_{Gaia/EDR3}$ & $B$ & $V$ & Orbital period & Spectral \\
& & & (mas) & (mag) & (mag) & (days) & type\tablefootmark{a} \\
\hline
GW Eri & 04 11 36.20 & $-$20 21 22.2 & $\!\!\!$11.747$\pm$0.037 & 5.977$\pm$0.017 & 5.800$\pm$0.014 & 3.659 &A1mA2-A8 \\
HD 32129 & 05 01 28.28 & +15 05 28.7 & 5.635$\pm$0.033 & 9.630$\pm$0.028 &9.093$\pm$0.025 & 16.41 &F5V \\
UW LMi & 10 43 30.20 & +28 41 09.1 & 9.670$\pm$0.026 &8.906$\pm$0.021 & 8.321$\pm$0.017 & 3.874&G0V \\
QR Hya & 10 56 31.15 & $-$34 33 50.2 & $\!\!\!$10.672$\pm$0.024 &9.033$\pm$0.023 & 8.403$\pm$0.016 & 5.006 & G1V \\
V788 Cen & 12 08 53.80 & $-$44 19 33.6 & $\!\!\!$10.908$\pm$0.045 &5.993$\pm$0.012& 5.743$\pm$0.012 & 4.966 &A2mA5-F2 \\
V338 Vir & 13 11 17.41 & $-$11 06 21.3 & 3.905$\pm$0.020 &9.619$\pm$0.021 & 9.147$\pm$0.024 & 5.985 &F5V \\
V963 Cen & 13 18 44.36 & $-$58 16 01.3 & 8.725$\pm$0.018 &9.239$\pm$0.019 & 8.603$\pm$0.015 & 15.27&G2V \\
LX Mus & 13 40 11.53 & $-$74 04 45.0 & 6.966$\pm$0.016 &9.292$\pm$0.015 & 8.782$\pm$0.020 & 11.75&F5V \\
V362 Pav & 18 49 03.48 & $-$63 16 10.3 & 6.713$\pm$0.029 &7.587$\pm$0.012 & 7.403$\pm$0.014 & 2.748 &A2mA5-A9 \\
CQ Ind & 21 31 03.29 & $-$50 50 48.9 & 9.011$\pm$0.022 &8.887$\pm$0.016 &8.360$\pm$0.016 & 8.974&F7V \\
\hline
\end{tabular}
\tablefoot{\tablefoottext{a}{From SIMBAD database. Refined spectral types are given in Section~\ref{WD_results}}}
\end{center}
\end{table*}
\subsection{Photometry}\label{photo}
\subsubsection{Ground-based Str{\"o}mgren photometry}
We used Str{\"o}mgren $uvby$ photometry of UW~LMi and V963~Cen secured with the Str\"omgren Automated Telescope (SAT) at ESO, La Silla \citep{cla01}. The data for both stars were taken between February 1997 and March 1999. The photometry of UW~LMi comprises 734 differential magnitudes with respect to three comparison stars (HD 94218, HD 94426 and HD 91546) in each filter. The photometry of V963~Cen consists of 975 differential magnitudes in each filter with respect to HD 115031, HD 114250 and HD 117214. The photometry was detrended and normalised separately in each filter (see Table~\ref{tab:uwlmi}).
\subsubsection{Space-based photometry}
\label{sub:space}
GW~Eri was observed by the TESS space mission \citep{ric15} in short-cadence during sectors 5 and 31. For the analysis we chose the photometry from sector 31 because it has a smaller number of artifacts and outliers. The short-cadence data were downloaded, as in other cases, from the Mikulski Archive for Space Telescopes (MAST) archive\footnote{\texttt{https://mast.stsci.edu/portal/Mashup/Clients/Mast/\\Portal.html}} and contain 17\,272 photometric points. We used the Simple Aperture Photometry (SAP; \verb"SAP_FLUX"), and the data were detrended from instrumental long-term drifts using a third-order spline then normalised. We retained datapoints in eclipses and every tenth point outside eclipse, resulting in 4452 datapoints.
HD~32129 was within the field of campaign 13 of the K2 mission \citep{how14}, the extension of the {\it Kepler} space mission \citep{koch10}. The long-cadence normalised data were downloaded using the K2SFF portal on the MAST archive\footnote{\texttt{https://archive.stsci.edu/prepds/k2sff/}}. There are 3489 datapoints and for our analysis we used 321 points in and around the eclipses. HD~32129 was also observed by TESS in sectors 5 (long-cadence), 32 and 43 (short-cadence). The short-cadence data from sector 32 cover only two secondary eclipses, so we used only the data from sector 43 in our analysis. We used the Pre-search Data Conditioning SAP (PDCSAP; \verb"PDCSAP_FLUX") fluxes of HD~32129 containing 15\,698 photometric points. The light curve was detrended and most of the out-of-eclipse data were removed, leaving 2967 short-cadence datapoints.
QR~Hya was observed by TESS in sectors 9 and 36 in the short-cadence mode. For the analysis we used the light curve from sector 9 as it is less affected by brightness modulation due to starspots. The PDCSAP fluxes were converted into magnitudes and the light curve was detrended for the stellar activity (a modulation of $\sim$0.01\,mag) using a cubic spline -- see Fig.~\ref{qrhya}. This detrending process completely flattened the out-of-eclipse light curve, removing both the spot-modulation and the proximity effects. The latter are expected to be small ($\sim$0.001\,mag) so we decided to analyse only datapoints in the phase intervals [$-$0.05,0.05] and [0.45,0.55]. These intervals include 3137 of the original 15\,851 datapoints.
V788~Cen was observed by TESS in sectors 10 and 37 in short-cadence mode. For our analysis we used the SAP fluxes from both sectors. In the case of sector 37 we used only the second part of the light curve, as it is less affected by starspots. In our initial analysis we applied no detrending in order to retain the out-of-eclipse proximity effects. Once a satisfactory model of the system had been obtained we corrected for instrumental trends (see Section~\ref{v788cen}). We kept datapoints in eclipses and every tenth point outside of eclipses, leaving 7837 points in total.
V338~Vir was observed by K2 in short- and long-cadence during campaign 6. However, the short-cadence light curve shows a large number of instrumental drifts which proved difficult to correct. We finally used only the long-cadence data, which were detrended and cleaned of outliers. In total 3309 datapoints were used in the analysis.
V963~Cen was observed by TESS in sector 38 in short-cadence. The light curve contains 18\,495 photometric points and shows significant spot activity which affects the depth of some eclipses. After detrending and removing outliers we retained only 1010 points within and around the last two eclipses covered in sector 38.
LX~Mus was observed by TESS in two sectors 38 and 39, both in short-cadence. For our analysis we chose the PDCSAP fluxes from sector 38 because their photometric precision was higher. The full light curve contains 17\,549 points from which we removed most of the out-of-eclipse points and ended up with 1957 datapoints.
V362~Pav was observed by TESS in sector 13 in short-cadence, giving 19\,747 datapoints. We used the SAP fluxes from the second part of the sector in our analysis. We kept all points within eclipse and every 15th point outside eclipse, resulting in a total of 2939 datapoints.
CQ~Ind was observed by TESS in short-cadence in three sectors: 1, 27 and 28. For our analysis we used the PDCSAP fluxes from sector 27. In order to follow the apsidal motion in the system we included also SAP fluxes from sector 1 covering first two eclipses. The data were detrended and normalised. We kept 3108 points within and around eclipses from sector 27, and 756 points from sector 1.
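The per-target preparation described above follows one pattern: fit a smooth trend to the out-of-eclipse data, normalise by it, then keep all in-eclipse points and only a thinned subset outside eclipse. A minimal sketch of that pattern, with a cubic polynomial standing in for the spline actually used (the function and argument names are ours, not from any pipeline):

```python
import numpy as np

def detrend_and_thin(time, flux, in_eclipse, keep_every=10, deg=3):
    """Remove a smooth instrumental trend fitted to out-of-eclipse data,
    normalise, then keep all in-eclipse points and every `keep_every`-th
    point outside eclipse (illustrative version of the procedure in the
    text; a cubic polynomial stands in for the spline)."""
    out = ~in_eclipse
    coeffs = np.polyfit(time[out], flux[out], deg)   # trend from out-of-eclipse data only
    norm = flux / np.polyval(coeffs, time)           # normalised light curve
    keep = in_eclipse.copy()                         # keep every in-eclipse point...
    idx_out = np.flatnonzero(out)
    keep[idx_out[::keep_every]] = True               # ...and a thinned out-of-eclipse subset
    return time[keep], norm[keep]
```

Thinning the flat out-of-eclipse portions keeps the light-curve fits tractable while losing essentially no information about the eclipse shapes.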
\begin{table*}
\centering
\caption{The $uvby$ photometry of UW LMi and V963 Cen. The full data will be available at CDS.}
\label{tab:uwlmi}
\begin{tabular}{@{}ccccc@{}}
\hline \hline
Date & \multicolumn{4}{c}{Normalised Flux} \\
HJD $-$ 2450000 & $u$ & $v$ & $b$ & $y$ \\
\hline
\multicolumn{5}{c}{UW LMi}\\
503.70509 & 0.98901& 0.98810& 0.99724& 0.99357 \\
503.70965 & 1.00647& 0.99816& 0.99816& 0.99724 \\
503.71338 & 0.99541& 1.00092& 1.00000& 1.00000 \\
503.74394 & 1.00369& 0.99357& 0.99541& 0.99632 \\
503.74853 & 1.00092& 0.99908& 1.00000& 1.00184 \\
\hline
\end{tabular}
\end{table*}
\subsection{Spectroscopy}
\label{harps}
We obtained spectra of the systems with the High Accuracy Radial velocity Planet Searcher \citep[HARPS;][]{may03} on the European Southern Observatory 3.6-m telescope in La Silla, Chile.
\begin{table*}
\begin{center}
\caption{Summary of spectroscopic observations on HARPS.}
\label{tab:harps}
\begin{tabular}{lcccc}
\hline \hline
ID & Number of spectra & Start & End & Mean S/N \\
\hline
GW Eri & 17 & 2009 August 17 & 2021 October 25 & 110 \\
HD 32129 & 17 & 2017 December 10 & 2021 August 13 & 44 \\
UW LMi & 10 & 2017 May 27 & 2018 January 30 & 34 \\
QR Hya & 12 & 2009 February 25 & 2021 June 7 & 48 \\
V788 Cen & 18 & 2009 February 25 & 2021 August 14 & 91 \\
V338 Vir & 18 & 2017 June 10 & 2021 August 14 & 32 \\
V963 Cen & 21 & 2009 August 17 & 2016 September 2 & 32 \\
LX Mus & 24 & 2009 February 25 & 2017 June 11 & 44 \\
V362 Pav & 21 & 2009 February 26 & 2016 August 17& 100 \\
CQ Ind & 11 & 2017 June 10 & 2021 October 24& 30 \\
\hline
\end{tabular}
\end{center}
\end{table*}
In total we collected 170 spectra between 2009 August 17 and 2021 October 25 (see Table~\ref{tab:harps}). The targets are bright and typical integration times were shorter than 10\,min; they were often used as back-up targets and also during bright sky conditions (e.g.\ near twilight). The spectra were reduced on-site using the HARPS Data Reduction Software (DRS).
\section{Analysis of spectra}
\label{analys}
\subsection{Radial velocities \label{rad_vel}}
\begin{table*}
\centering
\caption{RV measurements for eclipsing binary stars. The full data will be available at CDS.}
\label{tab_rv}
\begin{tabular}{@{}lcrcrc@{}}
\hline \hline
Object & BJD & $RV_1$ & $RV_1$ error & $RV_2$ & $RV_2$ error \\
& -2450000& (km s$^{-1}$) & (km s$^{-1}$) & (km s$^{-1}$) &(km s$^{-1}$) \\
\hline
UW LMi & 7901.45834 & 46.111 &0.092 & $-$115.558& 0.097 \\
UW LMi & 7901.50942 & 42.756 &0.094& $-$112.095& 0.098 \\
UW LMi & 7914.44475 & $-$111.967& 0.093 & 45.645& 0.097 \\
UW LMi & 7915.45642 & $-$70.087& 0.092 & 3.063& 0.097 \\
UW LMi & 7916.44635 & 47.939 & 0.093& $-$117.436& 0.098 \\
\hline
\end{tabular}
\end{table*}
\label{sec:rv}
We used the RaveSpan code \citep{pil17} to measure the radial velocities of the components in all systems via the broadening function (BF) formalism \citep{ruc92,ruc99}. We used templates from the library of synthetic LTE spectra by \citet{col05} matching the mean values of the estimated effective temperatures and surface gravities of the component stars. The abundances were assumed to be solar.
The line profiles of the components of HD 32129, QR Hya, V338~Vir, V963~Cen, LX~Mus and CQ~Ind are Gaussian and suggest small rotational velocities. The line profiles of UW LMi and V788~Cen are rotationally broadened with $v_1\sin{i}\approx v_2\sin{i}\approx20$ km~s$^{-1}$, while both components of GW~Eri rotate even faster with $v\sin{i}\approx 30$ km~s$^{-1}$.
The line profiles of the components of V362~Pav are also rotationally broadened with $v_1\sin{i}\approx40$ km~s$^{-1}$ and $v_2\sin{i}\approx20$ km~s$^{-1}$. The primary of V362~Pav is about 70 times brighter in the $V$-band than the secondary, which makes it difficult to determine the radial velocities of both components simultaneously. We stacked all the spectra of V362~Pav, applying radial velocity shifts to them, to obtain a ``master'' spectrum of the primary. This spectrum was subtracted from all the spectra, making the BF profile of the faint secondary much more clearly identifiable. The typical precision of an individual radial velocity measurement was about 110 m~s$^{-1}$ for the primary and 1.5 km~s$^{-1}$ for the secondary. The radial velocity measurements are summarised in Table~\ref{tab_rv}.
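The shift-and-stack step can be sketched as follows. This is a schematic reconstruction of the procedure (the actual implementation in RaveSpan may differ), using a non-relativistic Doppler shift and a median combine:

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def master_subtract(wave, spectra, rv1_kms):
    """Build a 'master' primary spectrum by shifting every observed
    spectrum to the primary's rest frame and median-combining, then
    subtract the re-shifted master from each observation so that only
    the faint secondary's signature remains in the residuals."""
    # shift each spectrum to the primary's rest frame
    rest = [np.interp(wave * (1.0 + rv / C_KMS), wave, spec)
            for spec, rv in zip(spectra, rv1_kms)]
    master = np.median(rest, axis=0)
    # shift the master back to each epoch and subtract
    residuals = [spec - np.interp(wave / (1.0 + rv / C_KMS), wave, master)
                 for spec, rv in zip(spectra, rv1_kms)]
    return master, residuals
```

Because the secondary's lines move differently from the primary's, they survive the subtraction while the primary's lines cancel, which is why the BF of the residuals isolates the faint component.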
\subsection{Spectral disentangling}
\label{harps}
The radial velocities we derived in Section~\ref{sec:rv} were used to decompose the observed spectra of each system into the spectra of the individual components. For disentangling we used all HARPS spectra with the exception of a few spectra with a very low S/N or with very prominent solar features (when taken at bright evening/morning sky). We used the RaveSpan code which utilizes a method presented by \cite{gon06}. We ran two iterations choosing a median value for the normalization of the spectra. The disentangled spectra cover a spectral range from 4300~\AA\ up to 6900~\AA.
\subsection{Stellar atmospheric analysis}\label{abu}
\subsubsection{Methods}
\label{temp:met}
To derive the atmospheric parameters of the components of the binary systems we fitted the high-resolution (R$\sim$80000) HARPS disentangled spectra (Section\,\ref{harps}) with the `Grid Search in Stellar Parameters' {\sl GSSP} software package \citep{Tka2015}. The code uses the spectrum synthesis method by employing the {\sl S$_{YNTH}$V} LTE-based radiative transfer code \citep{Tsy1996}. We used the {\sl LL$_{MODELS}$} grid of atmosphere models \citep{Shu2004} provided with the {\sl GSSP} code. Only the {\sl binary} mode of the {\sl GSSP} code was used to analyse the disentangled spectra. In that mode the spectra did not undergo flux renormalisation. The wavelength-dependent flux ratio $f_{\rm i}$ was calculated with the code utilizing the ratio of the components' radii ($r_{1}/r_{2}$) obtained from the light-curve fit using the Wilson-Devinney code (see Table\,\ref{tab_par_orb}). The {\sl binary} version does not enable the calculation of the macroturbulent velocity ($\zeta$) due to a strong correlation with the rotational velocity ($V_{\rm{rot}}$). Instead, the value of $\zeta$ was estimated using published relations \citep{Sma2014, Gray2005} and held fixed at that value (Table\,\ref{T_atm_par}).
The free parameters were metallicity ([M/H]), effective temperature ($T_{\rm{eff}}$), and microturbulent velocity ($\xi$). In a few cases of objects containing metallic-lined components (GW~Eri, V788~Cen, and V362~Pav), which show strong lines mainly from ionized yttrium and barium, abundances were also calculated individually for $\sim$30 chemical elements. The abundance analysis of the atmospheres of these stars will be published separately (Galan et al.\ 2022 -- in prep.). The {\sl GSSP binary} code calculates synthetic spectra for a grid of parameter values and provides the $\chi^2$ value of each comparison with the observed spectrum. This allowed us to judge the goodness of each fit and to choose the best-matching values (corresponding to the minimum $\chi^2$) within the grid of synthetic spectra.
Regions around the H$\alpha$, H$\beta$ and H$\gamma$ lines were excluded from the analysis. The part of the spectrum bluewards of H$\gamma$ was also excluded in most cases because it had a significantly lower S/N. In some regions the observed spectra contain lines that have no counterparts in the line lists, and in places the synthetic spectra rely on poor atomic transition data; individual masks were prepared for each object to exclude these lines from the analysis. Spectral regions containing artefacts from imperfectly removed telluric features of water (H$_2$O: mainly $\lambda \sim$ 5880--6000\,\AA) and oxygen (O$_2$: $\lambda \sim$ 6274--6330\,\AA) were also skipped.
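The grid-search selection itself reduces to a $\chi^2$ minimisation over a Cartesian grid of parameter values. A toy sketch of that selection step (the synthetic-spectrum generator here is a stand-in, not the {\sl S$_{YNTH}$V} code):

```python
import itertools
import numpy as np

def grid_search_chi2(obs_flux, obs_err, synth, grids):
    """Evaluate chi^2 of every synthetic spectrum on a Cartesian grid of
    parameter values and return the best-matching parameter tuple, in the
    spirit of the GSSP grid search (schematic)."""
    best, best_chi2 = None, np.inf
    for params in itertools.product(*grids):
        model = synth(params)
        chi2 = float(np.sum(((obs_flux - model) / obs_err) ** 2))
        if chi2 < best_chi2:
            best, best_chi2 = params, chi2
    return best, best_chi2
```

In practice the stored per-node $\chi^2$ values are also what allow the 1$\sigma$ intervals in Table\,\ref{T_atm_par} to be read off from the $\chi^2_{1\sigma}$ level.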
\begin{table*}[h!]
\centering
\caption{The best-fitting atmospheric parameters together with their 1$\sigma$ uncertainties estimated using the reduced $\chi^2$ and the 1$\sigma$ level in $\chi^2$ ($\chi^2_{1\sigma}$).}
\label{T_atm_par}
\begin{tabular}{@{}l|@{\hskip 1mm}l@{\hskip 2mm}l@{\hskip 2mm}l@{\hskip 2mm}l@{\hskip 2mm}l@{\hskip 1mm}|l@{\hskip 2mm}l@{\hskip 2mm}l@{\hskip 2mm}l@{\hskip 2mm}l@{\hskip 1mm}l@{}}
\hline \hline
& \multicolumn{5}{c|}{Primary} & \multicolumn{5}{c}{Secondary} \\
ID & $[$M/H$]$ & $T_{\rm{eff}}$ & $\xi$ & $\zeta$\tablefootmark{$\star$} & $V_{\rm{rot}} \sin{i}$ & $[$M/H$]$ & $T_{\rm{eff}}$ & $\xi$ & $\zeta$\tablefootmark{$\star$} & $V_{\rm{rot}} \sin{i}$ \\
& [dex] & [K] & [km/s] & [km/s] & [km/s] & [dex] & [K] & [km/s] & [km/s] & [km/s] \\
\hline
GW\,Eri & $+0.52$\,$\pm0.23$ & $8314$\,$\pm64$ & $4.05^{+0.28}_{-0.24}$ & $8.0$ & $24.9$\,$\pm0.8$ & $+0.58$\,$\pm0.19$ & $8205$\,$\pm61$ & $4.34^{+0.28}_{-0.26}$ & $8.0$ & $24.0$\,$\pm0.7$ \\
HD\,32129 & $+0.19$\,$\pm0.06$ & $6713^{+76}_{-73}$ & $1.80^{+0.19}_{-0.17}$ & $6.5$ & $1.5$\,$\pm0.8$ & $+0.21$\,$\pm0.10$ & $5777^{+145}_{-152}$ & $1.52^{+0.36}_{-0.58}$ & $3.7$ & $2.6^{+1.6}_{-2.0}$ \\
UW\,LMi & $-0.12$\,$\pm0.07$ & $6048^{+116}_{-113}$ & $1.18$\,$\pm0.27$ & $4.0$ & $17.2$\,$\pm0.8$ & $-0.09$\,$\pm0.08$ & $6027^{+125}_{-127}$ & $1.36^{+0.20}_{-0.28}$ & $4.0$ & $16.9$\,$\pm0.9$ \\
QR\,Hya & $+0.00$\,$\pm0.06$ & $6012$\,$\pm64$ & $1.45^{+0.18}_{-0.23}$ & $4.0$ & $13.0$\,$\pm0.6$ & $-0.03$\,$\pm0.07$ & $5903^{+90}_{-92}$ & $1.17$\,$\pm0.16$ & $3.5$ & $12.2$\,$\pm0.8$ \\
V788\,Cen & $+0.58$\,$\pm0.17$ & $7852$\,$\pm68$ & $4.30$\,$\pm0.25$ & $8.0$ & $20.5$\,$\pm0.7$ & $+0.36$\,$\pm0.30$ & $7491$\,$\pm123$ & $3.36^{+0.41}_{-0.36}$ & $8.0$ & $17.3$\,$\pm1.2$ \\
V338\,Vir & $-0.07$\,$\pm0.08$ & $6723^{+135}_{-138}$ & $1.69^{+0.21}_{-0.25}$ & $6.0$ & $9.3$\,$\pm1.0$ & $-0.13$\,$\pm0.06$ & $6464$\,$\pm62$ & $1.72^{+0.19}_{-0.17}$ & $5.5$ & $13.2$\,$\pm0.6$ \\
V963\,Cen & $-0.07$\,$\pm0.07$ & $5866^{+90}_{-85}$ & $1.36^{+0.19}_{-0.15}$ & $3.5$ & $8.2$\,$\pm0.4$ & $-0.06$\,$\pm0.06$ & $5885^{+91}_{-88}$ & $1.41^{+0.14}_{-0.12}$ & $3.5$ & $7.9$\,$\pm0.4$ \\
LX\,Mus & $+0.09$\,$\pm0.04$ & $6587$\,$\pm56$ & $1.69$\,$\pm0.12$ & $6.0$ & $4.0$\,$\pm0.6$ & $+0.09$\,$\pm0.04$ & $6599$\,$\pm47$ & $1.53$\,$\pm0.12$ & $6.0$ & $4.9$\,$\pm0.5$ \\
V362\,Pav & $+0.02^{+0.15}_{-0.10}$ & $8205^{+71}_{-80}$ & $4.18$\,$\pm0.18$ & $8.0$ & $39.4$\,$\pm0.9$ & $+0.0$\tablefootmark{$\blacklozenge$} & $4900$ & $1.0$\tablefootmark{$\ast$} & $3.0$ & $19.5^{+10}_{-8}$ \\
CQ\,Ind & $-0.01$\,$\pm0.09$ & $6524^{+138}_{-130}$ & $1.39^{+0.36}_{-0.30}$ & $5.5$ & $8.0$\,$\pm0.7$ & $-0.06$\,$\pm0.11$ & $6224^{+180}_{-199}$ & $1.32^{+0.56}_{-0.62}$ & $4.5$ & $6.2$\,$\pm1.2$ \\
\hline
\end{tabular}
\tablefoot{\tablefoottext{$\star$}{adopted using published correlations between macroturbulence and $T_{\rm{eff}}$ or spectral type.}\\
\tablefoottext{$\blacklozenge$}{taken after the best matching model for the primary component.}\\
\tablefoottext{$\ast$}{based on the Gaia-ESO iDR6 calibration (see 3rd paragraph of Section\,\ref{temp:atm}).}}\\
\end{table*}
\subsubsection{Atmospheric parameters}
\label{temp:atm}
The input values for the parameters ($T_{\rm{eff}}$, $\log{g}$, $V_{\rm{rot}} \sin{i}$) were taken from the results of modelling with the Wilson-Devinney code (see Section \ref{wd}, Tables\,\ref{tab_par_orb}\,and\,\ref{par_fi}). The surface gravities were not free parameters but were fixed to the values from the Wilson-Devinney code solution. Initial rotational velocities were set to the values corresponding to synchronous rotation, which is common in these types of binaries. For the metallicity $[$M/H$]$ we started the search around the solar value. The input values for the microturbulent velocity ($\xi$) were estimated using published correlations with $\log{g}$, and spectral types or $T_{\rm{eff}}$ \citep{Gray2001, Gray2005, Sma2014, She2019}.
The free parameters were: $[$M/H$]$, $T_{\rm{eff}}$, $\xi$, and $V_{\rm{rot}} \sin{i}$. The solution procedure was the same as we have used previously \citep{gra21}, but now we applied only the {\sl binary} module, to the spectra of both components simultaneously. We started with relatively large steps in the parameter grids to find the region close to the global minimum. Next, the parameter ranges were gradually narrowed and the sampling was made finer over several iterations to find the solution corresponding to the best-matching model. The 1$\sigma$ errors were estimated by finding the intersection of the 1$\sigma$ levels in $\chi^2$ ($\chi^2_{1\sigma}$) with the polynomial functions fitted to the minimum values of the reduced $\chi^2$ (the $\chi^2$ value normalised by the number of pixels in the spectrum minus the number of free parameters), as recommended by \citet{Tka2015}. The resulting final parameters are shown in Table\,\ref{T_atm_par}. As an example of the analysis, two parts of the observed spectra of two systems -- GW~Eri and LX~Mus -- are compared with the best-fitting synthetic spectra in Figs.\ \ref{sp_GWEri} and \ref{sp_LXMus}.
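The error estimate described above can be sketched as follows (an illustrative reconstruction, not the {\sl GSSP} code itself): a low-order polynomial is fitted to the minimum reduced $\chi^2$ values along one parameter axis, and the 1$\sigma$ bounds are taken where it crosses $\chi^2_{1\sigma}$.

```python
import numpy as np

def one_sigma_interval(param_values, chi2_minima, chi2_1sigma, deg=2):
    """Fit a degree-`deg` polynomial to the reduced-chi^2 minima and
    return the (lower, upper) parameter values where it intersects
    the 1-sigma level chi2_1sigma."""
    coeffs = np.polyfit(param_values, chi2_minima, deg)
    coeffs[-1] -= chi2_1sigma          # roots of p(x) - chi2_1sigma = 0
    roots = np.roots(coeffs)
    real = np.sort(roots[np.isreal(roots)].real)
    return real[0], real[-1]
```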
V362\,Pav was particularly difficult to analyse compared to the other systems in this work, due to the faintness of the secondary component with respect to the primary star. We therefore had to drastically limit the number of free parameters for the secondary: the metallicity was fixed to that of the primary and the microturbulent velocity was set to $\xi = 1.0$~km~s$^{-1}$ based on the Gaia-ESO iDR6 calibration (R.\ Smiljanic, 2021, private communication). This $\xi$ is close to the value expected for $T_{\rm{eff}} = 4900$\,K and $[$M/H$] \approx +0.3$\,dex. The primary was reported to be a metallic star of spectral type A2mA5-A9 \citep{hou75}. Indeed, a few elements are strongly enhanced (Ba, Y, Zr) but most elements have a solar or sub-solar abundance, which results in an average metallicity of only +0.02 dex.
The final temperatures are consistent with those derived from photometric colours, with an agreement generally better than 1$\sigma$ (compare Tables \ref{T_atm_par} and \ref{par_fi}), with the exception of V788~Cen which shows a slightly larger (positive) difference whilst maintaining the components' temperature ratio. Our sample is dominated by metallicities that are near- or slightly sub-solar, with the exception of objects containing Am stars with metallicities of order $+0.5$\,dex (see\,Table\,\ref{T_atm_par}).
In most cases, we found that the stars rotate synchronously: their measured projected rotational velocities ($V_{\rm{rot}} \sin{i}$) agree, within the 1$\sigma$ errors, with those derived from the known orbital periods and component radii (Fig.\,\ref{f_Vrot-Vsyn}). There are some oddities: the primary components of HD~32129 and V788~Cen and both components of V338~Vir rotate significantly slower, well below the synchronous velocity. A probable reason for this is that the spin and orbital axes are not aligned (small values of $\sin{i}_{\rm \,rot}$). In the case of the eccentric system V963~Cen both components rotate super-synchronously, but their rotation is a factor of $\sim$2 slower than the value expected for synchronisation at periastron.
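The synchronism test behind Fig.\,\ref{f_Vrot-Vsyn} amounts to comparing the measured $V_{\rm{rot}} \sin{i}$ with $v_{\rm syn} = 2\pi R / P_{\rm orb}$. A minimal sketch (assuming $\sin i \approx 1$, a good approximation for eclipsing systems seen nearly edge-on; the numbers in the usage are illustrative, not values from the tables):

```python
import math

R_SUN_KM = 6.957e5   # IAU nominal solar radius in km
DAY_S = 86400.0      # seconds per day

def v_sync(radius_rsun, period_days):
    """Equatorial velocity (km/s) of a star spinning synchronously
    with the orbital period: 2*pi*R / P."""
    return 2.0 * math.pi * radius_rsun * R_SUN_KM / (period_days * DAY_S)

def rotates_synchronously(vsini, vsini_err, radius_rsun, period_days):
    """True if v sin i matches the synchronous value within 1 sigma
    (sin i ~ 1 is assumed for an eclipsing pair)."""
    return abs(vsini - v_sync(radius_rsun, period_days)) <= vsini_err
```

For a solar-radius star on a 10-day orbit the synchronous velocity is about 5~km~s$^{-1}$, so a measured $v\sin{i}$ of 17~km~s$^{-1}$ (as in UW~LMi) requires a correspondingly larger radius or shorter period.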
\section{Initial photometric analysis}
\subsection{Interstellar extinction\label{red}}
We used extinction maps \citep{sch98} with the recalibration by \cite{sch11} to determine the reddening in the direction of all ten eclipsing binaries. We followed the procedure described in detail in \cite{such15}, assuming the distances from \textit{Gaia} EDR3 \citep{gaia20}. Additionally we used the three-dimensional interstellar extinction map {\sc stilism} \citep{cap17}. Finally, we adopted the average as the extinction estimate for each system.
\subsection{Colour -- temperature calibrations}
\label{temp:col}
To estimate the $T_{\rm eff}$ values of the eclipsing components we collected multi-band magnitudes of the systems. We used 2MASS \citep{cut03} as a good source of infrared photometry, and the magnitudes were converted into the appropriate photometric systems using transformation equations from \cite{bes88} and \cite{car01}. The reddening (Section~\ref{red}) and the mean Galactic interstellar extinction curve from \cite{fit07}, assuming $R_V=3.1$, were combined with light ratios extrapolated from the Wilson-Devinney code (Section~\ref{wd}) in order to determine the intrinsic colours of the components. The light ratios are given in Table~\ref{tab:lratio}. We determined the $T_{\rm eff}$ values from a number of colour--temperature calibrations for a few colours: $B\!-\!V$ \citep{alo96,flo96,ram05,gon09,cas10}, $V\!-\!J$, $V\!-\!H$ \citep{ram05,gon09,cas10} and $V\!-\!K$ \citep{alo96,hou00,ram05,mas06,gon09,cas10,wor11}. For the few calibrations having metallicity terms we assumed the metallicity derived from the atmospheric analysis (see Table~\ref{T_atm_par}). The resulting temperatures were averaged for each component and are reported in Table~\ref{tab:temp}. Usually our colour temperatures are about 1$\sigma$ lower than the temperatures derived from the atmospheric analysis (Section~\ref{temp:atm}). The errors reported are standard deviations of the sample of all temperatures derived for a given component. They include the zero-point uncertainties of the calibrations but not the uncertainties introduced by disentangling of the colours.
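The dereddening-and-averaging step can be sketched as follows. The extinction ratios below are illustrative round numbers for an $R_V=3.1$ curve (the analysis itself uses the \cite{fit07} curve), and the temperature list in the usage is hypothetical:

```python
import statistics

# Illustrative A_band / A_V ratios for an R_V = 3.1 extinction curve
# (approximate placeholder values, not the Fitzpatrick 2007 tabulation).
EXT_RATIO = {"B": 1.32, "V": 1.00, "J": 0.28, "H": 0.18, "K": 0.12}

def intrinsic_colour(m_blue, m_red, band_blue, band_red, ebv, rv=3.1):
    """Deredden an observed colour m_blue - m_red."""
    a_v = rv * ebv
    return (m_blue - m_red) - (EXT_RATIO[band_blue] - EXT_RATIO[band_red]) * a_v

def adopted_temperature(temps_from_calibrations):
    """Mean of the temperatures from all calibrations; the quoted
    error is the sample standard deviation."""
    return (statistics.mean(temps_from_calibrations),
            statistics.stdev(temps_from_calibrations))
```

Each intrinsic colour is then fed to the individual colour--temperature calibrations, and the resulting temperatures are combined with `adopted_temperature`.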
\begin{table}
\begin{centering}
\caption{Extrapolated light ratios $l_2/l_1$ of the components. }
\label{tab:lratio}
\begin{tabular}{lccccc}
\hline \hline
& \multicolumn{5}{c}{Photometric band} \\
System & $B$ & $V$ & $J$ & $H$ & $K$ \\
\hline
GW~Eri & 0.9131 & 0.9281 & 0.9514 &0.9568& 0.9573 \\
HD~32129 & 0.1110 & 0.1404 & 0.2025& 0.2289 &0.2342 \\
UW~LMi & 0.8668 & 0.8751& 0.8877 & 0.8916& 0.8926 \\
QR~Hya & 0.8048 & 0.8230 & 0.8514 & 0.8603 & 0.8628 \\
V788~Cen &0.2588 & 0.2850 & 0.3321 & 0.3444 &0.3462\\
V338~Vir &2.0922 & 2.1555 & 2.2702 & 2.3036 &2.3123\\
V963~Cen &0.9753 & 0.9731 & 0.9698 & 0.9690 &0.9686\\
LX~Mus &1.1087 & 1.1005 & 1.0878 & 1.0833 &1.0832\\
V362~Pav &0.0068 & 0.0143& 0.0535 & 0.0820 & 0.0857\\
CQ~Ind &0.4714 & 0.5098 &0.5754 &0.5966& 0.6042 \\
\hline
\end{tabular}
\end{centering}
\end{table}
\begin{table}
\begin{centering}
\caption{Temperatures derived from intrinsic colours of components. }
\label{tab:temp}
\begin{tabular}{lcc}
\hline \hline
System & \multicolumn{2}{c}{Effective temperature (K)} \\
& Primary & Secondary \\
\hline
GW~Eri & $8210\pm140$ & $8125\pm140$ \\
HD~32129 & $6705\pm59$ &$5745\pm61$ \\
UW~LMi & $6035\pm95$ & $6000\pm97$ \\
QR~Hya & $5840\pm47$ &$5760\pm50$ \\
V788~Cen & $7725\pm86$& $7225\pm67$\\
V338~Vir &$6545\pm54$&$6375\pm48$ \\
V963~Cen &$5770\pm45$&$5780\pm45$ \\
LX~Mus &$6465\pm52$& $6500\pm53$\\
V362~Pav &$8180\pm100$& $4860\pm90$\\
CQ~Ind &$6400\pm67$&$6080\pm55$ \\
\hline
\end{tabular}
\end{centering}
\end{table}
\subsubsection{Adopted values}
Precise determination of the $T_{\rm eff}$ values is very important in our approach because we did not adjust the limb darkening coefficients whilst fitting the light curves. Instead, these coefficients were automatically calculated for a given set of surface atmospheric parameters ($T_{\rm eff}$, $\log{g}$) using tables from \cite{VHa93}. Surface gravities are well determined internally within the Wilson-Devinney code, but to set the $T_{\rm eff}$ scale we needed external information. The $T_{\rm eff}$ scale was set by fixing the surface $T_{\rm eff}$ of the primary star, $T_1$, to the average of two previous $T_{\rm eff}$ determinations (Sections \ref{temp:atm} and \ref{temp:col}). The adopted $T_1$ in all cases is well within the 1$\sigma$ uncertainty of both $T_{\rm eff}$ determinations. Subsequently the $T_{\rm eff}$ of the secondary, $T_2$, was scaled according to $T_1$ during the light curve analysis with the WD code.
\section{Analysis of combined light and radial velocity curves \label{wd}}
For the analysis of the eclipsing binaries we made use of the Wilson-Devinney program (WD) version 2007 \citep{wil71,wil79,wil90,van07}\footnote{\texttt{ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2007/}}, equipped with a Python wrapper. When the work on the paper was well advanced we learned that a newer version of the WD code \citep[][LCDC2015, version 2019]{vHam14} directly includes the TESS bandpass\footnote{\texttt{ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2015/}}. We decided to use this new version, which also allows for a higher grid resolution over the stellar surfaces and for which a specific Python GUI\footnote{\texttt{https://github.com/Varnani/pywd2015-qt5}} was written \cite[][PyWD2015]{Guz20}.
\subsection{Initial parameters}
We fixed the $T_{\rm eff}$ of the primary component during analysis to the average of the $T_{\rm eff}$ values derived from the colour--temperature calibrations and the atmospheric analysis. In all cases those two determinations are consistent to within 1$\sigma$. The standard albedo and gravity brightening coefficients for convective stellar atmospheres were chosen. The stellar atmosphere option was used (\verb"IFAT1=IFAT2=1"), radial velocity tidal corrections were automatically applied (\verb"ICOR1=ICOR2=1") and no flux-level-dependent weighting was used. We assumed synchronous rotation for both components in all systems. Both the logarithmic \citep{kli70} and square root \citep{dia92} limb-darkening laws were used, with coefficients tabulated by \cite{VHa93}.
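The two limb-darkening laws used above have the standard forms $I(\mu)/I(1) = 1 - x(1-\mu) - y\,\mu\ln\mu$ (logarithmic) and $I(\mu)/I(1) = 1 - x(1-\mu) - y(1-\sqrt{\mu})$ (square root). A minimal sketch with placeholder coefficients (in the actual analysis $x$ and $y$ are taken from the \cite{VHa93} tables for the given $T_{\rm eff}$ and $\log g$):

```python
import math

def ld_logarithmic(mu, x, y):
    """Logarithmic law: I(mu)/I(1) = 1 - x*(1-mu) - y*mu*ln(mu)."""
    return 1.0 - x * (1.0 - mu) - y * mu * math.log(mu)

def ld_square_root(mu, x, y):
    """Square-root law: I(mu)/I(1) = 1 - x*(1-mu) - y*(1-sqrt(mu))."""
    return 1.0 - x * (1.0 - mu) - y * (1.0 - math.sqrt(mu))
```

Both laws reduce to unity at disc centre ($\mu = 1$) and differ mainly in how steeply the intensity drops towards the limb.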
\subsection{Fitting model parameters}
\label{sub:fitting}
With the WD binary star model we fitted simultaneously the available light curves and radial velocity curves of both components using the grid fineness parameters \verb"N1=N2"=60. In cases in which one of the stars was significantly larger than its companion it was necessary to use a higher numerical precision, and we set \verb"N"=90 for the larger star. We assumed a detached configuration in all models and a simple reflection treatment (\verb"MREF=1", \verb"NREF=1"). Each observable curve was weighted only by its {\it rms} through comparison with the calculated model curve. We adjusted the following parameters during the analysis: the orbital period $P_{\rm orb}$, the epoch of the primary eclipse $T_0$ in cases of circular orbits, the phase shift $\phi$ when orbits were significantly eccentric, the semimajor axis $a$, the mass ratio $q$, both systemic radial velocities $\gamma_{1,2}$, the eccentricity $e$, the argument of periastron $\omega$, the orbital inclination $i$, the temperature of the secondary $T_2$, the modified Roche potentials $\Omega_{1,2}$ -- corresponding to the fractional radii $r_{1,2}$ -- and the luminosity parameter $L_1$. Additionally, we fitted for third light $l_3$. The best models were chosen according to their reduced $\chi^2$ and a lack of significant systematic trends in the residuals. The initial temperatures of the components were set according to their individual colours (see Section~\ref{temp:col}), then adjusted according to the results of the atmospheric analysis of the disentangled HARPS spectra. Usually we took a simple mean of the colour and spectroscopic temperatures of the primary to set the temperature scale of the model. In cases when the secondary was significantly brighter than the primary we set the scale using the secondary's temperature.
The statistical (formal) errors on the fitted parameters were estimated with the Differential Correction subroutine of the WD code. We assumed very conservative errors on parameters: we multiplied the formal errors by a factor of 3. The model synthetic light curves compared against photometric observations are presented in Fig.~\ref{fig:light} for all ten systems with the rms of the best solution given. The radial velocity solutions plotted against observed velocimetry are presented in Fig.~\ref{fig:rv}.
The model parameters for all systems are summarized in Table~\ref{tab_par_orb}. The systemic velocity is not corrected for the gravitational redshift or convective blueshift. The absolute dimensions of the systems were calculated using nominal astrophysical constants advocated by IAU 2015 Resolution B3 \citep{prsa16} and are presented in Table~\ref{par_fi}.
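As a sketch of how the absolute dimensions follow from the fitted spectroscopic elements and the IAU nominal constants (a simplified reconstruction, not the WD code's internal routine; the example values in the test are hypothetical):

```python
import math

# IAU 2015 Resolution B3 nominal constants
GM_SUN = 1.3271244e20   # nominal solar mass parameter, m^3 s^-2
R_SUN = 6.957e8         # nominal solar radius, m
DAY_S = 86400.0

def absolute_dimensions(period_days, k1, k2, ecc, incl_deg):
    """Orbit size a (R_sun) and masses M1, M2 (M_sun) from the orbital
    period, RV semi-amplitudes K1, K2 (km/s), eccentricity and
    inclination of a double-lined eclipsing binary."""
    p = period_days * DAY_S
    sin_i = math.sin(math.radians(incl_deg))
    k_tot = (k1 + k2) * 1e3                              # m/s
    a = k_tot * p * math.sqrt(1.0 - ecc**2) / (2.0 * math.pi * sin_i)
    m_tot = a**3 * (2.0 * math.pi / p)**2 / GM_SUN       # Kepler's third law
    m1 = m_tot * (k2 * 1e3) / k_tot                      # q = M2/M1 = K1/K2
    m2 = m_tot * (k1 * 1e3) / k_tot
    return a / R_SUN, m1, m2
```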
\subsection{Analysis details and results}
\label{WD_results}
\subsubsection{GW~Eri}
This is the most massive and hottest eclipsing binary in our sample, and is a triple system. It forms a common proper motion pair with HD 26590, which lies at a distance of about 60 arcsec. The TESS light curve shows two partial eclipses of moderate and similar depth. The orbit is circular and the components are similar in their physical properties. The similarity of the components and the partial eclipses lead to a strong correlation between their radii. In order to break it we determined a $V$-band light ratio from the available HARPS spectra. Strangely, the spectroscopic light ratio is significantly different at the two quadratures: spectra taken between orbital phases 0.15 and 0.45 show the secondary being consistently brighter than the primary by about 3\%, while all spectra taken between orbital phases 0.55 and 0.97 show the opposite effect, with the primary being 5\% brighter. The reason for this difference is unclear and it may be connected with the metallic nature of both components. In the subsequent analysis we did not constrain our models with the light ratio. The final solution needs a small amount of third light in order to remove systematic residuals in both eclipses. No very close optical companions to GW Eri are known. It is possible that the detected third light is stray light from the nearby HD 26590, which is only 2 mag fainter than the system.
Both stars rotate synchronously and the estimated spectral types of the components are A4m\,V+A4m\,V but due to the strongly metallic nature of both components they are somewhat uncertain: both stars are hotter than expected for their masses. There is a significant spread of spectral type classifications in the literature: \cite{hou88} give A1mA2-A8, \cite{abt95} give kA2hA5VmF2 while \cite{AbL77} give kA1hA3VmA3 + kA1hA3VmA3. Although the spectra of both components are rich in metallic lines, the relatively fast rotation means that radial velocity measurements are only precise to about 140~m~s$^{-1}$. The $rms$ of the radial velocity solutions are 137 and 154 m~s$^{-1}$, so are fully consistent with measurement errors. We do not detect any radial velocity trends during the timespan of 12 years covered by the HARPS spectra.
\subsubsection{HD~32129}
This well-detached system shows a distinct total primary eclipse and a shallow partial secondary eclipse, due to a significant orbital eccentricity ($e\sim 0.44$) and a high orbital inclination ($i\sim89$ deg). The relatively long phase of totality during the primary eclipse indicates a large difference in size between the two components. The out-of-eclipse parts of the TESS and \textit{Kepler} light curves are flat, with a tiny flux modulation probably due to small starspots. The \textit{Kepler} K2 long-cadence light curve was rectified using the method applied to FM~Leo in our previous work \citep{gra21}. We tried to solve the TESS and K2 light curves simultaneously with the radial velocity curves. However, because of the long time interval covered by both types of data (the spectra were taken over $\sim 1350$ days and the K2 and TESS epochs are separated by $\sim 1600$ days), apsidal motion has a significant effect on the times of eclipse and the shape of the radial velocity curves. We therefore initially solved the light curves and radial velocities separately. After several iterations we could find common orbital parameters (eccentricity $e$, longitude of periastron $\omega$ and rate of periastron advance $d\omega/dt$) and thus carry out a full simultaneous solution. The apsidal motion has a rate of $1.9\cdot10^{-4}$ deg cycle$^{-1}$, which corresponds to an apsidal motion period of $\sim$85\,000 years.
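The rate-to-period conversion is simply the number of orbital cycles needed for the line of apsides to turn through 360 deg, multiplied by the orbital period. A quick check (the orbital period of $\sim$16.4~d used below is not quoted in this excerpt; it is the value implied by the two numbers above, so it is an assumption for illustration only):

```python
def apsidal_period_years(domega_deg_per_cycle, p_orb_days):
    """Apsidal-motion period: number of orbital cycles for the line
    of apsides to complete 360 degrees, converted to years."""
    n_cycles = 360.0 / domega_deg_per_cycle
    return n_cycles * p_orb_days / 365.25
```

With the quoted rate of $1.9\cdot10^{-4}$ deg cycle$^{-1}$ this indeed returns $\sim$85\,000 yr.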
In order to properly fit the shape of the primary eclipse we also had to adjust the third light in both filters (K2, TESS). The calculated third light contributes about 1\% of the flux in the K2 band and 2\% in the TESS band, so it is redder than the light from the eclipsing system. If we assume that it comes from a physically bound companion to the system, i.e.\ at a common distance, it would correspond to absolute magnitudes of $M_{Rc}\!=\!7.4$ mag and $M_{Ic}\!=\!6.2$ mag\footnote{For HD 32129 the observed magnitudes are $R_C=8.77$ mag (K2), $I_C=8.49$ mag (TESS) and the reddening is $E(B-V)=0.11$ mag.}. Both numbers are consistent with a K6/K7 dwarf. Its expected contribution to the total light in the $K$-band is 5.5\%. The final fit to the TESS light curve (Fig.~\ref{fig:light}) is very good in the case of the secondary eclipse, but there are small systematic deviations (up to 300~ppm) in the primary eclipse near fourth contact. The radial velocity fit (Fig.~\ref{fig:rv}) is fully acceptable, with minor residuals for the primary (the $rms$ is only 30~m~s$^{-1}$) and significantly larger ones for the secondary (78~m~s$^{-1}$). The secondary's larger $rms$ is a result of it being significantly fainter: in the $V$-band it is seven times fainter than the primary.
The primary star is much more massive, hotter and larger than the companion. The secondary is a solar-twin star regarding its size, mass and temperature, however its surface composition is more metal rich. Both components rotate very slowly ($v\sin{i}\sim$~1--2~km~s$^{-1}$). The estimated spectral type is F3\,V + G2\,V.
\subsubsection{UW~LMi}
\label{uwlmi}
This is the only system in the sample which has no light curve based on space photometry. Instead we used ground-based Str{\"o}mgren photometry in the $uvby$ bands. However, its precision is almost an order of magnitude lower than that of TESS or K2 photometry for targets of similar brightness. Fortunately the use of four different bands mitigates this effect, in particular allowing a secure determination of the temperature ratio of the components. The system shows two relatively deep, partial eclipses of similar depth. The orbit appears to be circular and the out-of-eclipse parts of the light curves are practically flat. Because of the lower precision of the light curves we exceptionally used less dense grids on the stellar surfaces for the WD analysis, with \verb"N1=N2"=50.
In order to improve the solution we applied a spectroscopic light ratio in the $V$-band as an additional constraint. It is well determined from the HARPS spectra: $L_2/L_1=0.882\pm0.008$. We carried out a simultaneous solution of the $uvby$ data and the radial velocity curves. The calculations converged to a solution with the primary being slightly more massive, hotter and larger than the companion. The light ratio in the $V$-band predicted by the model is 0.880, in perfect agreement with the spectroscopic value. We initially included third light as a fitted parameter, but the WD code always returned small negative values with no improvement in the residuals, so we set $l_3=0$ in the main analysis. The final solution in the $y$-band is presented in Fig.~\ref{fig:light} and shows a good fit to the observations, free of systematic deviations. The $rms$ gradually decreases with wavelength, from $7.9$~mmag in $u$ to $5.5$~mmag in $y$. The solutions of the radial velocity curves are of high quality: the $rms$ is 37~m~s$^{-1}$ for the primary and 54~m~s$^{-1}$ for the secondary. The rotation of both components is fully synchronized with the orbital period: $v\sin{i}\approx v_{\rm syn}\approx 17$~km~s$^{-1}$. We do not find evidence for significant period changes in the system: we find consistent values of $P_{\rm orb}$ and $T_0$ in the simultaneous solution and from the radial velocities solved separately, despite the mean epoch of the velocimetry being 7000~d ($\sim$1800 orbital cycles) later than that of the photometry.
The estimated spectral type is G0\,V + G0\,V. This is fully consistent with the classifications of G0\,V given by \cite{upg70} and G0\,V + G1\,V given by \cite{Gri01}.
\subsubsection{QR~Hya}
A WD model of the system was obtained by fitting the TESS light curve from sector 9 and the HARPS velocimetry. The system shows partial eclipses of almost equal depth, and the secondary minimum is at orbital phase 0.5. We first fitted a model with a circular orbit and no third light. The orbital period was assumed to be constant. The iterations easily converged to a solution where the primary star is the slightly hotter, larger and more massive component. This solution gives a $V$-band light ratio of $L_2/L_1=0.82$ which is fully consistent with the spectroscopic light ratio of $0.80\pm0.02$. However, some systematic residuals of up to 1000 ppm versus the best fit remained in both eclipses. Including the orbital eccentricity and the argument of periastron as free parameters allowed a significantly better solution to be obtained, both for the light and radial velocity curves, although the resulting eccentricity is very small ($e\sim0.0001$). Inclusion of third light as a free parameter does not improve the solution and the WD code always returns small negative values of $l_3$. The final solution still shows some small systematic residuals (see Fig.~\ref{fig:light}) during eclipses but they are likely an artefact of removing trends in the TESS light curve caused by spot activity.
The residuals of the radial velocity solution show an $rms$ of about 60~m~s$^{-1}$ for both components. They are consistent with the precision of the radial velocity determination: the typical S/N of the spectra is not high (Table~\ref{tab:harps}) and the BF profiles are slightly rotationally broadened ($v\sin{i}\approx13$~km~s$^{-1}$). The rotation is fully synchronous and the tidal deformation of the components, defined as $(r_{\rm point}-r_{\rm pole})/r_{\rm mean}$, is just 0.1\%. Both components are slightly-evolved solar-type stars with a practically solar metallicity of [M/H] $=-0.01$ dex.
We estimated the spectral type of the system as G1\,V + G2\,V based on the calibration by \cite{pec13}. This is fully consistent with the spectral type G1\,V reported by \cite{hou82} based on photographic plates as well with the G1\,V + G2\,V based on high-resolution spectroscopy \citep{cut02}.
\subsubsection{V788 Cen}
\label{v788cen}
The TESS light curves from sectors 10 and 37 were combined with the radial velocity curves in order to obtain a simultaneous solution with the WD code. The light curves show two shallow, partial eclipses of slightly unequal depth. The orbital phase of the secondary minimum is exactly 0.5 and the ephemeris given by \cite{Cou74} accurately predicts the eclipse times in the TESS data. We assumed a circular orbit and constant orbital period during the first stage of our analysis. The iterations with WD quickly converged to a solution with the hotter and more massive primary being almost twice as large as the secondary. The resulting light ratio in the TESS passband, $L_2/L_1=0.3$, corresponds to a $V$-band light ratio of about 0.29, in good agreement with the observed intensity of absorption lines in the spectra ($\sim$0.30). The model we obtained has a small but non-negligible third light, $l_3\sim0.02$.
Further investigation of the residuals revealed abrupt changes in flux in the TESS light curves of instrumental origin, reaching 0.05\% of the total flux, and also small flux trends lasting up to a few days due to spot activity and/or slow instrumental drifts. We corrected for them and repeated the fitting procedure, obtaining a significant decrease of the residuals in both eclipses. However, the third light was persistent. If this light came from a physically bound close tertiary component, that component would have $M_{Ic}\!=\!4.9$ mag, corresponding to a K0\,V spectral type. The typical precision of an individual radial velocity determination is $\sim 65$~m~s$^{-1}$ for the primary and $\sim 180$~m~s$^{-1}$ for the secondary. This precision is in accordance with the $rms$ of the primary's residuals (Fig.~\ref{fig:rv}), but the secondary's residuals are somewhat large. This may suggest some non-radial pulsations on the surface of the secondary star.
The temperature difference $T_1-T_2$ inferred from light curve solution is $414\pm30$ K which is fully consistent with the temperature difference derived from spectroscopy (Section~\ref{temp:atm}) $360\pm110$ K. The components differ in mass and radius, with the primary being a significantly more evolved star. The metallicity is super-solar and the primary rotates slower than synchronous. We estimated the spectral type as A7\,IV + A9\,V, which is somewhat later than the types reported before: A3m by \cite{And77} and A2mA5-F2 by \cite{hou78}.
\subsubsection{V338 Vir}
The components of this system differ significantly in size, but their temperatures are only slightly different. The more massive star is cooler and is eclipsed during the secondary minimum. The orbit is circular and the eclipses are of moderate depth. The out-of-eclipse part of the K2 light curve is not flat but shows a small ellipsoidal effect with an amplitude of $\sim$0.002~mag. The K2 long-cadence light curve was rectified using the method applied to FM Leo in our previous work \citep{gra21}. We carried out a simultaneous solution of the photometry and velocimetry, and as the secondary is the more luminous star we adjusted the temperature $T_1$ instead of $T_2$. Because the eclipses are partial and shallow we included the spectroscopic light ratio as an additional constraint. From a number of HARPS spectra we determined a $V$-band light ratio of $L_2/L_1=2.15\pm0.05$, and we forced the WD solutions to reproduce this light ratio to within its error bars.
The final light curve solution is presented in Fig.~\ref{fig:light}. It shows a number of instrumental trends and effects, which were only partially removed during the detrending procedure in order to preserve the out-of-eclipse proximity effects. The overall fit is very good, especially for the primary minimum; only in the secondary minimum are small systematic deviations of up to 200 ppm present. Inclusion of third light does not improve the fit. The $rms$ of the fits to the radial velocities are 46~m~s$^{-1}$ and 51~m~s$^{-1}$, consistent with the precision of the individual radial velocity measurements. Both stars rotate sub-synchronously and their metallicity is sub-solar. The secondary is an evolved star, close to the subgiant phase. The estimated spectral type of the system is F5\,V + F6\,V-IV, in agreement with the classification of F5\,V given by \cite{hou99}.
\subsubsection{V963 Cen}
\label{v963cen}
The system shows a relatively deep primary eclipse and a shallow secondary eclipse, both partial. The orbit is significantly eccentric ($e\sim0.42$). To obtain a WD model of the system we combined the TESS photometry with the Str{\"o}mgren $uvby$ photometry and the HARPS velocimetry. It turned out that obtaining a fully consistent simultaneous solution was practically impossible, for a number of reasons. First, the system shows apsidal motion with a period of about 55\,000~yr, complicating the analysis of data obtained over a long time interval. The mean epochs of the observations are JD 2451000, 2456400 and 2459350 for the $uvby$ photometry, the velocimetry and the TESS photometry, respectively. Second, the analysis based on the velocimetry or the TESS photometry leads to different orbital eccentricities: the photometric one is 0.4237, whilst the spectroscopic one is 0.4218; the difference is more than 6$\sigma$. Third, the analysis of the TESS photometry alone leads to a temperature ratio $T_2/T_1=0.990$, while the analysis of the disentangled spectra, as well as the $uvby$ photometry solved alone, gives $T_2/T_1>1$. Fourth, a solution derived from the $uvby$ photometry gives a significantly larger orbital inclination than one based on the TESS photometry.
In order to find a consistent solution we fitted the three blocks of data separately, with the aim of obtaining as many consistent orbital and photometric parameters as possible. Full agreement was found for $P_{\rm orb}$, $\Omega_1$, $\Omega_2$ and $e$. The third and fourth problems were largely mitigated by adjusting the third light: it turned out that the TESS light curve has quite a large negative $l_3$, assuming that the $uvby$ photometry has zero third light. The second problem was solved by finding a compromise eccentricity of 0.4223, while the first problem was overcome by adjusting $\omega$ and $\phi$ separately in each block of data assuming the same eccentricity. The solution of the TESS light curve is presented in Fig.~\ref{fig:light}. The fit shows small but noticeable systematic deviations during the secondary eclipse -- these residuals can be removed by increasing the eccentricity, but at the cost of degrading the radial velocity solution. The fit to the velocimetry is presented in Fig.~\ref{fig:rv}. The $rms$ of the residuals is 55~m~s$^{-1}$ for the primary and 37~m~s$^{-1}$ for the secondary.
The components of the system are very similar to each other: they differ in surface temperature by only $\sim$10~K, in mass by 0.5\%, and in radius by 1.7\%. The primary is the more massive, larger and cooler of the two. The components rotate about two times faster than synchronous rotation, but two times slower than the synchronous value at periastron. The estimated spectral type is G2\,V-IV + G2\,V-IV, which corresponds well with the classification of G2\,V given by \cite{hou75}.
\subsubsection{LX Mus}
The system consists of two very similar stars on an eccentric orbit. The eclipses are partial and rather shallow, and the out-of-eclipse parts of the TESS light curve are flat with only tiny modulations due to some small stellar spots. The simultaneous solution of the photometry and velocimetry quickly converged to a solution with a slightly less massive, cooler and smaller star eclipsed during the deeper primary minimum. We also included third light in the fit, but adjusting this parameter does not reduce the residuals, so we subsequently assumed $l_3=0$. The apsidal motion is not detected with the observations used in our analysis: the data likely cover too short a time interval to do so. We also do not detect any orbital period changes. The predicted light ratio in the $V$-band is $L_2/L_1=1.11$, which is slightly inconsistent with the spectroscopic light ratio of $1.06\pm0.02$. Forcing the WD code to reproduce the spectroscopic light ratio worsens the fit and produces small but noticeable systematic deviations in both eclipses. To take this inconsistency into account we enlarged the errors on the radii by a factor of 1.5.
The $rms$ of radial velocity residuals are very small: 28~m~s$^{-1}$ and 25~m~s$^{-1}$ for the primary and the secondary, respectively. Those values are consistent with the precision of the radial velocity determinations: the numerous and sharp lines allow for a very precise determination of the broadening function profile. Both components rotate slightly slower than synchronous. The estimated spectral type of the system is F5\,V + F5\,V, which is in perfect agreement with the F5\,V reported by \cite{hou75}.
\subsubsection{V362 Pav}
The system consists of two very different stars on a practically circular and relatively tight orbit. The light ratio in the $V$-band is 70 and the primary completely dominates the spectrum. Fortunately the system has a favourable geometry with total eclipses, which allows the radii of both components to be precisely determined. The TESS light curve shows a noticeable ellipsoidal effect with an amplitude of $\sim$0.01~mag. The secondary eclipse is much shallower than the primary eclipse, indicating a large temperature difference between the components. The radial velocity curves show a large difference in the components' masses, with a mass ratio of $q\sim0.45$.
Simultaneously fitting the photometry and velocimetry proved to be difficult. Although the fits converged quickly to a solution, inspection of the light curve residuals revealed that the model gave large systematic effects during eclipses (especially the primary) and a sinusoidal-like pattern of residuals outside of eclipses. We included third light as an adjusted parameter and found that it improved the fit, but at the expense of a significantly negative value: $l_3=-0.054\pm0.005$. The systematic residuals during the secondary eclipse almost vanished but remained (albeit much diminished) in the primary eclipse. We decided to also adjust the albedo parameters for both components, finding that the albedo of the primary was very low and consistent with zero, whilst that of the secondary was close to the value of 0.5 expected for a star with a convective atmosphere. However, this did not completely remove the systematic residuals, especially close to the second and third contacts of the primary eclipse.
We then turned our attention to the out-of-eclipse residuals. The strong difference in brightness between the components and the relatively high radial velocity semiamplitude of the primary suggested that Doppler beaming would be significant in this system. Fig.~\ref{DopBin} shows the out-of-eclipse variations of V362 Pav together with a model fit (upper panel). The lower panel shows binned residuals from the model plotted against the theoretical Doppler beaming effect (a line). For calculating the effect we used equation 9 from \cite{plac19}. We estimated the beaming factor for the primary, assuming $T_{\rm eff}=8200$~K and $\log{g}=4.0$, to be $B=2.65\pm0.15$ from fig.~2 in \cite{plac19}. The secondary gives practically no contribution to the effect so we neglected it. The theoretical Doppler beaming accounts for about half of the observed sinusoidal pattern of the residuals. We subtracted that effect from the TESS light curve and repeated the fitting procedure, this time obtaining a much better agreement with the out-of-eclipse light changes and also a slight improvement in the primary eclipse. Finally we adjusted the orbital eccentricity and the argument of periastron, which enabled a further decrease of the residuals in both the light and radial velocity curves.
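The order of magnitude of the beaming signal can be sketched with the standard first-order relation $\Delta F/F \approx B\,K_1/c$ (a simplification, not a reproduction of equation 9 of the cited work), using $B=2.65$ and the $K_1$ of V362 Pav from this work:

```python
# Rough estimate of the Doppler-beaming semi-amplitude for V362 Pav.
# First-order relation Delta F / F ~ B * K1 / c; B and K1 are the values
# adopted in this work. This is an illustrative sketch, not the fitted model.
C_KM_S = 299_792.458            # speed of light [km/s]

def beaming_semiamplitude(B, K1_km_s):
    """Fractional flux semi-amplitude of the beaming signal."""
    return B * K1_km_s / C_KM_S

df_f = beaming_semiamplitude(2.65, 66.555)
dmag = 1.0857 * df_f            # fractional flux -> magnitudes (small-signal)
print(f"dF/F = {df_f:.2e}  (~{dmag*1e3:.2f} mmag)")
```

The resulting $\sim$0.6~mmag semi-amplitude is indeed far smaller than the $\sim$0.01~mag ellipsoidal effect, yet detectable in the TESS residuals.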
The solutions to the radial velocity curves are fully satisfactory, with an $rms$ of the radial velocity residuals of 83~m~s$^{-1}$ for the primary but a much larger value of $\sim660$~m~s$^{-1}$ for the secondary. Although the primary is a mid A-type star, it is also metallic-lined. This allows a relatively precise determination of its radial velocities despite the large rotational broadening of the lines ($v\sin{i}\sim40$~km~s$^{-1}$). The secondary, on the other hand, is practically invisible in the spectra, even in those with the highest S/N of 140. This explains the low precision of the individual radial velocities and the large $rms$ for the secondary.
The primary is slightly distorted with a tidal deformation of 1.0\%. We estimated the spectral types as A4\,V + early K-type dwarf.
\subsubsection{CQ Ind}
This system consists of two solar-type stars with components significantly different in physical appearance: the slightly evolved primary is a little more massive and hotter than the unevolved secondary. At first we used only the TESS photometry from sector 27 combined with HARPS velocimetry in our analysis. Although the secondary eclipse is almost perfectly placed at orbital phase 0.5, the system possesses significant eccentricity: the secondary eclipse is nearly twice as long as the primary eclipse. The primary eclipse has a short phase of totality, when the secondary transits across the primary's stellar disc, while the secondary eclipse is partial because it occurs when the stars are further apart from each other. It quickly became clear that apsidal motion is also significant and is influencing the combined fit, because the radial velocity data cover more than 4~yr. In order to properly account for the apsidal motion we included in the analysis the first two eclipses observed by TESS in sector 1. The timespan between these two sectors is about 700~d ($\sim$80 orbital cycles), but with the precision of the TESS photometry it is possible to determine the rate of apsidal motion.
We did not solve the photometry and velocimetry simultaneously, but instead applied an iteration scheme: using photometry we determined the apsidal motion rate ($d\omega/dt$), the position of the orbit ($i$, $\omega$, $e$), the relative sizes of stars ($r_{1,2}$) and the relative temperatures ($T_{1,2}$), then we solved the radial velocity curves to get the semimajor axis of the system $a$, the mass ratio $q$ and revised values of $\omega$ and $e$. We repeated these steps until we obtained a satisfactory consistency in the orbital parameters ($\omega$ and $e$) derived from the photometry and velocimetry separately. Fig.~\ref{fig:light} shows the best light curve fit for CQ Ind where the points denote data only from sector 27. The rate of apsidal motion is slow, $2.9\times10^{-4}$ deg cycle$^{-1}$, corresponding to an apsidal period of $\sim$30\,000 years.
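The quoted per-cycle rate, apsidal period and the per-year rate listed in the parameter table are mutually consistent, as a short arithmetic check (not part of the original pipeline) shows:

```python
# Convert the fitted apsidal rate of CQ Ind from deg/cycle to an apsidal
# period and a deg/yr rate. P_orb and the rate are values from this work.
P_ORB_D = 8.9737116              # orbital period [days]
RATE_DEG_PER_CYCLE = 2.9e-4      # fitted apsidal-motion rate [deg/cycle]

cycles_per_rev = 360.0 / RATE_DEG_PER_CYCLE        # cycles per apsidal revolution
U_yr = cycles_per_rev * P_ORB_D / 365.25           # apsidal period [yr], ~30,000
rate_deg_per_yr = RATE_DEG_PER_CYCLE * 365.25 / P_ORB_D  # ~0.0118 deg/yr
print(f"U = {U_yr:.3g} yr, rate = {rate_deg_per_yr:.4f} deg/yr")
```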
We adjusted the third light, as its inclusion reduces the residuals during the eclipses; however, its value is slightly negative ($l_3=-0.006$). We also adjusted the albedo of the secondary component and obtained $A_2 = 0.42\pm0.06$, in agreement with the value expected for fully convective stellar atmospheres. The estimated spectral type is F6\,V + F8\,V, in good agreement with the F7\,V reported by \cite{hou78}. The metallicity is solar and both components rotate synchronously. The radial velocity solution presented in Fig.~\ref{fig:rv} shows very small residuals, with an $rms$ of 32~m~s$^{-1}$ for the primary and 25~m~s$^{-1}$ for the secondary.
\begin{sidewaystable*}
\tiny
\caption{\tiny Model parameters from the Wilson-Devinney code}\label{tab_par_orb}
\centering
\begin{tabular}{@{}rlccccccccccccl@{}}
\hline\hline
&&&&&&&&&&&&&&\\
ID & $P_{\rm orb}/T_0$ & $q=\frac{M_2}{M_1}$ & $a$ & $\gamma$ & $e$ & $\omega$ & $\frac{d\omega}{dt}$ & $K$ & $i$ & $T_{\rm eff}$ & $\Omega$ & $r$ & $\frac{L_2}{L_1}$ & $l_3$\\
& (days/BJD) & & ($R_\odot$) & (km s$^{-1}$) & & (deg) & (deg year$^{-1}$) & (km s$^{-1}$) & (deg) & (K) & & & & \\
&&&&&&&&&&&&&&\\
\hline
GW~Eri p & 3.6586647(8) & 0.9906(19) & 15.272(15) & 32.32(13) & 0 & - & - & 104.51(14) & 83.933(51) & 8270 & 9.353(71) & 0.11971(101) & 0.9417 & 0.028(3) \\
s & 2459153.1534 & & & 32.32(13) & & & & 105.49(15) & & 8180(2) & 9.436(71) & 0.11760(98) & & \\
HD~32129 p & 16.4120349(7) & 0.6945(7) & 37.407(26) & 4.867(15) & 0.4374(7) & 61.20(2) & 0.0042(4) & 52.546(31) & 88.797(12) & 6725 & 21.212(52) & 0.05006(13) & 0.1724 & 0.022(2) \\
s &2459481.5181 & & & 5.370(10) & & & & 75.656(68) & & 5775(3) & 28.24(13) & 0.02640(13) & 0.1571\tablefootmark{a} & 0.009(4)\tablefootmark{a} \\
UW~LMi p & 3.87431667(14) & 0.9807(9) & 13.7086(63) & $-$33.930(24) & 0 & - & - & 88.480(49) & 86.621(44) & 5970 & 11.21(11) & 0.09778(100) & 0.8804\tablefootmark{b} & 0\tablefootmark{b} \\
s & 2450854.7741 & & & $-$33.897(26) & & & & 90.223(65) & & 5935(2) & 11.56(14) & 0.09300(118) & 0.8645\tablefootmark{c} & 0\tablefootmark{c} \\
QR Hya p & 5.0058710(4) & 0.9701(11) & 15.9370(87) & 10.534(18) & 0.0001(1) & 217(11) & - & 79.128(67) & 86.087(7) & 5880 & 13.383(39) & 0.08059(25) & 0.8378 & 0 \\
s & 2458559.4578 & & & 10.603(17) & & & & 81.567(57) & & 5801(2) & 13.873(58) & 0.07547(34) & & \\
V788 Cen p& 4.96637676(9) & 0.7881(14) & 18.6193(18) & $-$18.700(18) & 0 & - & - & 82.918(51) & 82.821(41) & 7820 & 7.613(22) & 0.14682(49) & 0.3126 & 0.021(2) \\
s & 2458578.9929 & & & $-$18.919(24) & & & & 105.21(17) & & 7406(3) & 10.046(58) & 0.08821(58) & & \\
V338 Vir p & 5.9853360(16)& 1.1586(11) & 19.6047(89) & 2.760(13) & 0 & - & - & 88.572(50) & 84.747(33) & 6582(3) & 13.952(84) & 0.07819(51) & 2.2116\tablefootmark{a} & 0\tablefootmark{a} \\
s & 2457228.4125 & & & 2.635(13)& & & & 76.446(55) & & 6425 & 10.535(38) & 0.12064(47) & & \\
V963 Cen p & 15.269303(7) & 0.9945(7) & 33.463(30) & $-$30.450(13) & 0.4223(16)& 140.10(3) & 0.0065(5) & 60.920(35) & 87.255(32) & 5800 & 24.869(81) & 0.04320(15) & 0.9714 & $-$0.028(4) \\
s & 2459356.3465 & & & $-$30.448(12) & & & & 61.257(24) & & 5808(6) &25.15(11) & 0.04248(20) & 0.9731\tablefootmark{b} & 0\tablefootmark{b} \\
LX Mus p & 11.750601(2) & 1.0082(3) & 30.2789(46) & $-$4.860(11) & 0.1975(2) & 148.59(6) & - & 66.708(15) & 87.603(10) & 6525 & 23.77(16) & 0.04442(28) & 1.0941 & 0 \\
s & 2459334.453 & & & $-$4.881(11) & & & & 66.163(12) & & 6556(2) & 23.11(12) &0.04610(23) & & \\
V362 Pav p & 2.7484368(5) & 0.4530(19) & 11.655(32) & $-$0.83(6) & 0.0014(4) & 283(16) & -& 66.555(81) & 84.304(35) & 8200 & 5.785(17) & 0.18817(61) & 0.0284 & $-$0.054(5) \\
s & 2458672.6411 & & & $-$0.06(25) & & & & 146.93(58) & & 4962(3) & 7.542(37) & 0.07231(44) & & \\
CQ Ind p & 8.9737116(2) & 0.8896(5)& 24.3265(78) & $-30.820$(12) &0.2764(5)& 89.66(1)& 0.0118(6) & 67.179(32) & 89.159(10)& 6440 & 18.489(46) & 0.05795(16) & 0.5460 &$-$0.006(3) \\
s & 2459046.259 & & & $-30.568$(11) & & & & 75.515(25) & & 6122(3) & 20.646(93) & 0.04632(23) & & \\
\hline
\end{tabular}
\tablefoot{\tiny Quoted uncertainties are the standard errors from the Differential Corrections subroutine combined with errors from
Monte Carlo simulations with the JKTEBOP code ver.~34.\\
In the ID column ``p'' refers to the primary and ``s'' to the secondary. The meanings of the columns are: the observed orbital period (and epoch of the primary eclipse $T_0$ given below), the mass ratio, the total semimajor axis $a=a_1+a_2$, the apparent systemic velocity of each component, the orbital eccentricity, the longitude of periastron, the rate of apsidal motion, the radial velocity semiamplitude, the orbital inclination, the effective temperature, the Roche potential, the fractional radius, the light ratio in the TESS band and the amount of third light in the TESS band.\\
\tablefoottext{a}{\textit{Kepler} band}
\tablefoottext{b}{Str{\"o}mgren $y$ band}
\tablefoottext{c}{Str{\"o}mgren $u$ band}}
\end{sidewaystable*}
\begin{table*}
\centering
\caption{Physical parameters of the stars.}
\label{par_fi}
\begin{tabular}{@{}rccccccccc@{}}
\hline \hline
ID & $M$ & $R$ & $\log{g}$ & $T_{\rm eff}$ & $L$ & $\upsilon\sin{i}$ & $[{\rm M}/{\rm H}]$ & $\varpi_{\rm phot}$ & $E(B\!-\!V)$ \\
& (M$_{\sun}$) & (R$_{\sun}$) & (dex) & (K) & (L$_{\sun}$) & (km s$^{-1}$) & (dex) & (mas) & (mag) \\
\hline
GW Eri p & 1.7936(57) & 1.828(16) & 4.168(7) & 8370(82) & 14.1(6) & 25(1) & +0.51(15) & 11.62(22) & 0.001(5) \\
s & 1.7768(54) & 1.796(15) & 4.179(7) & 8180(81) & 13.0(6) & 24(1) & & & \\
HD 32129 p & 1.5388(36) & 1.8726(50) & 4.080(2) & 6710(60) & 6.40(23) & 1(1) & +0.19(7) & 5.79(12) & 0.110(22) \\
s & 1.0687(20) & 0.9875(49) & 4.478(4) & 5760(97) & 0.97(7) & 2(2) & & & \\
UW LMi p & 1.1627(18) & 1.340(14) & 4.249(9) & 6035(75) & 2.15(11) & 17.4(7) & $-$0.10(6) & 9.55(18) & 0.005(5) \\
s & 1.1402(15) & 1.275(16) & 4.284(11) & 6000(72) & 1.90(10) & 17.1(8) & & & \\
QR Hya p & 1.1002(18) & 1.2844(40) &4.262(3) &5925(75) & 1.83(9) & 13.2(8) & $-$0.01(6)& 10.76(20) &0.003(3) \\
s & 1.0673(19) & 1.2028(55) & 4.306(4) &5845(71) & 1.52(8) & 12.3(9) & & & \\
V788 Cen p& 1.9621(70) & 2.733(10) & 3.858(3) & 7820(105) &25.2(1.4) & 20.3(7) & +0.5(2) & 11.11(20) & 0.006(4) \\
s & 1.5463(34) & 1.642(11) & 4.197(6) & 7405(120) &7.31(48) & 17(2) & & & \\
V338 Vir p & 1.3074(20) & 1.5329(99) & 4.183(6) & 6580(89) & 3.97(22) & 10(1) & $-$0.10(6) & 3.91(8) & 0.024(10) \\
s & 1.5148(21) & 2.3651(93) & 3.871(3) & 6425(62) & 8.59(34) & 13.3(7) & & & \\
V963 Cen p & 1.0812(29) & 1.4456(52) & 4.152(3) & 5810(58) & 2.15(9) & 8.4(8) & $-$0.06(5) & 8.87(22) & 0.018(10) \\
s & 1.0753(30) & 1.4215(68) & 4.164(4) & 5820(67) & 2.09(10) & 8.2(7) & & & \\
LX Mus p & 1.3433(6) & 1.3450(85) & 4.309(5) & 6535(70) & 2.97(12)& 4(1) & +0.09(5) & 6.91(16) & 0.056(12) \\
s & 1.3544(7) & 1.3959(70) & 4.280(4) & 6565(64) & 3.26(13) & 5.1(8) & & & \\
V362 Pav p & 1.936(18) & 2.1931(93) & 4.043(3) & 8200(70) & 19.6(7) & 39(1) & +0.02(15) & 6.72(13) & 0.016(10) \\
s & 0.8767(51) & 0.8428(56) & 4.530(5) & 4950(200) & 0.39(6) & 20: & & & \\
CQ Ind p & 1.2694(12) & 1.4097(39) & 4.243(2) & 6460(68) & 3.12(13) & 8.1(8) & $-$0.04(8) & 8.99(16) & 0.006(5) \\
s & 1.1293(12) & 1.1268(56) & 4.387(4) & 6140(71) & 1.63(8) & 6(1) & & & \\
\hline
\end{tabular}
\end{table*}
\section{Comparison with previous studies}
\subsection{GW Eri}
The first spectroscopic orbit of the system was provided by \cite{AbL77} based on 30 medium-resolution spectra taken at the 2.1-m coud{\'e} spectrograph at Kitt Peak between 1970 and 1976. The authors noted the extreme similarity of both components. Their radial velocity semiamplitudes are in very good agreement with ours, as is their systemic velocity. Their identification of the components is the same. The reference time $T_{\rm max}$ of the secondary quadrature provided by \cite{AbL77} is in perfect agreement with our result, with a difference of only $0.0003\pm0.011$ day, which indirectly shows that significant period changes are unlikely.
A combined light- and radial velocity solution was presented by \cite{Ver06}. They secured 22 high-resolution \'echelle spectra with the EBASIM spectrograph at the 2.1-m telescope at Complejo Astron{\'o}mico El Leoncito and CCD photometry in the $V$-band using the Helen Sawyer Hogg 0.6-m telescope. They used the WD code (version not specified) to derive the physical parameters of the components. Their masses are perfectly consistent with ours, while their radii are consistent with those from our unconstrained light curve solution. The reference time of the primary eclipse differs from our reference time by only $0.0004\pm0.0004$ day.
\subsection{UW LMi}
\cite{cla01}, at the end of a section devoted to UW~LMi, gave a reference to a forthcoming paper by Helt et al.\,containing a detailed analysis of this system. However, the paper was never published. A quantitative description of the system given by \cite{cla01} is in agreement with our results. \cite{Gri01} presented many more details of the system. He reported that the CORAVEL dips in his radial-velocity traces are slightly deeper for the primary star than for the secondary, and that the resulting difference in $V$-band magnitude between the components is about $0.15\pm0.05$ mag. Such a difference corresponds to a light ratio of $0.87\pm0.04$, which is consistent with our findings (Section~\ref{uwlmi}). We do not confirm his finding that the variance of the radial velocities of the primary is larger than that of the secondary. In fact, as he already suggested, that was indeed due to a statistical fluke, and our measurements show a larger radial velocity $rms$ for the secondary star, as one would expect. We found the systemic velocity to be about 1.5~km~s$^{-1}$ higher than Griffin's value, but we attribute this difference entirely to a zero-point instrumental shift between HARPS and CORAVEL. Our radial velocity semiamplitudes are in good agreement with Griffin's values, though an order of magnitude more precise. We also derived the same rotational velocities of the components.
A combined analysis of \textit{Hipparcos} photometry and Asiago \'Echelle velocimetry was presented by \cite{mar04}, and it shows a familiar picture: two components similar to each other. However, the precision of the determined parameters is much lower than in our work, and furthermore they are not consistent with our results. The primary star, which is the more massive and larger component, was assigned by them to be the secondary, less massive and smaller star. Their masses and radii differ from ours by $5\sigma$ and $2\sigma$, respectively. A much better agreement occurs for the orbital inclination and the orbital period. \cite{mar04} also reported unexpectedly high surface temperatures for both stars ($T_{\rm eff}\approx 6500$ K), based on the strength of the Paschen 14 line relative to the Ca\,II triplet. We find much lower temperatures in accordance with the spectral type of UW LMi, the physical parameters of the stars, and the mass-luminosity relation for main-sequence stars \citep[$\sim 6000$ K;][]{eke15}.
\subsection{V788 Cen}
\cite{Cou74} presented a $V$-band light curve of the system showing two equal, shallow minima spaced by half an orbital period. However no analysis based on this light curve was published. The ephemeris given by \cite{Cou74} is in extremely good agreement with ours: the difference between the predicted and measured time of the primary eclipses in the TESS light curve is less than 1 minute, although the epochs differ by 47 years. Thus period changes in the system are unlikely.
\subsection{V963 Cen}
Preliminary results from the analysis of photometry and CORAVEL radial velocities reported by \cite{cla01} showed two nearly identical components with masses $\sim$1~M$_\odot$ on an eccentric orbit. A detailed study of this system was announced but never published. A more comprehensive analysis of V963 Cen was presented by \cite{syb18}. They derived a very precise spectroscopic orbit in order to study the Rossiter-McLaughlin effect, and supplemented this by rather low-precision photometric parameters derived from an analysis of ASAS-3 data \citep{poj02}. The reported radial velocity semiamplitudes $K_{1,2}$ are practically identical to ours and the resulting masses $M_{1,2}$ are the same to within the errors. However, the errors quoted by \cite{syb18} for the semiamplitudes and masses are surprisingly small. We suspect that by fixing the orbital eccentricity to $e=0.4217$ in their fit they artificially assumed a zero uncertainty on $e$. In our solution the uncertainty in the eccentricity is an important contribution to the error budget of $K_{1,2}$ and especially $M_{1,2}$. In fact our errors for the mass measurements are {\it dominated} by the error in $e$. If we assume an eccentricity with a standard and unrealistically small error from the WD code ($e=0.4223\pm0.0002$), that leads to smaller uncertainties in our $K_{1,2}$ and uncertainties in $M_{1,2}$ that are smaller by a factor of three, in rough agreement with the uncertainties reported by \cite{syb18}. However, their eccentricity is in perfect agreement with our value derived from velocimetry alone, although such a value of $e$ results in a relatively poor fit to the TESS light curve (see Section~\ref{v963cen}).
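The dominance of the eccentricity term in the mass error budget can be illustrated with first-order propagation through the standard spectroscopic mass relation, $M \sin^3 i \propto (1-e^2)^{3/2} K (K_1+K_2)^2 P$, which gives $\mathrm{d}\ln M = -3e\,\mathrm{d}e/(1-e^2)$. A sketch (not the paper's actual error analysis) with the V963 Cen values:

```python
# Fractional mass error contributed by the eccentricity alone, from
# M sin^3 i  ∝  (1 - e^2)^(3/2) * K * (K1 + K2)^2 * P, so that
# |d ln M / d e| = 3 e / (1 - e^2). Values for V963 Cen from this work.
e, sigma_e = 0.4223, 0.0016

frac_err_from_e = 3.0 * e * sigma_e / (1.0 - e**2)
print(f"sigma_M/M from e alone: {frac_err_from_e:.2%}")
```

The result, $\approx 0.25\%$, is comparable to the $\approx 0.27\%$ total fractional mass uncertainty quoted for the primary, consistent with the statement above.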
\section{The properties of new systems versus the surface brightness -- colour relation}\label{sec:sbcr}
We checked how the components of the ten systems in this work appear on a SBCR plot. We chose a standard relation between the surface brightness in the $V$-band and the $(V\!-\!K)$ colour. We expressed $K$ magnitudes in the 2MASS system \citep{skru06}. The light ratios of the components in the Johnson $V$ and 2MASS $K$ bands, which were extrapolated from our WD models and used to obtain individual intrinsic magnitudes, are given in Table~\ref{tab:lratio}. Inspection of the positions of components against the SBCR gives immediate indications about any peculiarities, e.g. stars significantly above the mean SBCR are in most cases unrecognised multiple stellar systems or may have an incorrect value for third light. On the other hand a position significantly below may signify problems with the adopted magnitudes, e.g.\ a magnitude calculated from observations taken during eclipse without a correction for the light diminution. Another possibility is that the parallax is biased toward too large a value. Also, systems with a large reddening due to interstellar extinction could be shifted away from the SBCR if the reddening is not correctly accounted for.
The surface brightness parameter $S_{\!V}$ was calculated for our stars using equation 5 from \cite{hin89}:
\begin{equation}
\label{equ:sb}
S_{\!V} = 5 \log{\theta_{\rm LD}} + V_0,
\end{equation}
where $V_0$ is the intrinsic magnitude of a star in the $V$ band and $\theta_{\rm LD}$ is the limb-darkened angular diameter expressed in milliarcseconds. The angular diameters were calculated using:
\begin{equation}
\theta_{\rm LD} = 9.301 \times 10^{-3}\, R\, \varpi_{Gaia/EDR3},
\end{equation}
where $R$ is the stellar radius expressed in nominal solar radii $\mathcal{R}_\odot$ \citep{prsa16}.
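The two relations above are simple to evaluate; a minimal sketch using the radius and parallax of the V963 Cen primary from this work (the $V_0 = 8.0$~mag zero-point here is a placeholder for illustration, not a value from the paper):

```python
import math

# Limb-darkened angular diameter and V-band surface brightness, following
# the two relations above: theta_LD [mas] = 9.301e-3 * R[R_sun] * parallax[mas]
# and S_V = 5 log10(theta_LD) + V_0.
def theta_ld_mas(radius_rsun, parallax_mas):
    return 9.301e-3 * radius_rsun * parallax_mas

def surface_brightness_v(theta_mas, v0_mag):
    return 5.0 * math.log10(theta_mas) + v0_mag

# V963 Cen primary: R = 1.4456 R_sun, parallax ~ 8.87 mas (this work);
# V_0 = 8.0 mag is a purely illustrative placeholder.
theta = theta_ld_mas(1.4456, 8.87)
print(f"theta_LD = {theta:.4f} mas, S_V = {surface_brightness_v(theta, 8.0):.3f}")
```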
We corrected the magnitudes of the HD~32129 system for the presence of a putative K6/7\,V companion star. For all ten systems the 2MASS magnitudes were taken outside eclipse, so there is no need to correct them for light loss. We adopted parallaxes from \textit{Gaia} Early Data Release 3 \citep{gaia20}. We did not apply any corrections to the parallaxes \citep[e.g.][]{lind20a,lind20b} because the systems are relatively close to us and the largest correction (V338 Vir) amounts to only 0.5\% of the parallax itself. One system from the sample, HD 32129, has a Gaia \verb"RUWE" parameter (the Renormalised Unit Weight Error)\footnote{\texttt{https://gea.esac.esa.int/archive/documentation/GDR2/Gaia\\\_archive/chap\_datamodel/sec\_dm\_main\_tables/ssec\_dm\_ruwe.html}} greater than 1.4 and also the largest fractional error of the parallax. Fig.~\ref{SBCR} shows the positions of the eclipsing binary components on the $V$-band surface brightness versus ($V\!-\!K$) colour diagram. The data are taken from our previous work \citep{gra21} and error bars were suppressed in order to make the present sample clearly identifiable. The largest error bars are those of the HD 32129 system. Practically all components lie very close to the SBCR derived from other eclipsing binary stars \citep{gra21} and the largest offsets are smaller than 2$\sigma$. New calibrations of the SBCRs utilizing the present, additional sample of stars are envisioned for a separate paper.
\section{Final remarks} \label{fin}
We present a detailed analysis of ten well-detached eclipsing binary stars. Very precise and accurate astrophysical parameters, including masses, radii, temperatures and metallicities, were determined for all these systems for the first time. The high precision of the determined parameters makes these systems valuable for testing stellar evolution models. One system, GW Eri, is a visual triple system. Another, HD 32129, is a suspected triple system with a tertiary close to the main binary system. In principle all systems, with the possible exception of HD 32129, are useful for recalibration of the SBCRs based on {\it Gaia} EDR3 and later releases.
At least 30 more suitable DEBs lying within 250~pc of the Sun are expected to be analysed by our team in the near future. These systems, in combination with those with published detailed analysis, will be used to discuss issues such as the gravity and metallicity dependence of SBCRs. They will also be used for new calibrations of the stellar surface temperature versus colour relations.
\begin{acknowledgements}
We thank an anonymous referee for improvements in the text of this paper.
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC,
\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
\\
We are grateful to J.V.~Clausen, B.E.~Helt, and E.H.~Olsen for making their
unpublished $uvby$ photometric data available to us.
\\
The research leading to these results has received funding from the European Research Council (ERC)
Synergy "UniverScale" grant financed by the European Union's Horizon 2020 research and innovation programme under the grant agreement number 951549,
from the National Science Center, Poland grants MAESTRO UMO-2017/26/A/ST9/00446 and
BEETHOVEN UMO-2018/31/G/ST9/03050. We acknowledge support from the IdP II 2015 0002 64 and
DIR/WK/2018/09 grants of the Polish Ministry of Science and Higher Education.
\\
The research was based on data collected under the ESO/CAMK PAN - OCA agreement at the ESO Paranal Observatory.
\\
W.G. also gratefully acknowledges
support from the ANID BASAL project ACE210002.
W.G.\ also gratefully acknowledges financial support for this work from the BASAL Centro de Astrofisica y Tecnologias Afines
BASAL-CATA (AFB-170002), and from the Millenium Institute of Astrophysics (MAS) of the Iniciativa Cientifica Milenio del Ministerio de Economia,
Fomento y Turismo de Chile, project IC120009.
\\
A.G. acknowledges support from the ANID-ALMA fund No. ASTRO20-0059 and
MT acknowledges financial support from the Polish National Science Center
grant PRELUDIUM 2016/21/N/ST9/03310.
\\
This research has made use of the VizieR catalogue access tool, CDS,
Strasbourg, France (DOI : 10.26093/cds/vizier). The original description
of the VizieR service was published in 2000, A\&AS 143, 23.
\\
Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programmes 082.D-0499, 083.D-0549, 084.D-0591, 085.C-0614, 085.D-0395, 086.D-0078, 091.D-0469, 092.D-0363, 094.D-0056, 095.D-0026, 097.D-0150, 099.D-0380, 0100.D-0273, 0100.D-0339, 0101.D-0697, 0102.D-0281, 105.2045.002, 105.20L8.002, 106.20Z1.001, 106.20Z1.002, 108.21XB.001, 190.D-0237 to PIs: G.P., W.G. and D.G.; also 087.C-0012(A) to PI Krzysztof He{\l}miniak, 089.C-0415(A) and 094.C-0428(A) to PI Rafael Brahm.
\\
We used the {\it uncertainties} python package.
\end{acknowledgements}
\label{lastpage}
Title:
The EBLM project X. Benchmark masses, radii and temperatures for two fully convective M-dwarfs using K2 |
Abstract: M-dwarfs are the most abundant stars in the galaxy and popular targets for
exoplanet searches. However, their intrinsic faintness and complex spectra
inhibit precise characterisation. We only know of dozens of M-dwarfs with
fundamental parameters of mass, radius and effective temperature characterised
to better than a few per cent. Eclipsing binaries remain the most robust means
of stellar characterisation. Here we present two targets from the Eclipsing
Binary Low Mass (EBLM) survey that were observed with K2: EBLM J0055-00 and
EBLM J2217-04. Combined with HARPS and CORALIE spectroscopy, we measure M-dwarf
masses with precisions better than 5%, radii better than 3% and effective
temperatures on order 1%. However, our fits require invoking a model to derive
parameters for the primary star. By investigating three popular models, we
determine that the model uncertainty is of similar magnitude to the statistical
uncertainty in the model fits. Therefore, whilst these can be considered
benchmark M-dwarfs, we caution the community to consider model uncertainty when
pushing the limits of precise stellar characterisation.
https://export.arxiv.org/pdf/2208.10534
\subsection{Combined radial velocity and photometric modelling}
\iffalse
K2 photometry was modelled jointly with radial velocity measurements to determine the final orbital solution. We performed a $\chi^2$ fit in a Bayesian framework to estimate the PPD of each parameter in the vector model. The vector model of parameters includes photometric zero-points for each $i^{th}$ light-curve --$zp_i$, $R_{\star}/a$, $k$, the impact parameter --$b = a\cos(i_{\rm orb})/R_{\star}$ where $i_{\rm orb}$ is the orbital inclination, the transit epoch --$\rm T_0$, the orbital period --$P$, the semi-amplitude of radial velocity measurements --$K$, the systematic radial velocity -- $\gamma$ and the change in systematic radial velocity with time --$d(\gamma)/dt$. Instead of fitting the argument of periastron ($\omega$) and the eccentricity ($e$), we choose to use $f_c = \sqrt{e} \cos \omega$ and $f_s = \sqrt{e} \sin \omega$ since these have a uniform prior probability distribution and are not strongly correlated with each other. We also include a jitter term ($\sigma_J$) to account for spot activity which can introduce noise into the radial velocity measurements \citep{2006ApJ...642..505F}.
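The $(f_c, f_s)$ parameterisation is trivially invertible, which is what makes it convenient for sampling; a minimal sketch of the mapping (illustrative values only):

```python
import math

# (e, omega) <-> (fc, fs) with fc = sqrt(e) cos(omega), fs = sqrt(e) sin(omega),
# so that e = fc^2 + fs^2 and omega = atan2(fs, fc).
def to_fc_fs(e, omega_rad):
    se = math.sqrt(e)
    return se * math.cos(omega_rad), se * math.sin(omega_rad)

def from_fc_fs(fc, fs):
    return fc**2 + fs**2, math.atan2(fs, fc)

# Round-trip check with arbitrary illustrative values.
fc, fs = to_fc_fs(0.3, math.radians(60.0))
e, omega = from_fc_fs(fc, fs)
print(f"e = {e:.3f}, omega = {math.degrees(omega):.1f} deg")
```

Uniform priors on $f_c$ and $f_s$ over a disc correspond to a prior that is uniform in $e$, which is one of the stated advantages of this choice.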
We used a custom binary star model to determine the best orbital solution. Our model used the method described by \citet{2016A26A...591A.111M} to solve Kepler's equations. The true anomaly, $\nu_i$, corresponding to time $t_i$ was used to calculate a radial velocity model for the primary component,
\begin{equation}\label{radial_velocity}
V_{\rm 1} = K_1 \left[ e \cos \omega + \cos ( \nu_i + \omega ) \right] + \gamma + (t_i - T_0)\, d(\gamma)/dt.
\end{equation}
The projected separation between the primary and secondary components,
\begin{equation}\label{sky_projected_seperation}
\delta = \frac{1 - e^2}{1 + e \cos \nu_i} \sqrt{1 - \sin^2 i_{\rm orb} \sin^2(\nu_i + \omega)},
\end{equation}
was used to model primary eclipses and the secondary eclipse of J0055$-$00. For primary eclipses, we used the analytical approximation presented by \citet{2019A&A...622A..33M} to describe an object eclipsing a star with limb-darkening described by the power-2 law. We fitted the decorrelated limb-darkening parameters suggested by \citet{2018A&A...616A..39M},
\begin{eqnarray}
&h_1 &= 1 - c(1 - 2^{-\alpha}),\\
&h_2 &= c2^{-\alpha},
\end{eqnarray}
where $c$ and $\alpha$ describe the intensity ($I$) across the surface of the primary star as a function of $\mu = \cos \gamma$, where $\gamma$ is the angle between a line normal to the stellar surface and the line of sight of the observer,
\begin{equation}
I_\mu = 1 - c(1 - \mu^\alpha).
\end{equation}
We observed secondary eclipses for each system, which permitted a measurement of the luminosity ratio between the primary and secondary components. The shape of the secondary eclipse is typically parameterised by the surface brightness ratio, $S = k^2 F_{\lambda,2} / F_{\lambda,\star}$, where $F_{\lambda}$ is the flux of each star observed in some bandpass. We assume the M-dwarf is uniformly illuminated; the normalised flux during a secondary eclipse is then given by $1-S \times k^2$. The primary eclipse was diluted by a factor of $k^2 (1 - F_{\lambda,2} / F_{\lambda,\star})$, although \citet{2019arXiv190412695G} found this effect to be less than 500\,ppm for the earliest M-dwarfs in red transmission filters.
\fi
\begin{table}[]
\centering
\caption{Parameters and priors (if any) used to determine the orbital solution. Priors take the form of $\mathcal{U}(x,y)$ where $x$ and $y$ are the lower and upper limits of a uniform prior respectively, and $\mathcal{G}(x,y)$ where $x$ and $y$ are the centre and width of a Gaussian prior. }
\begin{tabular}{ccc}
\hline
\hline
& parameter & prior \\
\hline
radial velocity & $T_0 / \rm day$ & - \\
& $P_{\rm orb} / \rm day$ & - \\
& $K_1 / \rm km\, \rm s^{-1}$ & $\mathcal{U}$(0, none) \\
& $f_{\rm s}$ & $\mathcal{U}$($-0.5$, 0.5) \\
& $f_{\rm c}$ & $\mathcal{U}$($-0.5$, 0.5) \\
& $\gamma / \rm km\, \rm s^{-1}$ & - \\
& $\dot{\gamma} / \rm km\, \rm s^{-1}\, \rm yr^{-1}$ & $\mathcal{U}$($-1$, 1) \\
& $\sigma_{\rm RV}/ \rm km\, \rm s^{-1}$ & $\mathcal{U}$(0, none) \\
\hline
light curve & $R_\star / a$ & $\mathcal{U}$(0, 0.5) \\
& $R_2 / R_\star$ & $\mathcal{U}$(0, 0.5) \\
& $b$ & $\mathcal{U}$(0, 1+$k$) \\
 & $h_1$ & $\mathcal{G}$($h_{1,\rm interp}$, 0.003) \\
 & $h_2$ & $\mathcal{G}$($h_{2,\rm interp}$, 0.046) \\
& $S$ & $\mathcal{U}$(0, 0.5) \\
& $z_p$ & - \\
\hline
\textsc{celerite} & $\ln(B/ \rm mmag^2)$ & $\mathcal{U}$($-13$, $-9$) \\
 & $\ln(L/ \rm day)$ & $\mathcal{U}$(0.5, 8.0) \\
 & $\ln(P_{\rm rot}/ \rm day)$ & $\mathcal{U}$(0.7, 8.0) \\
 & $\ln(C)$ & $\mathcal{U}$($-5.0$, 8.0) \\
 & $\ln(\sigma/ \rm mmag)$ & $\mathcal{U}$($-5.0$, 8.0) \\
\hline
\end{tabular}
\label{table_2:parameters}
\end{table}
\iffalse
To generate a red-noise model, we use the \textsc{celerite} package \citep{2017AJ....154..220F}. We followed the example detailed in Section 6.5 from \citet{2017AJ....154..220F} to model quasi-periodic variations in exoplanet lightcurves. We constructed a \textsc{celerite} co-variance function,
\begin{equation}
k(\tau) = \frac{B}{2+C}e^{-\tau/L} \left[ \cos \left( \frac{2 \pi \tau}{P_{\rm rot}} \right) + (1 + C) \right] + \sigma^2,
\end{equation}
where $B>0$, $C>0$, and $L>0$. Here, $\tau_k$ is the time difference between two observations, $B$ is the amplitude of quasi-period variations, $L$ describes their respective timescales, $P_{\rm rot}$ is the rotational period of the primary star, $C$ is an arbitrary factor, and $\sigma$ is an additional jitter term to account for additional uncertainty in photometric measurements. Similar to the example in Section 6.5 from \citet{2017AJ....154..220F}, we fit the logarithm of each parameter along with uniform priors.
The parameters we fitted, along with their respective priors, can be found in Table \ref{table_2:parameters}. We compare models to data in a Bayesian framework with a likelihood function $\mathcal{L} = \exp(-\chi^2/2)$, with
\begin{equation}
\chi^2 = \sum_i \frac{(m_{\rm i} - m_{\rm model})^2}{\sigma_{m_{\rm i}}^2 + \sigma_{\rm LC}^2} + \sum_i \frac{(rv_{\rm i} - rv_{\rm model})^2}{\sigma_{rv_{\rm i}}^2 + \sigma_{\rm RV}^2}
\end{equation}
Here, $m_{\rm i}$ and $rv_{\rm i}$ represent the $i^{\rm th}$ measurements of magnitude and radial velocity with standard errors $\sigma_{m_{\rm i}}$ and $\sigma_{rv_{\rm i}}$, respectively. We initiated 50 walkers and generated 20,000 draws, after an initial burn-in phase of 50,000 draws using \textsc{emcee}. We selected the trial step with the highest value of $\mathcal{L}$ as the measurement for each parameter. The uncertainties were calculated from the largest difference between the median and the $16^{\rm th}$ and $84^{\rm th}$ percentiles of the cumulative PPD for each parameter from the second chain.
\fi
\subsection{Flattening \ktwo\ and \tess\ photometry}
\ktwo\ photometry is sufficiently precise that we can clearly identify primary and secondary eclipses. We performed a preliminary fit for each object to determine the best-fitting transit model and the corresponding contact points for both primary and secondary eclipses. Data within these contact points were masked, and a Savitzky–Golay filter with a window of 103 data points ($\sim$2.5 days) and polynomial order 3 was applied to encapsulate the out-of-transit trend.
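As an illustrative sketch of this detrending step (the function and variable names are ours, not the actual pipeline; we assume an approximately evenly sampled light curve), the masking and filtering could look like:

```python
import numpy as np
from scipy.signal import savgol_filter

def flatten_lightcurve(time, flux, in_transit, window=103, polyorder=3):
    """Divide out low-frequency variability with a Savitzky-Golay filter.

    Points inside the eclipse contact points (`in_transit`) are masked,
    the smooth trend is fitted to the remaining data, linearly
    interpolated across the gaps, and divided out.
    """
    trend_oot = savgol_filter(flux[~in_transit], window, polyorder)
    trend = np.interp(time, time[~in_transit], trend_oot)
    return flux / trend

# Toy example: slow sinusoidal variability plus a 0.5 per cent eclipse.
t = np.linspace(0.0, 10.0, 500)
f = 1.0 + 0.01 * np.sin(2.0 * np.pi * t / 8.0)
eclipse = (t > 4.9) & (t < 5.1)
f[eclipse] -= 0.005
flat = flatten_lightcurve(t, f, eclipse)
```

The window length (103 points, odd) must exceed the polynomial order and remain shorter than the out-of-transit segment for the fit to be well posed.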
\subsection{Modelling}
We used the binary star model described by \citet{2020MNRAS.491.1548G} to assess models of radial velocity and transit photometry. This uses the analytical transit model \textsc{qpower-2} presented by \citet{2019A&A...622A..33M} which assumes stars are limb-darkened by the power-2 law. We fit decorrelated limb-darkening parameters $h_1$ \& $h_2$ (from Eqn. 1 \& 2 of \citealt{2018A&A...616A..39M}) with Gaussian priors centred on values interpolated from Table\,2 of \citet{2018A&A...616A..39M} and widths of 0.003 and 0.046 respectively. The \kepler\ and \wasp\ transmission throughputs are sufficiently different that we fit independent values of $h_1$ and $h_2$ for each dataset.
Our model vector included the transit epoch, $T_0$, the orbital period, $P$, $R_1 / a$, $k=R_2/R_1$, $b$, independent values of the photometric zero-point, $zp$, $h_1$ and $h_2$ for each filter, the surface brightness ratio, $SBR$, the semi-amplitude, $K_1$, and the systematic radial velocity of the primary star, $\gamma$. Instead of fitting the argument of the periastron ($\omega$) and the eccentricity ($e$), we used $f_c = \sqrt{e} \cos \omega$ and $f_s = \sqrt{e} \sin \omega$ since these have a uniform prior probability distribution and are not strongly correlated with each other. For \wasp\ datasets, we include a jitter term, $J$, added in quadrature to photometric uncertainties to account for various stellar effects and non-optimal observing conditions.
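As a minimal illustration (function names are ours), the $(f_c, f_s)$ mapping and its inverse are:

```python
import numpy as np

def to_fc_fs(e, omega):
    """Map eccentricity e and argument of periastron omega (radians)
    to the decorrelated pair (f_c, f_s) = (sqrt(e) cos w, sqrt(e) sin w)."""
    return np.sqrt(e) * np.cos(omega), np.sqrt(e) * np.sin(omega)

def from_fc_fs(fc, fs):
    """Invert the mapping: e = fc^2 + fs^2, omega = atan2(fs, fc)."""
    return fc**2 + fs**2, np.arctan2(fs, fc)
```

A uniform density over the unit disk in $(f_c, f_s)$ gives $P(e \le x) \propto x$, i.e. a uniform marginal prior on $e$, which is one reason this pair is preferred over fitting $(e, \omega)$ directly.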
We also include a jitter term, $J$, added in quadrature to the radial velocity uncertainties to account for spot activity, pulsations, and granulation, which can introduce noise into the radial velocity measurements \citep{2006ApJ...642..505F}. We fit a similar term for each photometric data set, $\sigma$, which was also added in quadrature to the photometric uncertainties. We assume a common third light contribution of 7.13\% in all transmission filters.
We used the ensemble Bayesian sampler \textsc{emcee} \citep{2013PASP..125..306F} to sample parameter space. We initiated 50 Markov chains and generated 100,000 trial steps, discarding the first 50,000 steps as part of the burn-in phase. We visually inspected each Markov chain to ensure convergence well before the 50,000$^{th}$ draw. The trial step with the highest log-likelihood was selected as our measurement for each fitted parameter. We adopted the difference between each measured parameter and the $16^{\rm th}$ and $84^{\rm th}$ percentiles of their cumulative posterior probability distributions as a measurement of asymmetric uncertainty. Fitted parameters are reported in Table\,\ref{tab:parameters} and shown in Fig.\,\ref{fig:Figure_3}.
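The asymmetric uncertainties described above reduce to percentile arithmetic on the posterior samples; a sketch (names ours; the point estimate is passed in explicitly, here simply the median rather than the highest-likelihood step):

```python
import numpy as np

def asymmetric_errors(samples, best):
    """Lower/upper error bars from the 16th and 84th percentiles
    of a 1-D array of posterior samples about a point estimate."""
    p16, p84 = np.percentile(samples, [16.0, 84.0])
    return best - p16, p84 - best

# Synthetic unit-Gaussian posterior: both errors should be close to 1.
rng = np.random.default_rng(42)
posterior = rng.normal(loc=5.0, scale=1.0, size=200_000)
lo_err, hi_err = asymmetric_errors(posterior, np.median(posterior))
```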
\iffalse
\begin{table*}
\caption{The atmospheric parameters of 3 EBLMs discovered by the WASP survey.} %
\label{EBLMs_atmos} %
\centering
\resizebox{0.9\linewidth}{!}{%
\begin{tabular}{l c c c c c} %
\hline %
& J0055$-$00
& J0457$+$14
& J1652$-$19
& J2217$-$04 \\
& \object{EPIC220196587}
& \object{EPIC246712205}
& \object{EPIC205148699}
& \object{EPIC206500801}\\
\hline
\multicolumn{3}{l}{From SED fitting} \\
$\rm T_{\rm eff, phot}$ (K)
& $5880 \pm 110$ (G1)
& $7385 \pm 228$ (A9)
& $6226 \pm 180$ (F7)
& $5810 \pm 120$ (G2)\\
$\rm E(B-V)$
& $0.031 \pm 0.023$
& $0.329 \pm 0.040$
& $0.285 \pm 0.033$
& $0.095 \pm 0.024$\\
$g'_0$
& $11.214 \pm 0.091$
& $11.076 \pm 0.126$
& $11.875 \pm 0.133$
& $12.283 \pm 0.121$\\
\\
\multicolumn{3}{l}{From spectroscopy} \\
$\rm T_{\rm eff}$ $\rm(K)$
& $5969 \pm 85$
& $7373 \pm 85$
& $6262 \pm 85$
& $5848 \pm 85$\\
$\log g$ (dex)
& $4.36 \pm 0.13$
& $5.04 \pm 0.13$
& $4.56 \pm 0.13$
& $4.17 \pm 0.13$ \\
$\xi_{\rm t}\, (\rm km\,s^{-1})$
& $1.17 \pm 1.50$
& $1.92 \pm 1.50$
& $1.26 \pm 1.50$
& $1.15 \pm 1.50$ \\
$v_{\rm mac}\, (\rm km\,s^{-1})$
& $4.67 \pm 1.50$
& $19.51 \pm 1.50 $
& $6.84 \pm 1.50$
& $4.25 \pm 1.50$ \\
Vsin$i$ (km\,s$^{-1}$)
& $ \leq 5$
& $72 \pm 1$
& $11.56 \pm 1.35$
& $7.97 \pm 1.35$ \\
$\rm [Fe/H]$
& $0.39 \pm 0.06$
& $0.43 \pm 0.30$
& $0.18 \pm 0.06$
& $0.27 \pm 0.30$\\
$\log \rm A(Li) + 12$
& $2.5 \pm 0.1$
& -
& -
& -\\ \\
\hline
\hline %
\end{tabular}}
\end{table*}
\fi
Title:
Metallicity of the intermediate velocity HI clouds derived based on the sub-mm dust emission for the whole sky |
Abstract: We have carried out a multiple regression analysis of the 21cm HI emission
combined with the sub-mm dust emission over 80 per cent of the sky at a
resolution of 47arcmin. The method covers the sky contiguously, and is
distinguished from the optical absorption line measurements toward bright stars
which cover a tiny fraction of the gas. On the assumption that the dust-to-gas
ratio is proportional to the metallicity, we derived the metallicity of all the
HI components, i.e., the intermediate velocity clouds (IVCs), the high velocity
clouds (HVCs), as well as the local HI gas. Major results include that the
metallicity of the IVCs is in a range of 0.1 -- 1.5 (relative to the majority
of local diffuse HI gas) with a mode at 0.6, and that a significant fraction,
$\sim$30 per cent, of the IVCs includes the low metallicity gas of <0.3. In
addition, it is revealed that 80 per cent of the HVC Complex C has a
metallicity of <0.3, and that the Magellanic Stream has a uniform very low
metallicity of <0.1. We argue that a large fraction of the low metallicity IVC
gas may favor a picture of the external low-metallicity HI gas accretion
instead of the Galactic-fountain model. In addition, we find that the IVCs show
a trend that metallicity of the IVCs increases with velocity decrease,
suggesting that the IVCs are accumulating high metallicity halo gas via
dynamical interaction at z<1 kpc.
https://export.arxiv.org/pdf/2208.13406
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
ISM: abundance -- ISM: atoms -- ISM: clouds --- Galaxy: halo
\end{keywords}
\defcitealias{2014ApJ...796...59F}{F14}
\defcitealias{2017ApJ...838..132O}{O17}
\defcitealias{2019ApJ...878..131H}{H19}
\defcitealias{2021PASJ...73S.117F}{F21}
\section{Introduction}
\label{sec:introduction}
The intermediate velocity clouds (IVCs) in a velocity range of 40 -- 100\,km\,s$^{-1}$ are not explained by a simple model of the galactic rotation, but are not so extreme as the high velocity clouds (HVCs) whose velocity is higher than 100\,km\,s$^{-1}$.
The IVCs have been discovered predominantly in a negative radial velocity range, as well as the HVCs, through absorption line observations \citep[e.g.,][]{1949ApJ...109..354A,1952PASP...64..312M,1961ApJ...133...11M} and \ion{H}{i} 21\,cm line surveys \citep[e.g.,][]{1966BAN....18..405B,1973A&A....24...15W}.
There have been several efforts to constrain the distance to the IVCs/HVCs using the absorption-line bracketing technique.
The IVCs are located relatively close, have typical $z$ heights of $\sim 1$ -- 2\,kpc (e.g., \citealt{2008ApJ...672..298W,2022MNRAS.tmp..950L}, see also compilations by \citealt{2001ApJS..136..463W,2004ASSL..312..195V} and references therein), and are likely to be the disk-halo interface objects, while the major HVCs are further away, several to $\sim 10$\,kpc above the disk \citep[e.g.,][]{1999Natur.400..138V,2006ApJ...638L..97T,2008ApJ...684..364T,2007ApJ...670L.113W,2008ApJ...672..298W,2011MNRAS.415.1105S,2015A&A...584L...6R,2022MNRAS.tmp..950L} and belong to the inner halo.
One of the important questions is whether the IVCs and HVCs are different halo-cloud populations (i.e., gas that comes from the disk versus gas that fuels the disk) or just the same population of objects with different heights and radial velocities.
The metallicity is a crucial parameter for pursuing their origin and has traditionally been measured by observing absorption lines.
The previous works presented that the HVCs have sub-solar metallicities $Z/Z_{\sun}\lesssim 0.1$ \citep[e.g.,][]{1995A&A...302..364S,1999Natur.402..388W,2001ApJ...559..318R,2003AJ....125.3122T,2003ApJ...585..336C,2004ApJS..150..387S} and are of extra-galactic origin.
The IVCs have near-solar metallicity of $\sim 0.5$ -- 1.0 solar (\citealt{2001ApJS..136..463W} and references therein, \citealt[][]{2001ApJ...549..281R,2001ApJ...559..318R}) and are usually explained by the so-called Galactic-fountain model in which hot gas ejected from the disk by the stellar feedback falls back in the form of neutral clouds \citep{1976ApJ...205..762S,1980ApJ...236..577B}.
However, the measured metallicities often have rather large uncertainties due to such factors as ionization states and the interstellar depletion (\citealt{2004ASSL..312..195V} summarized the possible problems in the metallicity determination, see section~5 of the reference).
For instance, \citet{2013ApJ...777...19H} claimed that IVC~135+54$-$45 \citep[cataloged as IV~21 in][]{1996ApJ...457..703K} has a sub-solar metallicity of $\log(Z/Z_{\sun})=-0.43\pm 0.12$\,dex ($Z/Z_{\sun}=0.37\pm 0.12$), but this is incompatible with the bright FIR emission suggesting the existence of a large amount of dust\footnote{\citet{2013ApJ...777...19H} also claimed that the dust-to-gas (100\,$\micron$ intensity to \ion{H}{i} column density) ratio of IVC~135+54-45, $0.32\times 10^{-20}$\,(MJy\,sr$^{-1}$\,cm$^{2}$), is one-third of the usual solar-neighborhood ratio.
However, \citet{1999A&A...344..955W} found $\sim 1\times 10^{-20}$\,(MJy\,sr$^{-1}$\,cm$^{2}$) and \citet{2015A&A...573A..83L} $0.7$--$1.2\times 10^{-20}$\,(MJy\,sr$^{-1}$\,cm$^{2}$) for the IVC.
\citet{2015A&A...573A..83L} surmised that this is because \citet{2013ApJ...777...19H} used \textit{IRAS} data instead of the reprocessed IRIS data.}.
In addition, these measurements are made at only a limited number of locations because of the small number of bright background stars, and thus do not firmly establish the metallicity of the IVCs.
An alternative method to measure the metallicity is to use the dust emission; the 100\,$\micron$ emission obtained with \textit{IRAS} is a reasonable proxy of the metal abundance, since dust contains a significant fraction of the heavy elements, perhaps comparable to that in the gas phase, as indicated by the uniform dust-to-gas mass ratio of 1/100 in the interstellar medium (ISM).
However, the 100\,$\micron$ dust emission is not proportional to the dust mass.
This non-linearity, which has hampered the use of the dust emission for quantifying the metallicity, is eliminated at sub-mm wavelengths, where the modified Planck function is completely linearized in the Rayleigh-Jeans regime.
Therefore, the sub-mm dust emission is well proportional to the total dust mass.
The Planck Collaboration obtained such sensitive sub-mm dust emission over the whole sky at a 5\,arcmin resolution and derived dust optical depth at 353\,GHz (850\,$\micron$), $\tau_{353}$, by fitting the modified Planck function at four wavelengths from 100 to 850\,$\micron$ obtained by \textit{Planck} and \textit{IRAS} \citep{planck2013-p06b,planck2016-XLVIII}.
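For illustration only (SI units; the temperature, spectral index, and optical depth below are placeholders, not the Planck fit results), the modified Planck function used in such fits can be written as:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
KB = 1.380649e-23    # Boltzmann constant [J / K]
CL = 2.99792458e8    # speed of light [m / s]

def planck(nu, temp):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return 2.0 * H * nu**3 / CL**2 / np.expm1(H * nu / (KB * temp))

def modified_blackbody(nu, tau353, temp, beta):
    """Optically thin dust emission: tau_353 (nu/353 GHz)^beta B_nu(T)."""
    return tau353 * (nu / 353e9) ** beta * planck(nu, temp)
```

In the Rayleigh-Jeans regime ($h\nu \ll kT$), $B_\nu \to 2\nu^2 kT/c^2$, so the emission becomes linear in $\tau_{353}$ and $T$, which is the linearity exploited in the text.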
Previous works by \citet{2014ApJ...796...59F,2015ApJ...798....6F}, \citet[O17 hereafter]{2017ApJ...838..132O}, \citet[H19]{2019ApJ...878..131H}, and \citet{2019ApJ...884..130H} showed that $\tau_{353}$ traces the dust emission linearly in a number of regions of the Milky Way.
The scheme was also applied to the Large Magellanic Cloud (LMC).
For instance, \citet{2017PASJ...69L...5F} used the linear relationship between the velocity-integrated intensity of the \ion{H}{i} line, $W_{\ion{H}{i}}$, and $\tau_{353}$, and estimated that the \ion{H}{i} ridge including R136 has a factor of two lower metallicity (dust-to-gas ratio) than the optical stellar Bar region in the LMC.
In addition, \citet{2019ApJ...871...44T} showed that the N44 region near the \ion{H}{i} Ridge in the LMC has a $\sim 30$ per cent lower metallicity than that of the Bar region.
Most recently, \citet[F21 hereafter]{2021PASJ...73S.117F} applied the method to IVC~86$-$36 in the Pegasus-Pisces (PP) Arch and derived an upper limit on its dust-to-\ion{H}{i} ratio of $\sim 0.2$ of the local ISM value, strongly suggesting that PP~Arch originated in a low-metallicity environment and not in the disk.
Their measurements of the dust-to-gas ratio significantly improved the preceding metallicity measurements by optical/ultraviolet atomic absorption lines of the background stellar spectrum in the IVC \citep{1997ApJ...475..623F}.
Although the absorption-line results suggest the subsolar metal abundance, the abundance values vary by $\sim 0.5$\,dex depending on the atomic species, leaving the metallicity unquantified.
Considering all of the above, we judge the dust-to-gas ratio measurements (\citealt{2017PASJ...69L...5F,2019ApJ...871...44T} and \citetalias{2021PASJ...73S.117F}) to be the most appropriate and adopt the method in the present work, with the aim of extending the metallicity measurement of the \ion{H}{i} gas over the whole sky outside the Galactic plane.
A new feature in the present work is to employ a multiple-regression technique to analyze the relationship between $W_{\ion{H}{i}}$ of multiple velocity components and the total amount of $\tau_{353}$.
Another is the geographically weighted regression (GWR) technique \citep{10.1111/j.1538-4632.1996.tb00936.x,Fotheringham2002} which allows us to derive the spatial distribution of the dust-to-gas ratio.
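A single GWR evaluation is a spatially weighted least-squares fit; the Gaussian kernel and names below are an illustrative choice, not necessarily the implementation of the cited works:

```python
import numpy as np

def gwr_fit(coords, X, y, target, bandwidth):
    """Locally weighted least squares y ~ X around `target`.

    coords : (n, 2) sky positions of the data points
    X      : (n, m) regressors (e.g. W_HI of each velocity component)
    y      : (n,)   response (e.g. tau_353)
    Points are down-weighted with a Gaussian kernel of the given bandwidth.
    """
    d2 = np.sum((coords - target) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth**2)
    XtW = X.T * w                      # broadcast weights over rows
    return np.linalg.solve(XtW @ X, XtW @ y)
```

Evaluating such a fit at every sky position yields a map of the regression coefficients, i.e. the spatial distribution of the dust-to-gas ratio.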
The paper is organized as follows: Section~\ref{sec:datasets} describes the datasets used, and Section~\ref{sec:properties} gives the dust and gas properties analyzed by the present work.
Section~\ref{sec:analyses} shows the dust-to-gas ratio distribution of the \ion{H}{i} components such as the IVCs and HVCs, and the metallicity distribution in the whole sky including the individual outstanding components.
Section~\ref{sec:discussion} gives a discussion focusing on the implications of the IVC metallicity.
Section~\ref{sec:conclusions} concludes the present work.
\section{Datasets}
\label{sec:datasets}
\subsection{\ion{H}{i} data}
\label{subsec:HI_data}
Archival data of the HI4PI full-sky survey \citep{2016A&A...594A.116H} were used in the present study.
HI4PI combined the data from the first release of the Effelsberg-Bonn \ion{H}{i} Survey \citep[EBHIS,][]{2016A&A...585A..41W} and the third revision of the Galactic All-Sky Survey \citep[GASS,][]{2015A&A...578A..78K}.
The data were divided into 192 parts, and each of them was presented as a FITS binary table containing spectra on the \textsc{HEALPix}\footnote{\url{http://healpix.sourceforge.net/}} grid with a resolution parameter of $N_\mathrm{side}=1024$ (the mean pixel spacing is 3.4\,arcmin).
The brightness temperature noise level is $\sim 43$\,mK (rms) at a velocity resolution of 1.49\,km\,s$^{-1}$.
The velocity coverage (with respect to the Local Standard of Rest, LSR) is $|V_\mathrm{LSR}|\leq 600$\,km\,s$^{-1}$ (EBHIS part) or $|V_\mathrm{LSR}|\leq 470$\,km\,s$^{-1}$ (GASS part).
The FWHM angular resolution of the combined map is 16.2\,arcmin, a factor of two better than that of the Leiden/Argentine/Bonn (LAB) Survey \citep{2005A&A...440..775K}.
The \ion{H}{i} data were integrated into the five velocity ranges listed in Table \ref{tab:velocity_components} and combined into a single \textsc{HEALPix} format data for each velocity range.
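As a sketch of this binning step (function names are ours; boundary channels are treated inclusively here for simplicity), integrating a brightness-temperature spectrum over a velocity range and applying the optically thin conversion $N^{*}_{\ion{H}{i}} = 1.82\times 10^{18}\,W_{\ion{H}{i}}$ could look like:

```python
import numpy as np

C0 = 1.82e18  # cm^-2 per (K km s^-1), optically thin HI conversion

def integrate_ranges(velocity, tb, ranges):
    """Trapezoidal integration of a brightness-temperature spectrum
    T_b(v) over LSR velocity ranges, returning (W_HI, N_HI*) pairs."""
    results = []
    for vmin, vmax in ranges:
        sel = (velocity >= vmin) & (velocity <= vmax)
        v, t = velocity[sel], tb[sel]
        w = float(np.sum(0.5 * (t[1:] + t[:-1]) * np.diff(v)))
        results.append((w, C0 * w))
    return results
```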
\subsection{Dust optical depth data}
\label{subsec:tau353_data}
\citet{planck2016-XLVIII} used \textit{Planck} 2015 data release (PR2) maps and separated Galactic thermal dust emission from cosmic infrared background anisotropies by implementing the generalized needlet internal linear combination (GNILC) method.
The GNILC dust maps have a variable angular resolution with an effective beam FWHM varying from 5 to 21.8\,arcmin \citep[see][figure~2]{planck2016-XLVIII}.
The authors then produced the dust optical depth, temperature, and spectral index maps by fitting a modified blackbody model to the GNILC dust maps at 353, 545, 857\,GHz, and \textit{IRAS} 100\,{\micron} map.
We used the $\tau_{353}$ data released version R2.01 in \textsc{HEALPix} format with $N_\mathrm{side}=2048$ (the mean pixel spacing is 1.7\,arcmin).
The median relative uncertainty in $\tau_{353}$ is $\sigma(\tau_{353})/\tau_{353}=0.037$ in $|b|>15\degr$.
\subsection{CO data}
\label{subsec:co_data}
We used the \textit{Planck} PR2 type 2 CO(1--0) map \citep{planck2014-a12} to trace the distribution of molecular gas.
The type 2 map was obtained by combining the \textit{Planck} 100, 143, and 353\,GHz channel maps to extract the CO(1--0) line and has a higher signal-to-noise ratio than the type 1 map which is based on a single channel approach.
The angular resolution is 15\,arcmin FWHM.
The median uncertainty in the CO integrated intensity is $\sigma(W_\mathrm{CO})=0.44$\,K\,km\,s$^{-1}$ in $|b|>15\degr$.
\subsection{H~$\alpha$ data}
We used Wisconsin H~$\alpha$ Mapper Sky Survey (WHAM-SS) DR1-v161116-170912 which combines the Northern Sky Survey (NSS) in $\delta \gtrsim -30\degr$ \citep{2003ApJS..149..405H} and the Southern Sky Survey (SSS) data \citep{2010ASPC..438..388H} ($\delta \lesssim -30\degr$).
The WHAM-SS is the sole velocity-resolved all-sky H~$\alpha$ survey publicly available at present, covering velocity ranges roughly from $V_\mathrm{LSR}=-100$ to $+100$\,km\,s$^{-1}$ with a 12\,km\,s$^{-1}$ resolution.
\subsection{Pre-processing of the data}
The HI4PI \ion{H}{i} data and \textit{Planck} dust data have different resolutions.
We processed the data as follows: (1) if the $\tau_{353}$ data have a 21.8\,arcmin resolution at a data point, the \ion{H}{i} data were smoothed to match; otherwise, the dust data were smoothed to the 16.2\,arcmin resolution.
(2) Both were degraded to $N_\mathrm{side}=256$ (the mean pixel spacing is 13.7\,arcmin).
In the following sections, we used these processed data unless otherwise noted.
\section{Gas and dust properties}
\label{sec:properties}
\subsection{Neutral gas and dust optical depth}
\begin{table*}
\caption{Summary of the velocity ranges in the present study.}
\label{tab:velocity_components}
\begin{threeparttable}
\begin{tabular}{l@{ }lr@{ -- }rrrr}
\hline
& & \multicolumn{2}{c}{$V_\mathrm{LSR}$ range$^{b}$} & $M_{\ion{H}{i}}$ ($|b|>15\degr$)$^{c}$ & \multicolumn{2}{c}{$M_{\ion{H}{i}}$ (valid data points)$^{d}$} \\
\cline{6-7}
\multicolumn{2}{c}{Name$^{a}$ } & \multicolumn{2}{c}{(km\,s$^{-1}$)} & \multicolumn {1}{c}{(M$_{\sun}$)} & \multicolumn {1}{c}{(M$_{\sun}$)} \\
\hline
Negative high velocity & (NHV) & $-470$ & $-100$ & $1.9\times 10^{7}$ & $1.9\times 10^{6}$ & 12\% \\
Negative intermediate velocity & (NIV) & $-100$ & $-30$ & $1.5 \times 10^{6}$ & $4.3\times 10^{5}$ & 29\% \\
Low velocity & (LV) & $-30$ & $+30$ & $7.0\times 10^{5}$ & $3.1\times 10^{5}$ & 43\% \\
Positive intermediate velocity & (PIV) & $+30$ & $+100$ & $5.3\times 10^{5}$ & $7.9\times 10^{4}$ & 15\% \\
Positive high velocity & (PHV) & $+100$ & $+470$ & $3.7\times 10^{6}$ & $2.5\times 10^{5}$ & 7\% \\
\hline
\end{tabular}
\begin{tablenotes}
\item[$a$] Identification name, and its abbreviation form in parentheses.
\item[$b$] Minimum and maximum velocities with respect to the LSR.
\item[$c$] The total \ion{H}{i} mass in the $|b|>15\degr$ sky excluding the Magellanic Stream, obtained by equation~(\ref{eqn:HI_mass}) assuming distances of 150\,pc (LV), 1\,kpc (NIV and PIV), and 10\,kpc (NHV and PHV).
\item[$d$] The total \ion{H}{i} mass in the valid data points and its percentage of the whole $M_{\ion{H}{i}}$ in $|b|>15\degr$. The valid data points do not meet any of the masking criteria (a) -- (d) in Appendix~\ref{sec:masking} and have column densities $N_{\ion{H}{i}^{\ast}}>6\times 10^{19}$\,cm$^{-2}$ (see Section~\ref{subsec:regressionanalysis}).
\end{tablenotes}
\end{threeparttable}
\end{table*}
Fig.~\ref{fig:HI_maps} shows the spatial distributions of $W_{\ion{H}{i}}$ in the negative (i.e., blue-shifted) intermediate velocity (NIV, $V_\mathrm{LSR}=-100$ -- $-30$\,km\,s$^{-1}$), positive (red-shifted) intermediate velocity (PIV, $V_\mathrm{LSR}=+30$ -- $+100$\,km\,s$^{-1}$), and low velocity (LV, $|V_\mathrm{LSR}|<30$\,km\,s$^{-1}$) components, respectively.
Those of the negative- and positive high-velocity (NHV and PHV, $|V_\mathrm{LSR}|>100$\,km\,s$^{-1}$) components are also shown as silhouettes.
Fig.~\ref{fig:nhi_histo} shows the distribution function of the apparent amount of \ion{H}{i} gas (the product of $W_{\ion{H}{i}}$ and the solid angle $s$) as a function of the column density
\begin{equation}
N_{\ion{H}{i}}^{*}=C_{0}W_{\ion{H}{i}}=1.82\times 10^{18}\,(\mathrm{cm}^{-2}\,\mathrm{K}^{-1}\,\mathrm{km}^{-1}\,\mathrm{s})\,W_{\ion{H}{i}}
\end{equation} in the five velocity ranges (the asterisk means the \ion{H}{i} column density is under the optically-thin approximation ($\tau_{\ion{H}{i}}\ll 1$), following the notation of \citealt{2014ApJ...796...59F,2015ApJ...798....6F}, \citetalias{2017ApJ...838..132O} and \citetalias{2019ApJ...878..131H}).
As reported in the previous works, the IVCs are mainly in the negative velocity ranges.
The NIV components are concentrated in giant IVC complexes IV~Arch, IV~Spur, and Low-Latitude~(LL)~IV~Arch, occupying one-third of $b>15\degr$ skies.
Another conspicuous IVC, PP~Arch, is located in the Galactic South and has a head-tail structure elongated from IVC~86~$-$36 to $(l, b)=(150\degr, -60\degr)$.
Only a small apparent amount of \ion{H}{i} gas is found in the positive velocity ranges, except for the Magellanic System.
The LV components covering the whole sky are considered to be the local volume gas within 300\,pc of the sun.
They have an \ion{H}{i} column density range of $10^{20}$ -- $10^{21}$\,cm$^{-2}$ with a peak at 3 -- $4\times 10^{20}$\,cm$^{-2}$, which is one order of magnitude higher than IVCs ($1\times 10^{19}$ to $2\times 10^{20}$\,cm$^{-2}$) and HVCs ($\lesssim 1\times 10^{20}$\,cm$^{-2}$).
Fig.~\ref{fig:tau353_maps} shows the spatial distribution of $\tau_{353}$, and Fig.~\ref{fig:global_correlation} shows the correlation plots between $\tau_{353}$ and $W_{\ion{H}{i}}$ in the $|b| > 15\degr$ skies with masking described in Appendix~\ref{sec:masking}.
The $\tau_{353}$ values are the total amounts along the lines of sight, whereas the $W_{\ion{H}{i}}$ values are obtained by integrating within each velocity range.
The good correlation between the two quantities is found only in the LV component, while the other \ion{H}{i} gas components show a rather poor correlation between them.
The plots demonstrate that the gas in the intermediate- and high-velocity components has significantly different properties showing little correlation with $\tau_{353}$.
The high- to intermediate-velocity components show vertical features at $\tau_{353}\sim 1\times 10^{-6}$ in Fig.~\ref{fig:global_correlation}, indicating little dependence of $W_{\ion{H}{i}}$ on $\tau_{353}$.
We suggest these vertical features, although not dominant, indicate low metallicity gas included in the high- to intermediate-velocity components as described later in Section~\ref{sec:discussion}.
For the rest of the points scattered broadly, $\tau_{353}$ shows no correlation with the IVCs or the HVCs.
\subsection{Contribution of the Ionized Gas to the Dust Optical Depth}
\label{subsec:ionizedgas}
The diffuse warm ionized medium (WIM) outside of localized \ion{H}{ii} regions is another significant component of the ISM.
The diffuse WIM distributed in the thick disk is estimated to have a surface density of approximately one-third of \ion{H}{i} \citep{1991IAUS..144...67R}.
The absorption-line studies revealed that HVCs and IVCs are associated with the ionized components \citep[e.g.,][]{2003ApJS..146..165S,2009ApJ...699..754S,2011Sci...334..955L}.
The velocity-resolved high-sensitivity surveys of diffuse H~$\alpha$ emission using WHAM showed that the neutral and ionized components trace each other well, though the detailed structure is not identical \citep[e.g.,][]{1998ApJ...504..773T,2001ApJ...556L..33H,2012ApJ...761..145B}.
The estimated mass of the associated ionized component is comparable to that of the neutral counterpart.
Previous works reported the observational evidence for the dust in the WIM \citep{1999ApJ...517..746H,2009ApJ...699.1374D}.
However, the dust properties in the diffuse WIM are still not well understood.
The destruction of the dust grains is expected in the low-density ISM \citep[e.g.,][]{1989ApJ...345..782M} and the WIM could be dust poor by grain-grain sputtering, in particular, in the H/IVCs which are colliding with the halo warm gas at supersonic velocity of 30 -- 200\,km\,s$^{-1}$ \citep[e.g.,][]{1994ApJ...433..797J}.
In order to approximately and empirically quantify the dust emission related to the WIM, we examined the relationship between $N_{\ion{H}{i}}^{\ast}$, $N_{\mathrm{H}^{+}}$, and $\tau_{353}$.
The column density of the ionized hydrogen $N_{\mathrm{H}^{+}}$ is converted from H~$\alpha$ intensity ($I_{\mathrm{H}\alpha}$) by a conversion factor $N_{\mathrm{H}^{+}}=3\times 10^{20}\,(\mathrm{cm}^{-2}\,\mathrm{R}^{-1})I_\mathrm{H\alpha}$ (see Appendix~\ref{sec:IHalpha-to-NH+}), where $1\,\mathrm{R} = 10^{6}/4\pi$\,photons\,cm$^{-2}$\,s$^{-1}$\,sr$^{-1}$.
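This conversion is a single multiplicative factor; as a one-line sketch (function name ours):

```python
def halpha_to_nhplus(i_halpha_rayleigh):
    """Ionized hydrogen column density [cm^-2] from H-alpha intensity [R],
    using the adopted factor N_H+ = 3e20 (cm^-2 / R) * I_Halpha."""
    return 3.0e20 * i_halpha_rayleigh
```

For example, the 0.2\,R rms discontinuity quoted below corresponds to halpha_to_nhplus(0.2), i.e. $6\times 10^{19}$\,cm$^{-2}$.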
Fig.~\ref{fig:WHAM_maps}, the distribution of $I_{\mathrm{H}\alpha}$ in the NIV, LV, and PIV velocity ranges, shows that the WHAM-SSS \citep{2010ASPC..438..388H} data in $\delta \lesssim -30\degr$ suffer from block discontinuity (0.2\,R rms, corresponding to $6\times 10^{19}$\,cm$^{-2}$ using the conversion factor above) probably due to some data-reduction issue.
We thus made the regression analysis in the two regions in the NSS part ($\delta >-30\degr$) shown in Fig.~\ref{fig:WHAM_maps}.
The region (1) is the main part of IV~Arch and Spur ($l=90\degr$ -- $270\degr$, $b>60\degr$), and the region (2) is in the middle-to-high latitude ($l<30\degr$ or $l>330\degr$, $b>30\degr$).
Both regions were set by avoiding the HVCs and strong compact H~$\alpha$ sources.
The WHAM survey is undersampled with a 1\,deg beam at a 1\,deg spacing \citep[see][section~2.2]{2003ApJS..149..405H}, having a $\sim 3$ -- 12 times lower resolution than the HI4PI and \textit{Planck} data, and we convolved the $N_{\ion{H}{i}}^{\ast}$ and $\tau_{353}$ data with the WHAM beam centered on each WHAM pointing.
Following \citet{2003ApJS..146..407F}, we approximated the WHAM beam to be a smoothed top-hat function
\begin{equation}
f(\theta)=\frac{1}{\exp\left[(\theta-\theta_{0})/\theta_\mathrm{s}\right]+1},
\end{equation}
where $\theta$ is the angular distance from the beam center, $\theta_{0}$ and $\theta_\mathrm{s}$ are set to 0.5 and 0.025\,deg, respectively.
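The adopted beam approximation is easy to evaluate directly (angles in degrees; sketch only, names ours):

```python
import numpy as np

def wham_beam(theta, theta0=0.5, theta_s=0.025):
    """Smoothed top-hat approximation to the 1-degree WHAM beam:
    f = 1 / (exp((theta - theta0)/theta_s) + 1), angles in degrees."""
    return 1.0 / (np.exp((theta - theta0) / theta_s) + 1.0)
```

The response is essentially unity inside $\theta_0 = 0.5$\,deg, equals one half exactly at the nominal beam edge, and falls to zero beyond it over a transition width set by $\theta_\mathrm{s}$.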
Fig.~\ref{fig:tau353_vs_WHAM} shows the $\tau_{353}$-$N_{\ion{H}{i}}^{\ast}$ and $\tau_{353}$-$N_{\mathrm{H}^{+}}$ correlations in the NIV, LV and PIV velocity ranges.
It is shown that $\tau_{353}$ does not show any sign of dependence on $N_{\mathrm{H}^{+}}$, while $\tau_{353}$ is clearly correlated with $N_{\ion{H}{i}}^{\ast}$.
We therefore confirm that the dust emission from the ionized component is not significantly contributing to $\tau_{353}$ and can be ignored in the present analysis.
Note that \citet{2000A&A...354..247L} previously concluded that the dust emissivity ($\tau_{\lambda}/N_{\mathrm{H}^{+}}$ at a wavelength $\lambda$) in the WIM is close to the one in the \ion{H}{i}.
However, those authors used an $N_{\mathrm{H}^{+}}/I_{\mathrm{H}\alpha}$ conversion factor four times smaller than the one adopted in the present study, which leads to a corresponding overestimate of the dust emissivity.
\section{The Estimate of the dust-to-gas ratio}
\label{sec:analyses}
\subsection{Formulation}
\label{subsec:formulation}
The observed $\tau_{353}$ is the sum of the contribution of each component,
\begin{equation} \label{eqn:tau353_sum_of_contributions}
\tau_{353} = \sum_{X} \left( \tau_{353, X, \mathrm{H}^{+}}+\tau_{353, X, \ion{H}{i}}+\tau_{353, X, \mathrm{H}_{2}} \right),
\end{equation}
where $X=\mathrm{NHV}$, NIV, $\cdots$, PHV represents the velocity ranges listed in Table \ref{tab:velocity_components}, $\tau_{353, X, \mathrm{H}^{+}}$, $\tau_{353, X, \ion{H}{i}}$, and $\tau_{353, X, \mathrm{H}_{2}}$ are the contribution of ionized, atomic, and molecular gas in the velocity range $X$, respectively.
We masked the regions where CO integrated-intensity $W_\mathrm{CO}>1.4$\,K\,km\,s$^{-1}$ (corresponding to the $3\sigma$ noise level in $|b|>15\degr$), and the molecular fraction can be approximated to be zero in the unmasked regions.
If we assume that the contribution of the ionized gas is negligibly small (see Section~\ref{subsec:ionizedgas}), then equation~(\ref{eqn:tau353_sum_of_contributions}) can be rewritten as
\begin{equation} \label{eqn:tau353_sum_of_contributions_approx}
\tau_{353} = \sum_{X} \tau_{353, X, \ion{H}{i}}.
\end{equation}
The contribution of the velocity component $X$ is expressed as a function of \ion{H}{i} column-density $N_{\ion{H}{i}, X}$,
\begin{equation} \label{eqn:totalcolumn_to_tau353}
\tau_{353, X, \ion{H}{i}}=\left(\zeta_{X} \frac{N_{\ion{H}{i}, X}}{C}\right)^{\alpha},
\end{equation}
where $\zeta_{X}$ is the `metallicity' term, or normalized dust-to-gas ratio (that of the reference local ISM is assumed to be 1.0), and $C$ is an empirical constant determined independently.
Here, we adopt the $N_{\ion{H}{i}}$ model having a nonlinear relationship with $\tau_{353}$ derived by \citetalias{2017ApJ...838..132O} and \citetalias{2019ApJ...878..131H}.
These authors used the 21\,cm \ion{H}{i} data with $\tau_{353}$ following \citet{2014ApJ...796...59F,2015ApJ...798....6F} by taking into account the optical depth effect of the 21\,cm \ion{H}{i} emission.
\citetalias{2017ApJ...838..132O} derived a $\tau_{353}$-$N_{\ion{H}{i}}$ relationship with $\alpha = 1.3$ and $C=9.0\times 10^{24}$\,cm$^{-2}$ for the \ion{H}{i} gas in the Perseus region, and \citetalias{2019ApJ...878..131H} obtained $\alpha = 1.2$ and $C=2.0\times 10^{25}$\,cm$^{-2}$ in the Chamaeleon molecular cloud complex.
The value of $\alpha$ greater than 1.0 was suggested to be due to the dust evolution effect by \citet{2013ApJ...763...55R} who derived the non-linearity with $\alpha=1.3$ from (the far infrared optical depth)-(near infrared color excess) relationship in Orion.
The \citetalias{2017ApJ...838..132O} and \citetalias{2019ApJ...878..131H} models give nearly identical $\tau_{353}$ values (the difference in $\tau_{353}$ is $\lesssim 1\times 10^{-7}$ for $N_\mathrm{H}<8\times 10^{20}$\,cm$^{-2}$), as shown in Fig.~\ref{fig:global_correlation}.
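The closeness of the two model curves can be verified numerically (a minimal sketch using the $(\alpha, C)$ pairs quoted above):

```python
# tau_353 as a function of N_HI for the two nonlinear dust models:
# O17: alpha = 1.3, C = 9.0e24 cm^-2;  H19: alpha = 1.2, C = 2.0e25 cm^-2.
def tau353(n_hi, alpha, big_c):
    """Dust optical depth from HI column density n_hi (cm^-2)."""
    return (n_hi / big_c) ** alpha

# At the upper end of the quoted range, N_H = 8e20 cm^-2, the two
# models differ by roughly 1e-7, consistent with the text.
N = 8e20  # cm^-2
diff = abs(tau353(N, 1.3, 9.0e24) - tau353(N, 1.2, 2.0e25))
print(f"{diff:.1e}")
```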
\citet{2019ApJ...884..130H} made a \textit{Fermi}-LAT $\gamma$-ray analysis and confirmed the non-linearity with $\alpha\sim 1.4$.
Note that \citetalias{2017ApJ...838..132O} and \citetalias{2019ApJ...878..131H}\footnote{The description in Section~2.3 of \citetalias{2019ApJ...878..131H}, `released version R1.10 are used', is to be corrected to `released version R1.20 are used'. } used \textit{Planck} 2013 data release (PR1) dust data released version R1.20 \citep{planck2013-p06b}, whereas \citet{2019ApJ...884..130H} and the present study used R2.01 data.
We compared the two datasets and found a good consistency between $\tau_{353}$(R2.01) and $\tau_{353}$(R1.20) (see Appendix~\ref{sec:PR2_vs_PR1}).
The \ion{H}{i} column density is expressed as a function of \ion{H}{i} integrated intensity $W_{\ion{H}{i}, X}$ and optical depth $\tau_{\ion{H}{i}, X}$,
\begin{equation} \label{eqn:HI_intensity_to_column}
N_{\ion{H}{i}, X} = C_{0}\frac{\tau_{\ion{H}{i}, X}}{1-\exp(-\tau_{\ion{H}{i}, X})}W_{\ion{H}{i}, X}.
\end{equation}
Suppose that $N_{\ion{H}{i}, X} \sim N_{\ion{H}{i}, X}^{\ast} = C_{0}W_{\ion{H}{i}, X}$ under the optically thin approximation of the \ion{H}{i} emission ($\tau_{\ion{H}{i}, X} \ll 1$), then equations~(\ref{eqn:totalcolumn_to_tau353}) and (\ref{eqn:HI_intensity_to_column}) give
\begin{equation} \label{eqn:WHI_to_tau353}
\tau_{353, X, \ion{H}{i}}=\left(\zeta_{X} \frac{C_{0}}{C} W_{\ion{H}{i}, X} \right)^{\alpha}
\end{equation}
and equation~(\ref{eqn:tau353_sum_of_contributions_approx}) can be rewritten as
\begin{equation} \label{eqn:regression_model}
\tau_{353} = \left(\frac{C_{0}}{C}\right)^{\alpha} \sum_{X} \left(\zeta_{X}W_{\ion{H}{i}, X}\right)^{\alpha}.
\end{equation}
Equation~(\ref{eqn:regression_model}) is reformed by introducing spatially varying coefficients $\zeta_{X}(l_{i}, b_{i})$ as
\begin{equation} \label{eqn:regression_model_libi}
\tau_{353}(l_{i}, b_{i}) = \left(\frac{C_{0}}{C}\right)^{\alpha} \sum_{X} \left[\zeta_{X}(l_{i}, b_{i}) W_{\ion{H}{i}, X}(l_{i}, b_{i})\right]^{\alpha},
\end{equation}
where $(l_{i}, b_{i})$ are the galactic coordinates of the $i$-th data point ($i=1, \cdots, n$; $n=4\times 10^{5}$ in the present study).
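The optical-depth correction factor $\tau_{\ion{H}{i}}/[1-\exp(-\tau_{\ion{H}{i}})]$ in equation~(\ref{eqn:HI_intensity_to_column}) tends to unity in the optically thin limit assumed above; a minimal numerical illustration:

```python
import math

def thin_correction(tau_hi):
    """Correction factor tau/(1 - exp(-tau)) applied to the optically
    thin column density C0*W_HI in equation (4)."""
    if tau_hi == 0.0:
        return 1.0  # limiting value as tau -> 0
    return tau_hi / (1.0 - math.exp(-tau_hi))

# The factor is ~1 for tau << 1 (the approximation used in equations
# (5)-(7)) but exceeds 1.5 already at tau = 1.
for tau in (0.01, 0.1, 1.0):
    print(tau, round(thin_correction(tau), 3))
```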
\subsection{Regression analysis and results}
\label{subsec:regressionanalysis}
We estimated $\zeta_{X}(l_{i}, b_{i})$ at each data point using the GWR technique.
The traditional (ordinary) linear regression determines a set of global (spatially invariant) coefficients, whereas GWR estimates local regression coefficients at each data point employing a distance-decay weighting function.
We used a truncated-Gaussian weighting function
\begin{equation} \label{eqn:weighting_function}
w_{i}(l_{j}, b_{j})=\left\{\begin{array}{ll}
\exp\left[-\displaystyle{\frac{1}{2}}\left(\frac{\theta_{ij}}{\theta_\mathrm{bw}}\right)^{2}\right] & (\theta_{ij} \leq \theta_\mathrm{tr}) \\
0 & (\mbox{otherwise})
\end{array}\right.,
\end{equation}
where $\theta_{ij}$ is the great-circle angular distance between the regression point (the data point where we want to estimate the local coefficients $\zeta_{X}(l_{i}, b_{i})$) and the $j$-th ($j=1, \cdots, n$) data point $(l_{j}, b_{j})$, $\theta_\mathrm{bw}$ is the bandwidth of the weighting function, and $\theta_\mathrm{tr}=3.71\times \theta_\mathrm{bw}$ is the truncation radius which gives a truncation value of $w_{i}(l_{j}, b_{j})=1\times 10^{-3}$.
We used $\theta_\mathrm{bw}=20$\,arcmin ($\mathrm{FWHM}=47$\,arcmin).
The results of GWR are relatively insensitive to the choice of weighting function \citep[e.g.,][]{Fotheringham2002}; not only Gaussian but also, for example, bi-square functions are often used.
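The weighting function above can be sketched in a few lines of Python (bandwidth and truncation radius as quoted in the text):

```python
import math

# Truncated-Gaussian distance-decay weight used in the GWR, with
# bandwidth 20 arcmin and truncation radius 3.71 * bandwidth.
THETA_BW = 20.0             # arcmin
THETA_TR = 3.71 * THETA_BW  # arcmin

def gwr_weight(theta_ij):
    """Weight of a data point at great-circle distance theta_ij (arcmin)
    from the regression point."""
    if theta_ij > THETA_TR:
        return 0.0
    return math.exp(-0.5 * (theta_ij / THETA_BW) ** 2)

# The weight at the truncation radius is the quoted 1e-3 cutoff.
print(f"{gwr_weight(THETA_TR):.1e}")  # ~1.0e-03
```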
Because the $\zeta_{X}$ values are not allowed to become negative, we introduced a non-negative least-squares (NNLS) regression technique in which $\zeta_{X}$ is set to 0 if the component $X$ at and around a regression point has a very low dust-to-gas ratio.
The procedures and formulae used in the analysis are summarized in Appendix~\ref{sec:GWR}.
Fig.~\ref{fig:distribution_of_zeta} shows the spatial distribution of $\widehat{\zeta}_\mathrm{NHV}$, $\widehat{\zeta}_\mathrm{NIV}$, and $\widehat{\zeta}_\mathrm{LV}$ (the hat symbol indicates a quantity estimated by the regression), obtained by adopting a `modified \citetalias{2019ApJ...878..131H} model' with $C=1.5\times 10^{25}$\,cm$^{-2}$ (chosen so that the $W_{\ion{H}{i}, \mathrm{LV}}$-weighted mode of $\widehat{\zeta}_\mathrm{LV}$ equals 1.0) and $\alpha=1.2$.
The remaining results of the analyses, $\widehat{\zeta}_\mathrm{PIV}$ and $\widehat{\zeta}_\mathrm{PHV}$, are presented in Appendix~\ref{sec:GWR}.
The standard error of $\widehat{\zeta}_{X}$, $\sigma(\widehat{\zeta}_{X})$, is inversely proportional to $W_{\ion{H}{i}, X}$ (or $N_{\ion{H}{i}, X}^{\ast}$), but is not dependent on $\widehat{\zeta}_{X}$ (see Appendix~\ref{subsec:GWR_stderr}).
We hereafter set a threshold column density at $N_{\ion{H}{i}, X}^{\ast}=6\times 10^{19}$\,cm$^{-2}$ ($W_{\ion{H}{i}, X}=33$\,K\,km\,s$^{-1}$), beyond which $\widehat{\zeta}_{X}$ can be determined with $\sigma(\widehat{\zeta}_{X}) \lesssim 0.3$.
We find that the total mass of the IVC gas is $1.5\times 10^{6}$\,M$_{\sun}$ (assuming a distance of 1\,kpc, see below) and the metallicity is determined in the present work for 29 per cent of the IVC gas (Table~\ref{tab:velocity_components}).
Fig.~\ref{fig:mass_histo_of_zeta} shows the metallicity distribution function (the distribution of the \ion{H}{i} mass as a function of $\widehat{\zeta}_{X}$).
The \ion{H}{i}-mass at each pixel is given by
\begin{equation}\label{eqn:HI_mass}
M_{\ion{H}{i}, X}(l, b)=C_{0}m_\mathrm{p}W_{\ion{H}{i}, X}(l, b)\frac{4\pi}{12{N_\mathrm{side}}^{2}}D^{2},
\end{equation}
where ($l$, $b$) are the galactic coordinates of the pixel, $m_\mathrm{p}$ is the mass of the proton, $N_\mathrm{side}=256$ is the resolution parameter in the present dataset, and $D$ is the distance to the \ion{H}{i} gas.
We applied $D=150$\,pc for the LV component, 1\,kpc for the NIV component (excluding IV21 and the Magellanic Stream), and 10\,kpc for the HVC Complex~C.
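Equation~(\ref{eqn:HI_mass}) can be evaluated per pixel as follows (a sketch in cgs units; the optically thin conversion $C_{0}=1.823\times 10^{18}$\,cm$^{-2}$\,(K\,km\,s$^{-1}$)$^{-1}$ and the physical constants are standard values assumed here, since the text defines $C_{0}$ elsewhere):

```python
import math

# Per-pixel HI mass from equation (9), in cgs. The conversion factor
# C0 = 1.823e18 cm^-2 per (K km/s) is the standard optically thin value
# (an assumption here; C0 is defined earlier in the paper).
C0 = 1.823e18      # cm^-2 per K km/s
M_P = 1.6726e-24   # g, proton mass
PC = 3.0857e18     # cm per parsec
M_SUN = 1.989e33   # g
NSIDE = 256        # HEALPix resolution parameter of the dataset

def pixel_hi_mass(w_hi, distance_pc):
    """HI mass (M_sun) in one HEALPix pixel for integrated intensity
    w_hi (K km/s) at the given distance (pc)."""
    omega = 4.0 * math.pi / (12.0 * NSIDE**2)  # pixel solid angle (sr)
    column = C0 * w_hi                         # cm^-2
    area = omega * (distance_pc * PC) ** 2     # cm^2
    return column * M_P * area / M_SUN

# A pixel at the threshold W_HI = 33 K km/s placed at the IVC distance
# of 1 kpc carries several solar masses of HI.
print(round(pixel_hi_mass(33.0, 1000.0), 1))
```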
We give a description of the properties of the LV, NIV and HVC and some of the individual objects in the following subsubsections.
\subsubsection{The LV components}
The LV components, considered to be the local gas in the solar vicinity, have metallicities ($\widehat{\zeta}_\mathrm{LV}$) ranging from 0.6 to 1.5 at 10 per cent of the peak level, and the distribution is nearly Gaussian in shape with a dispersion of 0.1\,dex (Fig.~\ref{fig:mass_histo_of_zeta}(a)).
The measurements of metallicity of G dwarfs were made for the local volume within 25\,pc by many authors in the last five decades \citep[e.g.,][]{1996MNRAS.279..447R}.
These works indicate that the dispersion of metallicity is 0.2 -- 0.3\,dex with a peak at $-0.2$\,dex.
The dispersion is somewhat larger than the present one.
Considering that G dwarfs are much older (several Gyr) than the dynamical timescale of the \ion{H}{i} gas ($\sim $\,Myr), the difference seems reasonable; the \ion{H}{i} gas in the local volume is probably well mixed by turbulent motion.
\subsubsection{The NIV components}
The metallicity of the IVCs in the NIV range, most of which lie in the IV~Arch and Spur complexes, ranges from $\widehat{\zeta}_\mathrm{NIV}=0.2$ to 1.5 with a peak at 0.6, and 30 per cent of them have $\widehat{\zeta}_\mathrm{NIV}<0.5$ (Fig.~\ref{fig:mass_histo_of_zeta}(b)).
This result calls for a revision of the conventional view that the IVCs have metallicities of nearly 1.0 solar, and may indicate that an appreciable fraction of the IVCs originated in a low-metallicity environment in the Galactic halo or farther away.
The IVCs are then not exclusively explained by the Galactic-fountain picture; a significant fraction may be gas falling in from the Galactic halo or from outside the Galaxy.
Fig.~\ref{fig:zeta_vs_mom1}(a) shows that the metallicity of the IV~Arch and Spur depends on velocity in the sense that the low-velocity IVCs tend to have higher metallicity.
Fig.~\ref{fig:mom1_NIV} shows a trend in which the high-velocity part lies in the inner region of the IV~Arch and Spur and the low-velocity part in the outer region.
The typical infalling time scale of the IVCs is estimated to be 1\,kpc/100\,km\,s$^{-1}$ $\sim10$\,Myr.
The continuity of the IV~Arch and Spur and the short time scale of the IVCs suggest that the metallicity of the IV~Arch and Spur may be changing in $\sim 10$\,Myr.
A possibility is that the IV Complex is dynamically interacting with the halo gas having high metallicity, and significant gas accretion onto the IV~Arch and Spur is taking place.
This accretion will decelerate the IVCs and make them more massive.
It is likely that the process also increases its metallicity by merging with high metallicity gas in the halo (T.\ Inoue, private communication).
The process will be more important for smaller IVCs of $<10^{3}$\,M$_{\sun}$, while the interaction will not be effective for a massive IVC of $10^{4}$\,M$_{\sun}$.
The simulations by \citet{2022ApJ...925..190S} dealt with such a massive IVC and deceleration does not seem to be significant.
\subsubsection{IVC~86$-$36 and PP~Arch}
The recent result on IVC~86$-$36 in the head of PP~Arch indicates a low metallicity of $\lesssim 0.2$ relative to the solar neighbourhood \citepalias{2021PASJ...73S.117F}.
The present metallicity distribution for the PP~Arch (Fig.~\ref{fig:distribution_of_zeta_PPArch}) cannot be directly compared with that of \citetalias{2021PASJ...73S.117F} for the following reasons.
\begin{enumerate}
\item The reference dust-to-gas ratio in \citetalias{2021PASJ...73S.117F} (the $\tau_{353}$-$W_{\ion{H}{i}}$ relationship for the foreground local ISM, their equation~(3)) gives $\sim 30$ per cent smaller `metallicity' values than those in the present study; the value of 0.22 in \citetalias{2021PASJ...73S.117F} corresponds to approximately $\widehat{\zeta}_\mathrm{NIV}=0.3$ in the present study.
This is probably because the foreground local \ion{H}{i} in \citetalias{2021PASJ...73S.117F} is not fully optically thin, even though they selected the data points with high dust temperature ($T_\mathrm{d}>21$\,K).
\item In the present work, a large part of IVC~86$-$36 is masked owing to saturated local \ion{H}{i} emission (meeting criterion (d) in Appendix~\ref{sec:masking}), leaving only a small number of valid data points.
\item The uncertainties of the foreground-subtracted $\tau_{353}$ in \citetalias{2021PASJ...73S.117F} ($\sim 1\times 10^{-6}$) are larger than the contribution of IVC~86$-$36 to $\tau_{353}$ (estimated to be $10^{-7}$).
\citetalias{2021PASJ...73S.117F} presented the low metallicity of IVC~86$-$36 but did not accurately obtain the regression coefficient of $\tau_{353}$-$W_{\ion{H}{i}}$ correlation in IVC~86$-$36.
\end{enumerate}
In spite of that, the present result reveals that the tail of PP~Arch has a low metallicity of $\widehat{\zeta}_\mathrm{NIV}=0.2$ and complements the low metallicity observed in the head.
If the low metallicity is correct, the previous observations of the absorption lines toward the IVCs may be subject to errors due to uncertain ionization states of the atoms and/or the interstellar depletion.
Fig.~\ref{fig:zeta_vs_mom1_PPArch}, the velocity-metallicity plot for PP~Arch, shows no correlation between the two quantities.
This may be explained if the halo gas is less dense in the negative (Galactic southern) hemisphere, leading to less interaction than in the positive latitude.
\subsubsection{The HVCs}
Complex~C has the largest apparent size among HVC complexes in $|b|>15\degr$ skies outside of the Magellanic Stream and barely enough column density to allow the present metallicity measurement.
Fig.~\ref{fig:mass_histo_of_zeta}(c) indicates that the metallicity of the complex is significantly lower than that of the IVCs; 50 per cent of the HVCs show $\widehat{\zeta}_\mathrm{NHV}<0.2$ and 90 per cent show $\widehat{\zeta}_\mathrm{NHV}<0.5$.
We find no strong dependence of the metallicity on velocity (Fig.~\ref{fig:zeta_vs_mom1}(b)).
If we adopt the same interaction scheme, the absence of correlation may be explained by the lower gas density in the halo, because the HVCs are located at 10\,kpc \citep{2008ApJ...684..364T}, farther from the plane.
\subsubsection{The Magellanic Stream}
The Magellanic Stream is an \ion{H}{i} streamer which was probably formed by the tidal interactions between the Magellanic Clouds and the Galaxy $\sim 2$\,Gyr ago.
According to the numerical simulations, most of the \ion{H}{i} gas was stripped off from the Small Magellanic Cloud (SMC) halo and it is expected that the gas has very low metallicity similar to the SMC \citep[e.g.,][]{2010ApJ...721L..97B}.
Figs.~\ref{fig:distribution_of_zeta_MS} and \ref{fig:mass_histo_of_zeta}(d), the spatial distribution and the distribution function of $\widehat{\zeta}_{X}$ in the Magellanic Stream, indicate that the gas has even lower metallicity than the HVCs; 90 per cent of the gas has $\widehat{\zeta}_{X}<0.15$.
Tsuge et al.\ (in preparation) derived the metallicity distributions using the same method as \citetalias{2021PASJ...73S.117F}.
The results indicate that the SMC has a low metallicity similar to the Magellanic Stream, while the LMC contains more metal-rich gas than the Magellanic Stream.
\section{Discussion}
\label{sec:discussion}
\subsection{Comparison of the present metallicity with the absorption line measurements}
\label{subsec:present_vs_absorption}
\begin{table}
\caption{Summary of metallicities.}
\label{tab:summary}
\begin{threeparttable}
\begin{tabular}{lrr}
\hline
& \multicolumn{1}{c}{Metallicity$^a$} & \\
\multicolumn{1}{c}{Cloud} & \multicolumn{1}{c}{(Relative to Solar)} & \multicolumn{1}{c}{{$\widehat{\zeta}_{X}$}$^b$} \\
\hline
\multicolumn{3}{l}{HVCs (in NHV)} \\
\hline
Complex~A & 0.02 -- 0.04 & 0 -- 0.3 \\
Complex~C & $0.089\pm 0.024^{+0.020}_{-0.005}$ & 0 -- 0.2 \\
\hline
\multicolumn{3}{l}{IVCs (in NIV)} \\
\hline
IV~Arch and Spur & $\sim 1$ & 0.4 -- 0.9 \\
LLIV~Arch & $1.0 \pm 0.5$ & $\cdots$ \\
PP~Arch & $0.54\pm 0.04$ & 0 -- 0.7 \\
\hline
Magellanic~Stream (tail) & $0.33\pm 0.05$ & 0 -- 0.08 \\
\hline
\end{tabular}
\begin{tablenotes}
\item[$a$] Taken from table~4 of \citet{2001ApJS..136..463W}.
\item[$b$] The $N_{\ion{H}{i}, {X}}^{\ast}$-weighted 25th and 75th percentiles of $\widehat{\zeta}_{X}$ for data points with $N_{\ion{H}{i}, {X}}^{\ast}>6\times 10^{19}$\,cm$^{-2}$.
\end{tablenotes}
\end{threeparttable}
\end{table}
The present work revealed the metallicity distribution which covers a large fraction of the local ISM, the IVCs and the HVCs.
We compare the present results with the previous absorption line measurements in Table~\ref{tab:summary}.
Most of these absorption line measurements were made before 2001 and \citet{2001ApJS..136..463W} compiled the metallicity and the distance of the HVCs and IVCs comprehensively from the literature.
A limited number of measurements have been made since, but they do not affect the compilation.
In table~4 of \citet{2001ApJS..136..463W}, the metallicity is $\sim 0.1$ solar for one HVC (Complex~C), $\sim 0.3$ solar for the Magellanic Stream, $\sim 0.5$ solar for a southern IVC, and consistent with $\sim 1$ solar for two northern IVCs (the IV~Arch and LLIV~Arch) and three more IVCs.
The effective resolution of the absorption measurements is set by the stellar image and is extremely small compared with the present resolution of 47\,arcmin.
The two kinds of metallicity values therefore sample entirely different volumes, differing by many orders of magnitude, and the values from the absorption measurements must be subject to large fluctuations owing to the small volumes sampled.
In spite of that we find that the low metallicity of $\sim 0.1$ -- 0.3 solar in the HVCs and the Magellanic Stream is fairly consistent with the present results.
Further, the metallicity is given to be $0.54\pm 0.04$ solar in the IVC PP~Arch and to be $\sim 1$ solar in the rest of the IVCs.
These IVC values, 0.5 -- 1.0 solar, are consistent with the distribution of the metallicity in Figure~\ref{fig:mass_histo_of_zeta}(b), which shows that for 60 per cent of the IVCs the metallicity is distributed in a range from $\widehat{\zeta}_\mathrm{NIV}=0.5$ to 1.5 with a $M_{\ion{H}{i}, \mathrm{NIV}}$-weighted mode at 0.6.
We therefore conclude that the absorption measurements show reasonable consistency with the present metallicity distribution function.
\subsection{The effect of the dust transformation}
The dust may be subject to physical/chemical transformation and have the different emission properties in the neutral gas in the Local Group.
Shock waves with velocities higher than 100\,km\,s$^{-1}$ are shown to be the most efficient dust-destruction process according to previous theoretical works \citep[e.g.,][]{2004ASPC..309..347J}.
Such high-velocity shock waves are common in supernova remnants.
Also, the IVCs and HVCs, having high velocities of around 100\,km\,s$^{-1}$, may ionize the gas, as observed in the H~$\alpha$ emission toward the IVCs.
Comparison of the H~$\alpha$ emission with $\tau_{353}$ indicates that such ionized gas contains little dust as argued in Section~\ref{subsec:ionizedgas}.
This is likely due to the dust destruction by grain-grain sputtering in the high velocity collision.
The neutral gas inside the IVCs and HVCs may not be much affected by the shocks, because the shock velocity is probably retarded inside the clouds.
This in fact seems to be consistent with the relatively loose correlation between H~$\alpha$ and the intense parts of the NIV component; e.g., IVC~86$-$36 shows no clear enhancement of H~$\alpha$ as compared with the whole PP~Arch (see Figure~\ref{fig:WHAM_maps}(a)), while some IVCs with lower \ion{H}{i} column density might be more highly ionized.
We therefore infer that dust transformation is probably not important in the present measurements, which focus on $N_{\ion{H}{i}}^{\ast}>6\times 10^{19}$\,cm$^{-2}$ (the vertical lines in Figure~\ref{fig:nhi_histo}); nevertheless, future extensive follow-up work, including the shock interaction of IVCs and HVCs, will be valuable to further validate the present metallicity measurements.
\subsection{The HVCs and IVCs, their interaction with the halo}
The origin and connection of the IVCs and HVCs have been an issue of broad interest over several decades since \citet{1966BAN....18..421O}.
According to the literature \citep[e.g.,][]{2001ApJS..136..463W}, the favoured view is that the IVCs mostly originate from the Galactic-fountain mechanism within the Galaxy, whereas the HVCs are of extragalactic origin.
As discussed in Section~\ref{subsec:present_vs_absorption} the Galactic origin of the IVCs is suggested by the higher metallicity of the IVCs, 0.5 -- 1.0 solar, than the HVCs, which was derived by the absorption measurements.
The present results, with orders of magnitude larger spatial coverage, revealed that the IVCs have significantly lower metallicity than previously thought (more than 35 per cent of the IVCs are $\widehat{\zeta}_\mathrm{NIV}<0.5$).
Another intriguing aspect revealed by the present work is that the metallicity of the IVCs shows a correlation with the velocity (Fig.~\ref{fig:zeta_vs_mom1}(a)); the higher velocity IVCs at $<-50$\,km\,s$^{-1}$ have lower metallicity $\widehat{\zeta}_\mathrm{NIV}<0.5$, while the lower velocity IVCs at $>-50$\,km\,s$^{-1}$ have higher metallicity of $\widehat{\zeta}_\mathrm{NIV}=0.5$ -- 1.0.
These results raise the possibility that at least part of the IVCs are of extragalactic origin, similar to the HVCs.
We also note that some IVCs show strong signs of interaction with the halo gas.
IVC~86$-$36 in PP~Arch is located at $z\sim 1$\,kpc and shows kinematical bridge features between the IVC and the disk \ion{H}{i} \citepalias{2021PASJ...73S.117F}.
The bridge features seem to link the two \ion{H}{i} gases as a result of the momentum exchange between them.
A collision between two clouds is numerically simulated \citep[e.g.,][]{2014ApJ...792...63T} and the formation of such a bridge is demonstrated.
Motivated by these newly found properties, we frame a scenario in which the IVCs are falling onto the disk from $z >\mathrm{several}$\,kpc under significant dynamical interaction with the halo gas.
Possible consequences of the scenario are as follows.
First, this scenario explains the dominant low metallicity of the IVCs.
Second, the metallicity trend depending on velocity is explained in terms of accumulation of the high metallicity halo gas onto the IVC.
\citet{2017ApJ...837...82H} and \citet{2014ApJ...795...99G} used hydrodynamical numerical simulations to present a picture in which HVCs/IVCs interact with the high-metallicity halo gas, which mixes with the low-metallicity HVC gas.
The Smith cloud, which is falling toward the plane, has a high metallicity of 0.5 solar \citep{2016ApJ...816L..11F} and may be explained by such mixing.
This issue deeply affects the interpretation of the metallicity of the IVCs and HVCs, and deserves further intensive study.
As for the connection of the IVCs with the HVCs, it is possible that the HVCs evolve into the IVCs through interaction with the low-density gas outside the disk beyond 10\,kpc, although the properties of the halo gas are poorly constrained at present.
In this scenario, an implication is that the total mass and the metallicity of the IVCs may be increased significantly by the accumulation of the halo gas.
If so, the observed IVCs give an upper limit in mass for the real infalling gas outside the halo.
This issue needs to be more carefully pursued by employing numerical simulations of the interaction and a detailed study of the IVC distribution and kinematics, which are beyond the scope of the present work.
Recent numerical simulations by \citet{2022ApJ...925..190S} were aimed at numerically reproducing the formation of the long tail of PP~Arch with a head IVC~86$-$36 observed by \citetalias{2021PASJ...73S.117F}.
These authors reproduced the shape and revealed details of the physical processes of the interaction between an infalling cloud and the halo and disk.
It is shown that, over 1\,kpc, the head develops a kpc-long tail in the direction of its orbit, making an angle of nearly $45\degr$ with the plane.
The cloud mass and the initial velocity are taken to be $10^{4}$\,M$_{\sun}$ and 100\,km\,s$^{-1}$, respectively.
In the end, the cloud completely merges with the disk, suggesting that most of the IVCs with less mass will merge with the disk.
The simulations of the Smith cloud show that the cloud having $10^{6}$\,M$_{\sun}$ punches through the disk, and may be moving up and down through the disk \citep{2016ApJ...816L..11F,2008ApJ...679L..21L}.
We suggest that most IVCs, having masses less than $10^{4}$\,M$_{\sun}$, will merge with the disk, with the Smith cloud being an exceptional case.
\subsection{The G-dwarf problem and sustainment of the star formation rate}
The IVCs are relevant to the long-standing issue, the G-dwarf problem \citep[e.g.,][]{1962AJ.....67..486V,1975MNRAS.172...13P}.
The G-dwarf problem poses a dilemma: the metallicities of the G dwarfs in the Galaxy show no particularly low values in spite of their old ages, of the order of 10\,Gyr.
In such an early era, the G dwarfs should have formed with significantly lower metallicity than solar if the metallicity of the ISM increased monotonically with time.
The problem has also been observed in K and M dwarfs \citep[e.g.,][]{1997MNRAS.286..617F,2012MNRAS.422.1489W}.
A possible scenario to solve the problem is that the low metallicity gas is being supplied to the Galaxy continuously after the formation of the high-metallicity G dwarfs.
Another relevant issue is that the current star formation rate of the Galaxy, 1\,M$_{\sun}$\,yr$^{-1}$, cannot be sustained if the Galaxy receives no additional gas supply from outside.
This is because the total gas in the Galaxy would be consumed by star formation within several Gyr without such a supply.
If so, the current high star formation rate cannot be explained, owing to the lack of \ion{H}{i} gas.
As has been discussed previously, the present findings have the potential to support the low-metallicity mass supply in the required order of magnitude as IVCs falling to the Galaxy.
The \ion{H}{i} mass of the present IVCs is estimated to be $1.5\times 10^{6}$\,M$_{\sun}$ within $\sim 1$\,kpc of the sun (Table~\ref{tab:velocity_components}).
Most of the IVCs have negative velocities.
If we assume that all the IVCs are falling to the Galactic Disk at $\sim 100$\,km\,s$^{-1}$, the mass accretion rate within the 1\,kpc radius of the sun is calculated to be $\sim 10^{-1}$\,M$_{\sun}$\,yr$^{-1}$ for a typical dynamical time scale of the IVCs, $\sim 10$\,Myr.
This is a significant fraction of the total star formation rate of the Galaxy, $\sim 1$\,M$_{\sun}$\,yr$^{-1}$.
As discussed above, the observed mass of the IVCs is probably an upper limit by considering the possible mass accumulation from the halo.
If we assume that the net mass inflow occurs roughly in a 10\,kpc radius, the mass accretion rate becomes $\sim 10$\,M$_{\sun}$\,yr$^{-1}$.
This can supply the required low-metallicity mass discussed above, even when the subsequent mass increase of the IVCs in the halo is taken into account.
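The accretion-rate estimates above follow from simple order-of-magnitude arithmetic (a sketch using only the numbers quoted in the text):

```python
# Order-of-magnitude accretion rates quoted in the text.
M_IVC = 1.5e6  # M_sun, HI mass of the IVCs within ~1 kpc of the Sun
T_DYN = 1e7    # yr, typical infall timescale (~1 kpc / 100 km/s ~ 10 Myr)

rate_local = M_IVC / T_DYN                 # within a 1 kpc radius
rate_disk = rate_local * (10.0 / 1.0)**2   # scaled to a 10 kpc radius

print(f"{rate_local:.2f} {rate_disk:.0f}")  # ~0.15 and ~15 M_sun/yr
```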
This issue will be pursued more deeply by an extensive analysis of the distribution and kinematics of the IVCs in the Galaxy and the nearby galaxies like M31 in future.
The forthcoming new instruments including SKA and ngVLA are expected to provide important opportunities in these studies by covering the Local Group galaxies as well as nearby and more distant galaxies.
\section{Conclusions}
\label{sec:conclusions}
Aiming at revealing the metallicity distribution of the local ISM, the IVCs, and the HVCs, we have carried out a multiple regression analysis of the 21\,cm \ion{H}{i} emission combined with the sub-mm dust emission $\tau_{353}$, covering 29 per cent of the IVC gas, for which we have derived the most extensive metallicity distribution to date.
As byproducts, we obtained the metallicity distributions in the low-velocity gas in the solar vicinity, some of the HVCs, and the Magellanic Stream.
The main conclusions are summarized below.
\begin{enumerate}
\item The analysis yields the dust-to-gas ratio of the multiple \ion{H}{i} components including the IVCs, the HVCs, the Magellanic Stream, and the local interstellar medium outside the Galactic plane at an effective resolution of 47\,arcmin, which is optimized for the multiple component analysis.
The method covers the sky contiguously, and is distinguished from the conventional optical absorption line measurements toward bright stars which cover a very small fraction of the gas with uncertainties due to the atomic ionization states and the interstellar depletion.
On the assumption that the dust-to-gas ratio is proportional to the metallicity, we derived the metallicity distribution function of all the components.
\item The present study allowed us to derive the metallicity over a far greater portion of the \ion{H}{i} gas than previous studies; besides the IVCs, the coverage includes $\sim 10$ per cent of the HVCs, most of the Magellanic Stream, and 43 per cent of the local ISM.
The major results are that the metallicity of the IVCs varies from $<0.2$ to 1.0 of the reference local ISM, and that a significant fraction of the IVCs, $\sim 36$ per cent, consists of low-metallicity gas with $\widehat{\zeta}_\mathrm{NIV}<0.5$, the \ion{H}{i}-mass-weighted mode being at 0.6.
In addition, it is shown that the HVC Complex~C has a metallicity of $<0.1$ of the reference local ISM, and that the Magellanic Stream has a uniform metallicity of $<0.2$.
We argue that a large fraction of the low metallicity IVC gas is consistent with a picture of the external \ion{H}{i} gas accretion of low metallicity as opposed to the Galactic-fountain model.
It is likely that most of the IVCs merge with the Galactic \ion{H}{i} disk and supply thereby the low metallicity gas at a rate of $\sim 10$\,M$_{\sun}$\,yr$^{-1}$, which may offer a solution of the G-dwarf problem and the gas deficiency to sustain star formation in the Galaxy.
\item We find a trend observed in the IVCs, IV~Arch and Spur, that the metallicity of the IVCs increases with decrease of velocity.
We present a scenario where the IVCs are falling onto the Galactic plane and interacting with the high metallicity gas in the halo.
The interaction is evidenced by the kinematic bridge features connecting the IVC with the disk in IVC~86$-$36 in PP~Arch, and will cause accretion of the halo gas onto the IVCs.
The accretion increases the metallicity and mass of the IVCs and decelerates their infall velocity.
This scenario explains the observed correlation between the metallicity and velocity.
The observed metallicity of the IVCs therefore gives upper limits for the metallicity and mass of the original infalling gas, suggesting that the original metallicity of the IVCs is even lower than derived.
\end{enumerate}
In order to pursue further the implications of the IVCs, we need more systematic studies of the \ion{H}{i} gas over the whole sky by focusing on the individual regions as well as the overall properties of the IVCs and HVCs.
The forthcoming new instruments including SKA and ngVLA are expected to provide important opportunities in these studies by covering the Local Group galaxies as well as nearby and more distant galaxies.
\section*{Acknowledgements}
This work was supported by JSPS KAKENHI Grant Numbers 15H05694 and 21H00040.
This research made use of ds9, a tool for data visualization supported by the Chandra X-ray Science Center (CXC) and the High Energy Astrophysics Science Archive Center (HEASARC), with support from the JWST Mission Office at the Space Telescope Science Institute for 3D visualization.
EBHIS is based on observations with the 100-m telescope of the MPIfR (Max-Planck-Institut f\"ur Radioastronomie) at Effelsberg.
The Parkes Radio Telescope is part of the Australia Telescope which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO.
Some of the results in this paper have been derived using the \textsc{HEALPix} \citep{2005ApJ...622..759G} package.
The useful comments by T.\ Onishi, K.\ Tachihara and T.\ Inoue helped to improve the content and readability of the paper.
\section*{Data Availability}
The HI4PI data underlying this article are available in VizieR at \url{http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/594/A116}.
The \textit{Planck} data are in Planck Legacy Archive (PLA) at \url{https://pla.esac.esa.int/}.
The WHAM-SS data were available at \url{http://www.astro.wisc.edu/wham-site/wham-sky-survey/wham-ss/} until 2022 March, but the web site has been inaccessible as of 2022 August.
The data products from this study will be shared on reasonable request to the corresponding author.
\bibliographystyle{mnras}
\bibliography{test,Planck_bib,statistics_bib}
\appendix
\section{Masking}
\label{sec:masking}
We masked the data points which meet any of the following criteria:
\begin{enumerate}
\renewcommand{\labelenumi}{(\alph{enumi})}
\item Galactic latitude is $|b|<15\degr$ in order to eliminate contamination by the Galactic-disk components far from us.
\item The CO integrated-intensity $W_\mathrm{CO}>1.4$\,K\,km\,s$^{-1}$ (corresponding to the $3\sigma$ level in $|b|>15\degr$), where the molecular gas is not negligible compared to the atomic gas.
\item The areas covering nearby galaxies listed in either or both the catalog by \citet{2013AJ....145..101K} and the samples studied by \citet{2016MNRAS.460.2143W}.
\item \ion{H}{i} emission is saturated due to the optical depth effect for lower spin temperature, in which case the $\tau_{353}\propto {N_{\ion{H}{i}}}^{\alpha}$ relationship described in Section~\ref{subsec:formulation} is not applicable.
\end{enumerate}
We employed a simple procedure to determine whether each data point meets criterion (d): (1) we redefined equation~(\ref{eqn:regression_model_libi}) by adding a constant term $\tau_{0}$,
\begin{equation}\label{eqn:regression_model_with_constant}
\tau_{353}(l_{i}, b_{i}) = \left(\frac{C_{0}}{C}\right)^{\alpha} \sum_{X} \left[\zeta_{X}(l_{i}, b_{i})W_{\ion{H}{i}, X}(l_{i}, b_{i})\right]^{\alpha} + \tau_{0}(l_{i}, b_{i}),
\end{equation}
and estimated the local parameters at the data point as described in Section~\ref{subsec:regressionanalysis} and Appendix~\ref{sec:GWR}, and then (2) judged that the \ion{H}{i} emission is saturated if $\widehat{\tau_{0}} \not\sim 0$.
We set the threshold to $|\widehat{\tau_{0}}|=1\times 10^{-6}$, the $1\sigma$ of all data points excluding (a) -- (c).
Fig.~\ref{fig:partial_correlation} shows the $\tau_{353}$-$W_{\ion{H}{i}, \mathrm{LV}}$ correlations for optically thin (not saturated) and saturated examples.
In both samples, the \ion{H}{i} integrated intensities of the NHV, NIV, PIV, and PHV components are small enough, and their contributions to $\tau_{353}$ are approximately zero.
Fig.~\ref{fig:maskmaps} summarizes the masked areas.
\section{H~$\alpha$ Intensity-to-WIM Column Density Conversion Factor}
\label{sec:IHalpha-to-NH+}
A simple approach to converting $I_{\mathrm{H}\alpha}$ to $N_{\mathrm{H}^{+}}$ is the following.
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})}
\item $I_{\mathrm{H}\alpha}$ is proportional to the rate of the recombination $2.584\times 10^{-13}(T_\mathrm{e}/10^{4}\,\mathrm{K})^{-0.806}$\,cm$^{3}$\,s$^{-1}$ and the mean number of H~$\alpha$ photons produced per recombination $0.46\times (T_\mathrm{e}/10^{4}\,\mathrm{K})^{-0.118}$ \citep{1988ApJS...66..125M},
\begin{equation}
\frac{I_\mathrm{H\alpha}}{\mathrm{R}}=\frac{1}{2.75}\int \left(\frac{T_\mathrm{e}}{10^{4}\,\mathrm{K}}\right)^{-0.924}\frac{n_\mathrm{e}}{\mathrm{cm}^{-3}}\frac{n_{\mathrm{H}^{+}}}{\mathrm{cm}^{-3}}\frac{1}{\mathrm{pc}}dl,
\end{equation}
where $T_\mathrm{e}$ is the electron temperature, $n_\mathrm{e}$ and $n_{\mathrm{H}^{+}}$ are the densities of electrons and ionized hydrogen ($n_\mathrm{e} \approx n_{\mathrm{H}^{+}}$), and $dl$ is the line-of-sight path length over which the electrons are recombining.
\item The distribution of $T_\mathrm{e}$ and $n_\mathrm{e}\approx n_{\mathrm{H}^{+}}$ in a line of sight is usually unknown.
We approximate that they are constant over the emitting region, and the path length of the ionized gas is given by
\begin{equation}
\frac{L}{\mathrm{pc}} = 2.75\left(\frac{T_\mathrm{e}}{10^{4}\,\mathrm{K}}\right)^{0.924}\left(\frac{\langle n_\mathrm{e}\rangle}{\mathrm{cm}^{-3}}\right)^{-2}\frac{I_\mathrm{H\alpha}}{\mathrm{R}},
\end{equation}
where $\langle n_\mathrm{e}\rangle$ is the mean electron density in the line of sight.
Then, $N_{\mathrm{H}^{+}}$ is given by
\begin{eqnarray}
\frac{N_{\mathrm{H}^{+}}}{\mathrm{cm}^{-2}} & = & 3.086\times 10^{18}\frac{\langle n_{\mathrm{H}^{+}}\rangle}{\mathrm{cm}^{-3}} \frac{L}{\mathrm{pc}} \nonumber \\
& \approx & 3.086\times 10^{18}\times 2.75 \nonumber \\
& & \times \left(\frac{T_\mathrm{e}}{10^{4}\,\mathrm{K}}\right)^{0.924} \left(\frac{\langle n_\mathrm{e}\rangle}{\mathrm{cm}^{-3}}\right)^{-1} \frac{I_\mathrm{H\alpha}}{\mathrm{R}}.
\end{eqnarray}
\item Previous works estimated that the thick disk WIM has a mid-plane volume-averaged electron density of $\sim 0.01$ -- 0.03\,cm$^{-3}$ and a scale height of $\sim 1$ -- 2\,kpc \citep[e.g.,][]{2001AJ....122..908G,2002ApJ...575..217P,2008A&A...490..179B,2008PASA...25..184G}.
Using $\langle n_\mathrm{e}\rangle=0.02$\,cm$^{-3}$ and $T_\mathrm{e}=8\times 10^{3}$\,K, we obtain a conversion factor of
\begin{equation}
\frac{N_{\mathrm{H}^{+}}}{\mathrm{cm}^{-2}} = 3\times 10^{20} \times \frac{I_\mathrm{H\alpha}}{\mathrm{R}}.
\end{equation}
\end{enumerate}
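As a numerical cross-check of step (3), the conversion factor can be evaluated directly from the expression above; the following Python sketch (using the fiducial $\langle n_\mathrm{e}\rangle=0.02$\,cm$^{-3}$ and $T_\mathrm{e}=8\times 10^{3}$\,K quoted in the text) recovers the quoted $3\times 10^{20}$ value at the one-significant-figure level.

```python
# Evaluate the H-alpha -> N(H+) conversion factor of Appendix B,
# assuming constant n_e and T_e along the line of sight.
def halpha_to_nhplus(I_halpha_R, n_e=0.02, T_e=8.0e3):
    """Column density of ionized hydrogen (cm^-2) from an H-alpha
    intensity in Rayleighs, for mean electron density n_e (cm^-3)
    and electron temperature T_e (K)."""
    pc_cm = 3.086e18                                   # 1 pc in cm
    return pc_cm * 2.75 * (T_e / 1.0e4)**0.924 / n_e * I_halpha_R

factor = halpha_to_nhplus(1.0)   # N(H+) per Rayleigh, ~3e20 cm^-2/R
```

The conversion is linear in $I_\mathrm{H\alpha}$, so the factor applies per Rayleigh.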
\section{Planck PR2 dust data versus PR1 dust data}
\label{sec:PR2_vs_PR1}
We checked the consistency between \textit{Planck} PR2 GNILC dust data \citep[released version R2.01,][]{planck2016-XLVIII} and PR1 data \citep[R1.20,][]{planck2013-p06b}.
Here, we did not smooth the R2.01 data but smoothed the R1.20 data to have the same effective beam size as the original R2.01 data. Then both were degraded to $N_\mathrm{side}=256$.
Here, we applied the masking criteria (a) -- (c) in Appendix~\ref{sec:masking}.
Fig.~\ref{fig:r2_vs_r1_tau353} shows the $\tau_{353}$-$\tau_{353}$ correlation between R2.01 and R1.20.
The two datasets are highly correlated with a correlation coefficient of 0.999 and a linear regression by an OLS-bisector method \citep{1990ApJ...364..104I} is
\begin{equation}
\tau_{353}\mbox{(R1.20)} = (1.0302\pm 0.0003)\times \tau_{353}\mbox{(R2.01)}-(8.85\pm 0.09)\times 10^{-8} \label{eqn:r2.0_vs_r1.2}.
\end{equation}
We found a good consistency between $\tau_{353}$(R2.01) and $\tau_{353}$(R1.20).
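The OLS-bisector fit used above can be sketched as follows; this is a minimal pure-Python implementation of the bisector line of \citet{1990ApJ...364..104I} (the bisector of the OLS(Y$|$X) and OLS(X$|$Y) regression lines), with the error estimates on slope and intercept omitted.

```python
from math import sqrt

def ols_bisector(x, y):
    """Slope and intercept of the OLS-bisector line (Isobe et al. 1990),
    which bisects the OLS(Y|X) and OLS(X|Y) regression lines."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxx = sum((xi - xm)**2 for xi in x)
    syy = sum((yi - ym)**2 for yi in y)
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    b1 = sxy / sxx                  # OLS(Y|X) slope
    b2 = syy / sxy                  # inverse of the OLS(X|Y) slope
    b3 = (b1 * b2 - 1.0 + sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
    return b3, ym - b3 * xm         # slope, intercept
```

For data lying exactly on a line, both OLS fits and the bisector return that line.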
\section{Geographically Weighted Regression Analysis}
\label{sec:GWR}
\subsection{Estimating the Local Coefficients}
\label{subsec:GWR_main}
Prior to the analysis, we performed a linearizing transformation on $W_{\ion{H}{i}, k}$ ($k=1, \cdots, m$; $m=5$ in the present study and corresponding to the five velocity ranges in Table \ref{tab:velocity_components})
\begin{equation}
x_{k}(l_{i}, b_{i}) = \left\{
\begin{array}{ll}
{[W_{\ion{H}{i}, k}}(l_{i}, b_{i})]^{\alpha} & \mbox{if $W_{\ion{H}{i}, k}(l_{i}, b_{i}) > 5.5$\,K\,km\,s$^{-1}$}\\
0 & \mbox{otherwise}
\end{array}
\right.
\end{equation}
and replaced other variables,
\begin{eqnarray}
y(l_{i}, b_{i}) & = & \left(\frac{C}{C_{0}}\right)^{\alpha}\tau_{353}(l_{i}, b_{i}) \\
\beta_{k}(l_{i}, b_{i}) & = & [\zeta_{k}(l_{i}, b_{i})]^{\alpha} \\
a(l_{i}, b_{i}) & =& \left(\frac{C}{C_{0}}\right)^{\alpha}\tau_{0}(l_{i}, b_{i}).
\end{eqnarray}
Equation~(\ref{eqn:regression_model_with_constant}) was rewritten in a standard GWR form,
\begin{equation} \label{eqn:gwr_model}
y(l_{i}, b_{i}) = \sum_{k=1}^{m}\left[\beta_{k}(l_{i}, b_{i})x_{k}(l_{i}, b_{i})\right] + a(l_{i}, b_{i}) + \varepsilon(l_{i}, b_{i}),
\end{equation}
where $\varepsilon(l_{i}, b_{i})$ is the error term.
Equation~(\ref{eqn:gwr_model}) can be rewritten in a matrix form
\begin{equation}\label{eqn:gwr_model_matrix}
\mathbfit{y} = (\mathbfss{B} \circ \mathbfss{X})\mathbfss{1}_{\bmath{m\times n}} + \mathbfit{a} + \boldsymbol{\varepsilon},
\end{equation}
where $\circ$ denotes the Hadamard product (also known as element-wise product) of the matrices,
\begin{equation}
\mathbfss{X} = \left[
\begin{array}{ccc}
x_{1}(l_{1}, b_{1}) & \cdots & x_{m}(l_{1}, b_{1}) \\
\vdots & \ddots & \vdots \\
x_{1}(l_{n}, b_{n}) & \cdots & x_{m}(l_{n}, b_{n})
\end{array}
\right],
\end{equation}
is the matrix of the $n$ observations of $m$ independent variables,
\begin{equation}
\mathbfss{B} = \left[
\begin{array}{ccc}
\beta_{1}(l_{1}, b_{1}) & \cdots & \beta_{m}(l_{1}, b_{1}) \\
\vdots & \ddots & \vdots \\
\beta_{1}(l_{n}, b_{n}) & \cdots & \beta_{m}(l_{n}, b_{n})
\end{array}
\right] = \left[
\begin{array}{c}
\boldsymbol{\beta}(l_{1}, b_{1})^\mathrm{T} \\
\vdots \\
\boldsymbol{\beta}(l_{n}, b_{n})^\mathrm{T}
\end{array}
\right]
\end{equation}
is the set of $n$ local coefficient vectors, and $\mathbfss{1}_{\bmath{m\times n}}$ is an $m\times n$ all-ones matrix.
The $n$-element vectors $\mathbfit{y}$, $\mathbfit{a}$, and $\boldsymbol{\varepsilon}$ are the dependent variables, local constant terms, and error terms, respectively.
The local regression coefficients at the $i$-th regression point are given by solving
\begin{equation} \label{eqn:standard_GWR}
\widehat{\boldsymbol{\beta_{i}}}=\left(\mathbfss{X}_{\bmath{i}}^\mathrm{T}\mathbfss{W}_{\bmath{i}}\mathbfss{X}_{\bmath{i}}\right)^{-1}\mathbfss{X}_{\bmath{i}}^\mathrm{T}\mathbfss{W}_{\mathbfit{i}} \mathbfit{y}_{\bmath{i}},
\end{equation}
where
\begin{equation}
\mathbfss{X}_{\bmath{i}} = \left(\mathbfss{I}_{\bmath{n}}-\mathbfss{W}_{\bmath{i}}\mathbfss{1}_{\bmath{n\times n}}\right)\mathbfss{X}
\end{equation}
is the local-centered independent variables,
\begin{equation}
\mathbfit{y}_{\bmath{i}} = \left(\mathbfss{I}_{\bmath{n}}-\mathbfss{W}_{\bmath{i}}\mathbfss{1}_{\bmath{n\times n}}\right)\mathbfit{y},
\end{equation}
is the local-centered dependent variable, $\mathbfss{I}_{\bmath{n}}$ is the identity matrix of size $n$, $\mathbfss{1}_{\bmath{n\times n}}$ is an $n\times n$ all-ones matrix, $\boldsymbol{\beta_{i}}$ is a short-hand notation for $\boldsymbol{\beta}(l_{i}, b_{i})$, the superscript T indicates the matrix transpose, and $\mathbfss{W}_{\bmath{i}}$ is a weighting matrix
\begin{equation}
\mathbfss{W}_{\bmath{i}} = \frac{1}{\sum_{j=1}^{n}w_{i}(l_{j}, b_{j})}\left[
\begin{array}{ccc}
w_{i}(l_{1}, b_{1}) & & \\
& \ddots & \\
& & w_{i}(l_{n}, b_{n})
\end{array}
\right],
\end{equation}
and $w_{i}(l_{j}, b_{j})$ is given by equation~(\ref{eqn:weighting_function}).
As each component of $\boldsymbol{\beta_{i}}$ should be non-negative ($\boldsymbol{\beta_{i}}\geq 0$), equation~(\ref{eqn:standard_GWR}) must be reformulated as a non-negative least-squares (NNLS) problem
\begin{equation} \label{eqn:nnls_statement}
\mbox{Minimize}\ || \mathbfss{W}_{\bmath{i}}(\mathbfss{X}_{\bmath{i}}\boldsymbol{\beta_{i}}-\mathbfit{y}_{\bmath{i}})||\ \mbox{subject to}\ \boldsymbol{\beta_{i}}\geq 0,
\end{equation}
where $||\cdot ||$ denotes the Euclidean (also called $L^{2}$) norm.
A widely used algorithm for solving NNLS problems is the one by \citet{Lawson1995}, briefly summarized as follows.
\begin{enumerate}
\renewcommand{\labelenumi}{\arabic{enumi}.}
\item Initialize set $P=\varnothing$ and $Q=\{1, \cdots, m\}$.
\item Let $\mathbfit{u}=(
\begin{array}{ccc}
u_{1} & \cdots & u_{m}
\end{array}
)=\mathbfss{X}_{\bmath{i}}^\mathrm{T}\left(\mathbfit{y}_{\bmath{i}}-\mathbfss{X}_{\bmath{i}}\boldsymbol{\beta_{i}}\right)$.
\item If $Q=\varnothing$ or $\max\{u_{q}: q\in Q\}\leq 0$, the calculation is completed.
\item Find an index $k \in Q$ such that $u_{k}=\max\{u_{q}: q\in Q\}$, and then move the index $k$ from $Q$ to $P$.
\item Let $\mathbfss{P}_{\bmath{i}}$ denote the $n \times m$ matrix defined by
\begin{equation}
\mbox{column $k$ of}\ \mathbfss{P}_{\bmath{i}}= \left\{\begin{array}{ll}
\mbox{column $k$ of}\ \mathbfss{X}_{\bmath{i}} & \mbox{if}\ k \in P \\
0 & \mbox{if}\ k \in Q.
\end{array}\right.
\end{equation}
\item Let $\mathbfit{z}$ be a vector of the same length as $\boldsymbol{\beta_{i}}$, and set
\begin{equation}
\mathbfit{z} = (
\begin{array}{ccc}
z_{1} & \cdots & z_{m}
\end{array}
) = \left(\mathbfss{P}_{\bmath{i}}^\mathrm{T}\mathbfss{P}_{\bmath{i}}\right)^{-1}\mathbfss{P}_{\bmath{i}}^\mathrm{T}\mathbfit{y}_{\bmath{i}}.
\end{equation}
\item Set $z_{q}=0$ for $q\in Q$.
\item If $\min\{z_{p}: p\in P\} > 0$, set $\boldsymbol{\beta_{i}}=\boldsymbol{z}$ and go to Step 2.
\item Let
\begin{equation}
\gamma=\min\left\{\frac{\beta_{p}}{\beta_{p}-z_{p}}: z_{p} \leq 0, p\in P \right\},
\end{equation}
where $\beta_{p}$ means the $p$-th element of $\boldsymbol{\beta_{i}}$.
\item Set $\boldsymbol{\beta_{i}}$ to $\boldsymbol{\beta_{i}}+\gamma(\mathbfit{z}-\boldsymbol{\beta_{i}})$.
\item Move from $P$ to $Q$ all indices $p\in P$ such that $\beta_{p} \leq 0$.
Then go to Step 5.
\end{enumerate}
The obtained $\boldsymbol{\beta_{i}}$ satisfies $\beta_{p} > 0$ for $p\in P$, $\beta_{q} = 0$ for $q\in Q$, and is an estimator $\widehat{\boldsymbol{\beta_{i}}}$ for the least squares problem
\begin{equation}
\mathbfss{P}_{\bmath{i}}\boldsymbol{\beta_{i}} = \mathbfit{y}_{\bmath{i}}.
\end{equation}
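The steps above can be sketched as follows; this is an illustrative NumPy implementation of the active-set iteration, in which the GWR weighting $\mathbfss{W}_{\bmath{i}}$ and local centering are omitted, so it solves the plain problem of minimizing $||\mathbfss{X}\boldsymbol{\beta}-\mathbfit{y}||$ subject to $\boldsymbol{\beta}\geq 0$.

```python
import numpy as np

def nnls(X, y, tol=1e-12, max_iter=100):
    """Active-set NNLS solver following the Lawson-Hanson steps above:
    minimize ||X beta - y|| subject to beta >= 0.  Weighting and
    local centering of the GWR problem are omitted in this sketch."""
    n, m = X.shape
    P, beta = [], np.zeros(m)                         # step 1
    for _ in range(max_iter):
        u = X.T @ (y - X @ beta)                      # step 2
        Q = [q for q in range(m) if q not in P]
        if not Q or max(u[q] for q in Q) <= tol:      # step 3
            break
        P.append(max(Q, key=lambda q: u[q]))          # step 4
        while True:
            z = np.zeros(m)                           # steps 5-7
            z[P] = np.linalg.lstsq(X[:, P], y, rcond=None)[0]
            if min(z[p] for p in P) > tol:
                beta = z                              # step 8
                break
            gamma = min(beta[p] / (beta[p] - z[p])    # step 9
                        for p in P if z[p] <= tol)
            beta = beta + gamma * (z - beta)          # step 10
            P = [p for p in P if beta[p] > tol]       # step 11
            if not P:
                break
    return beta
```

For a system whose unconstrained least-squares solution has a negative component, the solver clamps that component to zero and re-solves over the remaining columns.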
The estimator of the constant term is given by
\begin{equation}
\widehat{a}(l_{i}, b_{i}) = \mathbfss{1}_{\bmath{1\times n}}\mathbfss{W}_{\bmath{i}}\left[\mathbfit{y} - \mathbfss{X}\widehat{\boldsymbol{\beta_{i}}} \right],
\end{equation}
where $\mathbfss{1}_{\bmath{1\times n}}$ is an $n$-element all-ones row vector.
The estimated regression coefficients are transformed as follows:
\begin{eqnarray}
\widehat{\zeta}_{k}(l_{i}, b_{i}) & = & \left[\widehat{\beta}_{k}(l_{i}, b_{i})\right]^{1/\alpha} \\
\widehat{\tau_{0}}(l_{i}, b_{i}) & = & \left(\frac{C_{0}}{C}\right)^{\alpha} \widehat{a}(l_{i}, b_{i}).
\end{eqnarray}
Figs.~\ref{fig:distribution_of_zeta} and \ref{fig:distribution_of_zeta_PIV+PHV} show $\widehat{\zeta}_{k}(l_{i}, b_{i})$.
Fig.~\ref{fig:residual_maps} shows the residuals from the regression given by $\tau_{353}(l_{i}, b_{i})-\widehat{\tau}_{353}(l_{i}, b_{i})$.
The median and interquartile range of the residuals are $-1.5\times 10^{-9}$ and $5.9\times 10^{-8}$, respectively.
\subsection{The Standard Error of the Estimated Local Coefficients}
\label{subsec:GWR_stderr}
Let
\begin{equation}
\mathbfss{C}_{\bmath{i}} = \left(\mathbfss{P}_{\bmath{i}}^\mathrm{T}\mathbfss{W}_{\bmath{i}}\mathbfss{P}_{\bmath{i}}\right)^{-1}\mathbfss{P}_{\bmath{i}}^\mathrm{T}\mathbfss{W}_{\bmath{i}},
\end{equation}
where $\mathbfss{P}_{\bmath{i}}$ satisfies
\begin{equation}
\mbox{column $k$ of}\ \mathbfss{P}_{\bmath{i}}= \left\{\begin{array}{ll}
\mbox{column $k$ of}\ \mathbfss{X}_{\bmath{i}} & \mbox{if the $k$-th element of $\widehat{\boldsymbol{\beta_{i}}} > 0$} \\
0 & \mbox{if the $k$-th element of $\widehat{\boldsymbol{\beta_{i}}} = 0$}
\end{array}\right..
\end{equation}
Then the estimated variance-covariance matrix of $\widehat{\boldsymbol{\beta}}(l_{i}, b_{i})$ is denoted by
\begin{equation}
\boldsymbol{\Sigma_{i}} = \sigma^{2}\mathbfss{C}_{\bmath{i}}\mathbfss{C}_{\bmath{i}}^\mathrm{T},
\end{equation}
where $\sigma^{2}$ is the normalized residual sum of squares (RSS) from the local regression
\begin{equation}
\sigma^{2} = \frac{||\mathbfit{y}-\widehat{\mathbfit{y}}||^{2}}{n-2\mathrm{tr}\mathbfss{S}+\mathrm{tr}(\mathbfss{S}^\mathrm{T}\mathbfss{S})} \sim \frac{||\mathbfit{y}-\widehat{\mathbfit{y}}||^{2}}{n-\mathrm{tr}\mathbfss{S}}.
\end{equation}
The vector of predicted values $\widehat{\mathbfit{y}}$ is given by
\begin{eqnarray}
\widehat{\mathbfit{y}} & = & (\widehat{\mathbfss{B}} \circ \mathbfss{X})\bmath{1_{m\times n}} + \widehat{\mathbfit{a}} \nonumber \\
& = & \left\{ \left[
\begin{array}{c}
\widehat{\boldsymbol{\beta}}(l_{1}, b_{1})^\mathrm{T} \\
\vdots \\
\widehat{\boldsymbol{\beta}}(l_{n}, b_{n})^\mathrm{T}
\end{array}
\right] \circ \mathbfss{X}\right\}\mathbfss{1}_{\bmath{m\times n}} + \left[
\begin{array}{c}
\widehat{a}(l_{1}, b_{1}) \\
\vdots \\
\widehat{a}(l_{n}, b_{n})
\end{array}
\right].
\end{eqnarray}
The matrix $\mathbfss{S}$ is the hat matrix which maps $\mathbfit{y}$ on to $\widehat{\mathbfit{y}}$,
\begin{equation}
\widehat{\mathbfit{y}} = \mathbfss{S}\mathbfit{y},
\end{equation}
and the $i$-th row of $\mathbfss{S}$ is given by
\begin{equation}
\mathbfit{s}_{\bmath{i}} = (\mbox{row $i$ of } \mathbfss{X}_{\bmath{i}})\mathbfss{C}_{\bmath{i}}\left(\mathbfss{I}_{\bmath{n}}-\mathbfss{W}_{\bmath{i}}\mathbfss{1}_{\bmath{n\times n}} \right)+\mathbfss{1}_{\bmath{1\times n}}\mathbfss{W}_{\bmath{i}}.
\end{equation}
The standard errors of the components of $\widehat{\boldsymbol{\beta}}(l_{i}, b_{i})$, $\sigma\left[\widehat{\beta}_{k}(l_{i}, b_{i})\right]$, are obtained from the square roots of the diagonal elements of $\boldsymbol{\Sigma_{i}}$.
The estimated standard errors are transformed as
\begin{equation}
\sigma(\widehat{\zeta}_{k})(l_{i}, b_{i}) = \frac{1}{\alpha} \left[\widehat{\beta}_{k}(l_{i}, b_{i})\right]^{1/\alpha-1} \sigma\left[\widehat{\beta}_{k}(l_{i}, b_{i})\right].
\end{equation}
Figs.~\ref{fig:WHI_vs_SEzeta} and \ref{fig:SE_vs_SEzeta} show that $\sigma(\widehat{\zeta}_{X})$ is inversely proportional to $W_{\ion{H}{i}, X}$ (or $N_{\ion{H}{i}, X}^{\ast}$), but does not depend on $\widehat{\zeta}_{X}$.
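This back-transformation is a delta-method propagation of $\sigma[\widehat{\beta}_{k}]$ through $\zeta=\beta^{1/\alpha}$; a minimal sketch follows (the numerical values of $\alpha$, $\widehat{\beta}$, and $\sigma[\widehat{\beta}]$ used here are placeholders, not the values fitted in the paper).

```python
def zeta_stderr(beta_hat, sigma_beta, alpha):
    """Delta-method error on zeta = beta**(1/alpha):
    sigma(zeta) = (1/alpha) * beta**(1/alpha - 1) * sigma(beta)."""
    return beta_hat**(1.0 / alpha - 1.0) * sigma_beta / alpha

# Placeholder example: alpha = 1.3, beta_hat = 1.0, sigma_beta = 0.1
err = zeta_stderr(1.0, 0.1, 1.3)
```

For $\alpha=1$ the transformation is the identity and the error is unchanged.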
\bsp %
\label{lastpage} |
Title:
Black hole superradiance with (dark) matter accretion |
Abstract: Studies of black hole superradiance often focus on the growth of a cloud in
isolation, accompanied by the spin-down of the black hole. In this paper, we
consider the additional effect of the accretion of matter and angular momentum
from the environment. We show that, in many cases, the black hole evolves by
drifting along the superradiance threshold, in which case the evolution of its
parameters can be described analytically or semi-analytically. We quantify the
conditions under which accretion can serve as a mechanism to increase the
cloud-to-black hole mass ratio, beyond the standard maximum of about 10%. This
occurs by a process we call over-superradiance, whereby accretion effectively
feeds the superradiance cloud, by way of the black hole. We give two explicit
examples: accretion from a vortex expected in wave dark matter and accretion
from a baryonic disk. In the former case, we estimate the accretion rate by
using an analytical fit to the asymptotic behavior of the confluent Heun
function. Level transition, whereby one cloud level grows while the other
shrinks, can be understood in a similar way.
| https://export.arxiv.org/pdf/2208.06408 |
\setcounter{page}{1}
\renewcommand{\thefootnote}{\fnsymbol{footnote}}
~
\vspace{.80truecm}
\begin{center}
{\fontsize{24}{15} \bf Black hole superradiance\\\vskip 3pt with (dark) matter accretion}
\end{center}
\vspace{0.3cm}
\begin{center}
{\fontsize{13}{18}\selectfont
Lam Hui,${}^{\rm a}$\footnote{\href{mailto:[email protected]}{\texttt{[email protected]}}}
Y.T. Albert Law,${}^{\rm a, b}$\footnote{\href{mailto:[email protected]}{\texttt{[email protected]}}}
Luca Santoni,${}^{\rm c}$\footnote{\href{mailto:[email protected]}{\texttt{[email protected]}}}
Guanhao Sun,${}^{\rm a}$\footnote{\href{mailto:[email protected]}{\texttt{[email protected]}}}
\\[4.5pt]
Giovanni Maria Tomaselli,${}^{\rm d}$\footnote{\href{mailto:[email protected]}{\texttt{[email protected]}}}
Enrico Trincherini${}^{\rm e}$\footnote{\href{mailto:[email protected]}{\texttt{[email protected]}}}
}
\end{center}
\vspace{0.5cm}
\centerline{{\it ${}^{\rm a}$Center for Theoretical Physics, Department of Physics,}}
\centerline{{\it Columbia University, New York, NY 10027, USA}}
\vspace{.3cm}
\centerline{{\it ${}^{\rm b}$Center for the Fundamental Laws of Nature,}}
\centerline{{\it Harvard University, Cambridge, MA 02138, USA}}
\vspace{.3cm}
\centerline{{\it ${}^{\rm c}$ICTP, International Centre for
Theoretical Physics,}}
\centerline{{\it Strada Costiera 11, 34151, Trieste, Italy}}
\vspace{.3cm}
\centerline{{\it ${}^{\rm d}$GRAPPA, Institute of Physics,}}
\centerline{{\it University of Amsterdam, Science Park 904, 1098 XH, Amsterdam, The
Netherlands}}
\vspace{.3cm}
\centerline{{\it ${}^{\rm e}$Scuola Normale Superiore, Piazza dei
Cavalieri 7, 56126, Pisa, Italy and}}
\centerline{{\it INFN - Sezione di Pisa, 56100, Pisa, Italy}}
\vspace{.25cm}
\vspace{0.2cm}
\newpage
\setcounter{tocdepth}{2}
\tableofcontents
\vspace{1.0cm}
\renewcommand*{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\newpage
\section{Introduction}
The phenomenon of black hole superradiance has been known since the 1970s
\cite{1972JETP351085Z,1973JETP3728S,Bardeen:1972fi,Press:1972zz,Damour:1976kh,
Detweiler:1980uk}: mass and angular momentum can be extracted
from a Kerr black hole via an instability associated with the presence
of a light bosonic field. More recent work has emphasized axions or
axion-like-particles as particularly compelling examples of such a
field \cite{Arvanitaki:2009fg}, and explored observational signatures
such as the spin-down of black holes and gravitational wave emission
\cite{Arvanitaki:2010sy,Arvanitaki:2016qwi,Stott:2018opm,Davoudiasl:2019nlo}.
See also \cite{Dolan:2007mj,Brito:2015oca} for state-of-the-art
computations and a comprehensive review.
In this paper, we focus on the case of a scalar field; generalization to
higher spins is straightforward.
Consider a minimally coupled scalar $\Phi$ of mass $\mu$ on a Kerr background:
\begin{eqnarray}
(- g^{\alpha\beta}\nabla_\alpha\nabla_\beta + \mu^2) \Phi = 0 \, .
\label{scalareqKG}
\end{eqnarray}
Imposing boundary conditions that $\Phi$ is (1)
ingoing at the horizon and (2) vanishes at infinity, it can
be shown that a solution of definite angular momentum numbers $\ell,
m$ has a discrete set of frequencies $\omega$, much like the hydrogen
atom. Superradiance refers to the possibility of energy and angular momentum extraction from the black hole, which occurs when the following inequality is satisfied
\begin{eqnarray}
{\rm Re \,\omega} < {am \over r_s r_+} \, ,
\label{eqn:super-inequality}
\end{eqnarray}
where $r_s = 2 GM_{\rm BH}$ is the Schwarzschild radius, $a$ is the
spin of the black hole (maximal spin is $a=r_s/2$), and $r_+$ is the
outer horizon. The combination $\Omega_+ \equiv a/(r_s r_+)$ is the angular velocity of the horizon. For the
hydrogen-like bound states, whenever (\ref{eqn:super-inequality}) is
satisfied, $\omega$ acquires a positive imaginary part
($\Phi \propto e^{-i\omega t}$), signalling an instability (see Table~\ref{tab1} for a summary of the different cases).
Typically, ${\rm Re \,\omega}$ is of the order of the
scalar mass $\mu$, and ${\rm Im\,\omega} \ll \mu$.
The superradiance condition can thus be re-expressed as a condition on
the dimensionless spin of the black hole $a_* \equiv 2a/r_s$ as a
function of $\mu r_s/2$ (the gravitational radius to Compton scale
ratio). See the top-left shaded blue region of Figure~\ref{fig:regge-introductory} for an
illustration for $m=1$. This is the region of the parameter space in
which an initial scalar seed, no matter how small (even a quantum
fluctuation), can grow,
extracting both mass and angular momentum in the process, thereby
spinning down the black hole (see the red arrows in the blue
region). The upper bound on the mass of the cloud which grows in this way is, as we will see, about 10\% of the black hole mass \cite{Herdeiro:2021znw}.
The story we wish to tell starts when the black hole spins down to the
boundary of the superradiance region. In particular, let us remember
that black holes in nature rarely exist in isolation. The ambient matter, be it baryonic or dark matter,
can accrete onto the black hole. In most cases, as we will check
below, the accretion rate is small enough that the spin-down to the
superradiance boundary (the downward red arrows in the blue region)
is not significantly affected. But once the black hole approaches the
vicinity of the boundary, the mass and angular momentum extraction by
the cloud slows down considerably. Meanwhile, the ambient
accretion is still ongoing and can compete with superradiance. In
fact, to say ``compete'' does not quite convey the complete picture.
The two actually act in concert, in the following sense. The ambient accretion donates mass
and angular momentum to the black hole, while superradiance
extracts mass and angular momentum from the black hole at the same
time: accretion effectively
feeds the cloud---by way of the black hole.
The net result is a cloud that can grow to a significantly bigger size,
and a black hole that spins up and grows in mass. Effectively, the
black hole climbs up the boundary of the superradiance region
(Figure~\ref{fig:regge-introductory}). In detail, the black hole
actually executes a trajectory ever so slightly above the
boundary---we thus call this \textit{over-superradiance-threshold-drift}, or
\textit{over-superradiance} in short. It turns out an evolution
slightly under the boundary is also possible,
where the cloud shrinks, giving mass back to the black hole.
We refer to this second way for the black hole to climb up the superradiance
boundary as \textit{under-superradiance-threshold-drift}, or
\textit{under-superradiance} for short.
The goal of this paper is to explain the
details of these phenomena.
We emphasize that these phenomena have been seen in
previous numerical
computations, for instance in \cite{Brito:2014wla}, which explored the
effect of baryonic accretion in combination with superradiance.
Our aim in this paper is to highlight the
over/under-superradiance effect, to provide an analytical or
semi-analytical understanding, and to emphasize the consequences of a
significant cloud size attained this way. In addition, we also consider
the possibility that the accretion occurs from the ambient dark
matter. This is particularly appealing in the context of wave dark
matter, which is described by a light scalar field with a mass
$\lesssim30 \text{ eV}$, exhibiting wave phenomena (see, e.g.,
\cite{Hui:2016ltb,Niemeyer:2019aqm,Ferreira:2020fam,Hui:2021tkt}
and references therein). This may or may not be the same scalar that leads to
superradiance. This scenario naturally has vortices due to wave
interference (\cite{Hui:2020hbq} and references therein), from
which the black hole can accrete mass and angular momentum in a way
that triggers over-superradiance. We derive and use an analytical
fit of the solution to the scalar wave equation in Kerr spacetime to
describe the accretion fluxes in this case, building on earlier work
by \cite{Clough:2019jpm,Hui:2019aqm,Bamber:2020bpu}.
Lastly, we also make the connection with another known phenomenon: that
along the superradiance boundary, one cloud level can dissipate while
another grows. This is a similar phenomenon to the one outlined above, and can be understood within the under-superradiance description, as we will explain
below.
The outline of the paper is as follows.
We begin in Section \ref{sec:setup} with a
brief review of superradiance in isolation
(i.e., a single scalar mode present). The superradiance we are most
interested in is {\it bound} superradiance, which gives rise to the
growth of a cloud around the black hole,
reducing the latter's mass and angular
momentum. In particular, the standard maximum
cloud-to-black-hole mass ratio of around $10\%$ is derived in
(\ref{eqn:limit-analytical}).
We introduce the phenomenon of
threshold
drift in Section \ref{sec:accretion+superradiance}, a process in
which the black hole moves along the superradiance threshold
in the mass-spin (``Regge'') plane, by combining the effects of
superradiance and accretion from the ambient environment.
The evolution of the black hole $+$ superradiance cloud system
is described by (\ref{eqn:threshold-derivative}) to
(\ref{eqn:over-under-superradiant}). The distinction between
over-superradiance and under-superradiance is explained here.
In Section \ref{sec:cases}, we present several different cases of
interest, for both accretion from (wave) dark matter (Section \ref{sec:dm-accretion}), and from a
baryonic accretion disk (Section \ref{sec:baryonic}).
In Section \ref{sec:transition}, we demonstrate how the
phenomenon of level transition, whereby one cloud level is
depleted while another grows, can be understood in the same
threshold drift framework. We conclude in Section \ref{sec:discuss}
with a discussion of the observational implications and open
questions. A number of useful
results can be found in the Appendices: \ref{app1} contains an exploration of
the scalar field profile around
a Kerr black hole using the confluent Heun function;
\ref{app2} contains a summary of technical
results on superradiance;
\ref{sec:toy-model} describes a toy model helpful for
understanding the
salient features of combining superradiance and accretion.
\paragraph{Notations and terminology.} We work in natural units, with $\hbar=c=1$. Newton's constant and the reduced Planck mass are related by $G=1/(8\pi\Mpl^2)$. Our metric signature will be $(-,+,+,+)$, with Greek letters standing for spacetime indices. The metric of a Kerr black hole of mass $\MBH$ and angular momentum $\JBH$ is
\begin{equation}
\label{kerr}
ds^2 = -\left(1 - {r_s r \over \varrho^2}\right)\rd t^2 - {2a r_s r {\,\rm sin}^2\theta
\over \varrho^2}\rd t \rd \phi + {\varrho^2 \over \Delta} \rd r^2 + \varrho^2
\rd \theta^2 + { (r^2 + a^2)^2 - a^2\Delta {\,\rm sin}^2\theta \over
\varrho^2} {\,\rm sin}^2\theta \, \rd \phi^2 \, ,
\end{equation}
where $r_s \equiv 2 G M_{\rm BH}$ is the Schwarzschild radius, $a\equiv J_{\rm BH}/M_{\rm BH}$ is the spin parameter (which is taken to be nonnegative), $\varrho^2 \equiv r^2 + a^2 {\,\rm cos\,}^2\theta$ and $\Delta \equiv r^2 - rr_s + a^2$. The roots of $\Delta=0$ give the radii of the outer and inner horizons, $r_\pm \equiv r_s/2 \pm \sqrt{(r_s/2)^2 - a^2}$. The angular velocity of the outer horizon is $\Omega_+\equiv a/(\rs r_+)$ and the dimensionless black hole spin parameter is $a_* \equiv 2a/r_s$, ranging from 0 (non-rotating case) to 1 (extremal case).
We introduce the dimensionless quantity $\alpha\equiv\mu\rs/2$, where
$\mu$ is the scalar field mass.
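In these variables, with the approximation ${\rm Re}\,\omega \approx \mu$ (valid for $\alpha \ll 1$), the superradiance condition (\ref{eqn:super-inequality}) becomes $\alpha < m\, a_*/[2(1+\sqrt{1-a_*^2})]$. A quick numerical sketch of this threshold, working in units $r_s=1$:

```python
from math import sqrt

def is_superradiant(alpha, a_star, m=1):
    """Check Re(omega) < m * Omega_+ for a hydrogenic bound state,
    approximating Re(omega) ~ mu (good for alpha << 1).
    Units r_s = 1, so mu = 2*alpha and a = a_star/2."""
    r_plus = 0.5 * (1.0 + sqrt(1.0 - a_star**2))   # outer horizon r_+
    omega_plus = 0.5 * a_star / r_plus             # Omega_+ = a/(r_s r_+)
    return 2.0 * alpha < m * omega_plus            # mu < m * Omega_+
```

At extremality ($a_*=1$) this reduces to the familiar bound $\alpha < m/2$.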
A few words on terminology are in order. Accretion is
the process by which a black hole gains mass. This can occur via
accretion from the ambient environment (such as the surrounding dark
matter or baryonic disk), or accretion from the
cloud that was built up by superradiance (but is no longer in a
superradiant state due to the evolution of the black hole).
Most of the time, by accretion, we implicitly
refer to the former, i.e.~ambient accretion.
We reserve the word ``cloud''
to describe the scalar cloud bound to the black hole, grown by superradiance.
We reserve the words ``ambient''
and ``environment'' to describe what is around the black hole other than
the superradiance cloud itself.
\section{Superradiance in isolation}
\label{sec:setup}
In this section, we set the stage by reviewing the massive
Klein-Gordon equation in Kerr background, describing solutions
involving fluxes of mass and angular momentum {\it into} and {\it out of} the
black hole. Superradiance refers to the latter possibility.
In particular, when the scalar is bound to the black hole (the scalar
vanishes far away from it), the superradiance is accompanied by an
instability: a scalar cloud grows around the black hole.
The resulting backreaction on the black hole's mass and angular
momentum is described, and visualized in plots of the Regge plane.
Throughout this section, only a single mode (of angular momentum $m$)
is present.\footnote{Focusing on a single mode is a reasonable starting point,
because as we will see, the timescales associated with different
$m$'s are generally quite different.}
Most of the discussion focuses on this single mode being the
superradiant mode, though
some of the discussion applies equally well if
the single mode refers to accretion from the ambient environment.
In the next section, we will study situations in which two modes are
present, including the most interesting case where one mode refers to
superradiance, and the other refers to ambient accretion.
\subsection{Fluxes and evolution equations}
\label{sec:fluxes-evolution}
Consider a scalar field $\Phi$ of mass $\mu$ in the Kerr background.
Throughout the paper we will ignore self-interactions of
$\Phi$.\footnote{See Section \ref{sec:discuss} for a discussion on when this is a good approximation.} Under this assumption, the scalar obeys the Klein-Gordon equation $(-g^{\alpha\beta}\nabla_\alpha\nabla_\beta + \mu^2) \Phi = 0$, which can be solved by decomposing it into a linear combination of
\begin{eqnarray}
\Phi_{\omega\ell m} = e^{-i\omega t} e^{im\phi} S_{\ell m}(\theta) R_{\omega\ell m}(r) \, ,
\end{eqnarray}
where $\omega$ is, in general, complex. Here, $e^{im\phi}S_{\ell m}(\theta)$ is a spheroidal harmonic, which reduces to the
spherical harmonic $Y_{\ell m}(\theta,\phi)$ if $a=0$ or $\omega = \mu$, and $R_{\omega\ell m}(r)$ is the
radial function, which depends on $\omega, \ell, m$ as well as the black
hole parameters $a$ and $r_s$ and the scalar mass $\mu$. Both the spheroidal harmonic and the radial function are solutions of the confluent Heun equation; details of the decomposed Klein-Gordon equation are given in Appendix \ref{app1}. The above expression is technically valid only if $\Phi$ is complex; if it is real, one should simply add the complex conjugate:
\begin{eqnarray}
\label{ccreal}
\Phi_{\omega\ell m} = e^{-i\omega t} e^{im\phi} S_{\ell m}(\theta) R_{\omega\ell m}(r) + \text{c.c.} \, .
\end{eqnarray}
Two central quantities of our study are the integrated energy and angular momentum fluxes across the horizon.
To derive them, we take the $(r,t)$ and $(r,\phi)$ components of the
scalar energy-momentum tensor in the Kerr background; here, for simplicity we consider
only one $(\omega,\ell,m)$ mode and drop the subscripts:\footnote{In the case of a real scalar field, the following
expressions only hold in a time-averaged sense. More precisely,
the expressions for $T^\mu {}_\nu$ in terms of a real (and canonically
normalized) $\Phi$ has an extra
factor of $1/2$; this factor is canceled, once one expresses
$\Phi$ as in Eq.~\eqref{ccreal} and evaluates the time-averaged
$T^\mu {}_\nu$.}
\begin{align}
T^r{}_t&=g^{rr}(\partial_r\Phi^*\partial_t\Phi+\partial_t\Phi^*\partial_r\Phi)=2\frac\Delta{\varrho^2}\Im(\omega R'^*R)|S|^2e^{2\Im(\omega)t},\\
T^r{}_\phi&=g^{rr}(\partial_r\Phi^*\partial_\phi\Phi+\partial_\phi\Phi^*\partial_r\Phi)=-2\frac\Delta{\varrho^2}m\Im(R'^*R)|S|^2e^{2\Im(\omega)t}.
\end{align}
From the near-horizon limit of the radial part of the Klein-Gordon equation,
\begin{equation}
\label{eqn:near-horizon-radial}
\Delta\frac{\rd}{\rd r}\biggl(\Delta\frac{\rd R}{\rd r}\biggr)+\rs^2r_+^2(\omega-m\Omega_+)^2R=0,
\end{equation}
we can extract the near-horizon behavior of $R(r)$,
\begin{equation}
R(r)\propto(r-r_+)^{-i\sigma},\qquad \sigma=\frac{\rs r_+(\omega-m\Omega_+)}{r_+-r_-},
\end{equation}
and evaluate the energy-momentum tensor at the horizon,
\begin{align}
\label{eqn:T^r_t}
T^r{}_t(r_+)&=2\frac{r_sr_+}{\varrho^2}(|\omega|^2-\Re(\omega)m\Omega_+)\Phi^*\Phi(r_+),\\
T^r{}_\phi(r_+)&=-2m\frac{r_sr_+}{\varrho^2}(\Re(\omega)-m\Omega_+)\Phi^*\Phi(r_+).
\label{eqn:T^r_phi}
\end{align}
The angular integrals of these quantities provide the total energy and momentum fluxes across the horizon, which we write as a variation of the mass and spin of the black hole:
\begin{align}
\label{eqn:mass-evolution-single-mode}
\dot M_{\rm BH}&=2\rs r_+(|\omega|^2-\Re(\omega)m\Omega_+)|R_+|^2,\\
\label{eqn:J-evolution-single-mode}
\dot J_{\rm BH}&=2\rs r_+m(\Re(\omega)-m\Omega_+)|R_+|^2,
\end{align}
where we have set $R_+\equiv R(r_+)$.
A few comments are in order about these expressions. First of all, they hold when only one $(\omega,\ell,m)$ mode is present. Because $T_{\mu\nu}$ is quadratic in the field, interference terms appear when multiple modes are present. As we argue in Appendix \ref{app:superradiant-nonlinear}, these interference terms are not expected to play a significant role in the cases we are interested in, because they oscillate much faster than the timescale of variation of $M_{\rm BH}$ and $J_{\rm BH}$.
Second, the frequency $\omega$ is determined by
boundary
conditions. For a bound mode (i.e., $\Phi$ vanishes far away from the
black hole), $\omega$ turns out to be complex and discretized (see
\cite{Dolan:2007mj,Brito:2015oca} and Appendix \ref{app2}),
while for an unbound mode (i.e., $\Phi$ does not vanish far away)
$\omega$ is real and can
be interpreted as the energy of the scalar very far from the
black hole. In both cases, for applications of interest, $\omega\approx\mu$, so that the equations can be approximated as
\begin{align}
\label{eqn:mass-evolution-single-mode-approx}
\dot M_{\rm BH}&=2\rs r_+\mu(\mu-m\Omega_+)|R_+|^2,\\
\label{eqn:J-evolution-single-mode-approx}
\dot J_{\rm BH}&=2\rs r_+m(\mu-m\Omega_+)|R_+|^2.
\end{align}
Another important point is that, when we equate the fluxes to the
changes of the black hole parameters, we are no longer dealing with a
linear system. In other words, the Klein-Gordon equation, while
superficially linear in the scalar field, is not strictly so because the black
hole's geometry is modified by the scalar itself.
When the parameters of the black
hole change with time, the time dependence of the field will
no longer
be strictly $e^{-i\omega t}$.
Equations (\ref{eqn:mass-evolution-single-mode}) and
(\ref{eqn:J-evolution-single-mode}), or (\ref{eqn:mass-evolution-single-mode-approx}) and
(\ref{eqn:J-evolution-single-mode-approx}), therefore
need to be
supplemented with information on the long-term
evolution of the scalar field.
In case the mode under consideration is bound, this is
usually done in a \textit{quasi-adiabatic} approximation
\cite{Brito:2014wla,Ficarra:2018rfu}, in which the growth
of the cloud is computed using the instantaneous value
of $\Im(\omega)$ (and we will further apply the
approximation $\Re(\omega)\approx\mu$).
The total mass and angular momentum are kept constant.
The resulting evolution equations are
\begin{align}
\arraycolsep=16pt
\begin{array}{cc}
\dot M_{\rm BH}=-\dot M_c & \dot J_{\rm BH}=-\dot J_c\\
\dot M_c=2\Im(\omega)M_c & \dot J_c=\frac{m}\mu\dot M_c,
\end{array}
\label{eqn:nonlin-1-mode-super}
\end{align}
where $M_c$ and $J_c = m M_c/\mu$ are the mass and angular momentum carried by the
bound mode (the cloud). The role of $|R_+|$ in
Eqs.~(\ref{eqn:mass-evolution-single-mode-approx}) and
(\ref{eqn:J-evolution-single-mode-approx}) is thus replaced by
$M_c$ in the system of equations here.
When multiple modes are present, an explicit model of the cloud profile is necessary to generalize the above equation \cite{Ficarra:2018rfu}.
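As an illustration of the system (\ref{eqn:nonlin-1-mode-super}), a minimal forward-Euler integration can be sketched as follows. For simplicity we assume a constant $\Im(\omega)$, whereas the quasi-adiabatic treatment re-evaluates it from the instantaneous black hole parameters; all numerical values are arbitrary:

```python
# Forward-Euler integration of Eq. (nonlin-1-mode-super), assuming a
# constant Im(omega); in the quasi-adiabatic treatment Im(omega) would
# be re-evaluated from the instantaneous (r_s, a). Values are arbitrary.
im_omega = 1e-3            # growth rate (arbitrary units)
M_bh, M_c = 1.0, 1e-6      # black hole mass and a small seed cloud
total0 = M_bh + M_c
dt, n_steps = 0.1, 5000

for _ in range(n_steps):
    dM_c = 2.0 * im_omega * M_c * dt   # dM_c/dt = 2 Im(omega) M_c
    M_c += dM_c
    M_bh -= dM_c                       # superradiance drains the hole

print(M_c / 1e-6)                        # ~ e: grew by exp(2 Im(omega) t)
print(abs(M_bh + M_c - total0) < 1e-9)   # -> True: total mass conserved
```

The exponential growth of the cloud and the conservation of the total mass $M_{\rm BH}+M_c$ are manifest.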
If the mode under consideration is unbound, such as in the case of
scalar dark matter accretion from the environment, then
$|R_+|$ in
Eqs.~(\ref{eqn:mass-evolution-single-mode-approx}) and
(\ref{eqn:J-evolution-single-mode-approx}) would need to be
connected to the scalar field value far away.
A stationary accretion flow solution
(Appendix \ref{app1})
provides such a connection,
fixing the scalar amplitude at the horizon $|R_+|$
in terms of the scalar energy density $\rho$ far away.
As we will see, for our main conclusions only a few ingredients
of the equations above will be relevant, namely the ratio $\dot
J_c/\dot M_c$ (or $\dot J_{\rm acc}/\dot M_{\rm acc}$ for accretion
modes) and the fact that the mass in each individual bound state
$(n,\ell,m)$ grows as $\dot M_c/M_c\sim2\Im(\omega_{n\ell
m})\sim(m\Omega_+-\mu)\alpha^{4\ell+5}$ (Appendix \ref{app2}).
\subsection{The Regge plane}
\label{sec:regge}
We are interested in the evolution of the black hole in the mass-spin
(``Regge'') plane (Figure \ref{fig:regge-introductory}). On the $x$-axis, we have $\alpha\equiv\mu\rs/2$, which is a measure of the mass of the black hole, while on the $y$-axis we have $a_* \equiv 2a/r_s$, i.e., the angular momentum to squared mass ratio.
Even though eventually we will be interested in situations where multiple modes are present, let us first gain some intuition on the Regge flow for a single mode. From (\ref{eqn:mass-evolution-single-mode})
and (\ref{eqn:J-evolution-single-mode}), we see that the scalar field extracts energy and angular momentum from the black hole when
\begin{equation}
\label{supercondition1}
\mu\approx\Re(\omega)<m\Omega_+.
\end{equation}
This is the superradiance condition.
If the mode of interest is bound (i.e., $\Phi$ vanishes far away),
the superradiance is accompanied by an instability, i.e.~${\,\rm Im\,}\omega > 0$, telling us that the scalar field
builds up around the black hole into a cloud.
In other words, the gravitational potential of the black hole does not
allow the superradiance-generated scalar to escape, leading to a
runaway process.
It is useful to re-express the inequality as
\begin{eqnarray}
\label{supercondition2}
a_* > { m / \alpha \over 1 + (m/2\alpha)^2} \, ,\qquad\text{for}\quad\frac{2\alpha}{m}<1,
\end{eqnarray}
with $m$ understood to be positive. This region is shown in blue in Figure \ref{fig:regge-introductory}.
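The threshold curve of Eq.~(\ref{supercondition2}) is straightforward to evaluate numerically; a small sketch (illustrative only):

```python
def a_star_threshold(alpha, m=1):
    """Superradiance threshold spin of Eq. (supercondition2),
    valid for 2*alpha/m < 1."""
    return (m / alpha) / (1.0 + (m / (2.0 * alpha))**2)

# The threshold reaches extremality (a_* = 1) exactly at 2*alpha/m = 1:
print(a_star_threshold(0.5, m=1))        # -> 1.0
# At fixed alpha, higher-m modes are superradiant already at lower spins:
print(a_star_threshold(0.5, m=2) < a_star_threshold(0.5, m=1))   # -> True
```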
It is worth noting
that superradiance can occur with an unbound mode too.
If one were to remove the $\Phi \rightarrow 0$ boundary condition far away
from the black hole, $\omega$ can be real (and not discretized).
This means the extraction of mass and angular momentum from the black
hole does not occur by an exponential build up of the scalar cloud.
Rather, it occurs by sending the mass and angular momentum out to
infinity, a reverse accretion flow if you will. This is entry 3 in
Table \ref{tab1}, whereas bound superradiance is entry 1.
Note that the extraction of angular momentum implies, but is not implied by, $\dot a_*<0$, as
\begin{equation}
\label{astardot}
\dot a_*\propto\frac2{\rs^2}\bigl(m-2a_*\alpha\bigr)\biggl(\mu-\frac{ma_*}{2 r_+}\biggr),
\end{equation}
where we used $ \dot J_{\rm BH} =(m/\mu)\dot M_{\rm BH}$ and $\dot
M_{\rm BH}\propto(\mu-m\Omega_+)$. This expression also tells us
there is a region
$a_*>m/(2\alpha)$,
not overlapping with (\ref{supercondition2}), where the black hole is
spun down even if its angular momentum increases. This region is shown
in red (top-right corner) in Figure \ref{fig:regge-introductory}.
The Regge trajectories in Figure \ref{fig:regge-introductory} are
obtained by computing $\dot a_* / \dot \alpha$ from
Eqs.~(\ref{eqn:mass-evolution-single-mode-approx})
and (\ref{eqn:J-evolution-single-mode-approx}).
The red arrows in the blue region represent the Regge trajectories
when an $m=1$ (bound) superradiance cloud grows,
reducing the black hole's mass
and angular momentum. The red arrows outside the blue region
show the Regge trajectories when a (non-superradiant) $m=1$ mode
accretes onto the black hole. Such a non-superradiant $m=1$ mode
could arise from the bound cloud that was previously built up by
superradiance (which now shrinks and gives back mass and angular
momentum to black hole), or it could be from the (unbound)
ambient environment.
Note that the field amplitude at the horizon $|R_+|$ gets scaled out
of the $\dot a_* / \dot \alpha$ ratio. However, the speed with which the black hole follows
these trajectories will depend on $|R_+|$. We will see below how
the timescale can be quite different for different scenarios.
See Table \ref{tab1} for a summary of the various scalar field
configurations of interest.
\begin{table}[tb]
\vspace{0.2cm}
\begin{center}
\begin{tabular}{@{}l|c|c}
\hline
{} & $\mu < am/(r_s r_+)$ (superradiance) & $\mu > am/(r_s r_+)$ \\ \hline
bound (complex $\omega$, ${\rm Re\,}\omega < \mu$) & 1. cloud grows (${\rm Im\,}\omega > 0$) & 2. cloud shrinks (${\rm Im\,}\omega < 0$) \\
{} & black hole shrinks & black hole grows \\ \hline
unbound (real $\omega$, $\omega \ge \mu$) & 3. ambient mass/ang.~mom. & 4. ambient mass/ang.~mom. \\
{} & extraction from black hole & accretion onto black hole \\ \hline
\end{tabular}
\caption{Summary table for the different scalar configurations of
interest around a black hole. The
superradiance condition (\ref{supercondition1}) has been
applied with the approximation $\Re\omega\approx\mu$.}
\label{tab1}
\end{center}
\end{table}
\subsection{Black hole spin down and growth of the superradiance cloud}
\label{sec:single-evolution}
Let us discuss in a bit more detail the case of bound
superradiance (entry 1 in Table \ref{tab1}). Solving (\ref{eqn:nonlin-1-mode-super}), when $\Im(\omega)$ is
properly expressed as a function of the mass and spin of the
black
hole, gives the time evolution of the black hole parameters
due to the
growth of the superradiance cloud. While such a solution
may not have an easy analytical expression,
things simplify when the
time coordinate is factored out, i.e., when we only look at the
trajectory in the Regge plane.
We will use a similar approach in
Section \ref{sec:accretion+superradiance} when putting
accretion and
superradiance together, so let us describe how it works.
Because the extraction happens at a fixed angular momentum-to-mass ratio ($\dot J_c/\dot M_c=m/\mu$), Eq.~(\ref{eqn:nonlin-1-mode-super}) implies
\begin{equation}
\frac{\rd}{\rd t}\biggl(M_{\rm BH}-\frac{\mu}mJ_{\rm BH}\biggr)=0.
\label{eqn:trajectory}
\end{equation}
This simple observation fully determines the trajectory
followed by
the black hole in the Regge plane. Superradiance can only last until
$\Im(\omega)$ reaches zero (i.e., $\dot J_c=\dot M_c=0$), which means
that the black hole hits the threshold $\mu=m\Omega_+$, see Figure
\ref{fig:regge-introductory}. As long as no other states are
considered, this will be a point of stable equilibrium for the system:
moving above the threshold in the Regge plane will cause the cloud to
become superradiant, pushing the black hole down again; moving below
the threshold will cause the cloud to decay, giving mass and angular
momentum to the black hole, pushing the black hole back to the
threshold. Intersecting the trajectory (\ref{eqn:trajectory}) with the threshold $\mu=m\Omega_+$, we can find analytically the final parameters $(\rs',a')$ of the black hole in terms of the initial ones $(\rs,a)$:
\begin{equation}
\label{eqn:analytic-formulae}
\frac{\mu\rs'}m=\frac{1-\sqrt{1-(2(\mu\rs/m)(1-\mu a/m))^2}}{2(\mu\rs/m)(1-\mu a/m)},\qquad \frac{a'}{\rs'}=\frac{\mu\rs}m\Bigl(1-\frac{\mu a}m\Bigr).
\end{equation}
The cloud mass at the end of the process will be
$M_c=(\rs-\rs')/(2G)$. The maximum ratio between the mass of the cloud
and the mass of the black hole achievable with the evolution of a
single state can be thus obtained from the formula above:\footnote{This estimate is precisely equivalent to the one presented in \cite{Herdeiro:2021znw}, where the authors compute instead $\max\bigl\{(\rs-\rs')/\rs\bigr\}=9.73\%$.}
\begin{equation}
\max\Bigl\{\frac{M_c}{M_{\mathrm{BH}}}\Bigr\}=\max\Bigl\{\frac{\rs-\rs'}{\rs'}\Bigr\}=10.78\%,\qquad\text{for}\quad\frac{\mu\rs}m\approx0.48\quad\text{and}\quad\frac{a}\rs=0.5.
\label{eqn:limit-analytical}
\end{equation}
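The maximum in Eq.~(\ref{eqn:limit-analytical}) can be checked by a brute-force scan of Eq.~(\ref{eqn:analytic-formulae}) along the extremal line $a/\rs=0.5$ (a sketch; the grid spacing is arbitrary):

```python
import math

def x_final(x, a_over_rs=0.5):
    """Final x' = mu*r_s'/m from Eq. (analytic-formulae), given the
    initial x = mu*r_s/m and spin a/r_s (0.5 corresponds to a_* = 1)."""
    k = 2.0 * x * (1.0 - x * a_over_rs)   # using mu*a/m = x*(a/r_s)
    return (1.0 - math.sqrt(1.0 - k * k)) / k

# Scan the extremal line a/r_s = 0.5 for the largest cloud-to-BH ratio
# M_c/M'_BH = (r_s - r_s')/r_s' = x/x' - 1.
best_x, best_ratio = max(
    ((x, x / x_final(x) - 1.0) for x in (i / 1000.0 for i in range(1, 700))),
    key=lambda p: p[1],
)
print(round(best_ratio, 4))   # -> 0.1078, reached near x ~ 0.48
```

The same scan with the ratio $(\rs-\rs')/\rs$ reproduces the 9.73\% quoted in the footnote.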
How much time does it take to grow the cloud? Although the system of equations (\ref{eqn:nonlin-1-mode-super}) is nonlinear, the nonlinearities are negligible as long as the size of the cloud is small enough to not make $\Im(\omega)$ change appreciably. As a consequence, for a cloud growing from a small seed, for example a quantum fluctuation, we can estimate the growth time as
\beq
T_{\rm growth}\approx\frac{\log(M_{c}/M_{c{\rm,\,seed}})}{2\Im(\omega(\rs,a))},
\label{eqn:t-growth}
\eeq
where $\rs$ and $a$ are the initial black hole parameters. For a quantum fluctuation, we have $M_{\rm c, seed}r_{c}\sim1/2$, where $r_{c}=\rs n^2/(2\alpha^2)$ and $n$ is the principal quantum number of the cloud, giving
\beq
\log\biggl(\frac{M_{c}}{M_{\rm c, seed}}\biggr)\sim175.5+\log\biggl(\frac{M_{c}}{M_{\rm BH}}\frac{M_{\rm BH}^2}{M_\odot^2}\frac{n^2}{\alpha^2}\biggr).
\eeq
The growth rate $\Im(\omega(\rs,a))/\mu$ varies by orders of magnitude
across the instability region of a given mode, reaching a maximum of
about $10^{-7}$ for $\alpha=0.42$, $m=1$ \cite{Dolan:2007mj}. $T_{\rm
growth}$ can then be as short as several hours for stellar black
holes, and $10^5$--$10^6$ years for supermassive black holes.
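As an order-of-magnitude illustration of Eq.~(\ref{eqn:t-growth}), one can plug in round numbers for a stellar-mass black hole: a $10\,M_\odot$ hole, the quoted peak rate $\Im(\omega)\sim10^{-7}\mu$ at $\alpha=0.42$, $m=1$, and $M_c/M_{\rm BH}\sim0.1$, $n=2$ inside the logarithm (all inputs here are illustrative):

```python
import math

# Order-of-magnitude evaluation of Eq. (t-growth) for a stellar-mass
# black hole; all numerical inputs are illustrative round numbers.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg

M_bh = 10 * M_sun
alpha = 0.42                        # near the peak of the m = 1 rate
r_s = 2 * G * M_bh / c**2           # Schwarzschild radius [m]
mu = 2 * alpha * c / r_s            # scalar mass as a frequency [1/s]
im_omega = 1e-7 * mu                # quoted peak superradiance rate

log_factor = 175.5 + math.log(0.1 * (M_bh / M_sun)**2 * 2**2 / alpha**2)
T_growth = log_factor / (2 * im_omega)   # seconds
print(T_growth / 3600)                   # a few tens of hours
```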
\section{Threshold drift: combining superradiance with accretion}
\label{sec:accretion+superradiance}
In this section we explain in detail the evolution of a black hole in
the presence of both a cloud from superradiance {\it and} accretion
from the ambient environment.\footnote{The accretion could in principle also be from excited states of the cloud which still undergo superradiance, thus falling in the first case of Table~\ref{tab1} and providing a ``negative accretion'', i.e.~an extraction of mass. This is relevant for the level transitions discussed in Section \ref{sec:transition}. The threshold drift discussion in the current section applies equally well to accretion from the ambient environment and to accretion from the cloud.}
In Section \ref{sec:setup} we showed that the evolution
of a black hole and its superradiance cloud is determined by the
nonlinear\footnote{The nonlinearity refers to the implicit dependence
of $\Im(\omega)$ on the mass and spin of the black hole.} equations
(\ref{eqn:nonlin-1-mode-super}). It is worth stressing that these equations neglect potentially
important effects, like the depletion of the cloud due to
gravitational waves, or self-interactions of the scalar field which
mix different levels. We will briefly discuss them in Section \ref{sec:discuss}.
In Section \ref{sec:single-evolution} we described the evolution of a
black hole from its initial ``starting point'', to its final
meta-stable, or at least long-lived, state, surrounded by a boson
cloud of a single $m$ mode. Processes involving either additional states or external
effects are needed to drive the gravitational atom away from this final
position in the Regge plane. In this section, we take the endpoint of
the single-mode evolution as an initial condition, and focus on the
case where the system is fed mass and angular momentum from the
outside.\footnote{It is also possible, as we will see in Section
\ref{sec:cases}, that the black hole reaches the threshold ``from
below'', instead of from above, for example because of the same
accretion mechanism that drives its subsequent evolution. The way
the black hole arrives at the threshold does not matter for the
discussion here.} How does the gravitational atom respond?
Suppose that some external fluxes of mass and of angular momentum change the parameters of the black hole as
\begin{equation}
2G\dot M_{\rm acc}=\frac{\rd\rs}{\rd t}\bigg|_{\mathrm{acc}},\qquad2G\dot J_{\rm acc}=\frac{\rd(a\rs)}{\rd t}\bigg|_{\mathrm{acc}},
\end{equation}
where the label ``acc'' stands for accretion.
As in the last section, we assume the bound superradiance cloud is
described by a single $(n,\ell,m)$ state.
The evolution equations (\ref{eqn:nonlin-1-mode-super}) get modified
by accretion as follows:
\begin{align}
\label{eqn:M:acc+superr}
\dot M_{\rm BH}&=\dot M_{\mathrm{acc}}-\dot M_c,\\
\label{eqn:J:acc+superr}
\dot J_{\rm BH}&=\dot J_{\rm acc}-\frac{m}\mu\dot M_c,\\
\label{eqn:R-Im-omega}
\dot M_c&=2\Im(\omega)M_c,
\end{align}
where we approximated $\omega\approx\mu$. At the end of the previous
section, we have seen that the superradiance timescale can be very
short compared to other astrophysical processes such as accretion (see
Eq.~\ref{eqn:t-growth}), going from hours to $10^5$--$10^6$ years
depending on the mass of the black hole.
Thus, superradiance, when it is operative, will tend to move the black
hole to the superradiance threshold in the Regge plane where it is
turned off. And it is when the black hole is very close to the threshold
that accretion can compete with superradiance.
The crucial observation for
our discussion is that, whenever the superradiance timescale is much
shorter than the accretion timescale (assumed henceforth, which we
will verify later), the system described by
Eqs.~(\ref{eqn:M:acc+superr}), (\ref{eqn:J:acc+superr}) and
(\ref{eqn:R-Im-omega}) will closely follow the superradiance threshold
line defined by $\mu=m\Omega_+$ during its accretion-driven evolution
in the Regge plane, as long as the cloud has a high enough occupation
number. We can see how this works in a few different ways.
Let us start with a simple intuitive explanation. With reference to
Figure \ref{fig:zig-zag}, let us consider a black hole sitting exactly
on the superradiance threshold, $\mu=m\Omega_+$. The second terms on
the right-hand sides of Eqs.~\eqref{eqn:M:acc+superr} and
\eqref{eqn:J:acc+superr} vanish, and we can think of accretion as a
process that tends to drive the black hole slightly away from the
threshold. During this first step of the evolution, depending on the slope of accretion
with respect to the threshold, the black hole will end up either above
or below it.
Consider the case where the black hole is driven
slightly above the threshold (left panel). Superradiance then kicks
in, and because it is very efficient (unless one sits exactly on the threshold),
quickly moves the black hole back to the threshold. During this second
step, the superradiance cloud grows further in mass.
The black hole loses mass in this second step, but
the combined action of the first (accretion) and second
(superradiance) steps is such that the black hole has a net gain in
mass. The two-step combination repeats itself, and
as a result, the black hole climbs up along the
superradiance threshold. (It can never climb down, by virtue of the
second law of thermodynamics; see Figure \ref{fig:regge-introductory}.) We refer to this phenomenon as
over-superradiance-threshold-drift, or over-superradiance for short.
Conversely, consider the case where the first (accretion) step takes
the black hole below the threshold (right panel). Remember the superradiance cloud
is still there, and because the black hole is below threshold, the
cloud will in fact shrink and give mass back to the black hole.
This second step involves the existing superradiance cloud, but in a
non-superradiant state, i.e.~${\,\rm Im\,}\omega < 0$, as opposed to
${\,\rm Im\,}\omega > 0$, implying that the scalar field amplitude decreases,
that is to say, the cloud loses mass. The net effect of the first and second steps
is once again to increase the black hole's mass, and as a result, the black hole
climbs up along the superradiance threshold.
We refer to this phenomenon as
under-superradiance-threshold-drift, or under-superradiance for short.
Of course, this discrete ``zig-zag'' description of over- or under-superradiance
should not be taken literally.
In reality, the black hole's evolution in the Regge plane is smooth:
it drifts along a trajectory that closely hugs the superradiance
threshold, where the effects of accretion and superradiance (or cloud
decay) finely complement each other. We call this phenomenon threshold drift.
It is worth emphasizing again that the black hole can only drift to
the right, in the direction of increasing its mass. This is because
the other direction is forbidden by the second law of black hole
thermodynamics, as the area of the event horizon would decrease (see
Appendix \ref{sec:superradiance-area} and Figure
\ref{fig:regge-introductory}). The superradiance
trajectories at precisely the threshold are in fact parallel to lines of
constant area; the accretion trajectories must intersect them
and point to the right.
This threshold drift phenomenon can be seen in numerical solutions for
the evolution of the black hole in the Regge plane. For example, this
explains why in \cite{Brito:2014wla}, where accretion from a baryonic
disk was
taken into account, the numerical evolution tracked the superradiance
threshold for
a significant part of the black hole's history. In Section
\ref{sec:cases}, we will examine the case of baryonic accretion more
closely, and present a semi-analytic way to understand the black
hole's evolution.
We note that it is possible to understand the dynamics of the system in the Regge plane
in terms of a simpler toy model. This is described in detail in Appendix \ref{sec:toy-model}, where we show that the threshold drift is an attractor of the dynamics as long as the mass of the cloud has a sizable value.
Having established that the system follows a trajectory that lies very
close to the superradiance threshold in the Regge plane, we can thus
enforce the condition $\mu=m\Omega_+$ in the equations
\eqref{eqn:M:acc+superr} and \eqref{eqn:J:acc+superr}, reducing it to
a single ordinary differential equation. Let us first take
the time derivative of $\mu=m\Omega_+$ (or equivalent, $a_* =
(m/\alpha)/(1 + (m/2\alpha)^2)$; see \ref{supercondition2}):
\begin{equation}
\frac\mu{m}\frac{\rd(a\rs)}{\rd t}=\frac{x^4+3x^2}{(1+x^2)^2}\frac{\rd\rs}{\rd t},
\label{eqn:threshold-derivative}
\end{equation}
where we defined $x\equiv\mu\rs/m$. Note that the superradiance
instability region, as well as its threshold, spans the region
$0<x<1$. Combining \eqref{eqn:M:acc+superr} and
\eqref{eqn:J:acc+superr} to eliminate the dependence on the mass of
the cloud, and then plugging (\ref{eqn:threshold-derivative}) in, we
obtain an equation for the evolution of the black hole's
mass:
\begin{equation}
\frac{1-x^2}{(1+x^2)^2}\frac{\rd x}{\rd
t}=\biggl(1-\frac\mu{m}\frac{\dot J_{\mathrm{acc}}}{\dot
M_{\mathrm{acc}}}\biggr)\frac{\mu}m \frac{\rd\rs}{\rd
t}\bigg|_{\mathrm{acc}} ,
\label{eqn:accretion-effective-x-evolution}
\end{equation}
where $\rd r_s/\rd t |_{\rm acc} \equiv 2 G \dot M_{\rm acc}$ can be
thought of as $\dot r_s$ due to accretion alone.
For given rates of mass and angular momentum accretion, equation
\eqref{eqn:accretion-effective-x-evolution} describes how the mass of
the black hole, $M_{\mathrm{BH}} \equiv mx/(2G\mu)$, evolves in time, as long
as the superradiance cloud with azimuthal number $m$ has high enough
mass to keep the black hole pinned at the superradiance threshold.
While this process takes place, the cloud's mass $M_c \equiv mx_c/(2G\mu)$ evolves according to
\begin{equation}
\frac{\rd(x+x_c)}{\rd t}=\frac{\mu}m
\frac{\rd\rs}{\rd t}\bigg|_{\mathrm{acc}}
\label{eqn:accretion-effective-xc-evolution}
\end{equation}
due to conservation of mass. Beware that if and when the cloud loses enough mass (say, at $x_c=0$), it will no longer be able to keep the black hole near the threshold, and this effective description of the evolution will break down. Equations \eqref{eqn:accretion-effective-x-evolution} and \eqref{eqn:accretion-effective-xc-evolution} can be combined to give
\begin{equation}
\frac{\rd x_c}{\rd t}=\biggl(\frac{1}{1-\mu\dot J_{\mathrm{acc}}/(m\dot M_{\mathrm{acc}})}\frac{1-x^2}{(1+x^2)^2}-1\biggr)\frac{\rd x}{\rd t}.
\label{eqn:over-under-superradiant}
\end{equation}
Looking at the sign of the parenthesis on the right-hand side, this
equation makes it easy to tell whether we have
\begin{equation}
\begin{aligned}
\text{over-superradiance}\quad&\longleftrightarrow\quad \frac{1}{1-\mu\dot J_{\mathrm{acc}}/(m\dot M_{\mathrm{acc}})}\frac{1-x^2}{(1+x^2)^2}>1\\
\text{under-superradiance}\quad&\longleftrightarrow\quad
\frac{1}{1-\mu\dot J_{\mathrm{acc}}/(m\dot
M_{\mathrm{acc}})}\frac{1-x^2}{(1+x^2)^2}<1 \, ,
\end{aligned}
\label{eqn:under-over-def}
\end{equation}
where once again, $x$ is defined as $\mu r_s / m$.
Note that, due to the shape of the function $(1-x^2)/(1+x^2)^2<1$, any
given fixed ratio $\dot J_{\mathrm{acc}}/\dot
M_{\mathrm{acc}}$ satisfying $0<\dot J_{\mathrm{acc}}/\dot
M_{\mathrm{acc}}<m/\mu$
will produce over-superradiance for a
sufficiently small $x$ (small black hole mass), but it will eventually
turn into under-superradiance as $x$ approaches 1.
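For a fixed accretion ratio, the crossover between the two regimes occurs where $(1-x^2)/(1+x^2)^2=1-\mu\dot J_{\mathrm{acc}}/(m\dot M_{\mathrm{acc}})$; since the left-hand side decreases monotonically from 1 to 0 on $0<x<1$, the root can be found by bisection. A sketch, with illustrative ratios:

```python
def crossover_x(j_ratio, tol=1e-10):
    """Bisect for the x where over-superradiance turns into
    under-superradiance, given j_ratio = mu*Jdot_acc/(m*Mdot_acc)
    in (0, 1). Illustrative sketch."""
    target = 1.0 - j_ratio
    lo, hi = 0.0, 1.0   # g(x) decreases monotonically from 1 to 0 here
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        g = (1 - mid**2) / (1 + mid**2) ** 2
        lo, hi = (lo, mid) if g < target else (mid, hi)
    return 0.5 * (lo + hi)

# The more angular momentum is supplied per unit accreted mass, the
# longer over-superradiance lasts (the crossover moves to larger x):
print(crossover_x(0.3))                       # ~ 0.35
print(crossover_x(0.6) > crossover_x(0.3))    # -> True
```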
The presence of a superradiance cloud thus acts as a \textit{glue}
that keeps the black hole attached to the superradiance threshold. The
glue gets enhanced by over-superradiance (because the cloud grows),
and weakened by under-superradiance (because the cloud shrinks).
If the cloud is completely dissipated, either by under-superradiance or some other process, the threshold drift ends, with the black hole moving away from it, following the Regge trajectories determined by accretion.
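This growth-then-depletion history can be seen in a minimal Euler integration of Eqs.~(\ref{eqn:accretion-effective-x-evolution}) and (\ref{eqn:over-under-superradiant}); all rates and initial values below are arbitrary illustrative choices:

```python
def drift_step(x, x_c, j_ratio, drs_dt, dt):
    """One Euler step of the threshold-drift system,
    Eqs. (accretion-effective-x-evolution) and (over-under-superradiant).

    x = mu*r_s/m on the threshold, x_c = 2*G*mu*M_c/m for the cloud;
    j_ratio = mu*Jdot_acc/(m*Mdot_acc); drs_dt = (mu/m)*dr_s/dt|_acc.
    All values are in arbitrary illustrative units.
    """
    g = (1 - x**2) / (1 + x**2) ** 2
    dx = (1 - j_ratio) * drs_dt * dt / g
    dx_c = (g / (1 - j_ratio) - 1.0) * dx
    return x + dx, x_c + dx_c

x, x_c = 0.1, 0.02           # start low on the threshold with a small cloud
j_ratio, drs_dt, dt = 0.3, 1e-3, 0.1
history = []
while x_c > 0 and x < 0.99:
    x, x_c = drift_step(x, x_c, j_ratio, drs_dt, dt)
    history.append(x_c)

# The cloud first grows (over-superradiance at small x), then shrinks
# (under-superradiance as x increases) until it is depleted and the
# effective description breaks down.
print(max(history) > 0.02, history[-1] <= 0)   # -> True True
```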
So far, we have neglected any other effect that is able to change
the total mass of the system. While this may be a good approximation
if $\Phi$ is a complex scalar field, a real field undergoes an
inevitable decay via the emission of gravitational waves \cite{Yoshino:2013ofa}. This
adds an extra source term to the right-hand side of
(\ref{eqn:accretion-effective-xc-evolution}). Such an extra source of
cloud mass loss does not change the previous criterion to distinguish
between over-/under- superradiance, nor the time evolution of the black hole parameters during the threshold drift, as Eq.~(\ref{eqn:accretion-effective-x-evolution}) stays unchanged.
However, it does change the duration of the threshold drift, as the
cloud will disappear faster. Moreover, this mechanism for cloud mass
loss becomes more important as the black hole gains mass during
threshold drift, since the radiated power goes as $\alpha^{4\ell+10}$.
\section{Examples and cases of interest}
\label{sec:cases}
In this section, we apply the approach developed in Section
\ref{sec:accretion+superradiance} to three physical scenarios. First,
in Section \ref{sec:dm-accretion}
we consider cases where the black hole accretes dark matter from
the ambient environment, and the dark matter is itself a scalar field,
which might or might not be the same as the scalar making up the
superradiance cloud. We will show in particular (see ``Case 2'' below)
that, for
certain choices of the parameters, it is possible to grow a cloud of
mass up to
roughly a third of the black hole mass, well beyond the standard
$\sim$10\% discussed in Section \ref{sec:setup}.
Then, in Section \ref{sec:transition} we consider the phenomenon of
\textit{level transition} using the language of under-superradiance.
Lastly, in Section \ref{sec:baryonic}
we study the case where the ambient accretion is sourced by
a baryonic disk. The same phenomena of over- and
under-superradiance occur here, with the advantage that disk
accretion can be more efficient, leading to a substantial cloud build-up
in a shorter amount of time.
\subsection{(Wave) dark matter accretion}
\label{sec:dm-accretion}
Black holes in nature are inevitably surrounded by dark matter.
If the dark matter is composed of a scalar field, much of our earlier
discussion regarding the mass and angular momentum fluxes into the
black hole horizon applies to dark matter as well.
A concrete, compelling example is the axion, or axion-like-particles
(see \cite{Marsh:2015xka,Graham:2015ouw} for reviews). We assume their self-interaction strength is
sufficiently weak to be ignored, but will return to a discussion of
this in Section \ref{sec:discuss}. The same axion could be both the
dark matter of the ambient environment, as well as the scalar that
makes up the superradiance cloud. Or the two could be different scalar
fields. We will use $\mu', m'$ to refer to the mass and angular
momentum of the dark matter scalar, and $\mu, m$ to refer to
the mass and angular momentum of the superradiance scalar.
In the language of Table \ref{tab1}, the dark matter scalar from
the ambient environment is ``unbound'' whereas the scalar of the
superradiance cloud is ``bound''.
To determine the dark matter ambient accretion rate onto the
black hole, we use Eqs.~(\ref{eqn:mass-evolution-single-mode-approx})
and (\ref{eqn:J-evolution-single-mode-approx}), with the horizon
scalar amplitude $R_+ \equiv R_{\rm acc} (r_+)$ fixed by using the stationary accretion flow
solution which connects it to the scalar amplitude far away $R_{\rm
acc} (r_i)$, at a radius we call $r_i \gg r_+$. This is described in detail in Appendix
\ref{app1}, giving us the following useful approximations, from wave
to particle limits:\footnote{Approximate fitting formulae are needed if one wants an analytical treatment rather than relying solely on a numerical study. This is due to the nature of the scalar wave equation \eqref{scalareqKG}, which is of the (confluent) Heun type (see Appendix~\ref{app1} for further details---see also Refs.~\cite{Bonelli:2021uvf,Bonelli:2022ten} for recent progress on connection formulae for the Heun function).}
\begin{equation}
\frac{|R_{\rm acc}(r_+)|^2}{|R_{\rm acc}(r_i)|^2}=\begin{cases}
\biggl(\dfrac{r_i}{\rs}\biggr)^{3/2} & \quad \qquad \dfrac12(\ell'+1)\lesssim\mu'\rs \qquad \qquad \text{(Particle)}\; ,\\
\biggl(\dfrac{r_i}{\rs}\biggr)^{3/2}\biggl(\dfrac{2\mu'\rs}{\ell'+1}\biggr)^{6\ell'+3} & \quad 2\sqrt{\dfrac{\rs}{r_i}}\ll\mu'\rs\lesssim\dfrac12(\ell'+1) \quad \text{(Intermediate)}\; ,\\
\biggl(\dfrac{r_i}{\rs}\biggr)^{-2\ell'} & \quad \qquad \mu'\rs\ll2\sqrt{\dfrac{\rs}{r_i}} \qquad \quad \; \text{(Wave/Ultralight)}\; ,
\end{cases}
\label{eqn:guanhaos-fit}
\end{equation}
where $\ell'$ is the second quantum number of the accreting mode and
$\mu'$ is the mass of the field, which we label differently from $\mu$
for the sake of generality, to allow the dark matter field and the
superradiance field to be different. The precise expressions and
bounds in \eqref{eqn:guanhaos-fit} actually depend on the
dimensionless spin $a_*$ and magnetic quantum number $m'$; however, as
shown in Appendix~\ref{app1}, their effects on the estimates
\eqref{eqn:guanhaos-fit} are within $O(1)$ unless $a_*>0.95$. In other
words, the spin of the black hole does not have a significant impact
unless it is close to extremal. These
expressions generalize to non-zero angular
momentum those given in \cite{Hui:2019aqm}.
The quantity $r_i$ is taken to be the radius at which the dark matter
density matches the typical density $\rho_i$ in the broader
environment, i.e.~$\rho_i=\rho(r_i)$. A
quantitative estimate of $r_i$ is needed to fix the amplitude of the
accreting mode, and thus the timescale of the whole process.
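As an illustration, the piecewise fit \eqref{eqn:guanhaos-fit} is easy to evaluate numerically. The following Python sketch is our own transcription of the displayed cases (the function name and regime boundaries are ours, for illustration only); it returns the amplification factor $|R_{\rm acc}(r_+)|^2/|R_{\rm acc}(r_i)|^2$:

```python
import math

# Illustrative transcription (ours, not a reference implementation) of the
# piecewise fit for the horizon-to-ambient amplitude ratio
# |R_acc(r_+)|^2 / |R_acc(r_i)|^2, from the particle to the wave limit.

def amplitude_ratio(mu_rs, ri_over_rs, ell):
    """mu_rs = mu' * r_s, ri_over_rs = r_i / r_s, ell = l' of the mode."""
    if mu_rs >= 0.5 * (ell + 1):
        # particle regime
        return ri_over_rs ** 1.5
    if mu_rs > 2.0 * math.sqrt(1.0 / ri_over_rs):
        # intermediate regime
        return ri_over_rs ** 1.5 * (2.0 * mu_rs / (ell + 1)) ** (6 * ell + 3)
    # wave/ultralight regime: for ell >= 1 the angular momentum barrier
    # strongly suppresses the flux
    return ri_over_rs ** (-2 * ell)
```

Note that for $\ell'=0$ the wave-regime ratio is unity, while for $\ell'=1$ it is suppressed by $(r_i/\rs)^{-2}$; this is the suppression invoked below to motivate spherical accretion at small $\mu'\rs$.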
For spherically symmetric accretion, we follow \cite{Hui:2019aqm}
and take $r_i$ to be the radius of impact of the black
hole (the radius at which the gravitational potential of the black
hole is similar to that of the dark matter halo), i.e.~$r_i/r_s \sim 10^6 (v_{\rm typical}/300 {\,\rm km/s})^{-2}$ where
$v_{\rm typical}$ is the velocity dispersion of the dark matter halo.
For dark matter accretion flow of angular momentum (per particle)
$m' \ne 0$, we will take $r_i$ to be the minimum of the de Broglie
wavelength (the length scale over which wave dark matter is roughly
coherent or homogeneous \cite{Hui:2021tkt}), and
the radius of impact:
\begin{equation}
\frac{r_i}{\rs}=\min\biggl\{\underbrace{10^3(\mu'\rs)^{-1}\biggl(\frac{v_{\rm typical}}{\SI{300}{\km/s}}\biggr)^{-1}}_{\text{de Broglie wavelength}},\underbrace{10^6\biggl(\frac{v_{\rm typical}}{\SI{300}{\km/s}}\biggr)^{-2}}_{\text{virial radius}}\biggr\}.
\label{eqn:r_i}
\end{equation}
The motivation for considering the de Broglie wavelength will be clear
in a moment.
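A minimal sketch of the choice \eqref{eqn:r_i}, with the velocity dispersion normalized to $300$ km/s (the function name and default values are ours, for illustration only):

```python
# Illustrative transcription (ours) of r_i / r_s: the smaller of the
# de Broglie wavelength and the radius of impact, with the velocity
# dispersion v_typical given in km/s.

def r_i_over_rs(mu_rs, v_typical_kms=300.0):
    v = v_typical_kms / 300.0
    de_broglie = 1e3 / (mu_rs * v)   # coherence length of wave dark matter
    impact = 1e6 / v ** 2            # radius of impact of the black hole
    return min(de_broglie, impact)
```

The de Broglie wavelength is the smaller of the two for $\mu'\rs\gtrsim10^{-3}(v_{\rm typical}/300\,{\rm km/s})$, which is why it is the relevant scale for the vortex ($m'\ne0$) accretion flow discussed below.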
Once the asymptotic dark matter density $\rho_i\approx T_{00}\approx
2\mu'{}^2|\Phi_{\rm acc} (r_i)|^2\approx \mu'{}^2|R_{\rm acc}
(r_i)|^2/(2\pi)$ is fixed,\footnote{For the last equality, we took the average over the angular variables.} equations (\ref{eqn:mass-evolution-single-mode-approx}) and (\ref{eqn:J-evolution-single-mode-approx}) supplied with (\ref{eqn:guanhaos-fit}) determine the dark matter accretion onto the black hole. Our goal is to study the subsequent evolution, and especially its interplay with superradiance. Note that equations (\ref{eqn:guanhaos-fit}) and (\ref{eqn:r_i}) are only needed to fix the normalization of the accreting flux, and thus the timescale of the threshold drift. A different normalization would not in any way impact the trajectory of the black hole in the Regge plane, nor the mass of the cloud as a function of that of the black hole.
What values of $\ell'$ and $m'$ should we use for the accreting mode?
In general, several modes will be present at the same time, with a
distribution depending on the mass of the scalar and its velocity
dispersion. However, in two specific cases we can make a simplifying
assumption.
One possibility is to assume spherical accretion with $\ell'=m'=0$.
This can be motivated two different ways.
For small values of $\mu'\rs$, the angular momentum barrier strongly
suppresses all modes with $\ell'\ge1$, see second and third line of
(\ref{eqn:guanhaos-fit}). It is thus natural to only consider the
purely radial infall corresponding to $\ell'=m'=0$.
Another motivation is the tendency of wave dark matter, especially in
the ultralight regime, to form solitons at the center of galaxy
halos \cite{Schive:2014hza}. The solitons provide a natural $\ell'=m'=0$ environment in
which the central supermassive black hole resides.
A number of recent papers explore the interaction between a
supermassive
black hole and the soliton that hosts it
\cite{Hui:2019aqm,Chavanis:2019bnu,Davies:2019wgi,Padilla:2020sjy,Cardoso:2022nzc}.
A second possibility is to assume an accretion flow with
$\ell'=m'=1$. This is motivated by the observation that wave dark
matter generically has vortices, with on average one vortex ring per de
Broglie volume \cite{Hui:2020hbq}. A vortex is a one-dimensional
structure, along which the dark matter density vanishes and around
which the dark matter velocity circulates, with winding number $m'$
generically being $\pm 1$. Let us thus consider a situation in which a
black hole happens to coincide with such a vortex, with $\ell'=m'=1$.
In general, there is no reason the black hole spin direction and the
vortex angular momentum align. We will for simplicity assume so,
noting that they would tend to align if the black hole is spun up by
the dark matter accretion. The question of whether a vortex, once it
intersects a black hole, would remain stuck to it, is an interesting
one, which we leave for future work.
It is helpful to have an idea of what the mass accretion rate might be
for these two possible scenarios. For spherical accretion:
\begin{eqnarray}
\label{Mdot0}
\dot M_{\rm acc} \sim 400 {\, \rm M_\odot \,/ \,yr.}
\left( {\rho_i \over 10 {\,\rm M_\odot \,/\, pc^3}} \right)
\left( {M_{\rm BH} \over 10^9 {\,\rm M_\odot} }\right)^2
\left( {r_i / r_s \over 10^6} \right)^{3/2}
\end{eqnarray}
for $\mu' r_s \, \gsim \, 0.5$. The displayed value for density $\rho_i$ corresponds to that of
a soliton of mass $1.12\times 10^9 {\,\rm M_\odot}$ and
$\mu' = 10^{-22}$ eV. For $\ell' = m' = 1$ accretion:
\begin{eqnarray}
\label{Mdot1}
\dot M_{\rm acc} \sim 1.2 \times 10^{-2} {\, \rm M_\odot \,/ \,yr.}
\left( {\rho_i \over 10 {\,\rm M_\odot \,/\, pc^3}} \right)
\left( {M_{\rm BH} \over 10^9 {\,\rm M_\odot}} \right)^2
\left( {r_i/r_s \over 10^3} \right)^{3/2}\,
\end{eqnarray}
in the particle regime $\mu' r_s \, \gsim \, 1$. There is suppression due to the angular momentum
barrier if the particle mass is low, $\mu' r_s \, \lsim \, 1$ (for $\ell' =
1$), in which case the accretion rate becomes
\begin{eqnarray}
\label{Mdot2}
\dot M_{\rm acc} \sim 2.5 \times 10^{-5} {\, \rm M_\odot \,/ \,yr.}
\left( {\rho_i \over 10 {\,\rm M_\odot \,/\, pc^3}} \right)
\left( {M_{\rm BH} \over 10^9 {\,\rm M_\odot}} \right)^2
\left( {r_i/r_s \over 10^3} \right)^{3/2}
\left({\mu' r_s \over 0.5}\right)^9
\, .
\end{eqnarray}
Thus we see that the timescale for mass accretion, $M_{\rm BH}/\dot M_{\rm acc}$,
tends to be quite long (longer than a Hubble time) if
the dark matter accretion flow has non-vanishing angular momentum.
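The scaling estimates (\ref{Mdot0})-(\ref{Mdot2}) can be packaged as follows (a sketch of ours; the function names, default arguments, and the rough switch between the two $\ell'=1$ branches at $\mu'\rs\sim1$ are illustrative choices, not sharp predictions):

```python
# Illustrative transcription (ours) of the scaling estimates
# (Mdot0)-(Mdot2), returning dM/dt in solar masses per year.
# rho_i is in M_sun/pc^3, M9 = M_BH / 10^9 M_sun.

def mdot_spherical(rho_i=10.0, M9=1.0, ri_rs=1e6):
    # Eq. (Mdot0), valid for mu' r_s >~ 0.5
    return 400.0 * (rho_i / 10.0) * M9 ** 2 * (ri_rs / 1e6) ** 1.5

def mdot_l1(mu_rs, rho_i=10.0, M9=1.0, ri_rs=1e3):
    base = (rho_i / 10.0) * M9 ** 2 * (ri_rs / 1e3) ** 1.5
    if mu_rs >= 1.0:
        return 1.2e-2 * base                      # particle regime, Eq. (Mdot1)
    return 2.5e-5 * base * (mu_rs / 0.5) ** 9     # barrier-suppressed, Eq. (Mdot2)

# The two l'=1 branches roughly match at mu' r_s ~ 1,
# where 2.5e-5 * 2^9 ~ 1.3e-2.
```

With these fiducial values, $M_{\rm BH}/\dot M_{\rm acc}\sim2.5\times10^6$ yrs for spherical accretion but $\sim10^{11}$ yrs for the $\ell'=m'=1$ flow, illustrating the long timescales quoted above.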
When (\ref{eqn:guanhaos-fit}) is substituted into either
(\ref{eqn:mass-evolution-single-mode-approx}) or
(\ref{eqn:J-evolution-single-mode-approx}), and the threshold
condition $\mu = m\Omega_+ = am/(r_s r_+)$ (i.e., we are studying
threshold drift at the superradiance threshold for $m$)
is imposed together with the conservation of the total mass, we get the following equations for the evolution of the mass of the black hole and of the cloud:
\begin{equation}
\frac{1-x^2}{(1+x^2)^2}\frac{\rd x}{\rd t}=K\frac{(m'-m\mu'/\mu)^2}{1+1/x^2},
\label{eqn:dx-dm}
\end{equation}
\begin{equation}
\frac{\rd(x+x_c)}{\rd t}=K\frac{(m\mu'/\mu-m')}{1+1/x^2}\frac{m\mu'}\mu,
\label{eqn:dxc-dm}
\end{equation}
where, as in Section \ref{sec:accretion+superradiance}, we defined
$x=\mu\rs/m$ and $x_c=(M_c/M_{\rm BH})x$. In these expressions, we set
$K\equiv4G|R_{\mathrm{acc}}(r_+)|^2(\mu/m)$, where
$R_{\mathrm{acc}}(r_+)$ is related to the asymptotic dark matter
density by (\ref{eqn:guanhaos-fit}).
To derive the above, we have set $\dot J_{\mathrm{acc}}/\dot
M_{\mathrm{acc}} = m'/\mu'$ in
Eqs.~(\ref{eqn:accretion-effective-x-evolution}) and
(\ref{eqn:accretion-effective-xc-evolution}), thanks to the
simplifying assumption that the wave dark matter accretion is due to a
single $m'$ mode. We have also used Eqs.~(\ref{eqn:mass-evolution-single-mode-approx}) and
(\ref{eqn:J-evolution-single-mode-approx}) which determine the mass
and angular momentum fluxes (with the unprimed $\mu$ and $m$ replaced
by $\mu'$ and $m'$). The relation $a^2 + r_+^2 = r_s r_+$ was useful
for relating $r_s/r_+$ to $x \equiv \mu r_s / m = a/r_+$ (assuming we are at
superradiance threshold for $m$), giving us $r_s/r_+ = 1 + x^2$.
From \eqref{eqn:dx-dm} we see that the mass of the black hole can only
increase (recall that $x < 1$; see Eq.~\ref{supercondition2}); on the
other hand, the sign of $\rd x_c/\rd t$ can be read off from
\eqref{eqn:over-under-superradiant}, which is
\begin{equation}
\frac{\rd x_c}{\rd t}=\biggl(\frac{1}{1-{\cal
R}}\frac{1-x^2}{(1+x^2)^2}-1\biggr)\frac{\rd x}{\rd t},\qquad
{\cal R}\equiv\frac{m'\mu}{m\mu'},
\label{eqn:dxc/dt}
\end{equation}
which can be integrated to give a simple relation between the mass of the cloud and the mass of the black hole:
\begin{equation}
x_c+x-\frac{1}{1-{\cal R}}\frac{x}{x^2+1}=\text{constant}.
\label{eqn:xc(x)}
\end{equation}
This relation eliminates the time dependence and gives the mass of the
cloud as a function of the mass of the black hole, $x_c(x)$.
It tells us that the total mass of the black hole + cloud system is
determined by a constant (fixed by initial conditions for $x$ and
$x_c$ on the superradiance threshold) plus $x/[(1 - {\cal R}) (x^2 + 1)]$.
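The conservation law \eqref{eqn:xc(x)} provides a useful check on any numerical integration of \eqref{eqn:dx-dm} and \eqref{eqn:dxc-dm}. A minimal Euler integration (ours; the parameter values, step size, and function names are illustrative) confirms that the combination above stays constant along the drift:

```python
# Minimal Euler integration (ours, illustrative) of the threshold-drift
# equations (dx-dm) and (dxc-dm), used to check the conservation law
# (xc(x)). Here q = m * mu'/mu, so R = m'/q; assumes 0 < x < 1.

def drift(x0, xc0, q, m_prime, K=1.0, dt=1e-4, steps=5000):
    x, xc = x0, xc0
    traj = [(x, xc)]
    for _ in range(steps):
        flux = K * x * x / (1.0 + x * x)        # K / (1 + 1/x^2)
        # dx/dt, from (dx-dm) solved for the time derivative
        dx = flux * (m_prime - q) ** 2 * (1.0 + x * x) ** 2 / (1.0 - x * x) * dt
        # d(x + x_c)/dt, from (dxc-dm)
        dtot = flux * (q - m_prime) * q * dt
        x, xc = x + dx, xc + (dtot - dx)
        traj.append((x, xc))
    return traj

def invariant(x, xc, R):
    # the conserved combination of Eq. (xc(x))
    return xc + x - x / ((1.0 - R) * (1.0 + x * x))
```

For $m=2$, $m'=1$, $\mu'=\mu$ (so $q=2$ and ${\cal R}=1/2$) and a starting point $x=0.2$, both $x$ and $x_c$ grow, as expected for over-superradiance at $x<x_\star$, while the invariant is conserved to the accuracy of the integrator.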
Three distinct cases can be distinguished.
\begin{description}
\item[Case 1.] If ${\cal R} \equiv m'\mu/(m\mu') \le0$, then the factor in parentheses on the right-hand side of (\ref{eqn:dxc/dt}) is always negative for $0<x<1$.
This is under-superradiance: the mass of the cloud
\textit{decreases}, $\rd x_c/\rd t\le0$, while the mass of the black hole
increases, $\rd x/\rd t \ge 0$ (recall that on the threshold,
the black hole can only move to the
right in the Regge plane, by the second law).
Moreover, the black hole's mass will increase faster than
the ambient accretion rate, i.e.~$\rd x/\rd
t\ge(\mu/m)[\rd r_s/\rd t]_{\mathrm{acc}}$, because it receives
mass from both the ambient environment and the cloud that
was built up by superradiance.
The black hole will thus
move along the superradiance threshold faster than naively expected
based on ambient accretion alone,
but only before the cloud is depleted. An example of this case is
shown in Figure~\ref{fig:dm-accretion-R=0}, for the case of
spherically-symmetric accretion $\ell'=m'=0$. After the cloud's
depletion, the black hole will leave the threshold and move under the
effect of accretion alone. How much faster can the black hole increase its mass? From (\ref{eqn:dxc/dt}), we find that the speed is increased by a factor
\begin{equation}
\frac{\dot x}{\dot x_{\rm acc}}=(1-{\cal R})\frac{(1+x^2)^2}{1-x^2},
\end{equation}
which can in principle become very large near the edge of the
superradiant threshold, around $x\lesssim1$, where the black hole
approaches extremality. ($\dot x_{\rm acc} \equiv (\mu/m) \rd r_s/\rd t
|_{\rm acc} = \dot x + \dot x_c$.) Remember however that this would also mean
that the cloud depletes very fast, and thus that the threshold drift
will end very soon.
\item[Case 2.] If $0< {\cal R} \equiv m'
\mu / (m \mu') <1$, then we have over-superradiance for $x<x_\star$ and under-superradiance for $x>x_\star$, where $x_\star$ is the zero of (\ref{eqn:dxc/dt}):
\begin{equation}
\frac{1-x_\star^2}{(1+x_\star^2)^2}=1-{\cal R}\implies
x_\star=\sqrt{\frac{-3+2{\cal R}+\sqrt{9-8{\cal R}}}{2(1-{\cal R})}}.
\label{eqn:x_star-result}
\end{equation}
The point $x=x_\star$ (depicted with a star in
Figure \ref{fig:dm-accretion-super}; $x_\star \sim 0.5$ for
${\cal R} = 1/2$)
corresponds to a local maximum for the mass of
the cloud during threshold drift.
This case is perhaps the most
interesting one, because over-superradiance provides a mechanism to
boost the cloud's mass.
Note that the cloud-to-black hole mass ratio,
\begin{equation}
\frac{x_c}x=\frac1x\biggl(x_c(0)+x(0)-\frac1{1-{\cal
R}}\frac{x(0)}{1+x(0)^2}\biggr)-1+\frac1{1-{\cal R}}\frac1{1+x^2} \, ,
\label{eqn:x_cx}
\end{equation}
reaches maximum at a certain $x<x_\star$. Depending on the initial
masses $x(0)$ and $x_c(0)$, as well as on the value of ${\cal R}$,
this cloud-to-black hole mass ratio can reach higher values than the
10\% mentioned in Section~\ref{sec:single-evolution}. We show an
example in Figure~\ref{fig:dm-accretion-R=0.5}, where we use the same
parameters as in Figure~\ref{fig:dm-accretion-R=0}, but change the
values of $m'$ and $m$ to 1 and 2 respectively. First of all, we see
that, even if the asymptotic dark matter density is the same in both
cases, the evolution is now much slower, due to the suppression by
angular momentum as indicated by (\ref{eqn:guanhaos-fit}). Second, we observe
that the cloud's mass is indeed boosted, reaching values as high as
$x_c= 0.31 \, x$ at the peak, beyond the limit of about $0.1$ derived
in (\ref{eqn:limit-analytical}), though this requires a
long timescale ($\sim 5 \times 10^{12}$ yrs in Figure
\ref{fig:dm-accretion-R=0.5}), because typical dark matter density in
the environment gives only a somewhat low accretion rate.
Higher values of $x_c$ are possible, but require starting the
evolution at lower values of $x(0)$, so that the black hole drifts
along the superradiant threshold for a longer time, giving the cloud
more time to grow. In fact, from (\ref{eqn:x_cx}), the highest
possible cloud-to-black-hole mass ratio is $\mathcal R/(1-\mathcal R)$
and is formally attained in the $x(0) \rightarrow 0$ limit.
For the parameters chosen for Figure~\ref{fig:dm-accretion-super},
${\cal R} = 1/2$ which means the cloud-to-black-hole mass ratio
could in principle reach unity.
Because (\ref{eqn:guanhaos-fit}) is suppressed for
small values of the black hole mass, however, this would require
waiting for an exponentially longer time before reaching the peak. In the
right panel of Figure~\ref{fig:dm-accretion-R=0.5}, we show that the
evolution is faster for larger $x(0)$, but also that the cloud's mass
cannot grow as large.
The upshot is that the potentially large cloud
mass that could be attained by threshold drift is a bit academic, if
the source of ambient accretion is dark matter, due to its
modest density in typical environments, resulting in a long
timescale for cloud build-up.
However, we will see below the case of ambient accretion from a
baryonic disk, which gives a much shorter timescale.
\item[Case 3.] If ${\cal R} \equiv m'\mu/(m \mu') >1$, then from
equation \eqref{eqn:dxc-dm} we see that the
total (black hole + cloud) accretion rate is negative. This means that
the ``ambient accretion'' is actually not accreting at all, but is itself in
the regime of superradiance, extracting mass and angular momentum
from the combined black hole + cloud system, i.e.~it is more accurate to call
it ambient extraction. This is not bound
superradiance, as in
the case of the superradiance cloud, but unbound
superradiance (i.e., entries 1 and 3 respectively in Table~\ref{tab1}).
However, recall from \eqref{eqn:dx-dm} that the
black hole cannot lose mass while moving along the threshold. This is not
a contradiction: it means the presence of an external extraction of
mass (and angular momentum) induces the already existing cloud to lose
to the black hole more mass (and angular momentum) than what is
extracted. In other words, what we have is under-superradiance:
the system as a whole (black hole + cloud) loses mass; the cloud loses
mass faster than the whole system; the black hole gains
mass.
This case with ${\cal R} \equiv m'\mu/(m \mu') >1$ can be achieved for
instance by $m' = m = 1$ and $\mu > \mu'$.
If $\mu'=\mu$, this threshold drift by ambient extraction requires
$m'\ge2$ since $m$ has to be at least unity. Winding higher than unity
is not generically expected for a vortex formed out of chance
destructive interference in wave dark matter \cite{Hui:2020hbq}.
However, in Section \ref{sec:transition} we will see how this case
can be realized fairly naturally, by replacing the unbound
superradiance (from the ambient environment) with bound superradiance
(from another level $m' \ne m$ in the superradiance cloud).
\end{description}
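The three cases above can be summarized in a few lines. The sketch below (ours, for illustration; function names are hypothetical and ${\cal R}=1$ is excluded by assumption) evaluates $x_\star$ from \eqref{eqn:x_star-result}, reads off the sign of $\rd x_c/\rd t$ from \eqref{eqn:dxc/dt}, and returns the formal upper bound ${\cal R}/(1-{\cal R})$ on the cloud-to-black-hole mass ratio from \eqref{eqn:x_cx}:

```python
import math

# Illustrative helpers (ours) for Cases 1-3. All assume R != 1.

def x_star(R):
    # turning point of Eq. (x_star-result); exists only in Case 2
    assert 0.0 < R < 1.0
    return math.sqrt((-3.0 + 2.0 * R + math.sqrt(9.0 - 8.0 * R))
                     / (2.0 * (1.0 - R)))

def cloud_growth_sign(x, R):
    # sign of dx_c/dt along the threshold, from Eq. (dxc/dt)
    factor = (1.0 - x * x) / ((1.0 - R) * (1.0 + x * x) ** 2) - 1.0
    return 1 if factor > 0 else (-1 if factor < 0 else 0)

def max_cloud_ratio(R):
    # formal bound on x_c/x, attained as x(0) -> 0 in Case 2
    return R / (1.0 - R)
```

For ${\cal R}=1/2$ one finds $x_\star\approx0.486$ and a formal maximum mass ratio of unity, matching the numbers quoted for Figure~\ref{fig:dm-accretion-super}.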
\subsection{Level transition}
\label{sec:transition}
The evolution of mass and spin of an isolated black hole, due to the superradiance instability, is
initially dominated by the fastest-growing mode. Let us use $m$ to
denote the angular momentum of this mode.
This phase of the evolution, described in Section \ref{sec:single-evolution}, brings the black hole from its initial position in the Regge plane (black circle in Figure~\ref{fig:transition}) to the threshold of the instability region of the grown mode (blue circle) along a trajectory determined by
\begin{equation}
\frac{\rd(a\rs)/\rd t}{\rd\rs/\rd t}=\frac{m}\mu.
\end{equation}
The final parameters ($\rs',a')$ are linked to the initial ones ($\rs,a)$ by equation (\ref{eqn:analytic-formulae}).
If other modes (and the ambient environment) are neglected, the system
is now in stable equilibrium, with the black hole sitting on the level
$m$ superradiance threshold.
What happens when other modes (of the bound cloud) are taken into account? The exponential dependence of the instability rate on the angular momentum number, $\Im(\omega)\sim\alpha^{4\ell+5}$, ensures a large separation of timescales between the growth of the first and of the next levels. The next-fastest growing mode will thus `silently' grow without noticeably affecting the black hole parameters for a long time, until its mass eventually becomes comparable to that of the previously grown fastest mode.
Given this large separation of timescales, we can describe the
evolution of the system with the approach developed in
Section~\ref{sec:accretion+superradiance}, similar to ``Case 3'' of
Section \ref{sec:dm-accretion}. The main difference from Case 3 there
is this: the mass and spin extracted by the
next-fastest growing mode (let us denote its angular momentum by $m'$),
do not escape to infinity, but rather go into the $m'$ level of the bound superradiance cloud.
The black hole undergoes threshold drift, from the blue to the red
circle in Figure~\ref{fig:transition}, according to equation
(\ref{eqn:xc(x)}) with $\mu'=\mu$.
The drift is of the under-superradiance type, just as in
Case 3 before, such that the level $m$ part of the cloud
loses mass to the black hole, while the level $m'$ part gains
mass from the black hole, and the black hole as a whole
gains mass. The threshold drift stops when level $m$
is completely depleted, indicated by the position of the red circle
in Figure~\ref{fig:transition}. The threshold drift from blue circle to red
is what we call {\it level transition}.\footnote{{Note that this is different from the ``atomic level transition'' that accompanies the emission of gravitational waves, see e.g.~\cite{Arvanitaki:2014wva}.}} The superradiance cloud
switches from being dominated by level $m$ to being dominated by level
$m'$.
Once level $m$ is emptied out, level $m'$ continues its
superradiance growth on its own. Without level $m$, there is no
longer the glue to keep the black hole stuck at the level $m$
threshold. Thus, the black hole moves from red to green circle.
This is just the standard single mode Regge trajectory. The green
circle is where the trajectory hits the level $m'$ threshold.
The trajectory is determined by
\begin{equation}
\frac{\rd(a\rs)/\rd t}{\rd\rs/\rd t}=\frac{m'}\mu.
\label{eqn:trajectory-m'}
\end{equation}
The whole evolution from the initial black circle to the final green
circle is thus a zig-zag in the Regge plane.
The locations of all the colored circles in the Regge plane can be
written down analytically given the initial (black) circle.
The blue circle is given by (\ref{eqn:analytic-formulae}).
The green circle is given by the same expression with
$m \rightarrow m'$, $a' \rightarrow a''$, $r_s' \rightarrow r_s''$:
\begin{equation}
\frac{\mu\rs''}{m'}=\frac{1-\sqrt{1-(2(\mu\rs/m')(1-\mu a/m'))^2}}{2(\mu\rs/m')(1-\mu a/m')},\qquad \frac{a''}{\rs''}=\frac{\mu\rs}{m'}\Bigl(1-\frac{\mu a}{m'}\Bigr).
\end{equation}
Notice how, because total mass and angular momentum are conserved,
the green circle is no different from where the black hole would end
if there were only level $m'$ superradiance all along.
The intersection of such a Regge trajectory with the level $m$
threshold gives the location of the red circle.
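The green-circle expressions are straightforward to evaluate. The sketch below (ours, for illustration; the function name is hypothetical) returns $\mu\rs''/m'$ and $a''/\rs''$ given the initial $\mu\rs$ and $\mu a$:

```python
import math

# Illustrative evaluation (ours) of the closed-form "green circle": the
# endpoint of pure level-m' superradiant growth, given the initial
# mu*r_s and mu*a of the black hole.

def green_circle(mu_rs, mu_a, m_prime):
    u = (mu_rs / m_prime) * (1.0 - mu_a / m_prime)   # final a''/r_s''
    mu_rs_final = m_prime * (1.0 - math.sqrt(1.0 - 4.0 * u * u)) / (2.0 * u)
    return mu_rs_final, u
```

One can check that the output sits on the $m'$ threshold, $a''/\rs''=x''/(1+x''^2)$ with $x''=\mu\rs''/m'$, and that $a\rs-(m'/\mu)\rs$ is conserved, as required by the Regge trajectory \eqref{eqn:trajectory-m'}.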
All these conclusions can be verified by computing the time evolution
of the black hole parameters with the two-mode model developed in
\cite{Ficarra:2018rfu}. The black line in Figure~\ref{fig:transition}
is the result of a numerical integration of equations (36) of
\cite{Ficarra:2018rfu} for the case with $\ell=m=1$ and $\ell'=m'=2$,
with small initial seeds for both modes. Its zig-zag shape is evident,
as well as its perfect match with the position of the circles, which
are computed with our analytical description of the evolution. The
results as a function of time are reported in Figure~\ref{fig:transition-in-time}.
\subsection{Baryonic accretion}
\label{sec:baryonic}
A natural kind of ambient accretion around a black hole is due to a
baryonic accretion disk. Its interplay with the superradiance was
considered in \cite{Brito:2014wla}. The results of
\cite{Brito:2014wla} showed that, for a significant portion of its
evolution, the black hole evolved along the superradiance threshold,
in the sense we described in
Section~\ref{sec:accretion+superradiance}. In this section, we show
how their results can be understood in a simple way by considering (\ref{eqn:accretion-effective-x-evolution}) and (\ref{eqn:over-under-superradiant}).
The accretion rate considered in \cite{Brito:2014wla} is a fraction $f_{\rm Edd}$ of the Eddington rate\footnote{Here, $\sigma_{\rm T}$ is the Thomson cross section and $m_p$ is the proton's mass.} \cite{Barausse:2014tra,Lynden-Bell:1969gsv,Soltan:1982vf},
\begin{equation}
\dot M_{\rm acc}=\frac{f_{\rm Edd}}{\tau_{\rm Sal}}M_{\rm BH},\qquad \tau_{\rm Sal}=\frac{2\Mpl^2\sigma_{\rm T}}{m_p}=\SI{4.5e+7}{yrs},
\label{eqn:dotMacc_baryonic}
\end{equation}
with the angular-momentum-to-mass accretion ratio given by that of particles on the ISCO \cite{Bardeen:1970zz},
\begin{equation}
\frac{\dot J_{\rm acc}}{\dot M_{\rm acc}}=\frac{\rs}{3\sqrt3}\frac{1+2\sqrt{3r_{\rm ISCO}/GM-2}}{\sqrt{1-2GM/(3r_{\rm ISCO})}},
\label{eqn:dotJacc_baryonic}
\end{equation}
where $r_{\rm ISCO}$ is the radius of the innermost stable circular
orbit. It is straightforward to implement these formulae in the
effective equations derived in
Section~\ref{sec:accretion+superradiance}.
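For concreteness, the two ingredients of this accretion model can be written out as follows (a sketch of ours; function names are illustrative, and the angular-momentum ratio is the displayed specific angular momentum of ISCO particles per unit accreted mass-energy):

```python
import math

# Illustrative transcription (ours) of the baryonic accretion model:
# Eddington-limited mass flux (dotMacc_baryonic) and the ISCO
# angular-momentum-to-mass ratio (dotJacc_baryonic). r_isco in units of GM.

TAU_SALPETER_YRS = 4.5e7

def mdot_edd(M_bh, f_edd=0.01):
    # returns dM/dt in the same mass units as M_bh, per year
    return f_edd * M_bh / TAU_SALPETER_YRS

def jdot_over_mdot(r_isco_over_GM, r_s):
    x = r_isco_over_GM
    return (r_s / (3.0 * math.sqrt(3.0))) \
        * (1.0 + 2.0 * math.sqrt(3.0 * x - 2.0)) / math.sqrt(1.0 - 2.0 / (3.0 * x))
```

For a Schwarzschild hole ($r_{\rm ISCO}=6GM$) this gives $\dot J_{\rm acc}/\dot M_{\rm acc}=\sqrt{27/8}\,\rs\approx1.84\,\rs$, and with $f_{\rm Edd}=0.01$ the black hole mass e-folds on a timescale $\tau_{\rm Sal}/f_{\rm Edd}=4.5\times10^9$ yrs, consistent with the slow drift of Figure~\ref{fig:bar-accretion}.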
It should be noted that the above accretion model
has certain limitations. For instance, $f_{\rm Edd}$ is likely
a function of time. Also, we expect modifications to
$\dot J_{\rm acc} / \dot M_{\rm acc}$ as the black hole gets
spun up close to extremality (\cite{Thorne:1974ve} gave an upper bound of
$a_*\sim 0.998$).
We solve (\ref{eqn:accretion-effective-x-evolution}) and
(\ref{eqn:over-under-superradiant}), with $f_{\rm Edd} = 0.01$,
an initial black hole mass of $x=0.2$ and cloud mass of $x_c = 0.1 \, x$.
(Recall $x \equiv \mu r_s/m$, $x_c \equiv M_c x / M_{\rm BH}$; the
precise value for $\mu/m$ does not matter once one expresses
everything in terms of $x$ and $x_c$.) The results are shown in Figure~\ref{fig:bar-accretion}. The threshold drift associated with baryonic accretion is like that depicted earlier in Figure~\ref{fig:dm-accretion-super}, except that the entire drift is in the over-superradiance regime: the cloud's mass in Figure~\ref{fig:bar-accretion} increases with
time.
In other words, there is no under-superradiance portion, and over-superradiance continues all the way until the black hole is close to extremality.\footnote{\label{cloudcorrections} Close to the end, at high masses and spins,
the cloud's mass can be seen decreasing slightly.
This is an artifact of the $\omega \approx \mu$ approximation
used in deriving (\ref{eqn:accretion-effective-x-evolution}) and
(\ref{eqn:over-under-superradiant}). We have checked that including
higher-order corrections in $\alpha$ goes in the direction of
restoring over-superradiance at the very end of the cloud's
evolution. One should also keep in mind, as remarked above,
additional effects could prevent the black hole from reaching extremality.
}
The smaller the initial mass of the black hole, the larger the $x_c/x$ ratio can grow. We are able to confirm the results of \cite{Brito:2014wla}, finding a maximum value of $x_c/x$ of about 36\%.
\section{Discussion}
\label{sec:discuss}
In summary, we have explored how superradiance (which extracts mass
and angular momentum from a black hole) could work in tandem with
accretion (which donates both to the black hole).
Superradiance, because of its ability to build up a substantial cloud
around the black hole, is often more efficient than accretion from the
ambient environment, see discussion at the end of Section
\ref{sec:single-evolution}, around Eqs.~(\ref{Mdot0})-(\ref{Mdot2})
and Eq.~(\ref{eqn:dotMacc_baryonic}).
This means the black hole will generically evolve
towards the superradiance threshold in the Regge (black hole spin
versus mass) plane. Once sufficiently close to the threshold, the
superradiance rate is reduced to an extent that accretion can
compete. The subsequent evolution of the black hole spin and mass,
a climb along the threshold we call threshold drift, is the focus of
this paper. We provide simple evolution equations describing the
climb: Eqs.~(\ref{eqn:dx-dm}) and (\ref{eqn:dxc/dt}).
We give an analytic relation between the black hole mass and superradiance
cloud mass (\ref{eqn:xc(x)}), and a formula for the end-point of
over-superradiance (\ref{eqn:x_star-result}).
Of the possible scenarios, perhaps the most interesting ones are cases where
$\mu/m$ (mass-to-angular-momentum ratio for the superradiance scalar)
is less than $\dot M_{\rm acc}/\dot J_{\rm acc}$ (mass-to-angular-momentum accretion rate
ratio), assuming both have the same sign as the black hole spin.
The black hole {\it gains} mass and angular momentum during
the threshold drift, even as the superradiance cloud does {\it the
same}. Effectively, the ambient accretion serves to feed the
superradiance cloud {\it via} the black hole. We refer to this process as
over-superradiance. This way, the
superradiance cloud can acquire a mass that exceeds the standard
maximum of $10\% M_{\rm BH}$ from superradiance
alone without accretion. We have considered two separate examples of
ambient accretion: one is accretion of the surrounding dark matter
from a wave dark matter vortex (with $\dot J_{\rm acc}/\dot M_{\rm
acc} = m'/\mu'$ and $m'=1$; Case 2 of Section \ref{sec:dm-accretion}); the other is the accretion of baryons
from a disk (Section \ref{sec:baryonic}). The latter is more efficient, and the highest
superradiance cloud mass we find is about $35 \%$ that of the black
hole, consistent with the results of \cite{Brito:2014wla}.
Dark matter accretion can in principle achieve an even higher
cloud mass (comparable to that of the black hole), with the caveat that the accretion rate is slow and the cloud
build-up takes longer than a Hubble time (see Eqs.~\ref{Mdot1} and
\ref{Mdot2}). The long timescale for dark matter accretion is
due both to the moderate dark matter density in typical environments,
and to the suppression of accretion by the angular momentum barrier
(\ref{eqn:guanhaos-fit}). One could get around the angular
momentum suppression by increasing the dark matter particle mass
$\mu'$ (while keeping the superradiance scalar mass $\mu$ fixed).
But it can be shown the resulting cloud-to-black-hole mass ratio
is diminished (\ref{eqn:x_cx}), due to the smaller ${\cal R} \equiv m'\mu/(m\mu')$.
Dark matter accretion can proceed at a much faster rate for spherical
accretion, which is the example depicted in Figure
\ref{fig:dm-accretion-R=0} (Case 1 in Section \ref{sec:dm-accretion}).
The process illustrated there is under-superradiance, where the cloud
mass shrinks while both the cloud and the ambient accretion feed the
black hole. Perhaps most interesting is the fact that the black hole
spins up during the threshold drift (it has to, by the second law),
despite the fact that the ambient accretion gives mass but not angular
momentum to the black hole. The spin-up of the black hole is entirely due to the angular
momentum from the diminishing cloud.
The possibility of a substantial superradiance cloud raises a number
of interesting questions. (1)~The cloud's own gravity
cannot be ignored, that is to say, the geometry is no longer
completely dominated by the black hole. How will this affect the
dynamics and evolution of the cloud? It is known that a
self-gravitating, rotating boson cloud (without a black hole) is
unstable on short timescales
\cite{Sanchis-Gual:2019ljs,Dmitriev:2021utv}.
How would accounting for both the gravity of the black hole and
that of the cloud modify the story? As one dials up the
cloud-to-black-hole mass ratio, when will the instability
observed by \cite{Sanchis-Gual:2019ljs,Dmitriev:2021utv} become relevant? (See also \cite{Cardoso:2022nzc} for a recent numerical solution of the accretion process of a boson star by a black hole.)
(2)~An increased mass of the cloud
will, in general, enhance nonlinear effects such as
self-interaction and the gravitational backreaction on the geometry. At a minimum, such nonlinear effects would change the profile of
the cloud and, possibly, the associated flux through the horizon. The relative importance
of the cloud's self-gravity versus the black hole's gravity
is obviously determined by $M_{c}/M_{\rm BH}$. For
self-interaction, the relevant self-interaction to gravity ratio
is $\lambda \Phi^4 / (\mu^2 \Phi^2 r_s/r)$
where $r \sim 1/(r_s \mu^2)$ is the cloud size, and $\lambda$ is the
self-coupling strength (for an axion, $\lambda
\sim \mu^2/F^2$ where $F$ is the axion decay constant).
This ratio is roughly $\alpha^2 (M_{c}/M_{\rm BH}) (M_{\rm
Pl}^2/F^2)$. Moreover, self-interaction is able to shut down the
growth of subdominant superradiant modes via level mixing, as
explained in \cite{Arvanitaki:2010sy}, and also trigger scalar
emission \cite{Baryakhtar:2020gao}.
It would be useful to explore these nonlinear effects further \cite{Gruzinov:2016hcq}, in
light of the possibility of a substantial cloud mass, and thus cloud density.
(3)~In a binary setting, if one (or
both) of the binary components has a substantial cloud, the inspiral
dynamics can be heavily affected.
For example, in cases of extreme
mass ratios, a small compact object can move through the cloud of the
big black hole.
A more massive cloud will lead to enhanced dynamical friction,
accretion and
orbital resonances \cite{Hui:2016ltb,Baumann:2018vus,Zhang:2019eid,Baumann:2021fkf,Baumann:2022pkl, Traykova:2021dua,Buehler:2022tmr,Boudon:2022dxi,Vicente:2022ivh}.
(4)~The threshold drift phenomenon implies that black
holes can experience interesting evolution along superradiance
thresholds. Under what circumstances is such an evolution observable in
real time, such as in Event Horizon Telescope data?
These questions deserve further investigation. We hope to do so
in the near future.
\section*{Acknowledgements}
We thank Dan Kabat and Ted Jacobson for useful discussions.
LH and GS are supported by the DOE DE-SC0011941 and a
Simons Fellowship in Theoretical Physics.
AL is supported in part by the Croucher Foundation and DOE grant de-sc/0007870.
ET is partly supported by the Italian MIUR under contract 2017FMJFMW (PRIN2017).
\appendix
\section{Scalar hair around Kerr black holes}
\label{app1}
In this appendix we collect relevant facts about scalar hair solutions around a Kerr black hole. We start by presenting the full exact solution to the Klein-Gordon equation, followed by analytic estimates for the particle and wave regimes. Finally we present numerical results to support the estimates in \eqref{eqn:guanhaos-fit}.
\subsection{Klein-Gordon equation in a Kerr background} \label{KG kerr}
The exact solutions to the Klein-Gordon equation in a Kerr-Newman background are constructed in \cite{Vieira:2014waa}. Here we restrict ourselves to the Kerr case ($Q=0$), with metric \eqref{kerr}. The Klein-Gordon equation for a (complex) scalar field $\Phi$ with mass $\mu$ in Boyer-Lindquist coordinates reads
\begin{align}\label{KG Kerr}
0=& \, \bigg\{ \frac{1}{\Delta} \Big[ (r^2+a^2)^2-\Delta a^2 \sin^2 \theta\Big]\frac{\partial^2}{\partial t^2}-\frac{\partial}{\partial r} \bigg(\Delta \frac{\partial}{\partial r}\bigg)-\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\bigg( \sin\theta\frac{\partial}{\partial\theta}\bigg)\nn\\
&-\frac{1}{\Delta\sin^2\theta}(\Delta-a^2\sin^2\theta)\frac{\partial^2}{\partial\phi^2}+\frac{2a}{\Delta}\Big[(r^2+a^2)-\Delta\Big]\frac{\partial^2}{\partial t\,\partial\phi}+\mu^2 \varrho^2 \bigg\}\Phi\,,
\end{align}
where $\varrho^2 \equiv r^2 + a^2 {\,\rm cos\,}^2\theta$ and $\Delta \equiv r^2 - rr_s + a^2 = (r-r_+)(r-r_-)$ with $r_\pm \equiv
r_s/2 \pm \sqrt{(r_s/2)^2 - a^2}$.
To solve \eqref{KG Kerr}, we make the ansatz
\begin{align}
\Phi= e^{-i\omega t} e^{im\phi} S(\theta) R(r)\,.
\end{align}
Substituting this into \eqref{KG Kerr} leads to
\begin{align}
0=& \, \frac{1}{\Delta} \Big[ (r^2+a^2)^2-\Delta a^2 \sin^2 \theta\Big](-\omega^2)-\frac{1}{R}\frac{\rd}{\rd r} \bigg(\Delta \frac{\rd R}{\rd r}\bigg)-\frac{1}{S}\frac{1}{\sin\theta}\frac{\rd}{\rd\theta}\bigg( \sin\theta\frac{\rd S}{\rd \theta}\bigg)\nn\\
&-\frac{1}{\Delta\sin^2\theta}(\Delta-a^2\sin^2\theta)(-m^2)+\frac{2a}{\Delta}\Big[(r^2+a^2)-\Delta\Big](-i\omega)(im)+\mu^2 \varrho^2 .
\end{align}
We isolate the $r$- and $\theta$-dependent terms and pick the separation constant $\lambda$ such that the angular and radial equations are \cite{Teukolsky1973PerturbationsOA}
\begin{align}\label{radial}
0=\frac{1}{\Delta}\frac{\rd}{\rd r} \bigg(\Delta \frac{\rd R}{\rd r}\bigg)+\frac{1}{\Delta}\bigg[ \frac{1}{\Delta}\Big(\omega(r^2+a^2)-am\Big)^2-(\mu^2 r^2+\lambda)\bigg] R
\end{align}
and
\begin{align}\label{angular}
0=\frac{1}{\sin\theta}\frac{\rd}{\rd\theta}\bigg( \sin\theta\frac{\rd S}{\rd \theta}\bigg)+\bigg[-\bigg( a \omega \sin\theta-\frac{m}{\sin\theta}\bigg)^2-\mu^2 a^2 \cos^2\theta+\lambda\bigg]S\,,
\end{align}
respectively.
\subsubsection{Angular dependence}
Changing variable $z=\cos \theta$, the angular equation \eqref{angular} becomes
\begin{align}\label{spherical eq}
\frac{\rd}{\rd z}\bigg( (1-z^2)\frac{\rd S}{\rd z}\bigg)+\Big( \Lambda_{\ell m} +g^2 (1-z^2)-\frac{m^2}{1-z^2}\Big)S=0 \, ,
\end{align}
with
\begin{align}
\Lambda_{\ell m}(g)=\lambda_{\ell m}+2a\omega m-\mu^2 a^2\,,\qquad g^2=a^2 (\mu^2-\omega^2)=-a^2 \bar{k}^2 \,.
\end{align}
The labels $(\ell,m)$ correspond to successive solutions and eigenvalues $\Lambda_{\ell m}(g)$ to \eqref{spherical eq}. Here $m$ is an integer such that $-\ell\leq m\leq \ell$.
For $g^2=0$, i.e.~$\omega=\mu$ or $a=0$, \eqref{spherical eq} reduces to the associated Legendre equation, in which case $\Lambda_{\ell m}=\ell (\ell+1)$. The full angular dependence in such case is the spherical harmonics $Y_{\ell m}(\theta,\phi)\propto e^{i m\phi}P_\ell^{m}(\cos\theta)$, where $P_\ell^{m}(z)$ is the associated Legendre polynomial of the first kind.
The solutions to \eqref{spherical eq} for $g^2>0$ ($g^2<0$) are known as prolate (oblate) angular spheroidal wave functions, which we denote with $PS_{\ell m}(g,z)$, as in Mathematica \cite{reference.wolfram_2021_spheroidalps}. The basic properties of $PS_{\ell m}(g,z)$ can be found in e.g.~\cite{flammer_spheroidal_1957}. The parameter $g^2$, which has been taken to be zero throughout this paper, controls the deviation from $P_\ell^{m}(z)$. If $g^2$ is small but nonzero, we can include small corrections with the series expansions of $PS_{\ell m}(g,z)$ and $\Lambda_{\ell m}$ in powers of $g^2$, which to $O(g^2)$ read
\begin{multline}\label{small g PS}
PS_{\ell m}(g,z)\\
=P_\ell^m(z)+g^2 \left(\frac{(\ell-m+1) (\ell-m+2) P_{\ell+2}^m(z)}{2 (2\ell+1) (2\ell+3)^2}-\frac{(\ell+m-1) (\ell+m) P_{\ell-2}^m(z)}{2 (2\ell-1)^2 (2\ell+1)}\right)+O(g^4)
\end{multline}
and
\begin{align}
\Lambda_{\ell m}(g^2) =\ell (\ell+1)-g^2\frac{2 \left(\ell^2+\ell+m^2-1\right)}{(2\ell-1) (2\ell+3)}+O(g^4)\,.
\end{align}
In this paper, for fixed $g$ we adopt the same normalization as in Mathematica \cite{reference.wolfram_2021_spheroidalps}:\footnote{For different $g$ and $g'$, $PS_{\ell,m}(g,z)$ and $PS_{\ell,m}(g',z)$ are not orthogonal. However, the non-orthogonality is small if both $g$ and $g'$ are small, as we have been assuming in this paper.}
\begin{align}\label{angular norm}
\int_{-1}^1 dz PS_{\ell m}(g,z)PS_{\ell',m'}(g,z)=\delta_{\ell,\ell'}\delta_{m,m'}\frac{2(\ell+m)!}{(2\ell+1)(\ell-m)!}\,,
\end{align}
that is, we normalize $PS_{\ell m}(g,z)$ in the same way as $P_\ell^m(z)$. Therefore, our unit-normalized angular solution is
\begin{align}
S_{\ell m}(\theta) = \sqrt{\frac{(2\ell+1)(\ell-m)!}{2(\ell+m)!}}PS_{\ell m}(g,\cos \theta) \approx \sqrt{\frac{(2\ell+1)(\ell-m)!}{2(\ell+m)!}} P_\ell^m(\cos\theta)+O(g^2)\,.
\end{align}
Clearly as $g\to 0$, $e^{im\phi} S_{\ell m}(\theta)$ reduces to the usual spherical harmonics $Y_{\ell m}(\theta ,\phi)$.
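As an aside (not part of the derivation), the $g=0$ normalization \eqref{angular norm} can be checked numerically, since in this limit $PS_{\ell m}$ reduces to the associated Legendre polynomial $P_\ell^m$. The following Python sketch (using \texttt{scipy}; all values illustrative) verifies $\int_{-1}^1 [P_\ell^m(z)]^2\,dz = 2(\ell+m)!/[(2\ell+1)(\ell-m)!]$:

```python
# Numerical check of the g -> 0 normalization: at g = 0 the spheroidal
# functions reduce to associated Legendre polynomials, with
# int_{-1}^{1} [P_l^m(z)]^2 dz = 2 (l+m)! / [(2l+1) (l-m)!].
from math import factorial
from scipy.special import lpmv
from scipy.integrate import quad

def legendre_norm(l, m):
    """Numerically integrate [P_l^m(z)]^2 over z in [-1, 1]."""
    val, _ = quad(lambda z: lpmv(m, l, z) ** 2, -1.0, 1.0)
    return val

def expected_norm(l, m):
    """Closed-form normalization quoted in the text."""
    return 2.0 * factorial(l + m) / ((2 * l + 1) * factorial(l - m))

for l, m in [(1, 0), (2, 1), (3, 2)]:
    assert abs(legendre_norm(l, m) - expected_norm(l, m)) < 1e-10
```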
\subsubsection{Radial dependence}
We rewrite \eqref{radial} as
\begin{align}
0=\frac{\rd^2R}{\rd r^2}+\bigg(\frac{1}{r-r_+}+\frac{1}{r-r_-}\bigg) \frac{\rd R}{\rd r}+\frac{1}{\Delta}\bigg[ \frac{1}{\Delta}\Big(\omega(r^2+a^2)-am\Big)^2-(\mu^2 r^2+\lambda_{\ell m} )\bigg] R \, ,
\end{align}
which has singularities at $r=r_\pm$ and $r=\infty$. Making the change of variable
\begin{align}\label{change var}
x=\frac{r-r_+}{r_- -r_+}
\end{align}
puts the equation into the form
\begin{align}\label{radial x}
0=\frac{\rd^2 R}{\rd x^2}+\bigg(\frac{1}{x}+\frac{1}{x-1} \bigg)\frac{\rd R}{\rd x}+\bigg(A^2_1 +\frac{A_2}{x}+\frac{A_3}{x-1} +\frac{A^2_4}{x^2}+\frac{A^2_5}{(x-1)^2} \bigg)R \, ,
\end{align}
where
\begin{gather}
A_1 =\bar{k}(r_+ -r_-)\,,\qquad A_2 =\frac{2 a^2 (m-2 a \omega )^2}{(r_+ -r_-)^2}- \left(\bar{k}^2+ \omega ^2\right) r_+^2+\lambda_{\ell m} \, , \nn\\
A_3 =-\left[\frac{2 a^2 (m-2 a \omega )^2}{(r_+ -r_-)^2}- \left(\bar{k}^2+ \omega ^2\right) r_-^2+\lambda_{\ell m}\right] \, , \nn \\
A_4 =\frac{ r_+ r_s \omega- m a }{r_+ -r_-}\,,\qquad A_5 =\frac{r_- r_s \omega- m a }{r_+ -r_-} \,.
\end{gather}
Here we recall $\bar{k}$ is defined by $\omega^2 = \bar{k}^2+\mu^2$. Note that these expressions break down when the black hole is exactly extremal so that $r_+ =r_- $. Throughout this paper we focus on the case where the black hole is not exactly extremal.
To proceed, we introduce a new function $R(x)=e^{iA_1 x}(-x)^{i A_4}(1-x)^{i A_5}f(x)$ to bring the equation into the form
\begin{align}\label{Heun eq}
f''(x)+\bigg( \alpha+\frac{1+\beta}{x}+\frac{1+\gamma}{x-1}\bigg)f'(x)+\bigg( \frac{C}{x}+\frac{D}{x-1}\bigg) f(x)=0
\end{align}
with
\begin{gather}
\alpha = 2iA_1=2i\bar{k}(r_+ -r_-)\,,\qquad \eta=-A_2 =-\left[ \frac{2 a^2 (m-2 a \omega )^2}{(r_+ -r_-)^2}- \left(\bar{k}^2+ \omega ^2\right) r_+^2+\lambda_{\ell m} \right] \, , \nn\\
\delta = A_3+A_2=-r_s \left(r_+-r_-\right) \left(\bar{k}^2+ \omega ^2\right)\,,\qquad \beta = 2i A_4=2i \frac{ r_+ r_s \omega- m a }{r_+ -r_-}\, , \nn\\
\gamma=2i A_5=2i \frac{ r_- r_s \omega- m a }{r_+ -r_-} \, ,
\end{gather}
and
\begin{align}
C=\frac{1}{2}-\frac{(1+\beta)(1+\gamma-\alpha)}{2}-\eta,\quad D=-\frac{1}{2}+\frac{(1+\beta+\alpha)(1+\gamma)}{2}+\delta+\eta\,.
\end{align}
The equation \eqref{Heun eq} is known as the confluent Heun equation,\footnote{Some properties of the confluent Heun equation and its solutions can be found in e.g.~\cite{ronveaux_heuns_1995}.} with linearly independent solutions
\begin{align}
\text{HeunC}\left(\alpha,\beta,\gamma,\delta,\eta;x\right)\quad \text{ and }\quad (-x)^{-\beta}\text{HeunC}\left(\alpha,-\beta,\gamma,\delta,\eta;x\right)
\end{align}
normalized so that $\text{HeunC}\left(\alpha,\beta,\gamma,\delta,\eta;0\right)=1$. Therefore, we conclude that the full radial function is
\begin{align}\label{full radial x}
R_{\omega\ell m}(x)=& \, e^{\frac{1}{2}\alpha x}(-x)^{\beta_m/2}(1-x)^{\gamma_m/2}\Big[ C_1 \text{HeunC}\left(\alpha,\beta_m,\gamma_m,\delta,\eta_{\ell m};x\right)\nn\\
&+C_2 (-x)^{-\beta_m}\text{HeunC}\left(\alpha,-\beta_m,\gamma_m,\delta,\eta_{\ell m};x\right)\Big].
\end{align}
\subsubsection{Full solution and boundary condition at the horizon}
Putting everything together and restoring the original radial coordinate, the full solution is
\begin{align}\label{full sol}
\Phi_{\omega\ell m}(t,r,\theta,\phi)=e^{-i\omega t}e^{i m\phi}S_{\ell m}(\theta) R_{\omega\ell m}(r)
\end{align}
with
\begin{align}\label{full radial}
R_{\omega\ell m}(r)=& \, \, e^{-i\bar{k}(r-r_+)}\left(-\frac{r-r_+}{r_- -r_+}\right)^{\frac{\beta_m}{2}}\left(-\frac{r-r_-}{r_- -r_+}\right)^{\frac{\gamma_m}{2}}\bigg[ C_1 \text{HeunC}\left(\alpha,\beta_m,\gamma_m,\delta,\eta_{\ell m};\frac{r-r_+}{r_- -r_+}\right)\nn\\
&+C_2 \left(-\frac{r-r_+}{r_- -r_+}\right)^{-\beta_m} \text{HeunC}\left(\alpha,-\beta_m,\gamma_m,\delta,\eta_{\ell m};\frac{r-r_+}{r_- -r_+}\right)\bigg]\,.
\end{align}
Now we would like to pick the solution that is purely ingoing at the outer horizon $r=r_+$. This can be thought of as the solution that has a constant phase along an infalling null curve. As $r\to r_+$, the confluent Heun functions in \eqref{full radial} tend to one and the full solution approaches
\begin{equation}\label{near hor radial}
\Phi_{\omega\ell m}(t,r\to r_+,\theta,\phi)
=S_{\ell m}(\theta)\Big( C_1 e^{-i\omega (t-r^*)}e^{-i\omega r_+}e^{i m\phi_\text{out}}+C_2 e^{-i\omega (t+r^*)}e^{i\omega r_+}e^{i m\phi_\text{in}}\Big) \,.
\end{equation}
Here we have introduced the infalling and outgoing Eddington-Finkelstein coordinates
\begin{align}\label{Change of var}
t_\text{in} & = t +r^*\,, &\phi_\text{in} =\phi+ \frac{a}{r_+-r_-} \ln \frac{r-r_+}{r-r_-} \, , \\
t_\text{out} & = t -r^*\,, & \phi_\text{out} =\phi- \frac{a}{r_+-r_-} \ln \frac{r-r_+}{r-r_-} \, ,
\end{align}
with the tortoise coordinate $r^*$ defined by
\begin{align}\label{tortoise}
r^* =
r+\frac{r_++r_-}{r_+-r_-} \left[ r_+ \ln \left( -\frac{r-r_+}{r_- -r_+}\right) -r_- \ln \left( -\frac{r-r_-}{r_- -r_+}\right) \right] \, .
\end{align}
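As a numerical aside (not part of the derivation; Python sketch in units $G=c=1$ with illustrative parameter values), one can check that the expression above satisfies the standard defining relation of the Kerr tortoise coordinate, $\rd r^*/\rd r = (r^2+a^2)/\Delta$, using $r_+ + r_- = r_s$ and $r_+ r_- = a^2$:

```python
# Finite-difference check that dr*/dr = (r^2 + a^2)/Delta for the
# tortoise coordinate written above (valid for r > r_+).
import math

def rstar(r, rs, a):
    rp = rs / 2 + math.sqrt((rs / 2) ** 2 - a ** 2)
    rm = rs / 2 - math.sqrt((rs / 2) ** 2 - a ** 2)
    # (r_+ + r_-)/(r_+ - r_-) = r_s/(r_+ - r_-); log arguments positive for r > r_+
    return r + (rs / (rp - rm)) * (
        rp * math.log((r - rp) / (rp - rm)) - rm * math.log((r - rm) / (rp - rm))
    )

def drstar_exact(r, rs, a):
    rp = rs / 2 + math.sqrt((rs / 2) ** 2 - a ** 2)
    rm = rs / 2 - math.sqrt((rs / 2) ** 2 - a ** 2)
    return (r ** 2 + a ** 2) / ((r - rp) * (r - rm))   # (r^2+a^2)/Delta

rs, a, r, h = 2.0, 0.8, 3.0, 1e-6
fd = (rstar(r + h, rs, a) - rstar(r - h, rs, a)) / (2 * h)
assert abs(fd - drstar_exact(r, rs, a)) < 1e-6
```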
Now, infalling null curves are those with constant $v=t+r^*$. Therefore, to impose the purely infalling boundary condition, we set $C_1=0$.
To summarize, the solution for $\Phi$ with the correct infalling condition at the horizon is given by \eqref{full sol} with
\begin{equation}
\begin{split}
\label{ang rad sol}
S_{\ell m}(\theta) &= \sqrt{\frac{(2\ell+1)(\ell-m)!}{2(\ell+m)!}}PS_{\ell m}(g,\cos \theta) \\
R_{\omega\ell m}(r) &= |R_{\omega\ell m}(r_+) | e^{-i\bar{k}(r-r_+)}\left( -\frac{r-r_+}{r_- -r_+}\right) ^{-\frac{\beta_m}{2}}\left( -\frac{r-r_-}{r_- -r_+}\right) ^{\frac{\gamma_m}{2}}
\\
&\qquad \times\text{HeunC}\left( \alpha,-\beta_m,\gamma_m,\delta,\eta_{\ell m};\frac{r-r_+}{r_- -r_+}\right)\,.
\end{split}
\end{equation}
Note that, far away from the black hole, the geometry is approximately flat and spatial gradients of the scalar field can be neglected, so that the field density for a single mode takes the form
\begin{align}
\rho_{\mu\ell m} (r,\theta ,\phi)= -T^t{}_t\approx |\partial_t\Phi_{\mu\ell m} |^2+\mu^2 |\Phi_{\mu\ell m} |^2 =2\mu^2 |\Phi_{\mu\ell m} |^2 \, ,
\end{align}
where we are taking $\omega\approx \mu$. The field amplitude $|R_{\mu\ell m}(r_+) |$ at the horizon in \eqref{ang rad sol} is then related to the {\it angular average} $\bar\rho_{i,\ell m}$ of the field density at $r=r_i\gg r_s$ through
\begin{align}
\bar\rho_{i,\ell m}&=2\mu^2 \int_{S^2} |\Phi_{\mu\ell m}(r_i)|^2 =2\mu^2 |R_{\mu\ell m}(r_i)|^2\nn\\
&=2\mu^2 |R_{\mu\ell m}(r_+)|^2\left|\text{HeunC}\left( \alpha,-\beta_m,\gamma_m,\delta,\eta_{\ell m};\frac{r_i-r_+}{r_- -r_+}\right)\right|^2\,.
\end{align}
\subsection{The $r\to \infty$ limit}
In this section we study the large distance behavior of the radial solution \eqref{ang rad sol}. To this end, we go back to the radial equation \eqref{radial x} and we write
\begin{equation}\label{large x rewrite}
R(x)= e^{\pm iA_1x} (1-x)^{-\frac{1}{2}}(-x)^{\mp i \frac{B}{2}} F(x) \, ,
\end{equation}
with
\begin{equation}
B =\sqrt{4 (A_3+ A_4^2+ A_5^2)-1} \,.
\end{equation}
The $\pm$ signs correspond to the two linearly independent solutions. In terms of $F$ in \eqref{large x rewrite}, the radial equation \eqref{radial x} in the large-$x$ limit reads
\begin{equation}\label{1F1 eq}
x F'' + (c_\pm \pm 2 i A_1 x )F' \pm 2 i A_1 a_\pm F= 0 \qquad (x\gg 0) \, ,
\end{equation}
where
\begin{equation}
a_\pm= \pm\frac{A_2+A_3}{2 i A_1}+\frac{c_\pm}{2} \, , \qquad \qquad
c_\pm = 1\mp i B \,.
\end{equation}
When $ \bar{k}^2 \neq 0$, the solution to \eqref{1F1 eq} is exactly the confluent hypergeometric function:
\begin{equation}
_1F_1(a_\pm,c_\pm,\mp i 2 A_1 x) \,.
\end{equation}
The general large-$x$ radial solution is then a linear combination of the two $\pm$ solutions:
\begin{equation}\label{general large r}
R (x) \approx \tilde{C_1} \, e^{i A_1 x} (-x)^{\frac{-1-i B}{2}} \, _1F_1(a_+,c_+,-i 2 A_1 x)+ \tilde{C_2}\, e^{-i A_1 x} (-x)^{\frac{-1+iB}{2}} \, _1F_1(a_-,c_-,i 2 A_1 x)\; .
\end{equation}
Using the fact that
\begin{align}
_1F_1(a,c,z\to\infty) \propto e^z z^{a-c} \left( 1+O\left( \frac{1}{z}\right) \right) \, ,
\end{align}
one has, for $\bar{k}^2 \neq 0$,
\begin{equation}
R(r)\approx C_3\, \frac{e^{i A_1r}}{r}e^{-i\frac{A_2+A_3}{4 A_1}\log \left( \frac{r-r_+}{r_+- r_- }\right) }+ C_4 \, \frac{e^{-iA_1r}}{r}e^{i\frac{A_2+A_3}{4 A_1}\log \left( \frac{r-r_+}{r_+- r_- }\right) } \,.
\end{equation}
However, we are interested in the case $ \bar{k}^2 = 0$. In this limit the confluent hypergeometric functions in \eqref{general large r} become degenerate.
We can take $\bar k \to 0$ in \eqref{general large r} using
\begin{align}
\lim_{\lambda\to0} \,_1F_1 \left(\frac{a}{\lambda},c;\lambda z\right)=\,_0F_1(c,az)\, .
\end{align}
Combining this with
\begin{align}
J_\alpha(x) =\frac{(\frac{x}{2})^\alpha}{\Gamma(\alpha+1)} \,_0F_1\left( \alpha+1;-\frac{x^2}{4}\right) \,,
\end{align}
we have, for large $r$,
\begin{align}\label{large r gen}
R (r)\; \stackrel{\bar{k}=0}{\approx} \; & \tilde{C_3}\, \frac{J_{B}\left(2\mu \sqrt{r_s \, r}\right)}{\sqrt{r}}+ \tilde{C_4}\, \frac{J_{-B}\left(2\mu \sqrt{r_s \, r}\right)}{\sqrt{r}}\, ,
\end{align}
where we have absorbed $r$-independent factors into the constants $\tilde{C_3}$ and $\tilde{C_4}$. For $\bar k =0$, the quantity $B$ is
\begin{align}
B =2\sqrt{-\left( \ell+\frac{1}{2}\right) ^2+ \mu ^2 r_s (2r_s-r_+)} \, .
\end{align}
The precise relation between $\tilde{C_3}$, $\tilde{C_4}$ and the overall amplitude $|R_{\omega\ell m}(r_+) |$ in \eqref{ang rad sol} depends on the scalar mass $\mu$, the black hole mass $r_s$ and spin $a$,
and the angular momentum quantum numbers $\ell$, $m$. Such a relation is usually not easy to find analytically in closed form. In the following, we first discuss two limiting cases, i.e.~the particle and the wave regimes, for which it is possible to find a simple expression for the ratio $|R(r_+)|^2/|R(r_i)|^2$ where $r_i\gg r_s$. We will later discuss the intermediate regime in Appendix~\ref{app:num}, where we will obtain an approximate connection formula by fitting the numerical solution of the radial equation. The results are summarized in Eq.~\eqref{eqn:guanhaos-fit}.
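As a quick sanity check on the standard Bessel-function identity used above (a numerical sketch in Python, not part of the derivation; \texttt{scipy} assumed available, values illustrative):

```python
# Verify the standard relation between the Bessel function and the
# confluent hypergeometric limit function:
# J_alpha(x) = (x/2)^alpha / Gamma(alpha+1) * 0F1(alpha+1; -x^2/4).
from math import gamma
from scipy.special import jv, hyp0f1

def bessel_from_0f1(alpha, x):
    return (x / 2) ** alpha / gamma(alpha + 1) * hyp0f1(alpha + 1, -x ** 2 / 4)

for alpha, x in [(0.5, 1.3), (1.0, 2.7), (2.5, 0.4)]:
    assert abs(jv(alpha, x) - bessel_from_0f1(alpha, x)) < 1e-10
```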
\paragraph{The particle limit.}
The approximation \eqref{large r gen} is valid when $\mu^2r_s(r-r_+)\gg 1$. For this to hold all the way down to the near-horizon region $r \approx r_+$, we need in particular $\mu r_s \gg 1$. Now, if we further have
\begin{align}
2\mu \sqrt{r_s(r-r_+)} \gg \left| B^2 \right| \, ,
\end{align}
we can use the asymptotic expression for the Bessel function
\begin{align}\label{bessel asym}
J_\alpha (y) =\sqrt{\frac{2}{\pi y}}\left[ \cos\left( y-\frac{\alpha \pi}{2}-\frac{\pi}{4}\right) +O\left( y^{-1}\right) \right] \, , \quad y\gg \left| \alpha^2-\frac{1}{4} \right|
\end{align}
to obtain a simple estimate. Excluding the extreme case $\mu r_s \gtrsim \sqrt{r_i/r_s}$ and focusing on $\ell\sim O(1)$, we can use the following approximation for large enough $r$:
\begin{align}\label{far field large mass}
|R_{\mu\ell m} (r\to \infty)|^2\stackrel{\bar{k}=0}{\sim} r^{-\frac{3}{2}} \, , \quad \text{ for } \quad \mu r_s \gtrsim \frac{\ell +1}{2} \, .
\end{align}
We will later confirm this approximation, and its range of applicability, numerically.
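The large-argument Bessel asymptotics quoted above, which underlie the $|R|^2\sim r^{-3/2}$ envelope, can themselves be checked numerically (an illustrative Python sketch, not part of the text's numerics):

```python
# Check the asymptotic form J_alpha(y) ~ sqrt(2/(pi y)) cos(y - alpha pi/2 - pi/4)
# for large y; the squared envelope of J_B(2 mu sqrt(rs r))/sqrt(r) then
# scales as r^(-3/2), as stated in the text.
import math
from scipy.special import jv

def bessel_asymptotic(alpha, y):
    return math.sqrt(2 / (math.pi * y)) * math.cos(y - alpha * math.pi / 2 - math.pi / 4)

for alpha, y in [(1.0, 200.0), (2.0, 500.0)]:
    # subleading corrections are O(1/y), hence the loose tolerance
    assert abs(jv(alpha, y) - bessel_asymptotic(alpha, y)) < 1e-3
```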
\paragraph{The wave (ultralight) limit.}
For an ultralight scalar with mass $\mu^2\lesssim 1/(r_s r_i)$, the approximation \eqref{large r gen} does not hold at any distance $r\lesssim r_i$. However, to leading order this case can be approximated in terms of a static massless scalar, i.e.~$\omega = \mu =0$. Writing
\begin{align}
R(x)= \left( -\frac{x}{1-x}\right)^{\frac{i m a}{r_+ - r_-}}Y(x) \; ,
\end{align}
equation \eqref{radial x} takes the hypergeometric form
\begin{align}
x(1-x) Y''(x)+\left(1+\frac{ 2 i m a}{r_+-r_-} -2x\right) Y'(x)+\ell (\ell+1) Y(x) =0 \, .
\end{align}
Picking the solution that is regular at the outer horizon, $r=r_+$, we have
\begin{align}
R_{\ell m}(r) = |R_{\ell m}(r_+) | \left( \frac{r-r_+}{r-r_-}\right)^{\frac{i m a}{r_+ - r_-}} \, _2F_1\left(-\ell ,\ell+1 ;1+\frac{ 2 i m a}{r_+-r_-};\frac{r-r_+}{r_- - r_+}\right) \, .
\end{align}
Finally, the asymptotics for the hypergeometric functions implies the following large-$r$ behavior:
\begin{align}\label{far field ultralight}
|R_{\ell m}(r\to \infty) |^2 \propto r^{2\ell} \, , \qquad \text{ for } \mu^2\lesssim 1/r_s r_i \, .
\end{align}
To conclude, we have derived the first and third line of \eqref{eqn:guanhaos-fit}. Unfortunately, the most interesting regime for the interplay between scalar hair and superradiance is the intermediate regime $2\sqrt{\rs/r_i}\ll\mu\rs\lesssim\frac12(\ell+1)$, i.e.~the second line of \eqref{eqn:guanhaos-fit}. In this case \eqref{far field ultralight} is not applicable, while \eqref{large r gen} is only valid down to some distance $r_0\gg r_+$ outside the near-horizon region such that $2\mu \sqrt{r_s r_0}\sim 1$. We will need to rely on numerical studies to obtain a good estimate, which is what we discuss next.
\subsection{Numerical results for hair solutions}
\label{app:num}
In this section we carry out numerical studies of the hair solution in the different regimes, providing in particular evidence for our estimates \eqref{eqn:guanhaos-fit}.
\subsubsection{$|R_+|/|R_i|$ as a function of $\alpha$ at $a_*=0$ and the three regimes}
To demonstrate the separation of three regimes \eqref{eqn:guanhaos-fit}, we first focus on the case with $a_*=0$. Recall that $m$ enters the radial function \eqref{ang rad sol} only through the combination $ma_*$, and thus a nonzero $m$ has no effect in this case. Figure~\ref{ratio0} shows plots of the ratio $|R_+|/|R_i|$ as a function of $\alpha$ for $\ell=0,1,2$, with $R_+\equiv R(r_+)$ and $R_i \equiv R(r_i)$.\footnote{We will see in Figure~\ref{Rvsr} that $|R(r)|$ is an oscillatory function of $r$ in the intermediate regime.
Therefore, normalizing the scalar profile $|R(r)|$ at fixed $r_i$ would result in the presence of spikes in $|R_+|/|R_i|$ as a function of $\alpha$, which correspond to the minima of the oscillations in Figure~\ref{Rvsr}. %
We get around this problem by sampling several $|R_+|/|R_i|$ values within the range $350r_s < r < 450r_s$, and take the minimum $|R_+|/|R_i|$ value. Some remnants of the spikes can still be seen in the plot in Figure~\ref{ratio0}, most prominently for $\ell=0$.} Here we choose $r_i = 400r_s$ (we set $r_s=2$ when making the plots).
It is clear from Figure~\ref{ratio0} that, for fixed $\ell$, there are three qualitatively different phases as $\alpha$ increases from 0 to 1: two asymptotic flattened regions and an intermediate phase. The flattening of the ratio $|R_+|/|R_i|$ as $\alpha\to 1$ and $\alpha \to 0$ correspond respectively to the ``particle" and ``wave" regimes studied analytically in the previous section.\footnote{Note that the upper bound of the wave regime here is the boundary between ``regime II" and ``regime III" defined in \cite{Hui:2019aqm}, not the one between ``regime I" and ``regime II". However, the behavior of $|R_+|/|R_i|$ is identical in regime I and regime II, which means we cannot distinguish them by plotting $|R_+|/|R_i|$. We therefore merge regime I and II of \cite{Hui:2019aqm} into a single one in our discussion, and call it the ``wave regime". }
As expected, in the particle regime, the value of $|R_+|/|R_i|$ plateaus around $(r_i/r_s)^{3/4}$.\footnote{We have chosen $r_i = 400 r_s$ in our numerical calculations, but the height of the plateau in Figure~\ref{ratio0} is actually $350^{3/4}$. This is an artifact of our procedure of smoothing out the oscillations in the intermediate regime. The behavior of $|R_+|/|R_i|$ remains qualitatively the same.} The numerical values in the wave regime agree with the $(r_i/r_s)^{-2\ell}$ approximation, although not distinguishable in Figure~\ref{ratio0} due to their small size. Figure~\ref{Rvsr} illustrates the typical behaviors for the radial function $|R(r)|^2/|R_+|^2$ as a function of $r$ in the three regimes: monotonically increasing (wave regime), oscillatory (intermediate regime), and monotonically decreasing (particle regime).
In the previous section, we obtained the analytic approximations \eqref{far field large mass} and \eqref{far field ultralight} for the large-$r$ behavior of $R(r)$, and thus for the ratio $|R_+|/|R_i|$, in the particle and wave limits. Even though we have little analytic control over the intermediate regime, we can use the numerical results in Figure~\ref{ratio0} to extract some simple estimates for the scaling of $R(r)$. The results are summarized in Eqs.~\eqref{eqn:guanhaos-fit}. In Figure~\ref{NumVsAppr}, we show that a very good agreement between \eqref{eqn:guanhaos-fit} and the exact numerical results is achieved within $O(1)$ error for $\ell=1$ and $\ell=2$.
The region where our approximation is the least accurate is the transition between the wave and intermediate regimes, which we simply define as the point where the approximations \eqref{eqn:guanhaos-fit} for the two regimes meet. It is worth noting that numerical studies show that this transition point decreases with increasing ratio $r_i/r_s$, in agreement with our approximation, while the rate of the drop in the intermediate regime and the lower bound of the particle regime are not affected by this ratio. Therefore, for a larger $r_i$, $|R_+|/|R_i|$ is smaller in the wave regime due to a longer drop.
\subsubsection{$|R_+|/|R_i|$ on the Regge plane}
In the previous section we have identified three different regimes for the study of the scalar hair solution around a non-rotating Schwarzschild black hole ($a_*=0$). Here we study the effect of turning on $a_*$. The main conclusion is that our approximations \eqref{eqn:guanhaos-fit} receive modifications of at most $O(1)$ unless $a_*>0.95$. As a first example, Figure~\ref{ratio1} shows the ratio $|R_+|/|R_i|$ as a function of $\alpha$ at $a_* \simeq 0.505$.
Compared with Figure \ref{ratio0}, $|R_+|/|R_i|$ now behaves differently for the same $\ell$ but different $m$. We see that the value of $|R_+|/|R_i|$ in the particle regime now depends on the value of $m$, while it remains largely unaffected in the wave regime. Also, the boundaries separating the regimes are shifted, depending on the sign and size of $ma_*$. However, the effect of a nonzero $a_*$ on these plots is within $O(1)$ unless $a_* \gtrsim 0.95$. The dependence of $|R_+|/|R_i|$ on $a_*$ for three different values of $\alpha$ is illustrated in Figure \ref{ratioastar}.
Finally, Figure \ref{CRegge} shows the value of $|R_+|/|R_i|$ on the Regge plane $(\alpha,a_*)$, for $(\ell,m) = (2,2)$. In this figure, the deep blue and light yellow regions correspond to the wave and particle regimes respectively, while the greenish region is the intermediate regime.
From both Figures \ref{ratioastar} and \ref{CRegge}, it is clear that the change of the transition points between the three regimes as $a_*$ increases is well within $O(1)$ unless the black hole is near-extremal ($a_*>0.95$). Therefore, we have established the validity of the approximations \eqref{eqn:guanhaos-fit} up to $a_* \sim 0.95$, thereby justifying the omission of the $a_*$ (and thus $m$) dependence in \eqref{eqn:guanhaos-fit}. When the black hole is near-extremal ($a_*>0.95$), our analysis breaks down and a separate discussion is required.
\section{Superradiance}
\label{app2}
In this appendix, we will review some aspects of black hole superradiance and the corresponding system of the ``gravitational atom''.
\subsection{Bound states}
At distances much larger than the Schwarzschild radius, $r \gg \rs$, it is convenient to consider the following ansatz for the scalar field $\Phi$:
\beq
\Phi(t,\mathbf r)=\frac{1}{\sqrt{2\mu}}\left[\psi(t,\mathbf r)e^{-i\mu t}+\psi(t,\mathbf r)^*e^{i\mu t}\right] ,\eeq
where $\psi$ is a complex scalar field which varies on timescales much longer than $\mu^{-1}$. It can be shown that, to leading order in an expansion in powers of $\rs/r$, the Klein-Gordon equation $(\nabla^\nu\nabla_\nu-\mu^2)\Phi=0$ reduces to
\begin{equation}
i\frac{\partial\psi}{\partial t} = \biggl(-\frac1{2\mu}\nabla^2-\frac\alpha{r}\biggr)\psi\,,\qquad\text{where }\, \alpha \equiv \frac{\mu\rs}2\,,
\label{eqn:Schrodinger}
\end{equation}
which is equivalent to the Schr\"odinger equation for the hydrogen atom, if we identify $\alpha$ with the fine structure constant. When $\psi$ is taken to (exponentially) vanish at infinity, Eq.~\eqref{eqn:Schrodinger} is then solved by hydrogen-like discrete bound states, whose spectrum is the familiar
\begin{equation}
\omega_{n\ell m}= \mu\biggl(1 -\frac{\alpha^2}{2n^2}\biggr).
\end{equation}
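As a trivial numerical illustration of the spectrum (Python sketch; units with $\mu=1$, parameter values illustrative):

```python
# Hydrogenic bound-state frequencies omega_n = mu (1 - alpha^2 / (2 n^2)),
# in units with mu = 1.
def omega_bound(n, alpha, mu=1.0):
    return mu * (1.0 - alpha ** 2 / (2.0 * n ** 2))

alpha = 0.1
levels = [omega_bound(n, alpha) for n in (1, 2, 3)]
# binding energies shrink as 1/n^2, so the levels accumulate below omega = mu
assert levels[0] < levels[1] < levels[2] < 1.0
assert abs(levels[0] - 0.995) < 1e-12
```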
Higher-order corrections in powers of $\alpha$ will be present, due to
\begin{enumerate}
\item higher-order terms that we neglected in \eqref{eqn:Schrodinger};
\item the causal ingoing boundary conditions at the horizon, which differ from the demand of regularity at the origin of the hydrogen atom.
\end{enumerate}
The corrections of the second type are particularly relevant, because they introduce a small imaginary part to $\omega_{n\ell m}$, making the population of the bound states either exponentially decrease or increase over time. The first terms in these expansions are \cite{Baumann:2019eav}
\begin{align}
\label{eqn:ReOmega}
\Re(\omega_{n\ell m})&=\mu\biggl(1-\frac{\alpha^2}{2n^2}-\frac{\alpha^4}{8n^4}+f_{n\ell}\frac{\alpha^4}{n^3}+h_\ell\frac{ma}\rs\frac{\alpha^5}{n^3}+\ldots\biggr),\\
\label{eqn:ImOmega}
\Im(\omega_{n\ell m})&=4\frac{r_+}\rs C_{n\ell}g_{\ell m}\bigl(m\Omega_+-\Re(\omega_{n\ell m})\bigr)\alpha^{4\ell+5},
\end{align}
where the expressions of the coefficients $f_{n\ell}$, $h_\ell$, $C_{n\ell}$ and $g_{\ell m}$ are given in
\cite{Baumann:2019eav}. The most relevant feature for our purposes is
that $\Im(\omega_{n\ell m})$ changes sign at the
superradiance threshold, $m\Omega_+=\Re(\omega_{n\ell
m})\approx\mu$. We thus see that the states are
\textit{quasi}-bound, as some of them decay, while others grow by superradiance.
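The sign flip of $\Im(\omega_{n\ell m})$ can be made concrete with a small numerical sketch (Python; units $G=c=1$, $M=\rs/2=1$, values illustrative). Using $\Omega_+ = a/(\rs r_+)$ (which follows from $r_+^2+a^2=\rs r_+$) and $\Re(\omega)\approx\mu$, the threshold condition $m\Omega_+=\mu$ can be solved in closed form for the critical spin, $a_* = 4\alpha m/(m^2+4\alpha^2)$:

```python
# Superradiance threshold: a mode with azimuthal number m grows when
# m * Omega_+ > mu. Solving m * Omega_+ = mu for the dimensionless spin
# gives a_* = 4 alpha m / (m^2 + 4 alpha^2), with alpha = mu M.
import math

def omega_plus(astar, M=1.0):
    """Horizon angular velocity of a Kerr black hole, a/(r_s r_+)."""
    a = astar * M
    rp = M * (1.0 + math.sqrt(1.0 - astar ** 2))
    return a / (2.0 * M * rp)

def critical_spin(alpha, m):
    return 4.0 * alpha * m / (m ** 2 + 4.0 * alpha ** 2)

alpha, m, M = 0.2, 1, 1.0
mu = alpha / M
astar_c = critical_spin(alpha, m)
# at the critical spin m Omega_+ matches mu; above it, the mode superradiates
assert abs(m * omega_plus(astar_c) - mu) < 1e-12
assert m * omega_plus(min(astar_c + 0.1, 0.999)) > mu
```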
\subsection{Fluxes and nonlinear evolution}
\label{app:superradiant-nonlinear}
Our analysis has so far only dealt with the linear regime; the superradiant states, however, will eventually extract enough mass and angular momentum from the black hole to significantly change its parameters. To study this phase of the evolution, we can write down the fluxes of energy and angular momentum of the scalar field, under the assumption that only one $(n,\ell,m)$ mode is present:
\begin{align}
T^r{}_t&=g^{rr}(\partial_r\Phi^*\partial_t\Phi+\partial_t\Phi^*\partial_r\Phi)=2\frac\Delta{\varrho^2}\Im(\omega R'^*R)|S|^2e^{2\Im(\omega)t},\\
T^r{}_\phi&=g^{rr}(\partial_r\Phi^*\partial_\phi\Phi+\partial_\phi\Phi^*\partial_r\Phi)=-2\frac\Delta{\varrho^2}m\Im(R'^*R)|S|^2e^{2\Im(\omega)t}.
\end{align}
From the near-horizon limit of the radial part of the Klein-Gordon equation,
\begin{equation}
\label{eqn:near-horizon-radial2}
\Delta\frac{\rd}{\rd r}\biggl(\Delta\frac{\rd R}{\rd r}\biggr)+\rs^2r_+^2(\omega-m\Omega_+)^2R=0,
\end{equation}
we can extract the near-horizon behavior of $R(r)$ and write
\begin{align}
\label{eqn:T^r_t2}
T^r{}_t(r_+)&=2\frac{r_sr_+}{\varrho^2}(|\omega|^2-\Re(\omega)m\Omega_+)\Phi^*\Phi(r_+),\\
T^r{}_\phi(r_+)&=-2m\frac{r_sr_+}{\varrho^2}(\Re(\omega)-m\Omega_+)\Phi^*\Phi(r_+).
\label{eqn:T^r_phi2}
\end{align}
Equating the fluxes of mass and angular momentum to the change in the parameters of the black hole, and performing an angular integral, we arrive at
\begin{align}
\label{eqn:mass-evolution}
\frac{\rd \rs}{\rd t}&=4G\sum_{n,\ell,m}\rs r_+(|\omega_{n\ell m}|^2-\Re(\omega_{n\ell m})m\Omega_+)|R_{n\ell m}(r_+)|^2,\\
\label{eqn:J-evolution}
\frac{\rd(a\rs)}{\rd t}&=4G\sum_{n,\ell,m}\rs r_+m(\Re(\omega_{n\ell m})-m\Omega_+)|R_{n\ell m}(r_+)|^2.%
\end{align}
In these equations, we are summing over all $(n,\ell,m)$ modes. Technically, this would not be allowed, because $T_{\mu\nu}$ is quadratic in the field and would thus contain interference terms. However, in the limit of small spheroidicity ($a^2(\mu^2-\omega^2)\approx a^2\mu^2\alpha^2/(2n^2)\ll1$), the angular integral kills the interference terms among states with different $(\ell,m)$ because of the orthonormality of spherical harmonics. Interferences between overtones with the same angular momentum, instead, oscillate with frequency $\omega_n-\omega_{n'}\approx(1/2)\mu\alpha^2(1/n'^2-1/n^2)$. Comparing the power of $\alpha$ with \eqref{eqn:ImOmega}, it is easy to see that this frequency is much larger than the superradiant growth rate, and it is therefore safe to average these interference terms to zero.
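The scale separation justifying this averaging can be illustrated with a one-line estimate (Python sketch; units with $\mu=1$, values illustrative):

```python
# Overtone beat frequency ~ (1/2) alpha^2 (1/n'^2 - 1/n^2) versus the
# superradiant growth rate ~ alpha^(4l+5): the oscillations are vastly
# faster, so interference terms average to zero over the growth timescale.
alpha, l = 0.1, 1
n, nprime = 3, 2
beat = 0.5 * alpha ** 2 * (1.0 / nprime ** 2 - 1.0 / n ** 2)
growth = alpha ** (4 * l + 5)
assert beat / growth > 1e4   # here the ratio is ~7e5
```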
The way equations (\ref{eqn:mass-evolution}) and (\ref{eqn:J-evolution}) are used to describe the nonlinear evolution of a superradiance-generated cloud has been described in Section \ref{sec:fluxes-evolution}.
\subsection{Superradiance and area law}
\label{sec:superradiance-area}
The area of the horizon of a Kerr black hole is $4\pi\rs r_+$. Using that
\begin{equation}
\label{eqn:derivative-rirr}
\frac{\rd(\sqrt{\rs r_+})}{\rd t}=\frac{\sqrt{\rs r_+}}{2r_+-\rs}\biggl(\frac{\rd\rs}{\rd t}-\frac{a}{\rs r_+}\frac{\rd(a\rs)}{\rd t}\biggr),
\end{equation}
we can combine \eqref{eqn:mass-evolution} and \eqref{eqn:J-evolution} to get
\begin{equation}
\label{eqn:rirr-evolution}
\frac{\rd(\sqrt{\rs r_+})}{\rd t}=4G\sum_{n,\ell,m}\frac{(\rs r_+)^{3/2}}{2r_+-\rs}|\omega_{n\ell m}-m\Omega_+|^2|R_{n\ell m}(r_+)|^2\ge0.
\end{equation}
This shows that the second law of black hole thermodynamics,
$\rd(\text{Area})/\rd t\ge0$, is respected. Moreover, we see that
along the trajectory due to superradiance, we have
\begin{equation}
\frac{\rd(\sqrt{\rs r_+})}{\rd\rs}=\frac{\sqrt{\rs r_+}}{(2r_+-\rs)\mu}\bigl(\mu-m\Omega_+\bigr),
\end{equation}
where we used (\ref{eqn:nonlin-1-mode-super}) and (\ref{eqn:derivative-rirr}). Using $\rs$ to parametrize the superradiance trajectory, this equation tells us that the derivative of the area along that curve vanishes at $\mu=m\Omega_+$. The superradiance trajectories are thus tangent, on the threshold, to the constant-area lines, see Figure \ref{fig:regge-introductory}. This means that the evolution is a quasi-adiabatic process in the vicinity of the superradiance threshold.
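The algebra that combines \eqref{eqn:mass-evolution} and \eqref{eqn:J-evolution} into the manifestly non-negative flux of \eqref{eqn:rirr-evolution} boils down to the identity $(|\omega|^2-\Re(\omega)m\Omega_+)-m\Omega_+(\Re(\omega)-m\Omega_+)=|\omega-m\Omega_+|^2$, using $\Omega_+=a/(\rs r_+)$. A quick numerical spot-check of this identity (plain Python, random values):

```python
import random

# Spot-check of the identity behind Eq. (rirr-evolution):
# (|w|^2 - Re(w)*m*Om) - m*Om*(Re(w) - m*Om) = |w - m*Om|^2,
# which makes the area flux manifestly non-negative.
random.seed(0)
for _ in range(1000):
    w = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    m = random.randint(-3, 3)
    om = random.uniform(-1.0, 1.0)   # stands for Omega_+
    lhs = (abs(w)**2 - w.real*m*om) - m*om*(w.real - m*om)
    rhs = abs(w - m*om)**2
    assert abs(lhs - rhs) < 1e-12
print("identity verified")
```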
\section{A toy model}
\label{sec:toy-model}
Equations \eqref{eqn:M:acc+superr}, \eqref{eqn:J:acc+superr} and \eqref{eqn:R-Im-omega} contain some complications that may hide the physically relevant parts. Consider the following system of differential equations:
\begin{align}
\label{eqn:x-prime}
X'&=\lambda_X-(Y-X)Z,\\
\label{eqn:y-prime}
Y'&=\lambda_Y-2(Y-X)Z,\\
\label{eqn:z-prime}
Z'&=(Y-X)Z.
\end{align}
Here, the variables $X$ and $Y$ play the role of the two coordinates in the Regge plane, say $\alpha$ and $a_*$, while the variable $Z$ plays the role of the mass of the cloud, say $|R(r_+)|^2$. The line $Y=X$ is taken here to represent the superradiance threshold, $\mu=m\Omega_+$, and we used the fact that the growth rate ($\Im(\omega)\propto(m\Omega_+-\mu)$, see \eqref{eqn:ImOmega}) is proportional to the distance from the threshold. Finally, the parameters $\lambda_X$ and $\lambda_Y$ mock up the accretion rates of mass and angular momentum. The factor of 2 in \eqref{eqn:y-prime} is there to make sure that the slope of the threshold is smaller than the slope of the superradiance flux (any other larger-than-1 number would work equally well).
It is easy to see that equations \eqref{eqn:x-prime}, \eqref{eqn:y-prime} and \eqref{eqn:z-prime} imply $X+Z=\lambda_Xt+C_X$ and $Y+2Z=\lambda_Yt+C_Y$, where $C_X$ and $C_Y$ are integration constants, and thus $Y-X=(\lambda_Y-\lambda_X)t+C_Y-C_X-Z$. Plugging this back into \eqref{eqn:z-prime}, we get
\begin{equation}
Z'=\bigl((\lambda_Y-\lambda_X)t+C_Y-C_X-Z\bigr)Z.
\end{equation}
Recall that the variable $Z$ represents the mass of the cloud: we thus require $Z(0)>0$. This implies that $Z(t)>0$ for every $t$, as otherwise $Z(t)$ would cross the trivial solution $Z(t)=0$. If $\lambda_Y>\lambda_X$, the line $Z(t)=(\lambda_Y-\lambda_X)t+(C_Y-C_X)$ is an attractor for all $t>0$: at large times, the solution can be expanded perturbatively as
\begin{equation}
Z(t)=(\lambda_Y-\lambda_X)t+(C_Y-C_X)-\text{const.}\times\exp\biggl(-(\lambda_Y-\lambda_X)\frac{t^2}2-(C_Y-C_X)t\biggr)+\ldots
\end{equation}
In this case, the mass of the cloud increases (linearly) with time. If $\lambda_Y<\lambda_X$, instead, the line $Z(t)=(\lambda_Y-\lambda_X)t+(C_Y-C_X)$ will only be an attractor for a finite time, during which the mass of the cloud will decrease linearly; afterwards, the line $Z(t)=0$ will become the new attractor: the solution at large times will be
\begin{equation}
Z(t)=\text{const.}\times\exp\biggl((\lambda_Y-\lambda_X)\frac{t^2}2+(C_Y-C_X)t\biggr)+\ldots
\end{equation}
The two cases $\lambda_Y>\lambda_X$ and $\lambda_Y<\lambda_X$ are in obvious correspondence with the over-superradiance and under-superradiance we described in Section \ref{sec:accretion+superradiance}.
In both cases, when $Z(t)$ is attracted to the line $(\lambda_Y-\lambda_X)t+(C_Y-C_X)$, we see that
\begin{equation}
Y-X=(\lambda_Y-\lambda_X)t+C_Y-C_X-Z
\end{equation}
is attracted to zero. The system therefore drifts along the threshold $Y=X$ as long as the mass of the cloud, $Z$, is large enough. From
\begin{equation}
Y-2X=(\lambda_Y-2\lambda_X)t+C_Y-2C_X,
\end{equation}
we can find the approximate evolution of the individual coordinates $X$ and $Y$, using $Y-2X\approx -X\approx -Y$. Of course, in this toy model we have treated the parameters $\lambda_X$ and $\lambda_Y$ as free. In the realistic case, if the second law of black hole thermodynamics holds (which requires the null energy condition and global hyperbolicity), the possible accretion fluxes are constrained to those that increase the black hole area. We saw in Section \ref{sec:superradiance-area} that the superradiance trajectory is tangent, on the threshold, to the constant-area lines. As in this toy model the superradiance trajectory is $Y=2X$, this constraint on accretion would mean $\lambda_Y<2\lambda_X$.
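The behavior derived above can be verified by direct numerical integration of \eqref{eqn:x-prime}--\eqref{eqn:z-prime}. The sketch below (with arbitrarily chosen rates $\lambda_Y>\lambda_X$ and initial data, i.e. the over-superradiant case) checks the conserved combinations and the attraction of $Z$ to the line $(\lambda_Y-\lambda_X)t+(C_Y-C_X)$:

```python
# Numerical check of the toy model X' = lx-(Y-X)Z, Y' = ly-2(Y-X)Z,
# Z' = (Y-X)Z, with arbitrarily chosen rates and initial conditions.
def rhs(s, lx, ly):
    x, y, z = s
    g = (y - x) * z
    return (lx - g, ly - 2.0 * g, g)

def rk4(s, lx, ly, dt, n):
    # classical fourth-order Runge-Kutta integration
    for _ in range(n):
        k1 = rhs(s, lx, ly)
        k2 = rhs(tuple(a + 0.5*dt*b for a, b in zip(s, k1)), lx, ly)
        k3 = rhs(tuple(a + 0.5*dt*b for a, b in zip(s, k2)), lx, ly)
        k4 = rhs(tuple(a + dt*b for a, b in zip(s, k3)), lx, ly)
        s = tuple(a + dt/6.0*(p + 2*q + 2*r + w)
                  for a, p, q, r, w in zip(s, k1, k2, k3, k4))
    return s

lx, ly = 0.1, 0.3                   # over-superradiance: ly > lx
x0, y0, z0 = 0.0, 0.5, 0.1
T, dt = 200.0, 1e-2
x, y, z = rk4((x0, y0, z0), lx, ly, dt, int(T/dt))

cx, cy = x0 + z0, y0 + 2*z0         # integration constants C_X, C_Y
print(x + z - lx*T - cx)            # conserved combination, ~0
print(y + 2*z - ly*T - cy)          # conserved combination, ~0
print(y - x)                        # attracted to the threshold Y = X
print(z - ((ly - lx)*T + cy - cx))  # Z tracks the linear attractor
```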
\bibliographystyle{utphys}
\addcontentsline{toc}{section}{References}
\bibliography{BHbib}
|
Title:
Testbed preparation of a small prototype polarization modulator for LiteBIRD low-frequency telescope |
Abstract: LiteBIRD is the Cosmic Microwave Background (CMB) radiation polarization
satellite mission led by ISAS/JAXA. The main scientific goal is to search for
primordial gravitational wave signals generated from the inflation epoch of the
Universe. LiteBIRD telescopes employ polarization modulation units (PMU) using
continuously rotating half-wave plates (HWP). The PMU is a crucial component to
reach unprecedented sensitivity by mitigating systematic effects, including 1/f
noise. We have developed a 1/10 scale prototype PMU of the LiteBIRD LFT, which
has a 5-layer achromatic HWP and a diameter of 50 mm, spanning the
observational frequency range of 34-161 GHz. The HWP is mounted on a
superconducting magnetic bearing (SMB) as a rotor and levitated by a
high-temperature superconductor as a stator. In this study, the entire PMU
system is cooled down to 10 K in the cryostat chamber by a 4-K Gifford-McMahon
(GM) cooler. We propagate an incident coherent millimeter-wave polarized signal
throughout the rotating HWP and detect the modulated signal. We study the
modulated optical signal and any rotational synchronous signals from the
rotation mechanism. We describe the testbed system and the preliminary data
acquired from this setup. This testbed is built to integrate the broadband HWP
PMU and evaluate the potential systematic effects in the optical data. This
way, we can plan with a full-scale model, which takes a long time for
preparation and testing.
| https://export.arxiv.org/pdf/2208.03673 |
\keywords{CMB polarization, polarization modulation unit, half-wave plate}
\section{INTRODUCTION}
\label{sec:intro}
\LB\ is a satellite mission to measure the cosmic microwave background (CMB) polarization over the full sky at large angular scales, searching for the inflationary B-mode signal. The \LB's\ main scientific goal is to achieve a sensitivity on the tensor-to-scalar ratio of $\delta r<0.001$~\cite{LB_PTEP_2022}. The \LB's\ focal plane accommodates $\sim5000$ multi-chroic polarized transition-edge sensor (TES) bolometers covering a wide range of frequencies from 34 to 448~GHz. The \LB\ payload module (PLM) consists of the Low-Frequency Telescope (LFT), the Mid-Frequency Telescope (MFT), and the High-Frequency Telescope (HFT). These telescopes operate at a cryogenic temperature of 5~K. Each telescope deploys a polarization modulation unit (PMU) as its first optical element. The PMU contains a continuously rotating broadband half-wave plate (HWP) to modulate the incident CMB polarization signal.
Measurement of the B-mode polarization signal at large angular scales can be contaminated by $\rm 1/f$ noise\cite{Kusaka_2014, Hill2020}. It can also be contaminated by temperature-to-polarization leakage arising from instrumentally induced systematic effects, e.g. beam shape differences, band-pass filter mismatches \cite{Hoang_2017}, and gain differences among detectors. A continuously rotating HWP is the key instrumental element in mitigating these systematic effects. The precise measurement of the CMB polarization signal requires accurate characterization and modeling of all satellite subsystems, including the first optical element, the PMU.
This study is a part of the development program of the \LB\ LFT PMU~\cite{Tomo_2016, Yuki_2020, Komatsu_2020, Takaku_2020, Komatsu_2021}. We made a $\rm 1/10$ scale prototype PMU, which contains a five-layer achromatic HWP (AHWP). A single wave plate is a sapphire disc with a diameter of 50~mm. The first and fifth layers include anti-reflective sub-wavelength structures. This small prototype PMU contains all the sub-components, which are to be scaled to a full-scale LFT PMU model, e.g. a cryogenic holder mechanism, rotation mechanism, superconducting magnetic bearing (SMB), encoder, readout monitoring, and drive electronics. Therefore, this prototype PMU system is a valuable development model for understanding the potential malfunctions, systematic effects, and unexpected features we anticipate in an upcoming full-scale model, reducing the risk involved in the full-scale development program. Examples are the HWP rotor vibrations, the HWP position angle reconstruction, the heat dissipation, and the realistic modulated signal from the HWP rotation.
In this paper, we describe a small prototype PMU, which is placed inside a 4-K Gifford-McMahon (GM) cryostat operating at 10~K. A part of the development status and the components of the small prototype PMU have been previously reported in Komatsu et al.\cite{Komatsu_2020}. We describe the progress and some of the sub-components, which are not detailed in the previous report.
The paper is structured as follows: Section~\ref{sec:exp} presents the experimental setup of the small prototype PMU inside a cryostat. Section~\ref{sec:result} shows the preliminary results of the angle reconstruction, the heat dissipation estimation, and the modulated signal of the continuously rotating AHWP. Section~\ref{sec:discus} discusses future improvements, the misalignment of the optical setup, and the impact of the inhomogeneous magnetic fields on the focal plane. Finally, we summarize this paper in Section~\ref{sec:conclu}.
\section{EXPERIMENT SETUP}
\label{sec:exp}
In this section, we describe the sub-components of the PMU in detail: the rotational mechanism, the gripper, the encoder, and the AHWP. Figure~\ref{fig:instru} shows the photograph of the sub-components. Then we describe an assembled setup.
\subsection{Cryogenic rotation mechanism}
We aim to maintain a stable rotation of the AHWP at 5~K. The cryogenic rotation mechanism employs a superconducting magnetic bearing (SMB)~\cite{Tomo_2016}. The SMB consists of a high-temperature superconductor (HTS), YBCO, and a NdFeB permanent-magnet ring. Ten segments of YBCO bulk superconductor are arranged in a ring with an outer diameter of 95~mm, an inner diameter of 55~mm, and a height of 20~mm. The NdFeB permanent-magnet ring has an outer diameter of 85~mm, an inner diameter of 65~mm, and a height of 10~mm. At temperatures below 20~K, the magnetic field frozen into the HTS levitates the permanent-magnet ring as a rotor. The SMB therefore has no mechanical contact and avoids heat dissipation from physical friction. However, we anticipate energy losses from magnetic interactions, e.g. eddy current and hysteresis losses in the rotor and stator components~\cite{Yuki_2020}. One source of such magnetic fields is the rotor magnet: any inhomogeneity of the ring magnet produces a time-varying magnetic field at the stator components. We installed a Hall sensor (BHT921) to monitor this magnetic field inhomogeneity. The SMB levitates the rotor magnet and thus the HWP; we must, however, drive the rotor and maintain its rotation at a constant frequency. We employ a custom motor and drive electronics developed by Tamagawa Seiki. The AC motor driver outputs a three-phase signal whose frequency changes depending on the feedback of the rotor speed.
\subsection{Cryogenic holder mechanism}
We use three cryogenic holder mechanisms separated by 120~degrees around the rotor. Each is driven by a cryogenic stepping motor with a variable-resistance slide potentiometer, which converts the stepping-motor motion into a linear movement. The holder mechanism holds the rotor until the YBCO cools down below its critical temperature and the SMB becomes functional. The holder also serves as a conductive thermal path to cool the rotor, because once the rotor levitates the only remaining thermal path is radiative heat exchange.
\subsection{Encoder system}
We measure the HWP position angle with an optical encoder. It consists of an encoder chopper disk on the rotor and sets of an LED and a silicon photodiode (SiPD). We prepare three sets of LEDs (L9337-01) as emitters and SiPDs (S2386-18L) as receivers; each pair is placed face-to-face. The encoder disk has 64 slots for the relative angle and one slot for the absolute angle within one revolution. Two LED-SiPD pairs provide phases A and B for the relative angle, while the third pair, named phase Z, provides the absolute angle. Figure~\ref{fig:instru} shows the photograph of the encoder jigs and the encoder disk.
\subsection{Achromatic Half-Wave Plate}
The AHWP is a five-layer sapphire stack with a diameter of 50~mm. A detailed description of this AHWP can be found in \cite{Komatsu_2020, Komatsu_2021}. The thickness of each single a-cut HWP is chosen based on the center frequency of the LFT, 97.5~GHz. A sapphire HWP reflects about half of the incident signal due to its high refractive index. Therefore, the first and fifth layers include a moth-eye anti-reflective sub-wavelength structure (SWS) fabricated by laser machining, which achieves $>90\%$ transmittance over the frequency range from 43 to 161~GHz. The details can be found in \cite{Takaku_2020}.
\subsection{PMU System in the Cryostat}
\label{sub:exp}
Figure~\ref{fig:optic} shows the diagram of the experimental setup. The cryostat has two open windows with a diameter of 100~mm. A UHMWPE window transmits the millimeter-wave signal from outside the cryostat. We use a 2~mm thick acrylonitrile butadiene styrene (ABS) plate as an infrared filter. The attenuated signal goes through the AHWP mounted on the rotor of the rotation mechanism. A 45~degree plane mirror then reflects the signal inside the cryostat. Finally, a diode detector receives the output signal outside the cryostat.
At room temperature, a mechanical chopper chops the input millimeter-wave signal. In order to obtain a polarized signal, two wire grids are placed to align the polarization angle.
The PMU system is cooled down to below 10~K using a 4-K GM cryocooler. After the cryogenic holder mechanisms are fully opened, the rotor levitates by the SMB mechanism. The motor applies the driver torque to spin the rotor at a constant speed. We then provide the incident signal through the rotating AHWP into the cryostat and detect the output modulated signal. Finally, we monitor the temperature of the PMU using thermometers.
Following the AHWP formalism described in \cite{Tomo_thesis_2006, Komatsu_2020, Komatsu_2021}, the output modulated signal can be fitted using the equation
\begin{equation}
I_{out}(\nu, t)=A_0(\nu) + \sum_{n=1}^{8} A_n (\nu) \cos \left( n \omega_{hwp} t + n \phi_n \right),
\label{eq:modulate}
\end{equation}
where $A_0$ is the constant offset, $A_n$ are the harmonic amplitudes, $\omega_{hwp}$ is the HWP rotation angular frequency, and $\phi_n$ are the phases. We limit the harmonics in Equation~\ref{eq:modulate} to the $8^{th}$ to study the rotational synchronous signals.
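Since Equation~\ref{eq:modulate} is linear in the amplitudes once expanded into cosine and sine terms, it can be fitted by linear least squares. The sketch below uses synthetic data with assumed values ($A_4/A_0=0.94$, $\phi_4=-17.6$~degrees, plus a small spurious $2f$ term) and recovers the modulation efficiency and phase:

```python
import numpy as np

# Synthetic 4f-dominated modulated signal (Eq. (1)), fitted by linear
# least squares in a cos/sin harmonic basis up to the 8th harmonic.
f_hwp = 1.0                        # assumed HWP rotation frequency [Hz]
w = 2*np.pi*f_hwp
t = np.linspace(0.0, 1.0, 2000, endpoint=False)   # one revolution
A0, A4, phi4 = 1.0, 0.94, np.deg2rad(-17.6)       # assumed amplitudes
I = A0 + A4*np.cos(4*(w*t + phi4)) + 0.02*np.cos(2*w*t)

# design matrix: constant + cos(n w t), sin(n w t) for n = 1..8
cols = [np.ones_like(t)]
for n in range(1, 9):
    cols += [np.cos(n*w*t), np.sin(n*w*t)]
M = np.stack(cols, axis=1)
coef, *_ = np.linalg.lstsq(M, I, rcond=None)

a0 = coef[0]
c4, s4 = coef[7], coef[8]          # n = 4 terms sit at columns 7, 8
amp4 = np.hypot(c4, s4)
phase4 = np.arctan2(-s4, c4)/4.0   # phi_4 from cos(4wt + 4*phi_4)

print(amp4/a0, np.rad2deg(phase4))  # modulation efficiency and phase
```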
\section{RESULTS}
\label{sec:result}
In this section, we present the preliminary results obtained from the experiment. Once the PMU system has cooled down below 10~K, we spin the rotor using the electromagnetic drive mechanism and measure its rotation with the encoder signal. We present the results of the angle reconstruction and the spin-down measurement, and we also show the modulated signal, including the rotational synchronous signal of the rotating AHWP.
\subsection{Angle reconstruction}
\label{subsec:angle_res}
The left panel of Figure~\ref{fig:encoder} shows the raw encoder data when the PMU spins at 1~Hz. In the right panel, the power spectral density (PSD) of the encoder Z signal shows a dominant peak at 1~Hz, the PMU rotational frequency, and its harmonics. The PSDs of the encoder A and B signals show dominant peaks at the expected frequency of 64~Hz, corresponding to the 64 slots of the encoder disk.
Figure~\ref{fig:ang_recontr} shows the period $\Delta t_i$ and the HWP angle position reconstruction $\rho_{t_i}$ from the encoder-A signal. We rotated the HWP at 1~Hz; since the encoder disk has 64 slots as mentioned above, each slot corresponds to an angle of $360/64\simeq5.6$~degrees. We estimate the uncertainty of the HWP angle position reconstruction to be $\sigma_\rho \sim 0.1$~degrees.
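A minimal sketch of this reconstruction on an idealized synthetic encoder-A waveform (the sampling rate and signal shape are assumptions): rising threshold crossings are located by linear interpolation between samples, and each slot period $\Delta t_i$ is converted to an angle via $\rho_{t_i}=360\,f_{hwp}\,\Delta t_i$:

```python
import numpy as np

# Idealized encoder-A waveform: 64 slots per revolution at f_hwp = 1 Hz.
f_hwp, n_slots, fs = 1.0, 64, 10_000.0   # fs: assumed sampling rate [Hz]
t = np.arange(0.0, 2.0, 1.0/fs)
enc = np.sin(2*np.pi*n_slots*f_hwp*t + 0.123)

# rising-edge threshold crossings, linearly interpolated between samples
thr = 0.0
i = np.where((enc[:-1] < thr) & (enc[1:] >= thr))[0]
frac = (thr - enc[i])/(enc[i+1] - enc[i])
t_cross = t[i] + frac/fs

dt = np.diff(t_cross)          # slot periods Delta t_i
rho = 360.0*f_hwp*dt           # reconstructed slot angles [deg]
print(rho.mean(), rho.std())   # mean ~ 360/64 = 5.625 deg per slot
```

On real data the scatter of $\rho$ is dominated by rotor speed variations and wobble rather than by the interpolation itself.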
\subsection{Spin-down measurement}
We conducted spin-down measurements to estimate the heat dissipation due to the rotational losses of the contactless rotation mechanism. We set the rotor at a constant rotational frequency and then let it freely decelerate. The angular deceleration $\alpha$ can be expressed as a function of the HWP rotational frequency $f_{hwp}$~\cite{hanany_2003}:
\begin{equation}
\alpha = 2 \pi \dfrac{df_{hwp}}{dt} = -\left( a_0 + 2\pi a_1 f_{hwp} \right).
\label{eq:spindown}
\end{equation}
The coefficient $a_0$ represents the contribution of the hysteresis loss to the deceleration of the rotor, while the coefficient $a_1$ determines the amount of the eddy current loss. The solution of the differential Equation~\ref{eq:spindown} has the exponential form
\begin{equation}
f_{hwp} \sim \dfrac{-a_0}{2 \pi a_1} + \dfrac{1}{2 \pi a_1} e^{-a_1 (t + c)},
\label{eq:sol_spindown}
\end{equation}
where $c$ is a constant that sets the starting point of the fitting time.
The left panel of Figure~\ref{fig:spindown} shows the spin-down measurements for a range of rotational frequencies from 0.2 to 1.1~Hz. We fit the rotational frequency data with the model of Equation~\ref{eq:sol_spindown} to extract the parameters $a_0$ and $a_1$; the performance of the fit is shown in Figure~\ref{fig:spindown} (left). The typical values are $a_0 \simeq 5.8\times10^{-3}$~1/s$^2$ and $a_1 \simeq 7.5\times10^{-4}$~1/s. It is clear that the hysteresis loss dominates.
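The fit can be cross-checked on synthetic data. Writing the deceleration law of Equation~\ref{eq:spindown} with an explicit minus sign, $2\pi\,df_{hwp}/dt=-(a_0+2\pi a_1 f_{hwp})$, makes it linear in $(a_0,a_1)$, so the coefficients can be recovered by least squares from a simulated decay; the sketch below uses the typical values quoted above:

```python
import numpy as np

# Sketch of the spin-down analysis: synthesize a decay with the quoted
# coefficients, then recover them by fitting the (linear-in-parameters)
# deceleration law  2*pi*df/dt = -(a0 + 2*pi*a1*f)  by least squares.
a0_true, a1_true = 5.8e-3, 7.5e-4
f_inf = -a0_true/(2*np.pi*a1_true)           # asymptote of the solution
f0 = 1.0                                     # release the rotor at 1 Hz

t = np.linspace(0.0, 300.0, 3001)
f = f_inf + (f0 - f_inf)*np.exp(-a1_true*t)  # exact exponential solution

dfdt = np.gradient(f, t)                     # numerical derivative
y = -2*np.pi*dfdt                            # = a0 + 2*pi*a1*f
M = np.stack([np.ones_like(f), 2*np.pi*f], axis=1)
(a0_fit, a1_fit), *_ = np.linalg.lstsq(M, y, rcond=None)
print(a0_fit, a1_fit)                        # ~5.8e-3, ~7.5e-4
```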
We assume the energy loss during the spin-down is dissipated as heat energy. The heat dissipation power $P$ is estimated as
\begin{equation}
P = \tau \omega = I \dfrac{d \omega}{d t} \omega
= I \left( a_0 + 2 \pi a_1 f_{hwp} \right) 2 \pi f_{hwp},
\end{equation}
where $\tau$ is the torque on the rotor. The moment of inertia of the rotor is assumed to be $3.6\times10^{-3}$~kg\,m$^2$, and the corresponding heat dissipation at 1~Hz is about $0.23$~mW.
The right panel of Figure~\ref{fig:spindown} shows the expected heat dissipation as a function of rotational frequency. The two contributions, the hysteresis and eddy current losses, have different frequency dependences; the hysteresis loss is higher than the eddy current loss over this rotational frequency range.
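As a quick arithmetic check, plugging the fitted coefficients and the assumed moment of inertia into the power formula reproduces the quoted dissipation at 1~Hz:

```python
import math

# Heat dissipation P = tau*omega = I*(a0 + 2*pi*a1*f)*2*pi*f at f = 1 Hz,
# with the fitted coefficients and the assumed moment of inertia.
I = 3.6e-3              # rotor moment of inertia [kg m^2] (assumed)
a0, a1 = 5.8e-3, 7.5e-4
f = 1.0                 # rotation frequency [Hz]
P = I*(a0 + 2*math.pi*a1*f)*2*math.pi*f
print(P*1e3)            # in mW: ~0.24, consistent with the quoted ~0.23 mW
```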
\subsection{Modulated signal}
Figure~\ref{fig:mod_sig} shows the modulated signal as a function of the HWP angle for one revolution. We fit the data with the model of Equation~\ref{eq:modulate} and extract the modulation efficiency $A_4/A_0 \sim 0.94$ and the phase $\phi_4 = -17.6$~degrees.
Figure~\ref{fig:lockin} shows the PSD of the optical signal from the lock-in amplifier (left) and the PSD of the Hall sensor signal (right) for three configurations: (i) levitation without rotation and no input signal, (ii) levitation without rotation with the input signal, and (iii) levitation with rotation and the input signal. We identify peaks at the modulation frequency, $4f_{hwp}$, and at other rotational synchronous frequencies. This rich data set contains various subjects to be addressed, for example identifying the origin of all the peaks, determining their widths and stability, and demonstrating the demodulated PSD with the encoder data.
\begin{comment}
The experimental setup is described in \ref{sub:exp}. We propagated a 90~GHz coherent source through the continuously rotating AHWP system. The input signal is chopped with a mechanical chopper at 200~Hz. The signal is detected with the diode detector. We use a lock-in amplifier (SR830 DSP) to obtain the modulated signal. After collecting the data, we employ the angle reconstruction method as presented in \ref{subsec:angle_res} to obtain the HWP rotation angle. Figure \ref{fig:mod_sig} shows the modulated signal with respect to the HWP angle for one revolution. The amplitude is not constant due to synchronous optical signal proportion to the HWP rotational frequency, called HWP synchronous signal (HWPSS). We fitted the data with the model Equation \ref{eq:modulate}. We first extract the modulation efficiency $A_4/A_0 \sim 0.94$ and the phase $\phi_4 = - 17.6$ degrees.
Figure \ref{fig:lockin} shows the PSD of the optical signal (left) using a lock-in amplifier and the PSD of the Hall sensor signal (right) for three configurations. %
\begin{itemize}
\item We want to measure the background noise of the readout system including the lock-in amplifier. The rotor is levitated. The input source is turned off. We do not observe any signal along with the noise and the harmonic peaks of the electrical system at high frequency (50 Hz, 100 Hz). In the PSD of the Hall sensor signal, we observed a peak at $\sim 1.5 \ Hz$ which could arise from the cryocooler. Furthermore, there are harmonic peaks at higher frequencies that can be interpreted as the natural frequency of the rotor as discussed in \cite{Sakurai_2017, shinya2022}.
\item The second configuration has the same setup as the first configuration, and turns on a 90GHz coherent source. The signal is propagated through the AHWP. In the PSD of the modulated signal, there is a harmonic peak at $ \sim 8\ Hz$, we believe its source is from the mechanical chopper because if we change the speed of the chopper to 100 Hz or 30 Hz, the same configuration does not show that peak.
\item In the third configuration, the rotor is spun to $ f_{hwp}=0.5 \ Hz$. We obtained the rotational synchronous optical signal at $ 1f_{hwp}, \ 2f_{hwp}, \ 3f_{hwp}, \ 4f_{hwp}$, ... The modulated signal is supposed to be located at the $ 4f_{hwp}$. We observe the Hall sensor signal and found harmonic peaks at same locations as presented in the modulated signal.
\end{itemize}
\end{comment}
\begin{comment}
----
In this section, we present the preliminary results obtained from the experiment. Once the PMU system operates at the cryogenic temperature, we spin the rotor and measure the encoder signal as the first test. We present the result of angle reconstruction. The spin-down measurement study allows us to capture the heat dissipation of the PMU system. We study the modulated signal as well as the rotating AHWP synchronous signal. %
\subsection{Angle reconstruction}
\label{subsec:angle_res}
Robust HWP angle reconstruction is critical for analysis, demodulating the signal, and post-data analysis. From the encoder time stream shown in Figure \ref{fig:encoder} (left), we first define a threshold. We then extract two successive data points if the first data point is below and the second data point is above the threshold. We linearly interpolate the intersection using these two data points, and the threshold to find the time $ t_{i}$. The subscript $i$ represents the encoder disk slot position in the time stream. Therefore the period of the encoder signal is calculated
\begin{equation}
\Delta t_i = t_{i+1} - t_{i}.
\label{eq:en_period}
\end{equation}
If the HWP has the rotational frequency $f_{hwp}$, the HWP angle position is reconstructed as \cite{Yuki_2020}
\begin{equation}
\rho_{t_i} = 360 f_{hwp} \Delta t_i.
\end{equation}
Reconstructed angle accuracy is an important requirement to understand the systematic effect of the leakage from $E$ modes to the observed $B$ modes. If we assume the rotational frequency is a stable constant, the error propagation of the HWP rotational angles can be estimated via the standard deviation of the period as $\sigma_ \rho = 360 f_{hwp} \sigma_{ \Delta t } $.
Figure~\ref{fig:encoder} shows the raw data of the encoder signal when the PMU spins at 1 Hz as an example. The power spectral density (PSD) of the encoder Z signal shows the dominant peak at 1 Hz and its harmonic oscillations of the PMU rotational frequency. The PSD of encoder A and B signals show the dominant peaks at the expected frequency of 64 Hz which is corresponding to the 64 slots of the encoder disk.
Figure~\ref{fig:ang_recontr} shows the period $\Delta t_i$ and the HWP angle position reconstruction $\rho_{t_i}$ from the encoder-A signal. We rotated the HWP at 1 Hz, the encoder disk has 64 slots as mentioned above, thus each slot is equivalent to an angle of $360/64 \simeq 5.625$ degrees. We estimated the uncertainty of the HWP angle position reconstruction $\sigma_\rho \sim 0.098$ degrees. The small prototype PMU is using a commercial AC motor driver which automatically maintains the rotor speed around the set point. This is a source of the angle reconstruction uncertainty. In the development of the PMU we deploy another system that can control better the AC motor driver current. There are additional sources from the SMB system that contribute to the angle reconstruction uncertainty such as the rotational wobbling of the rotor, and the inhomogeneous magnetic fields of the YBCO ring \cite{shinya2020}. The data are taken typically for a period of 14 minutes, but we show only 10 seconds for the visualization purpose.
\subsection{Spin-down measurement}
We conducted spin-down measurements to estimate the heat dissipation due to the rotational loss. Spin-down measurements help to evaluate the thermal characteristic of the PMU. At the cryogenic temperature and the vacuum state, there is no air friction. The hysteresis loss in superconductors and the eddy current loss from the magnet holder conductors are the major causes of the deceleration of the rotor. We set the speed of the rotor at a constant frequency and then let it freely decelerate. The angular deceleration $\omega$ can be expressed as a function of the HWP rotational frequency $f_{hwp}$. \cite{hanany_2003}
\begin{equation}
\omega = 2 \pi \dfrac{df_{hwp}}{dt} = a_0 + 2\pi a_1 f_{hwp}.
\label{eq:spindown}
\end{equation}
The coefficient $a_0$ represents the contribution of the hysteresis loss due to the deceleration of the rotor. The coefficient $a_1$ determines the amount of the eddy current loss. The solution of the differential equation \ref{eq:spindown} has an exponential form
\begin{equation}
f_{hwp} \sim \dfrac{-a_0}{2 \pi a_1} + \dfrac{1}{2 \pi a_1} e^{-a_1 (t + c)},
\label{eq:sol_spindown}
\end{equation}
where $ c$ is a constant represents the starting point of the fitting time.
We carried out the spin-down measurements for a range of rotational frequencies from 0.2 Hz to 1.1 Hz. Using the encoder signal Z, periods are calculated as Equation~\ref{eq:en_period}. The decelerated frequency is simply the inverse of the period. We then fit the data with the model Equation~\ref{eq:sol_spindown} to extract the parameters $a_0$ and $a_1$. The performance of the fit is shown in Figure~\ref{fig:spindown} (left). Table~\ref{tab:spindown} lists the values of the fitted model.
\begin{table}[h]
\centering
\caption{The estimation of heat dissipation at different rotational frequency with spin-down measurements.}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Spin [Hz] & 0.2 & 0.3 & 0.4 & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 1.0 & 1.1 \\
\hline
$a_0$ $[ \times 10^{-5}]$ &546.84 & 569.11 & 601.83 & 565.96 & 592.49 & 595.12 & 583.72 & 577.39 & 576.37 & 579.13 \\
\hline
$a_1$ $[ \times 10^{-4}]$ & 9.61 & 7.50 & 5.53 & 7.70 & 6.85 & 6.91 & 7.52 & 7.81 & 7.31 & 7.25 \\
\hline
$P$ [mW] & 0.03 & 0.05 & 0.07 & 0.09 & 0.12 & 0.14 & 0.17 & 0.21 & 0.23 & 0.27 \\
\hline
\end{tabular}
\label{tab:spindown}
\end{table}
We assume the energy loss during the spin-down is dissipated as heat energy. The heat dissipation power $P$ is estimated as
\begin{equation}
P = \tau \omega = I \left| \dfrac{d \omega}{d t} \right| \omega \\
= I \left( a_0 + 2 \pi a_1 f_{hwp} \right) 2 \pi f_{hwp},
\end{equation}
where $\tau$ is the torque on the rotor and $I = \frac{1}{2} m R^2$ is the moment of inertia of the rotor, with $m$ and $R$ its mass and radius, respectively; $\omega = 2\pi f_{hwp}$ is the angular speed. Using the fitted parameters $a_0$ and $a_1$ at 1~Hz, the heat dissipation is found to be about 0.23~mW. This is below the heat dissipation of 1~mW expected from the thermal analysis study~\cite{Iida_2017}. The HWP operation requires maintaining the temperature below 10~K. Because the rotor is levitated by the SMB without mechanical contact at cryogenic temperature, it exchanges heat only by radiation, and very poorly; therefore, even a small dissipated power can heat the HWP. As shown in Figure~\ref{fig:spindown} (right) and Table~\ref{tab:spindown}, the losses increase with the rotational frequency. In conclusion, both the hysteresis loss and the eddy current loss depend on the rotational frequency. A method to study the hysteresis loss by measuring the inhomogeneity of the magnetic field is discussed in \cite{HULL19961, hanany_2003, Sakurai_2017b}. Since the prototype PMU and the surrounding structure are made of aluminium, replacing them with non-metallic materials would reduce the eddy current loss. We are also investigating the idea of blackening the rotor structure to improve radiation cooling.
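The heat-dissipation formula can be evaluated directly. The moment of inertia below is a placeholder (the rotor mass and radius are not quoted in this text), so the absolute power is illustrative; the frequency dependence is the point of interest.

```python
import numpy as np

def dissipated_power(f_hwp, a0, a1, inertia):
    """P = I * (a0 + 2*pi*a1*f_hwp) * 2*pi*f_hwp."""
    return inertia * (a0 + 2 * np.pi * a1 * f_hwp) * 2 * np.pi * f_hwp

a0, a1 = 576.37e-5, 7.31e-4     # fitted values at 1.0 Hz from the table
inertia = 0.5 * 0.2 * 0.05**2   # assumed m = 0.2 kg, R = 5 cm (placeholder)
p_1hz = dissipated_power(1.0, a0, a1, inertia)
```

Both loss terms grow with `f_hwp`, reproducing the monotonic trend of the table's bottom row.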
\subsection{Modulated signal}
The experimental setup is described in Section~\ref{sub:exp}. We propagated a 90~GHz coherent source through the continuously rotating AHWP system. The input signal is chopped with a mechanical chopper at 200~Hz and detected with the diode detector. We use a lock-in amplifier (SR830 DSP) to obtain the modulated signal. After collecting the data, we employ the angle reconstruction method presented in Section~\ref{subsec:angle_res} to obtain the HWP rotation angle. Figure~\ref{fig:mod_sig} shows the modulated signal with respect to the HWP angle for one revolution. The amplitude is not constant because of an optical signal synchronous with the HWP rotational frequency, called the HWP synchronous signal (HWPSS). We fitted the data with the model of Equation~\ref{eq:modulate} and extract the modulation efficiency $A_4/A_0 \sim 0.94$ and the phase $\phi_4 = -17.6$ degrees.
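The harmonic fit can be reproduced with a simple linear least-squares sketch. The model here is a generic $A_0 + A_2\cos(2\rho - 2\phi_2) + A_4\cos(4\rho - 4\phi_4)$ decomposition (the exact form of Equation~\ref{eq:modulate} is assumed); the amplitudes, HWPSS leakage, and noise are illustrative.

```python
import numpy as np

# Synthetic modulated signal versus HWP angle rho.
rng = np.random.default_rng(1)
rho = np.linspace(0, 2 * np.pi, 720, endpoint=False)
A0, A2, A4 = 1.0, 0.05, 0.94
phi2, phi4 = np.deg2rad(5.0), np.deg2rad(-17.6)
sig = (A0 + A2 * np.cos(2 * rho - 2 * phi2)
          + A4 * np.cos(4 * rho - 4 * phi4)
          + rng.normal(0, 0.01, rho.size))

# Linear least squares in the basis {1, cos2r, sin2r, cos4r, sin4r}.
B = np.column_stack([np.ones_like(rho),
                     np.cos(2 * rho), np.sin(2 * rho),
                     np.cos(4 * rho), np.sin(4 * rho)])
coef, *_ = np.linalg.lstsq(B, sig, rcond=None)
a4, b4 = coef[3], coef[4]
A4_fit = np.hypot(a4, b4)                  # amplitude of the 4-theta term
phi4_fit = np.arctan2(b4, a4) / 4          # from cos(4r - 4*phi4) expansion
mod_eff = A4_fit / coef[0]                 # A4/A0
```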
Figure \ref{fig:lockin} shows the PSD of the optical signal (left) using a lock-in amplifier and the PSD of the Hall sensor signal (right) for three configurations. %
\begin{itemize}
\item First, we measure the background noise of the readout system, including the lock-in amplifier: the rotor is levitated and the input source is turned off. Apart from the noise floor, we observe only the harmonic peaks of the electrical system at high frequency (50~Hz, 100~Hz). In the PSD of the Hall sensor signal, we observe a peak at $\sim 1.5$~Hz that could arise from the cryocooler, as well as harmonic peaks at higher frequencies that can be interpreted as natural frequencies of the rotor, as discussed in \cite{Sakurai_2017, shinya2022}.
\item The second configuration has the same setup as the first, with the 90~GHz coherent source turned on and propagated through the AHWP. The PSD of the modulated signal shows a harmonic peak at $\sim 8$~Hz. We believe its source is the mechanical chopper, because the peak disappears in the same configuration when the chopper speed is changed to 100~Hz or 30~Hz.
\item In the third configuration, the rotor is spun at $f_{hwp} = 0.5$~Hz. We obtain the rotational synchronous optical signal at $1f_{hwp}, 2f_{hwp}, 3f_{hwp}, 4f_{hwp}, \ldots$, with the modulated signal expected at $4f_{hwp}$. The PSD of the Hall sensor signal shows harmonic peaks at the same locations as in the modulated signal.
\end{itemize}
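The identification of rotation-synchronous harmonics in the PSD can be sketched as follows; the amplitudes, sampling rate, duration, and noise level are illustrative assumptions.

```python
import numpy as np

# Simulated detector time stream at f_hwp = 0.5 Hz with lines at
# 1, 2, 3, 4 * f_hwp; the 4*f_hwp modulation dominates.
fs, f_hwp, T = 125.0, 0.5, 400.0
t = np.arange(0, T, 1 / fs)
x = sum(a * np.cos(2 * np.pi * n * f_hwp * t)
        for n, a in [(1, 0.2), (2, 0.3), (3, 0.1), (4, 1.0)])
x += np.random.default_rng(2).normal(0, 0.05, t.size)

psd = np.abs(np.fft.rfft(x))**2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(psd[1:]) + 1]   # strongest line, skipping the DC bin
```

The strongest PSD peak lands at $4f_{hwp}$, as in the third configuration above.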
\end{comment}
\section{DISCUSSIONS}
\label{sec:discus}
We have presented the current status of the prototype PMU system and the evaluation of the millimeter-wave polarimetric performance. We are aware that this system is much smaller than the prospective flight size. Thus, the results may not fully represent the effects we will encounter with the flight size. However, we plan to explore broad parameter spaces using this setup with the following motivations.
Our measurements in this paper are limited to a single electromagnetic frequency at 90~GHz. We can expand this to a broader frequency range and study the spectroscopic properties of the AHWP. We also plan to study the coupling between the AHWP and the neighboring geometry, e.g. the baffle and the aperture. Any rotational synchronous signal may originate from the AHWP itself or from the rotating parts that hold the AHWP. In addition to relying on a standard optics simulation, it is beneficial to study this effect experimentally with fast turn-around design modifications. It is also of great interest to experimentally investigate the stability of the rotational synchronous signal.
This setup also helps prepare for combining with a TES detector and its readout system. When the PMU operates with the TES and its readout electronics, there may be many additional effects, e.g. magnetic and EMI interference and the temperature of the AHWP itself, that we cannot investigate in this setup\cite{tommaso2020, shinya2022}. Therefore, the current setup is helpful in the hardware preparation and should also help disentangle some of the effects in future data taken with the TES and its readout system.
Last but not least, the small setup is easy to handle in terms of hardware preparation time and cryostat run time. Any measurement from this setup provides a recipe for the measurement methods and their sequence for the flight-scale model.
\section{CONCLUSIONS}
\label{sec:conclu}
We have demonstrated a 1/10-scale prototype of the PMU for the \LB\ LFT. The prototype PMU consists of a five-layer AHWP with an anti-reflection sub-wavelength structure on the first and fifth layers. The rotational mechanism contains the SMB, the cryogenic holder mechanism, and the encoder. We cooled the prototype PMU to 10~K with a 4-K GM cryocooler, successfully levitated the rotor, and rotated the AHWP at several rotational frequencies. We carried out spin-down measurements to estimate the heat dissipation due to the hysteresis loss and the eddy current loss. A coherent millimeter-wave source at 90~GHz is provided from outside the cryostat, and the continuously rotating AHWP modulates the polarized signal. With the setup presented in this paper, we are ready for more extensive tests in preparation for the forthcoming full-scale model.
\begin{comment}
\appendix
\section{Half-Wave Plate formalism}
\label{sec:appendA}
We show here the formalism of the output signal through an achromatic half-wave plate (AHWP). Similar derivations are well described in \cite{Tomo_thesis_2006, Komatsu_2020, Komatsu_2021}. The measurement of a single detector at a given frequency, $S_{out}(\nu)$, through an N-layer continuously rotating AHWP is
\begin{equation}
S_{out} (\nu) \equiv
\begin{pmatrix}
I_{out} (\nu) \\
Q_{out} (\nu) \\
U_{out} (\nu) \\
V_{out} (\nu)
\end{pmatrix}
=
G R(-\rho ) \mathcal{M}_{hwp} (\nu) R(\rho)
\begin{pmatrix}
I_{in} (\nu) \\
Q_{in} (\nu) \\
U_{in} (\nu) \\
V_{in} (\nu)
\end{pmatrix}.
\end{equation}
where $I, \ Q, \ U, \ V$ are the Stokes parameters, and the incident signal is
\begin{equation}
S_{in} (\nu) \equiv
\begin{pmatrix}
I_{in} (\nu) \\
Q_{in} (\nu) \\
U_{in} (\nu) \\
V_{in} (\nu)
\end{pmatrix}.
\end{equation}
$G$ is the response of the polarization-sensitive detector, which in general may be non-linear and differ between detectors. In a simple case, we assume an ideal detector
\begin{equation}
G = \frac{1}{2}
\begin{pmatrix}
1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix}.
\end{equation}
The rotation matrix is
\begin{equation}
R(\rho) =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \cos (2 \omega_{hwp} t ) & -\sin (2\omega_{hwp} t) & 0\\
0 & \sin (2 \omega_{hwp} t) & \cos (2 \omega_{hwp} t) & 0\\
0 & 0 & 0 & 1
\end{pmatrix}.
\end{equation}
The AHWP rotation angle at a given time is $\rho = \omega_{hwp} t$, where $\omega_{hwp}$ is the angular frequency. The N-layer wave plate stack with optical axis angles $\chi_i$ can be written using a Mueller matrix
\begin{equation}
\mathcal{M}(\nu) = \prod_i^N R(-\chi_i) \gamma(\nu) R(\chi_i)
=
\begin{pmatrix}
M_{II}(\nu) & M_{IQ}(\nu) & M_{IU}(\nu) & M_{IV}(\nu) \\
M_{QI}(\nu) & M_{QQ}(\nu) & M_{QU}(\nu) & M_{QV}(\nu) \\
M_{UI}(\nu) & M_{UQ}(\nu) & M_{UU}(\nu) & M_{UV}(\nu) \\
M_{VI}(\nu) & M_{VQ}(\nu) & M_{VU}(\nu) & M_{VV}(\nu)
\end{pmatrix}.
\end{equation}
Without any reflections, the Mueller matrix of a birefringent material is given by
\begin{equation}
\gamma(\nu) =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \cos \delta(\nu) & -\sin \delta(\nu)\\
0 & 0 & \sin \delta(\nu) & \cos \delta(\nu)
\end{pmatrix},
\end{equation}
and the retardance of a single plate is calculated from the thickness $d$ and the refractive indices $n_e$, $n_o$:
\begin{equation}
\delta (\nu) = 2\pi \nu \frac{d |n_e-n_o|}{c}.
\end{equation}
Using the designed parameters of the optical axis angles, thicknesses, and refractive indices listed in Table~\ref{tab:HWP}, over the 34--161~GHz frequency range of the \LB\ LFT, we simulate the Mueller matrix elements of the 5-layer HWP, as shown in Figure~\ref{fig:mueller}.
We assume that the CMB has no circular polarization, $V_{in}(\nu) = 0$. Under this assumption, the detector intensity at a given time behind the AHWP can be expressed as \cite{Komatsu_2021}
\begin{equation}
\begin{split}
I_{ out}(\nu,t)=& D_{ 0I}(\nu)I_{ in}(\nu)+D_{ 0Q}(\nu)Q_{ in}(\nu)+D_{ 0U}(\nu)U_{ in}(\nu) \\
& +D_{ 2I}(\nu)I_{ in}(\nu)\cos(2\omega_{ hwp} t-2\phi_{ 0}(\nu)) \\
& +D_{ 2}(\nu)\sqrt{Q_{ in}(\nu)^{2}+U_{ in}(\nu)^{2}}\cos(2\omega_{ hwp} t-2\phi_{2}(\nu)) \\
& +D_{ 4}(\nu)\sqrt{Q_{ in}(\nu)^{2}+U_{ in}(\nu)^{2}}\cos(4\omega_{ hwp} t-4\phi_{ 4}(\nu)), \\
\label{eq:Iout}
\end{split}
\end{equation}
where the relevant combinations of the Mueller matrix elements are
\begin{equation}
\begin{split}
D_{ 0I}(\nu)&= \frac{1}{2}M_{ II}(\nu), \\
D_{ 0Q}(\nu)&= \frac{1}{4}(M_{ QQ}(\nu)+M_{ UU}(\nu)), \\
D_{ 0U}(\nu)&= \frac{1}{4}(M_{ QU}(\nu)-M_{ UQ}(\nu)), \\
D_{ 2I}(\nu)&= \frac{1}{2}\sqrt{M_{ UI}(\nu)^{2}+M_{ QI}(\nu)^{2}}, \\
\phi_{ 0}(\nu)&= \frac{1}{2}\arctan\frac{M_{ UI}(\nu)}{M_{ QI}(\nu)}, \\
D_{ 2}(\nu)&= \frac{1}{2}\sqrt{M_{ IQ}(\nu)^{2}+M_{ IU}(\nu)^{2}}, \\
\phi_{ 2}(\nu)&= \frac{1}{2}\arctan\frac{M_{ IU}(\nu)}{M_{ IQ}(\nu)} +\frac{1}{2}\arctan\frac{U_{ in}(\nu)}{Q_{ in}(\nu)}, \\
D_{ 4}(\nu)&= \frac{1}{4}\sqrt{(M_{ QQ}(\nu)-M_{ UU}(\nu))^{2}+(M_{ QU}(\nu)+M_{ UQ}(\nu))^{2}},\\
\phi_{ 4}(\nu)&= \frac{1}{4}\arctan\frac{M_{ QU}(\nu)+M_{ UQ}(\nu)}{M_{ QQ}(\nu)-M_{ UU}(\nu)}+\frac{1}{4}\arctan\frac{U_{ in}(\nu)}{Q_{ in}(\nu)}. \\
\end{split}
\end{equation}
The last term in Equation~\ref{eq:Iout} contains the $4\omega_{hwp}$ angular frequency and the phase $\phi_4$; it is the modulated signal. $2D_4$ is essentially the polarization efficiency. We define the modulation efficiency as
\begin{equation}
\epsilon(\nu)=\frac{D_{ 4}(\nu)\sqrt{Q_{ in}(\nu)^{2}+U_{ in}(\nu)^{2}}}{D_{ 0I}(\nu)I_{ in}(\nu)+D_{ 0Q}(\nu)Q_{ in}(\nu)+D_{ 0U}(\nu)U_{ in}(\nu)}.
\label{eq:modeff}
\end{equation}
If we simply assume the incident signal $S_{in} = (1, 0, 1, 0)$, the modulation efficiency and the phase of the 5-layer AHWP with the calculated Mueller matrix elements are shown in Figure~\ref{fig:pol_eff}.
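A minimal numerical sketch of this formalism: build the stack Mueller matrix and evaluate the modulation efficiency of Equation~\ref{eq:modeff} for $S_{in} = (1, 0, 1, 0)$. The thickness and birefringence below are placeholder values, and a single plate at its half-wave frequency serves as a sanity check (the efficiency should be unity).

```python
import numpy as np

C = 299792458.0  # speed of light [m/s]

def rot(chi):
    """Mueller rotation matrix R(chi)."""
    c, s = np.cos(2 * chi), np.sin(2 * chi)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def retarder(delta):
    """Mueller matrix gamma(delta) of a birefringent plate."""
    c, s = np.cos(delta), np.sin(delta)
    return np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, c, -s], [0, 0, s, c]])

def ahwp_mueller(nu, chis, d, dn):
    """M(nu) = prod_i R(-chi_i) gamma(nu) R(chi_i) for an N-layer stack."""
    delta = 2 * np.pi * nu * d * dn / C
    M = np.eye(4)
    for chi in chis:
        M = rot(-chi) @ retarder(delta) @ rot(chi) @ M
    return M

def modulation_efficiency(M, S_in):
    I, Q, U, _ = S_in
    D0I = 0.5 * M[0, 0]
    D0Q = 0.25 * (M[1, 1] + M[2, 2])
    D0U = 0.25 * (M[1, 2] - M[2, 1])
    D4 = 0.25 * np.hypot(M[1, 1] - M[2, 2], M[1, 2] + M[2, 1])
    return D4 * np.hypot(Q, U) / (D0I * I + D0Q * Q + D0U * U)

# Single ideal plate at its half-wave frequency (d, dn are placeholders).
d, dn = 4.9e-3, 0.317
nu_hw = C / (2 * d * dn)              # retardance delta = pi
eff = modulation_efficiency(ahwp_mueller(nu_hw, [0.0], d, dn), (1, 0, 1, 0))
```

Replacing `[0.0]` with the five designed axis angles and sweeping `nu` reproduces the broadband behaviour shown in Figure~\ref{fig:pol_eff}.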
The polarization efficiency and the phase can be used to optimize the optical axes of the wave plates, as studied in \cite{Komatsu_2021, Komatsu_thesis}. Equation~\ref{eq:Iout} can be rearranged into the form of Equation~\ref{eq:modulate} in order to fit the modulated signal of this experimental measurement.
\end{comment}
\acknowledgments %
We would like to thank Fabio Columbro and Peter Hargrave for fruitful comments. We thank the World Premier International Research Center Initiative (WPI), MEXT, Japan for support through Kavli IPMU. This work was also supported by JSPS KAKENHI Grant Numbers 18KK0083, and JSPS Core-to-Core Program number JPJSCCA20200003, A. Advanced Research Networks. \textit{LiteBIRD} (phase A) activities are supported by the following funding sources: ISAS/JAXA, MEXT, JSPS, KEK (Japan); CSA (Canada); CNES, CNRS, CEA (France); DFG (Germany); ASI, INFN, INAF (Italy); RCN (Norway); AEI (Spain); SNSA, SRC (Sweden); NASA, DOE (USA).
\bibliography{report} %
\bibliographystyle{spiebib} %
|
Title:
Unidentified aerial phenomena I. Observations of events |
Abstract: NASA commissioned a research team to study Unidentified Aerial Phenomena
(UAP), observations of events that cannot scientifically be identified as known
natural phenomena. The Main Astronomical Observatory of NAS of Ukraine conducts
an independent study of UAP also. For UAP observations, we used two meteor
stations. Observations were performed with colour video cameras in the daytime
sky. We have developed a special observation technique, for detecting and
evaluating UAP characteristics. According to our data, there are two types of
UAP, which we conventionally call: (1) Cosmics, and (2) Phantoms. We note that
Cosmics are luminous objects, brighter than the background of the sky. Phantoms
are dark objects, with contrast from several to about 50 per cent. We observe a
significant number of objects whose nature is not clear. Flights of single,
group and squadrons of the ships were detected, moving at speeds from 3 to 15
degrees per second. Some bright objects exhibit regular brightness variability
in the range of 10 - 20 Hz. We use colourimetry methods to determine of
distance to objects and evaluate their colour characteristics. Objects RGB
colours of the Adobe colour system had converted to the Johnson BVR
astronomical colour system using the colour corrections. Phantom shows the
colour characteristics inherent in an object with zero albedos. It is a
completely black body that does not emit and absorbs all the radiation falling
on it. We see an object because it shields radiation due to Rayleigh
scattering. An object contrast makes it possible to estimate the distance using
colourimetric methods. Phantoms are observed in the troposphere at distances up
to 10 - 12 km. We estimate their size from 3 to 12 meters and speeds up to 15
km/s.
| https://export.arxiv.org/pdf/2208.11215 |
\fontsize{11}{11}\selectfont %
\title{Unidentified aerial phenomena I. Observations of events}
\author{B.E.~Zhilyaev, V.\,N.~Petukhov, V.\,M.~Reshetnyk}
\date{\vspace*{-6ex}}
\begin{center} {\small \textit{Main Astronomical Observatory, NAS of Ukraine, Zabalotnoho 27, 03680, Kyiv, Ukraine}}\\
{\tt [email protected]}
\end{center}
\section*{\sc introduction}
\indent \indent The Pentagon is interested in UFOs and created the All-domain Anomaly Resolution Office (AARO). The AARO's mission will be to synchronise the efforts of the Department of Defense and other U.S. federal departments and agencies to detect, identify, and attribute objects in the airspace of military interest associated with threats to air safety and national security. This includes unidentified anomalous, air, space, underwater and trans-medium objects.%
NASA will conduct an independent study of unidentified phenomena in the atmosphere. NASA has commissioned a research team to study Unidentified Aerial Phenomena (UAP), that is, observations of events that cannot scientifically be identified as known natural phenomena. The agency's independent research group will be led by astrophysicist David Spergel, formerly chairman of the Department of Astrophysics at Princeton University. Daniel Evans, Research Officer at NASA's Science Mission Directorate, will be the NASA official responsible for organizing the study.
The Main Astronomical Observatory of NAS of Ukraine conducts an independent study of unidentified phenomena in the atmosphere. Our main astronomical work is daytime observations of meteors and space intrusions. Unidentified anomalous, air, and space objects are deeply concealed phenomena. The main feature of the UAP is its extremely high speed.
Helmholtz established that the eye does not register phenomena lasting less than one-tenth of a second; it takes four-tenths of a second to recognize an event. Ordinary photo and video recordings will also not capture the UAP. To detect UAP, one needs to fine-tune the equipment: shutter speed, frame rate, and dynamic range (14--16 stops).
According to our data, there are two types of UAP, which we conventionally call: (1) Cosmics (COS), and (2) Phantoms (PHA). Cosmics are luminous objects, brighter than the background of the sky; we name them after birds (swift, falcon, eagle). Phantoms are dark objects, with a contrast, according to our data, from several per cent to about 50\%. Both types of UAP exhibit extremely high speeds, and their detection is a difficult experimental problem. They are a by-product of our main astronomical work, daytime observations of meteors and space intrusions.
\section*{\sc OBSERVATIONS AND DATA PROCESSING}%
For UAP observations, we used two meteor stations installed in Kyiv and in the Vinarivka village in the south of the Kyiv region. The distance between stations is 120 km. The stations are equipped with ASI 178 MC and ASI 294 Pro CCD cameras, and Computar lenses with a focal length of 6 mm. The SharpCap 4.0 program was used for data recording.
Observations of objects were carried out in the daytime sky. The brightness of the sky, depending on the state of the atmosphere and the distance from the Sun, ranges from minus 3 to minus 5 stellar magnitudes per square arc minute.
We have developed a special observation technique, taking into account the high speeds of the observed objects. The exposure time was chosen so that the image of the object did not shift significantly during exposure. The frame rate was chosen to take into account the speed of the object and the field of view of the camera. In practice, the exposure time was less than 1 ms, and the frame rate was no less than 50 Hz.
Frames were recorded in the .ser format with 14 and 16 bits. Violating these conditions leads to objects not being registered during the observations.
To determine the coordinates of objects, the cameras were installed in the direction of the zenith or the Moon.
\subsection*{\sc Results}
Fig.~1 shows two consecutive frames of ordinary "swift" objects, shot at a rate of at least 50 frames per second.
The bright objects in Fig.~1 show a constant brightness. Fig.~5 shows an image of an object about 10 pixels (about 3 arc minutes) in size, which indicates the finite dimensions of the object, with a contrast of about 20\%. Fig.~6 shows the color diagram of the object in the RGB filters of the Adobe color system. Object colors can be converted to the Johnson BVR astronomical color system using the color corrections published in \cite{Zhilyaev}:
\begin{equation}\label{}
(B - V)_{J} = (B - G)_{Ad} +0.60 ;\,\, (V - R)_{J} = (G - R)_{Ad} + 0.40
\end{equation}
This makes it possible to compare the colors of the object with the color of reflected solar radiation. The colors of the Sun's radiation are (B - V)$_{J}$ = +0.65 and (V - R)$_{J}$ = +0.52. The colors of the object's radiation, (B - V)$_{J}$ = +2.86 and (V - R)$_{J}$ = +2.88, significantly exceed those of the Sun.
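The color conversion of Equation (1) is a pair of fixed offsets and can be sketched directly; the Adobe indices below are chosen to reproduce the object colors quoted above.

```python
def adobe_to_johnson(b_g_adobe, g_r_adobe):
    """Eq. (1): (B-V)_J = (B-G)_Ad + 0.60,  (V-R)_J = (G-R)_Ad + 0.40."""
    return b_g_adobe + 0.60, g_r_adobe + 0.40

# Solar reference colors quoted in the text, for comparison.
SUN_BV, SUN_VR = 0.65, 0.52
bv, vr = adobe_to_johnson(2.26, 2.48)   # illustrative Adobe indices
excess = (bv - SUN_BV, vr - SUN_VR)     # color excess over reflected sunlight
```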
Fig.~2 shows a group (flotilla) of luminous objects of the "swift" class of different brightness. The objects move at different speeds in different directions. Fig.~3 shows the transversal velocities of the objects, represented by segments of straight lines obtained from the positions of the objects in two consecutive images. Fig.~4 shows that the "speeds" correlate with the brightness: the greater the brightness, the greater the speed.
\subsection*{\sc Determination of distance to an object by colorimetry methods}
The colors of the object and of the sky background make it possible to determine the distance using colorimetric methods. The necessary conditions are (1) Rayleigh scattering as the main source of atmospheric radiation, and (2) an estimated value of the object's albedo. The object partially screens the diffuse sky background and thus becomes visible.
The scattered radiation intensity observed at sea level has the form:
\begin{equation}\label{}
I=I_{0}e^{-\sigma s}
\end{equation}
Here $s$ is the distance to the object, $\sigma$ is the Rayleigh scattering coefficient, and $I_{0}$ is the intensity observed at sea level. The linear Rayleigh scattering coefficient $\sigma$ has the form \cite{Allen}:
\begin{equation}\label{}
\sigma = 3\cdot 10^{18}\cdot \delta \cdot (n-1)^{2}/ \lambda ^{4}/ N
\end{equation}
Here $n$ is the refractive index of air, $\lambda$ is the wavelength of light in microns, $\delta$ is the depolarization coefficient, equal to 0.97 for the Earth's atmosphere, and $N$ is the number of molecules in 1~cm$^{3}$ (the Loschmidt number).
Expression (2) can be represented in stellar magnitudes as:
\begin{equation}\label{}
\Delta m=1.086\cdot \sigma\cdot s
\end{equation}
Formally, the magnitude difference $\Delta m$ can be considered as the decrease in intensity due to Rayleigh scattering screened by the object against the sky. The value of $\Delta m$ per air mass for a clean atmosphere is $\Delta m_{V} \approx 0.20$ magnitudes in the visual region (V) and $\Delta m_{B} \approx 0.34$ magnitudes in the blue region (B) \cite{Allen}.
Thus, by measuring the difference between the stellar magnitudes of an object and the sky background, one can find the magnitude of the air mass before the object.
We use the approximation of a homogeneous atmosphere for the calculations. This approximation assumes that the entire atmosphere is concentrated in the troposphere (8--10 km) and has a constant density. In the homogeneous-atmosphere approximation, simple algebra, without integration, gives the path length $s$, i.e., the distance to the object.
In the real atmosphere, the number of scattering centers (the Loschmidt number) at a height of 10 km decreases by a factor of 2.5. When calculating the Rayleigh scattering coefficient in the homogeneous-atmosphere approximation in the visual region (V), this introduces an error of about 6\% ($\sigma = 0.251$ instead of 0.223 \cite{Allen}).
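One simplified numerical reading of Equations (2)-(4): convert the object contrast to a magnitude deficit, derive a per-km Rayleigh coefficient from the per-air-mass extinction over an assumed 8-km homogeneous atmosphere, and invert Equation (4). This sketch does not necessarily reproduce the distances quoted below, which follow the authors' full procedure.

```python
import numpy as np

def rayleigh_sigma_per_km(dm_per_airmass=0.20, h_km=8.0):
    """Linear Rayleigh coefficient [1/km] from the per-air-mass extinction,
    assuming a homogeneous atmosphere of height h_km (Eq. 4 inverted)."""
    return dm_per_airmass / (1.086 * h_km)

def distance_from_contrast(contrast, sigma_per_km):
    """Distance s such that Delta m = 1.086*sigma*s, with the magnitude
    deficit of the screened sky taken as -2.5*log10(1 - contrast).
    A simplified reading of the colorimetric method, not the full procedure."""
    dm = -2.5 * np.log10(1.0 - contrast)
    return dm / (1.086 * sigma_per_km)

sigma = rayleigh_sigma_per_km()
s = distance_from_contrast(0.4, sigma)   # object with 40% contrast
```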
Figures 7 and 8 show the image and color charts of a phantom object. The object is present in only one frame, which, taking into account the angular dimensions of the frame, allows us to determine its speed of at least 52 degrees per second.
Fig.~8 shows the color characteristics inherent in an object with zero albedo. This means that the object is a completely black body that does not emit and absorbs all the radiation falling on it. We see the object only because it screens radiation in the atmosphere due to Rayleigh scattering. An object contrast of about 0.4 makes it possible to estimate the distance to the object as about 5 km. The estimate of the angular velocity given above then implies a linear velocity of not less than 7.2 km/s.
Fig.~9 shows another phantom object shot against the background of the Moon at a rate of at least 50 frames per second. Fig.~10 shows the color diagram of the object and the Moon in the RGB filters of the Adobe color system. Fig.~11 shows an object contrast of about 0.3, which makes it possible to estimate the distance to the object as about 3.5 km. Knowing the distance, we determine the size and speed: the track width is 175 arc seconds, the size is 3.0 meters, the track length is 14 meters, the exposure time is 1 ms, and the speed is 14 km/s.
The color chart in Fig.~10 allows us to evaluate the color characteristics of the Moon and check the calibration of our cameras. The Moon has a color relative to the sky background of B - G = -2.5 log (1.7 / 2.7) = 0.5. We take into account the color correction to the Johnson B - V system according to [x], due to Rayleigh scattering, equal to 0.14 magnitude. We then get the estimate B - V = 0.50 + 0.60 - 0.14 = 0.96 for the Moon. The actual color of the Moon is B - V = 0.91 according to \cite{Allen} and differs from our estimate by 0.05 magnitudes, within the photometric error.
In Figure 9 we can see a local feature (a water tower). The color diagram of the tower in Fig.~12 gives a distance estimate of 0 $\pm$ 1 km; the actual distance is about 300 meters. Thus, colorimetric measurements confirm our estimates.
Fig. 13 shows a composite image with the phantom object and bright swifts. Objects move in the same direction, at roughly the same speed.
Fig. 14 shows an object contrast of about 0.55. This makes it possible to estimate the distance to the object at about 6.0 km. Knowing the distance, we determine the size and speed. The width of the object is 400 arc seconds, the size is about 12.0 meters.
The object crosses the field of view of 3 degrees in 0.18 seconds with a linear speed of about 15 km/s.
The colors of the swifts' radiation in Fig.~14 differ significantly from the color presented in Fig.~6.
Fig.~15 shows an image with two bright swifts of variable intensity. The objects cross the 3-degree frame, recorded at 50 frames per second with 1 ms exposure, in 0.35 sec, demonstrating a speed of 8 degrees per second.
Fig.~16 shows the light curves of the two bright swifts with a sample time of 20 ms. One swift demonstrates regular intensity variations at about 25 Hz; the other shows variations at about 10 Hz.
Figures 17 and 18 show UAPs over Kyiv. The objects cross the 2.2-degree frame in 0.40 sec, recorded at 50 frames per second with 1 ms exposure, demonstrating a speed of 5.5 degrees per second.
Fig.~17 shows a composite image with a bright eagle and a swift, obtained by dividing two consecutive frames. We can see that the objects are moving at different speeds.
Fig.~18 shows an object we call an "eagle". The object is about 12.5 arc minutes in size, which indicates its finite dimensions; its contrast is about 28\%.
If we assume that the "eagle" is at a distance of 1 km, its size is about 6 meters; if at a distance of 4 km, then 25 meters. In the latter case, its speed would be about 380 m/s (about Mach 1).
Fig.~19 shows a composite image with a bright falcon, a swift, and a high-speed phantom, presenting the broad range of UAPs. We see them everywhere and observe a significant number of objects whose nature is not clear.
Fig.~20 demonstrates a phantom crossing the image of a bright falcon. It is easy to see that the phantom is indeed an opaque body that screens the radiation of the bright object.
Fig.~21 demonstrates two-site observations of UAPs. It is necessary to synchronize the two cameras with an accuracy of one millisecond and to shoot at a rate of at least 50 frames per second. With a field of view of 5 degrees and a baseline of 120 km, objects at altitudes above 1000 km can be detected.
An object against the background of the Moon was detected at a zenith angle of 56 degrees, and a parallax of about 5 degrees was evaluated. This allows us to evaluate a distance of 1524 km, an altitude of 1174 km, and a linear speed of 282 km/s.
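A first-order triangulation sketch (small-angle approximation): the slant range is roughly the baseline divided by the parallax angle. The quoted 1524 km and 1174 km presumably account for the full two-station geometry, so this simple estimate differs somewhat.

```python
import numpy as np

def parallax_distance_km(baseline_km, parallax_deg):
    """First-order triangulation: range ~ baseline / parallax (small angle)."""
    return baseline_km / np.deg2rad(parallax_deg)

d = parallax_distance_km(120.0, 5.0)       # slant range, roughly 1400 km
altitude = d * np.cos(np.deg2rad(56.0))    # projected to a zenith angle of 56 deg
```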
The coincidence of the two-point light curves in Fig.~22 means that we observe the same object. Fig.~23 shows the light curve at a sampling rate of 125 Hz. The object flashes for one-hundredth of a second, on average 20 times per second.
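Estimating such a flash rate from a 125-Hz light curve can be sketched by counting upward threshold crossings; the synthetic duty cycle, amplitudes, and noise here are illustrative.

```python
import numpy as np

# Synthetic 125 Hz light curve: ~10-ms flashes repeating 20 times per second.
fs, T = 125.0, 4.0
t = np.arange(0, T, 1 / fs)
flux = np.where((t * 20.0) % 1.0 < 0.2, 1.0, 0.1)   # "on" for 0.01 s per cycle
flux += np.random.default_rng(3).normal(0, 0.02, t.size)

# Count upward threshold crossings to estimate the flash rate [Hz].
above = flux > 0.5
rate = np.count_nonzero(above[1:] & ~above[:-1]) / T
```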
\vspace*{1ex}
\vspace*{1ex}
\section*{\sc Conclusions}
The Main Astronomical Observatory of NAS of Ukraine conducts a study of UAP. We used two meteor stations installed in Kyiv and in the Vinarivka village in the south of the Kyiv region.
Observations were performed with colour video cameras in the daytime sky. A special observation technique has been developed for detecting and evaluating UAP characteristics.
There are two types of UAP, conventionally called Cosmics, and Phantoms. Cosmics are luminous objects, brighter than the background of the sky. Phantoms are dark objects, with contrast from several to about 50 per cent.
We observed a broad range of UAPs everywhere and note a significant number of objects whose nature is not clear.
Flights of single objects, groups, and squadrons of ships were detected, moving at speeds from 3 to 15 degrees per second. Some bright objects exhibit regular brightness variability in the range of 10 - 20 Hz.
Two-site observations of UAPs over a 120 km baseline with two synchronised cameras allowed the detection of a variable object at an altitude of 1170 km. It flashes for one-hundredth of a second at an average rate of 20 Hz.
Phantoms show the colour characteristics inherent in an object with zero albedo. We see such an object because it shields radiation due to Rayleigh scattering. The object contrast makes it possible to estimate the distance using colorimetric methods.
Phantoms are observed in the troposphere at distances up to 10 - 12 km. We estimate their size from 3 to 12 meters and speeds up to 15 km/s.
|
Title:
The Impact of Inelastic Collisions with Hydrogen on NLTE Copper Abundances in Metal-Poor Stars |
Abstract: We investigate the non-local thermodynamic equilibrium (NLTE) analysis for
\ion{Cu}{1} lines with the updated model atom that includes quantum-mechanical
rate coefficients of Cu\,$+$\,H and Cu$^+$\,$+$\,H$^-$ inelastic collisions
from the recent study of Belyaev et al. (2021). The influence of these data on
NLTE abundance determinations has been performed for six metal-poor stars in a
metallicity range of $-$2.59\,dex$\,\le$\,[Fe/H]\,$\le$\,$-$0.95\,dex. For
\ion{Cu}{1} lines, the application of accurate atomic data leads to a decrease
in the departure from LTE and lower copper abundances compared to that obtained
with the Drawin's theoretical approximation. To verify our adopted copper
atomic model, we also derived the LTE copper abundances of \ion{Cu}{2} lines
for the sample stars. A consistent copper abundance from the \ion{Cu}{1} (NLTE)
and \ion{Cu}{2} (LTE) lines has been obtained, which indicates the reliability
of our copper atomic model. It is noted that the [Cu/Fe] ratios increase with
increasing metallicity when
$\sim$\,$-$2.0\,dex\,$<$\,[Fe/H]\,$<$\,$\sim$\,$-$1.0\,dex, favoring a
secondary (metallicity-dependent) copper production.
| https://export.arxiv.org/pdf/2208.11812 |
\thispagestyle{plain}
\newcommand{\btx}{\textsc{Bib}\TeX}
\newcommand{\thestyle}{\texttt{\filename}}
\begin{center}{\bfseries\Large
Reference sheet for \thestyle\ usage}\\
\large(Describing version \fileversion\ from \filedate)
\end{center}
\begin{quote}\slshape
For a more detailed description of the \thestyle\ package, \LaTeX\ the
source file \thestyle\texttt{.dtx}.
\end{quote}
\head{Overview}
The \thestyle\ package is a reimplementation of the \LaTeX\ |\cite| command,
to work with both author--year and numerical citations. It is compatible with
the standard bibliographic style files, such as \texttt{plain.bst}, as well as
with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago},
\texttt{astron}, \texttt{authordate}, and of course \thestyle.
\head{Loading}
Load with |\usepackage[|\emph{options}|]{|\thestyle|}|. See list of
\emph{options} at the end.
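A minimal document using the package might look like this (the citation key and bibliography file name are illustrative):

```latex
\documentclass{article}
\usepackage[round,authoryear]{natbib}
\bibliographystyle{plainnat}

\begin{document}
\citet{jon90} showed this; see also \citep[chap.~2]{jon90}.
\bibliography{refs}  % refs.bib contains the jon90 entry
\end{document}
```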
\head{Replacement bibliography styles}
I provide three new \texttt{.bst} files to replace the standard \LaTeX\
numerical ones:
\begin{quote}\ttfamily
plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst
\end{quote}
\head{Basic commands}
The \thestyle\ package has two basic citation commands, |\citet| and
|\citep| for \emph{textual} and \emph{parenthetical} citations, respectively.
There also exist the starred versions |\citet*| and |\citep*| that print
the full author list, and not just the abbreviated one.
All of these may take one or two optional arguments to add some text before
and after the citation.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90}| & Jones et al. (1990)\\
|\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex]
|\citep{jon90}| & (Jones et al., 1990)\\
|\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\
|\citep[see][]{jon90}| & (see Jones et al., 1990)\\
|\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex]
|\citet*{jon90}| & Jones, Baker, and Williams (1990)\\
|\citep*{jon90}| & (Jones, Baker, and Williams, 1990)
\end{tabular}
\end{quote}
\head{Multiple citations}
Multiple citations may be made by including more than one
citation key in the |\cite| command argument.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\
|\citep{jon90,jam91}| & (Jones et al., 1990; James et al., 1991)\\
|\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\
|\citep{jon90a,jon90b}| & (Jones et al., 1990a,b)
\end{tabular}
\end{quote}
\head{Numerical mode}
These examples are for author--year citation mode. In numerical mode, the
results are different.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90}| & Jones et al. [21]\\
|\citet[chap.~2]{jon90}| & Jones et al. [21, chap.~2]\\[0.5ex]
|\citep{jon90}| & [21]\\
|\citep[chap.~2]{jon90}| & [21, chap.~2]\\
|\citep[see][]{jon90}| & [see 21]\\
|\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex]
|\citep{jon90a,jon90b}| & [21, 32]
\end{tabular}
\end{quote}
\head{Suppressed parentheses}
As an alternative form of citation, |\citealt| is the same as |\citet| but
\emph{without parentheses}. Similarly, |\citealp| is |\citep| without
parentheses. Multiple references, notes, and the starred variants
also exist.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citealt{jon90}| & Jones et al.\ 1990\\
|\citealt*{jon90}| & Jones, Baker, and Williams 1990\\
|\citealp{jon90}| & Jones et al., 1990\\
|\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\
|\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\
|\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\
|\citetext{priv.\ comm.}| & (priv.\ comm.)
\end{tabular}
\end{quote}
The |\citetext| command
allows arbitrary text to be placed in the current citation parentheses.
This may be used in combination with |\citealp|.
\head{Partial citations}
In author--year schemes, it is sometimes desirable to be able to refer to
the authors without the year, or vice versa. This is provided with the
extra commands
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citeauthor{jon90}| & Jones et al.\\
|\citeauthor*{jon90}| & Jones, Baker, and Williams\\
|\citeyear{jon90}| & 1990\\
|\citeyearpar{jon90}| & (1990)
\end{tabular}
\end{quote}
\head{Forcing upper cased names}
If the first author's name contains a \textsl{von} part, such as ``della
Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the
beginning of a sentence. One can force the first letter to be in upper case
with the command |\Citet| instead. Other upper case commands also exist.
\begin{quote}
\begin{tabular}{rl@{\quad$\Rightarrow$\quad}l}
when & |\citet{dRob98}| & della Robbia (1998) \\
then & |\Citet{dRob98}| & Della Robbia (1998) \\
& |\Citep{dRob98}| & (Della Robbia, 1998) \\
& |\Citealt{dRob98}| & Della Robbia 1998 \\
& |\Citealp{dRob98}| & Della Robbia, 1998 \\
& |\Citeauthor{dRob98}| & Della Robbia
\end{tabular}
\end{quote}
These commands also exist in starred versions for full author names.
\head{Citation aliasing}
Sometimes one wants to refer to a reference with a special designation,
rather than by the authors, i.e. as Paper~I, Paper~II. Such aliases can be
defined and used, textual and/or parenthetical with:
\begin{quote}
\begin{tabular}{lcl}
|\defcitealias{jon90}{Paper~I}|\\
|\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\
|\citepalias{jon90}| & $\Rightarrow$ & (Paper~I)
\end{tabular}
\end{quote}
These citation commands function much like |\citet| and |\citep|: they may
take multiple keys in the argument, may contain notes, and are marked as
hyperlinks.
\head{Selecting citation style and punctuation}
Use the command |\bibpunct| with one optional and 6 mandatory arguments:
\begin{enumerate}
\item the opening bracket symbol, default = (
\item the closing bracket symbol, default = )
\item the punctuation between multiple citations, default = ;
\item the letter `n' for numerical style, or `s' for numerical superscript
style, any other letter for
author--year, default = author--year;
\item the punctuation that comes between the author names and the year
  (default = ,);
\item the punctuation that comes between years or numbers when common author
lists are suppressed (default = ,);
\end{enumerate}
The optional argument is the character preceding a post-note, default is a
comma plus space. In redefining this character, one must include a space if
one is wanted.
Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of
\begin{quote}
|\citep{jon90,jon91,jam92}|
\end{quote}
into [Jones et al. 1990; 1991, James et al. 1992].
Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of
\begin{quote}
|\citep[and references therein]{jon90}|
\end{quote}
into (Jones et al. 1990; and references therein).
\head{Other formatting options}
Redefine |\bibsection| to the desired sectioning command for introducing
the list of references. This is normally |\section*| or |\chapter*|.
Define |\bibpreamble| to be any text that is to be printed after the heading but
before the actual list of references.
Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to
the list of references.
Define |\citenumfont| to be a font declaration or command like |\itshape|
or |\textit|.
Redefine |\bibnumfmt| as a command with an argument to format the numbers in
the list of references. The default definition is |[#1]|.
The indentation after the first line of each reference is given by
|\bibhang|; change this with the |\setlength| command.
The vertical spacing between references is set by |\bibsep|; change this with
the |\setlength| command.
\head{Automatic indexing of citations}
If one wishes to have the citations entered in the \texttt{.idx} indexing
file, it is only necessary to issue |\citeindextrue| at any point in the
document. All following |\cite| commands, of all variations, then insert
the corresponding entry to that file. With |\citeindexfalse|, these
entries will no longer be made.
\head{Use with \texttt{chapterbib} package}
The \thestyle\ package is compatible with the \texttt{chapterbib} package
which makes it possible to have several bibliographies in one document.
The package makes use of the |\include| command, and each |\include|d file
has its own bibliography.
The order in which the \texttt{chapterbib} and \thestyle\ packages are loaded
is unimportant.
The \texttt{chapterbib} package provides an option \texttt{sectionbib}
that puts the bibliography in a |\section*| instead of |\chapter*|,
something that makes sense if there is a bibliography in each chapter.
This option will not work when \thestyle\ is also loaded; instead, add
the option to \thestyle.
Every |\include|d file must contain its own
|\bibliography| command where the bibliography is to appear. The database
files listed as arguments to this command can be different in each file,
of course. However, what is not so obvious, is that each file must also
contain a |\bibliographystyle| command, \emph{preferably with the same
style argument}.
\head{Sorting and compressing citations}
Do not use the \texttt{cite} package with \thestyle; rather use one of the
options \texttt{sort} or \texttt{sort\&compress}.
These also work with author--year citations, making multiple citations appear
in their order in the reference list.
\head{Long author list on first citation}
Use option \texttt{longnamesfirst} to have the first citation of any reference
automatically give the full list of authors.
Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|,
given before the first citation.
\head{Local configuration}
Any local recoding or definitions can be put in \thestyle\texttt{.cfg} which
is read in after the main package file.
\head{Options that can be added to \texttt{\char`\\ usepackage}}
\begin{description}
\item[\ttfamily round] (default) for round parentheses;
\item[\ttfamily square] for square brackets;
\item[\ttfamily curly] for curly braces;
\item[\ttfamily angle] for angle brackets;
\item[\ttfamily colon] (default) to separate multiple citations with
colons;
\item[\ttfamily comma] to use commas as separators;
\item[\ttfamily authoryear] (default) for author--year citations;
\item[\ttfamily numbers] for numerical citations;
\item[\ttfamily super] for superscripted numerical citations, as in
\textsl{Nature};
\item[\ttfamily sort] orders multiple citations into the sequence in
which they appear in the list of references;
\item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple
numerical citations are compressed if possible (as 3--6, 15);
\item[\ttfamily longnamesfirst] makes the first citation of any reference
the equivalent of the starred variant (full author list) and subsequent
citations normal (abbreviated list);
\item[\ttfamily sectionbib] redefines |\thebibliography| to issue
|\section*| instead of |\chapter*|; valid only for classes with a
|\chapter| command; to be used with the \texttt{chapterbib} package;
\item[\ttfamily nonamebreak] keeps all the authors' names in a citation on
one line; causes overfull hboxes but helps with some \texttt{hyperref}
problems.
\end{description} |
Title:
Do Cellular Automaton Avalanche Models Simulate the Quasi-Periodic Pulsations of Solar Flares? |
Abstract: Quasi-periodic pulsations (QPPs) with various periods that originate in the
underlying magnetohydrodynamic processes of the flaring structures are detected
repeatedly in the solar flare emissions. We apply a 2D cellular automaton (CA)
avalanche model to simulate QPPs as a result of a repetitive load/unload
mechanism. We show that the frequent occurrence of magnetic reconnections in a
flaring loop could induce quasi-periodic patterns in the detected emissions. We
obtain that among 21070 simulated flares, 813 events endure over 50 seconds,
scaled with the temporal resolution of the Yohkoh Hard X-ray Telescope, and
about 70 percent of these rather long-lasting events exhibit QPPs. We also
illustrate that the applied CA model provides a wide range of periodicities for
QPPs. Furthermore, we observe the presence of multiple periods in nearly 50
percent of the cases applying the Lomb-Scargle periodogram. A lognormal
distribution is fitted to the unimodal distribution of the periods as a
manifestation of an underlying multiplicative mechanism that typifies the
effect of the system's independent varying parameters. The global maximum of
the periods' lognormal distribution is located at 29.29 seconds. We compare
statistics of the simulated QPPs with parameters of the host flares and discuss
the impacts of flare properties on QPPs' periods. Considering the intrinsic
characteristic of CA models, namely the repetitive load/unload mechanism, and
the obtained pieces of evidence, we suggest that CA models may generate QPPs.
We also examine the applicability of the autoregressive integrated moving
average models to describe the simulated and observational QPPs.
| https://export.arxiv.org/pdf/2208.02493 |
\title{Do Cellular Automaton Avalanche Models Simulate the Quasi-Periodic Pulsations of Solar Flares?}
\correspondingauthor{Nastaran Farhang}
\email{[email protected]}
\email{[email protected]}
\author{Nastaran Farhang}
\affil{Department of Physics, Isfahan University of Technology, 84156-83111, Isfahan, Iran}
\author{Farhad Shahbazi}
\affil{Department of Physics, Isfahan University of Technology, 84156-83111, Isfahan, Iran}
\author{Hossein Safari}
\affiliation{Department of Physics, Faculty of Science, University of Zanjan, 45195-313, Zanjan, Iran}
\section{Introduction}\label{sec:intro}
Quasi-periodic pulsations (QPPs) with periods ranging from a few seconds to several minutes are often observed synchronously across the entire range of flare emissions \citep{dennis1985, nakariakov2009, van2016quasi, dominique2018, nakariakov2018, hayes2019, zimovets2021}. The understanding of QPPs and their underlying physical mechanisms has been the subject of many studies in recent decades \cite[see][for a recent review]{mclaughlin2018}. In general, QPPs are believed to be generated either by the propagation and dissipation of magnetohydrodynamic (MHD) modes in the solar corona (i.e., oscillatory processes) or by an underlying load/unload mechanism (i.e., self-oscillatory processes). Each mechanism imprints distinct characteristics on these modulated patterns.
The main idea of the oscillatory processes is that small perturbations in plasma parameters (e.g., magnetic field strength, temperature, etc.) produce MHD waves. The damping and dissipation of these waves modulate the plasma emissions, and QPPs occur. Various models have been developed to investigate different oscillatory modes as possible candidates for energy modulation \cite[e.g.,][]{nakariakov2004b, nakariakov2005, chen2006, nakariakov2006, nakariakov2009, de2012, takasao2016, dennis2017}. According to this perspective, all MHD modes can modulate microwave emissions; however, longitudinal and torsional modes cannot describe the simultaneous modulation of the microwave and HXR bands, whereas sausage and kink modes can produce synchronized modulation across the entire electromagnetic spectrum. These models are capable of producing a wide range of periodicities as well as describing the multiple periods observed in QPPs \citep{nakariakov2003, melnikov2005, van2011, hong2021, lu2021}.
In self-oscillatory processes, the system is governed by a cyclical mechanism in which energy accumulates as a power supply (e.g., photospheric motions) slowly and continuously drives the system. If the free energy exceeds some threshold, the system releases a significant amount of energy through an avalanche process. In this context, QPPs are triggered by the frequent occurrence of magnetic reconnections, which modulates the released energy and the acceleration of charged particles. The load/unload mechanism could describe synchronous emissions at different wavelengths \citep{aschwanden1987, craig1991, ofman2006p, zaitsev2008, mclaughlin2012p, mclaughlin2012, hayes2016, li2020quasi}.
Despite many theoretical and observational studies, it is still unclear whether QPPs are driven by restoring forces in the perturbed coronal plasma or are direct outputs of repetitive reconnection regimes. It is also possible that each of these mechanisms plays an effective role. Statistical analysis of the oscillatory parameters and their governing scaling laws can provide information about the formation of QPPs. In addition to theoretical modeling and data analysis, numerical studies offer a complementary approach to interpreting QPPs.
Taking the idea of self-oscillatory processes into account, we perform a numerical study to simulate and investigate the statistics of QPPs. For this purpose, we consider solar flares as a self-organized critical (SOC) system and apply a modified version of the \cite{LH1991} model to numerically reproduce magnetic relaxations (flare-like events) and their accompanying QPPs. The remainder of this paper is organized as follows: In Section \ref{sec:load}, we discuss the possibility of simulating QPPs as a result of a load/unload mechanism with cellular automaton (CA) models. We then introduce the characteristics of the applied model in Section \ref{sec:NSIM}. We describe the detection of QPPs in the numerical modeling in Section \ref{sec:stst}. We also present the statistics of the simulated QPPs and compare them with observational reports. In Section \ref{sec:tsmodel}, we briefly review some classes of linear processes applicable to time series modeling and investigate their utility in examining both simulated and observational QPPs. Finally, we conclude in Section \ref{sec:con}.
\section{Is It Technically Possible to Simulate QPPs by CA Models?}\label{sec:load}
By definition, the natural tendency of a system to automatically adjust its components to establish a critical state is called SOC, provided that exceeding some threshold leads to scale-free fluctuations in the system \citep{aschwbook2011}. In the SOC mechanism, a gradual energy supply drives the system towards a critical state at which it relaxes through a sequence (avalanche) of nonlinear energy-dissipative events. The frequency-size distribution of released energies manifests a power-law-like behavior. The threshold is an intrinsic feature of SOC and the driving timescale is much larger than the avalanche timescale.
SOC was introduced by \cite{bak1987}, who established the first CA avalanche model, namely the sand-pile model. The model employs a grid with nodal values representing the sand distribution in the system. The initial state is constructed using random numbers, and the energy supply mechanism operates as grains gradually drop into randomly selected sites, building up local piles. If a critical slope is exceeded anywhere in the system, sand grains fall off the unstable pile to reduce the slope and relax the system. Since the driving operation time is negligible compared to the avalanche timescale, in numerical modeling the driving mechanism halts once an instability occurs and automatically reactivates after the release of the free energy. Hence, the energy balance is maintained in SOC systems. The CA approach provides the ability to investigate the complex behavior of avalanche processes by breaking them down into smaller pieces.
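The sand-pile dynamics described above can be sketched in a few lines of code. This is an illustrative toy implementation, not the model used in this paper; the grid size, grain count, and toppling threshold are arbitrary choices:

```python
import numpy as np

def btw_sandpile(n=20, grains=2000, threshold=4, seed=0):
    """Minimal Bak-Tang-Wiesenfeld sand pile: grains drop at random
    sites; any site reaching the threshold topples, sending one grain
    to each of its four neighbors (grains crossing the open boundary
    leave the system, keeping the energy balanced)."""
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    avalanche_sizes = []
    for _ in range(grains):
        i, j = rng.integers(n, size=2)
        z[i, j] += 1                      # slow driving: one grain at a time
        topplings = 0
        while (unstable := np.argwhere(z >= threshold)).size:
            for i, j in unstable:         # relax all unstable sites
                z[i, j] -= threshold
                topplings += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n:
                        z[ni, nj] += 1
        avalanche_sizes.append(topplings)
    return z, avalanche_sizes
```

The separation of timescales is visible in the structure: driving pauses (the outer loop waits) while the inner toppling loop runs an avalanche to completion.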
The scale-free distribution of solar flare energies evokes the application of SOC models to study the stochastic nature of these events \citep{georg1998, mike2001, charbon2001, Litvinenko2001, nita2002, Mike2008, morales2008a, morales2010, mendoza2014, dani2015, farhang2018, alipour2019, farhang2020}. Solar flares occur due to a gradual and continual increase of magnetic stress in the solar atmosphere. The coronal magnetic field evolves as magnetic structures emerge, twist, braid, or annihilate on the Sun's surface, and it relaxes through, e.g., magnetic reconnections. Local relaxations may raise the stress elsewhere in the system and lead to a sequence of reconnections with a massive release of energy \citep{Gold1960, forbes1991, longcope1996, biskamp2000, priest2000, somov2010, Loureiro2016}. As a result of the energy load and unload cycle, the coronal magnetic topology changes. The stored magnetic energy converts to heat (e.g., ohmic dissipation) and kinetic energy of charged particles (generating MHD oscillatory modes, shock waves, and turbulence in the plasma). Therefore, various physical processes trigger thermal (EUV and SXR) and non-thermal (e.g., microwave, HXR, etc.) emissions during a solar flare \citep{Carmichael1964, sturrock1968, hirayama1974, kopp1976, somov1991, svestka1992, somov1997, fletcher2011, priest2014, Benz2017}. These emissions are often associated with QPPs.
\cite{Parker1988} proposed that gradual perturbations in the solar atmosphere lead the coronal magnetic structures towards an unstable state. Hence, small bursts (magnetic reconnections) occur as the building blocks of flaring events. Accordingly, \cite{LH1991} introduced a lattice-based model to investigate the efficiency of the CA approach in simulating solar coronal explosive events. Lu and Hamilton defined a discrete magnetic field over a 3D grid of nodes and applied a driving mechanism for the topological evolution of the solar magnetic field. In the constructed model, the stressed magnetic field relaxes through a series of magnetic reconnections, after which the system settles into a new equilibrium state. The frequency-size distribution of the simulated flaring events is shown to follow a power law.
Based on the above discussion, CA models are technically adequate for simulating QPPs by a load/unload mechanism since they are well capable of reproducing repetitive reconnection regimes. Here, we use a modified version of the \citeauthor{LH1991} model in two dimensions and reassess its outputs and their physical interpretations. In our simulation, instead of a magnetic field we define a magnetic vector potential field over a 2D grid. Therefore, the magnetic field remains divergence-free over time. The model is introduced in detail in the next section.
\section{Model Properties}\label{sec:NSIM}
We consider a 2D lattice as a cross-section of a coronal loop and study the variation of the magnetic field inside this sector. Uniformly distributed random numbers are used to construct the initial state. The nodal values represent the magnetic vector potential field, $ \textbf{A}=A(x,y)\hat{z} $ in which $\hat{z}$ is a unit vector along the loop axis. We also consider the open boundary condition by keeping $ A = 0 $ on borders. This allows the energy to leak out of the system's boundaries. Otherwise, there will be an exponential growth in the energy and the system could never reach a stable state.
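For completeness: with $ \textbf{A}=A(x,y)\hat{z} $, the in-plane magnetic field follows from the curl, and its divergence vanishes identically, so no update of the nodal values of $A$ can violate the solenoidal condition:

```latex
\begin{eqnarray}
\textbf{B} = \nabla \times \textbf{A}
           = \frac{\partial A}{\partial y}\,\hat{x}
           - \frac{\partial A}{\partial x}\,\hat{y},
\qquad
\nabla \cdot \textbf{B}
           = \frac{\partial^{2} A}{\partial x \, \partial y}
           - \frac{\partial^{2} A}{\partial y \, \partial x} = 0.
\end{eqnarray}
```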
To imitate the evolution of the coronal magnetic field, a local driving mechanism is applied. Therefore, a small disturbance, generated from a uniform distribution, is added to an arbitrary node at each \textit{driving step}. Then, the stability of the entire system is checked through the criterion:
\begin{equation}
\label{eqb1}
\left| \Delta A_{i,j}\right| \hspace{1mm}\equiv \hspace{1mm} \left| A_{i,j} - \frac{1}{4}\sum_{l=1}^{4} A_{l}\right| > A_{c},
\end{equation}
where the sum runs over the four nearest neighbors $ (A_{l}) $. The instability threshold, $ A_{c}, $ is a random number generated from a Gaussian distribution with, e.g., $\mu=1$ and $ \sigma=0.01 $. Should the instability criterion be fulfilled anywhere in the system, the field locally relaxes through a set of redistributions:
\begin{eqnarray}
\label{eqb2}
A_{i,j}^{n+1}=&&A_{i,j}^{n}- \frac{4}{5} A_{c}, \nonumber \\
A_{l}^{n+1}=&&A_{l}^{n}+ \frac{1}{5}A_{c},
\end{eqnarray}
where $ n $ denotes the \textit{evolution step}, $A_{i,j}$ is the central node, and $ l=1,2,3,4 $.
In this perspective, a magnetic reconnection is regarded as short-range interactions between an unstable node and its nearest neighbors. A succession of redistributions from an inceptive instability somewhere to ultimate stability everywhere is called an avalanche (flare). Various choices of redistribution rules are possible including conservative or nonconservative, isotropic or anisotropic, and deterministic or probabilistic. Also, long-distance interactions are conceivable \cite[see:][]{isliker1998, charbon2001, stru2014, farhang2018, farhang2019}.
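As a concrete sketch, the criterion of Equation (\ref{eqb1}) and the redistribution of Equation (\ref{eqb2}) can be transcribed as follows. The sign factor for negative curvature is our assumption, since the redistribution rule as printed is written for $\Delta A_{i,j} > 0$:

```python
import numpy as np

def curvature(A):
    """Delta A of Eq. (1): central value minus the mean of the four
    nearest neighbors, evaluated at all interior nodes."""
    d = np.zeros_like(A)
    d[1:-1, 1:-1] = A[1:-1, 1:-1] - 0.25 * (
        A[:-2, 1:-1] + A[2:, 1:-1] + A[1:-1, :-2] + A[1:-1, 2:])
    return d

def redistribute(A, i, j, Ac, s=1.0):
    """Eq. (2): the unstable node sheds (4/5) Ac and each of its four
    neighbors gains Ac/5; pass s=-1.0 to flip the update when the local
    curvature is negative (a case the printed rule leaves implicit).
    Note the rule conserves the sum of A."""
    A[i, j] -= 0.8 * s * Ac
    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
        A[ni, nj] += 0.2 * s * Ac
```

A node $(i,j)$ is flagged unstable when `abs(curvature(A)[i, j]) > Ac`; one redistribution then lowers the local curvature by exactly $A_c$ while leaving $\sum A$ unchanged.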
There are two different aspects to measuring the energy of the constructed configuration: the lattice energy and the released energy. The evolution of the former is expected to demonstrate an energy balance in the system, as the implemented open boundary condition prevents a continuous increase of energy. Therefore, the system reaches an approximately stationary state on top of which the energy fluctuates due to excitations and relaxations. Besides exhibiting a power-law-like behavior, the latter could provide important information about the underlying physical mechanisms.
In a magnetic configuration, the energy is proportional to the square of the magnetic field strength. Therefore, we have:
\begin{eqnarray}
\label{eqbb3}
E_{\rm latt} \propto \sum_{\textrm{lattice}} B^2 \propto \sum_{\textrm{lattice}} |\nabla \times \textbf{A}|^{2} \propto \sum_{\textrm{lattice}} A^{2}.
\end{eqnarray}
The matter of interest is to appraise the ability of CA models to generate QPPs. An extensive understanding of the energy release process is required to achieve this purpose. The amount of energy released during each redistribution is:
\begin{eqnarray}
\label{eqb3}
{e}_{i,j}=\sum {\left({A}^{n+1}\right)}^{2}-\sum {\left({A}^{n}\right)}^{2}= \frac{4}{5}\left(1 - 2 \frac{| {\rm\Delta }{A}_{i,j}|}{{A}_{c}}\right){A}_{c}^{2},
\end{eqnarray}
where the sum runs over the five nodes engaged in the redistribution.
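Equation (\ref{eqb3}) follows directly from the update rule (\ref{eqb2}): expanding the squares over the five nodes involved,

```latex
\begin{eqnarray}
{e}_{i,j} &=& \Big(A_{i,j}-\tfrac{4}{5}A_{c}\Big)^{2}-A_{i,j}^{2}
  +\sum_{l=1}^{4}\Big[\Big(A_{l}+\tfrac{1}{5}A_{c}\Big)^{2}-A_{l}^{2}\Big]
  \nonumber \\
 &=& \tfrac{4}{5}A_{c}^{2}
  -\tfrac{8}{5}A_{c}\Big(A_{i,j}-\tfrac{1}{4}\sum_{l=1}^{4}A_{l}\Big)
  = \frac{4}{5}\left(1-2\frac{\Delta A_{i,j}}{A_{c}}\right)A_{c}^{2}.
\end{eqnarray}
```

The absolute value in Equation (\ref{eqb3}) covers the case $\Delta A_{i,j}<0$, for which the redistribution acts with the opposite sign. Since $|\Delta A_{i,j}| > A_{c}$ at an unstable node, $e_{i,j}$ is negative: each redistribution lowers the lattice energy.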
Since there is no preferred direction in the system, all unstable nodes are redistributed simultaneously. One can consider this as a set of reconnection events occurring in a single \textit{time frame}. Therefore, the total released energy during each frame is:
\begin{eqnarray}
\label{eqb4}
e_{frame} = \sum e_{i,j},
\end{eqnarray}
and the sum includes all the identified unstable nodes.
Local redistributions may cause other instabilities in the system. Thus, the system's stability needs to be examined again after each time frame. If new instabilities appear in the system, the whole \textquotedblleft check-redistribute\textquotedblright~procedure continues until an equilibrium is achieved. Then, the driving mechanism operates again. Accordingly, the flare energy is the sum of the released energies in successive frames between two driving steps, or equivalently the energy difference between two consecutive driving steps:
\begin{eqnarray}
\label{eqb5}
E_{\textrm{flare}} = \sum e_{frame} \simeq E_{\textrm{latt}}^{t+1} -E_{\textrm{latt}}^{t},
\end{eqnarray}
where $ t $ denotes the driving step.
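The full drive/check/redistribute cycle of Equations (\ref{eqb1})--(\ref{eqb5}) can then be sketched as below. All parameters (grid size, driving amplitude, run length) are illustrative only; in particular, the driving amplitude is exaggerated so that avalanches appear within a short run, and we keep the sign convention of Equation (\ref{eqb3}), so negative frame energies correspond to a drop in lattice energy:

```python
import numpy as np

def run_ca(n=32, steps=4000, drive=0.3, seed=1):
    """Sketch of the 2D CA: drive a random interior node, then relax via
    frames of simultaneous redistributions until Eq. (1) holds nowhere.
    Returns flare energies (Eq. 6) and per-frame releases (Eq. 5)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, 1.0, (n, n))
    A[0, :] = A[-1, :] = A[:, 0] = A[:, -1] = 0.0     # open boundary: A = 0
    flares, frames = [], []
    for _ in range(steps):                            # driving steps
        i, j = rng.integers(1, n - 1, size=2)
        A[i, j] += rng.uniform(0.0, drive)            # disturbance
        flare = 0.0
        for _frame in range(5000):                    # safety cap on frames
            dA = np.zeros_like(A)
            dA[1:-1, 1:-1] = A[1:-1, 1:-1] - 0.25 * (
                A[:-2, 1:-1] + A[2:, 1:-1] + A[1:-1, :-2] + A[1:-1, 2:])
            Ac = rng.normal(1.0, 0.01)                # stochastic threshold
            unstable = np.argwhere(np.abs(dA) > Ac)
            if unstable.size == 0:
                break
            e_frame = 0.0                             # Eq. (5)
            for iu, ju in unstable:                   # simultaneous updates
                s = np.sign(dA[iu, ju])
                e_frame += 0.8 * (1.0 - 2.0 * abs(dA[iu, ju]) / Ac) * Ac**2
                A[iu, ju] -= 0.8 * s * Ac             # Eq. (2)
                for ni, nj in ((iu-1, ju), (iu+1, ju), (iu, ju-1), (iu, ju+1)):
                    A[ni, nj] += 0.2 * s * Ac
            A[0, :] = A[-1, :] = A[:, 0] = A[:, -1] = 0.0
            frames.append(e_frame)
            flare += e_frame                          # Eq. (6)
        if flare != 0.0:
            flares.append(abs(flare))
    return np.array(flares), np.array(frames)
```

Whether $A_c$ is redrawn per frame or per node is not fixed by the text; the sketch redraws it once per frame.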
\section{Numerical QPPs and their Periodicities}\label{sec:stst}
We established a square lattice of the magnetic vector potential field with $256 \times 256$ nodes and studied its evolution over $200,000$ driving steps. Figure \ref{Fig1} shows the variation of the lattice energy, during which $20,275,220$ magnetic reconnections occurred in the system. Starting from an initially random state, it took over $4$ million evolution steps for the system to reach an approximately stationary state, over which the energy fluctuates around a constant average value. We restrict our study to the total of $21,070$ flaring events registered after the energy balance is established in the system (i.e., $124,064$ driving steps or equivalently $15,677,989$ evolution steps).
\subsection{QPPs in CA Models}\label{sec:consept}
We propose that the released energies during individual frames can be considered as the energy recorded by solar detectors within their temporal resolution. Hence, the released/detected energy from a flaring event is supposed to exhibit a quasi-periodic pattern (as a representation of QPPs). Figure \ref{Fig2} displays an example of the detected QPPs for a simulated flare with the total dimensionless energy of $ 5.74 \times 10^{4} $, in which $61,834$ magnetic reconnections occurred during 884 frames.
For comparison, the observed emissions of a solar flare recorded by the Yohkoh satellite are presented in Figure \ref{Fig3}. The subject flare occurred on March 1st, 1998 at 17:09 and lasted over $10$ minutes. The Yohkoh Hard X-ray Telescope (HXT) registered this event in four spectral bands (i.e., the L, M1, M2, and H bands) with a temporal resolution of half a second. The accompanying oscillatory patterns are referred to as QPPs.
One important question is whether all registered energies within consecutive frames, in both observations and simulations, exhibit quasi-periodic characteristics. \cite{inglis2016} conducted an extensive study on several M and X flares recorded by the Geostationary Operational Environmental Satellite (GOES) and the Gamma-ray Burst Monitor (GBM). They found that roughly one-third of the studied flares were accompanied by quasi-oscillatory patterns. Furthermore, \cite{szaforz2019} investigated the partially occulted flares listed in the Yohkoh Legacy Data Archive (YLA) and determined that less than $ 50\% $ of these emissions exhibit QPPs. Failure to detect QPPs in a large fraction of observations might be due to the limited sensitivity of the instruments, deficiencies of the computational methods, inappropriate interpretation of QPPs, or the genuine absence of these pulsations in flaring events. In the following, we assess whether the numerical QPPs exhibit similar statistics. But first, we introduce the algorithm used to study the periodicities.
\subsection{The Lomb-Scargle Periodogram}\label{sec:plomb}
The Lomb-Scargle periodogram is a powerful mathematical tool developed to study oscillatory parameters in data samples. Having its origins in the Fourier transform, this periodogram decomposes a discrete, regularly or irregularly sampled signal ($ S $) into its independent frequency components:
\begingroup
\Large
\begin{eqnarray}
\label{eq7}
\mathcal{P}_{S} = \frac{1}{2}\bigg(\tfrac{{[\sum_{j}{S}_{j}\cos\omega({t}_{j}-\tau)]}^{2}}{\sum_{j}{\cos}^{2}\omega({t}_{j}-\tau)}+ \tfrac{{[\sum_{j}{S}_{j}\sin\omega({t}_{j}-\tau)]}^{2}}{\sum_{j}{\sin}^{2}\omega({t}_{j}-\tau)}\bigg),
\end{eqnarray}
\endgroup
where the sum includes all data samples and $ \mathcal{P}_{S} $ is the estimate of the power spectral density (PSD). The parameter $ \tau $,
\begin{eqnarray}
\label{eq8}
\tan(2\omega\tau) = \frac{\sum_{j}\sin 2\omega{t}_{j}}{\sum_{j}\cos 2\omega{t}_{j}},
\end{eqnarray}
guarantees the orthogonality of the sinusoidal terms in Equation (\ref{eq7}) and makes the algorithm invariant under global time shifts, in contrast to the fast Fourier transform \citep{lomb1976,scargle1982}.
In several respects, the Lomb-Scargle periodogram is equivalent to other frequency analysis techniques, namely least-squares fitting, phase-folding, and Bayesian methods \citep{vanderplas2018}. It also provides a $ \chi^{2}- $distributed power spectrum for Gaussian uncertainties \citep{vio2013}. Due to these convenient properties, the Lomb-Scargle periodogram has widespread application in time series analysis, particularly in astronomical studies \cite[e.g.,][]{tarnopolski2021, zhang2021, saikia2022}. Here, we apply this technique to study the periodicities of the numerical QPPs.
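As an illustration of the technique (not of the authors' pipeline), SciPy's `scipy.signal.lombscargle` evaluates the periodogram of Equation (\ref{eq7}) on a grid of angular frequencies; the test signal and all numbers below are synthetic:

```python
import numpy as np
from scipy.signal import lombscargle

# Unevenly sampled, noisy test signal with a known period of 5 s.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 100.0, 400))
y = np.sin(2 * np.pi * t / 5.0) + 0.3 * rng.standard_normal(t.size)
y -= y.mean()                     # the periodogram assumes a zero-mean signal

# Power spectral density on a grid of angular frequencies (rad/s).
omega = np.linspace(0.05, 3.0, 2000)
power = lombscargle(t, y, omega)

best_period = 2 * np.pi / omega[np.argmax(power)]   # expect roughly 5 s
```

Note that `lombscargle` takes angular frequencies, not ordinary frequencies; the uneven sampling above is exactly the situation the method was designed for.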
Once the analysis is performed, the significance level of each frequency component is validated against the \textquotedblleft false alarm probability\textquotedblright~(FAP). The FAP estimates the likelihood that a frequency appears in the periodogram due to noise in the signal rather than the original source. Various methods have been developed to measure the FAP in the presence of different contaminating sources \citep{horne1986, baluev2008, delisle2020, delisle2020_2}. In the simplest case, the FAP is measured assuming white noise in the time series, whilst in more complicated contexts correlated noise is considered.
Figure \ref{Fig4} shows an example of a PSD obtained for the observational QPPs together with the FAP significance levels of $50, 10, 1,$ and $0.01$ percent. To identify the dominant frequencies in the power spectrum, we use a Gaussian filter:
\begin{eqnarray}
\label{eq9}
F(\mathcal{P},\nu)=H(\nu)\exp\left(- \frac{\left(\mathcal{P}-C(\nu)\right)^{2}}{W(\nu)^{2}} \right),
\end{eqnarray}
where $ H, W, $ and $ C $ are the height, width, and center of each peak in the PSD. We also filter out the harmonics. Further details of the applied frequency extraction technique are discussed in the next section.
\subsection{Statistics of the Simulated flares and QPPs}\label{sec:res}
Applying the CA approach, $21,070$ flares are simulated with dimensionless energies of $0.7$ to $2.4\times 10^{5}$. Figure \ref{Fig5} displays the probability distribution functions (PDFs) of the simulated flare energies and lifetimes (durations, $D$). Expectedly, both distributions exhibit power-law-like behaviors \citep{LH1991, charbon2001}. The simulated events are categorized into $5$ types (labeled as A, B, C, M, and X flares) analogous to the GOES classification system. Among all events, only $813$ flares (about $3.8\%$) released their energy in more than 100 frames. Therefore, considering the liberated (free) energies in consecutive frames analogous to emissions recorded by the HXT in half-second intervals, most of the flares lasted less than $50$ seconds. The simulated events with $D>50$ seconds are mostly C-class and above. Durations of less than $50$ seconds may also relate to small-scale flaring events such as small-scale brightenings (microflares, campfires, bright coronal points, etc.) that have recently been observed with space missions \cite[see e.g.,][]{chen2021, berghmans2021, shokri2022}.
As shown in Figure \ref{Fig5}, both distributions are fitted with power-law functions. For the energy distribution the power index is $ 1.45 \pm 0.02 $ and the goodness-of-fit is assessed using the Kolmogorov-Smirnov test (KS-test). The KS-test gives a measure of the departure between two distributions based on a decision-making routine (between a null and an alternative hypothesis). It also returns a $p-$value that validates the decision. In this case, the null hypothesis is that the PDF follows a power law, and the obtained $p-$value, $0.98$, does not reject the null hypothesis. This power-law (scale-free) behavior suggests that the CA self-oscillatory processes can be considered as the generative mechanism of self-similar, self-organized, or self-organized-critical behavior. It also implies that magnetic reconnections are most probably the leading generator of flaring events \citep{Parker1988, aschwbook2011}. For the duration distribution, the power-law function exhibits a better match with the significant events (the tail of the distribution) than with the small-scale events with energies $<$ B0-class, including A-class flares and other tiny features. The power index is $1.71 \pm 0.02$ with a $p-$value of $0.90$. The obtained $p-$value indicates that the difference between the PDF and the power-law function is not statistically significant.
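The power-law fitting and KS-testing described above can be sketched as follows. This is a minimal illustration assuming a continuous power law above a threshold $x_{\min}$, with a Clauset-style maximum-likelihood index; the synthetic energies, drawn at the paper's index of $1.45$, and the threshold are our own choices.

```python
import numpy as np
from scipy import stats

def fit_powerlaw(x, xmin):
    """MLE power-law index above xmin and a KS test of the fit.

    Assumes a continuous power law p(x) ~ x**(-alpha) for x >= xmin
    (Clauset-style estimator); this modelling choice is ours.
    """
    tail = x[x >= xmin]
    alpha = 1.0 + tail.size / np.log(tail / xmin).sum()
    cdf = lambda v: 1.0 - (v / xmin) ** (1.0 - alpha)   # fitted CDF for the KS test
    ks = stats.kstest(tail, cdf)
    return alpha, ks.pvalue

rng = np.random.default_rng(1)
# synthetic "flare energies": inverse-transform sample of a power law, index 1.45
alpha_true, xmin = 1.45, 1.0
u = rng.uniform(size=20000)
energies = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
alpha_hat, pval = fit_powerlaw(energies, xmin)
print(f"alpha = {alpha_hat:.2f}, KS p-value = {pval:.2f}")
```

A high $p-$value, as in the text, indicates that the departure between the empirical distribution and the fitted power law is not statistically significant.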
To perform a cyclical analysis, usually, the first step is to normalize the subject time series. Various forms of normalization are possible \citep{ogasawara2010, panigrahi2013}. Here, we apply two common types and discuss their effects on the PSDs. The applied normalization routines are:
\begin{eqnarray}
&&S_{N} = \frac{S-\bar{S}}{\bar{S}}, \label{eq10}\\
&&S_{MA} = \frac{S-\hat{S}_{k}}{\hat{S}_{k}}\label{eq11},
\end{eqnarray}
where $\bar{S}$ is the overall mean value of $S$ and $\hat{S}_{k}$ is the moving average over a window with size $k$. In the remainder of this paper, $S_{N}$ and $S_{MA}$ are referred to as the normalized and smoothed time series, respectively.
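A minimal implementation of Equations (\ref{eq10}) and (\ref{eq11}) might look as follows; the moving average's edge handling is an illustrative choice.

```python
import numpy as np

def normalize(s):
    """S_N of Eq. (10): subtract and divide by the overall mean."""
    return (s - s.mean()) / s.mean()

def smooth_normalize(s, k):
    """S_MA of Eq. (11): subtract and divide by a k-point moving average.

    np.convolve with mode="same" is one edge-handling choice; near the
    boundaries the window is effectively zero-padded.
    """
    kernel = np.ones(k) / k
    s_hat = np.convolve(s, kernel, mode="same")
    return (s - s_hat) / s_hat

s = 100.0 + 10.0 * np.sin(np.linspace(0, 20, 500))   # mock light curve
print(normalize(s).mean())        # zero by construction
print(smooth_normalize(s, 80).std())
```

As noted below, the smoothed variant depends on the window size $k$, whereas Equation (\ref{eq10}) involves no such free parameter.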
Figure \ref{Fig4} displays the PSDs obtained for the HXR emissions of a solar flare recorded by Yohkoh together with the FAP levels. The FAP provides an estimation of the accuracy of the extracted periods. Hence, we only consider periods with intensities higher than the $ 0.01 \%$ significance level (the green horizontal line) as true findings. The top panels of the figure are the results of applying the Lomb-Scargle periodogram to the smoothed time series with $k=80$. The most prominent peaks of the PSDs are located at $22.24, 21.83,$ and $27.48$ seconds for the L, M1, and M2 bands, respectively. No significant peak is obtained for the H band. The bottom panels correspond to the normalized time series. In contrast with the smoothed time series, which most likely have one prominent peak, the PSDs of normalized light curves might manifest more peaks. However, the extracted periods of the smoothed samples are highly sensitive to the averaging window size, and various choices of $k$ do not produce convergent results. Therefore, Equation (\ref{eq10}) provides a more reliable normalization (without any information loss), particularly in cyclical analysis. We apply Equation (\ref{eq10}) to the registered energies and evaluate their periodicities. We include all the peaks above the maximum FAP level in the performed survey.
Figure \ref{Figm} illustrates an example of the obtained PSDs for a few of the simulated flares. As seen in the figure, the power spectra exhibit a power-law-like behavior in the log-frequency representation. Such PSD statistics have been previously reported in solar and stellar studies, including observations of solar flares and QPPs \cite[e.g.,][]{cenko2010,gruber2011,ireland2014,inglis2016}. This result also attests to the productivity of the CA approach in the reproduction of flares and QPPs.
We investigate the properties of the simulated QPPs in two regimes of $D\leq 50$ and $D>50$ seconds. Figure \ref{Fig6} displays the histograms (number) of extracted periods for both regimes individually (panels a \& b) and collectively (panel c). Among the $20,257$ events of the first regime, about $36\%$ of flares lasted long enough (more than 5 frames) to perform the frequency analysis. Our analysis shows that less than $1\%$ of these flares are accompanied by QPPs. However, in the second regime ($D>50$), quasi-periodic patterns are observed in nearly $70\%$ of events. We observe that about $47\%$, $41\%$, and $12\%$ of QPPs exhibit $1, 2,$ and $3$ conspicuous peaks in their PSDs, respectively. We fit a lognormal distribution to the unimodal histograms of Figure \ref{Fig6}. The lognormal distribution is
\begin{eqnarray}
L(p,\mu,\sigma) = \frac{1}{p\sigma\sqrt{2\pi}}\exp\bigg( \frac{-(\ln(p)-\mu)^{2}}{2 \sigma^{2}} \bigg), \label{eq12}
\end{eqnarray}
where $\mu$ and $\sigma$ are the scale and shape parameters, respectively. The global maximum of the fitted lognormal distribution is:
\begin{eqnarray}
Mode[p]=e^{\mu-\sigma^{2}},
\label{eq14}
\end{eqnarray}
which represents the most probable period of the simulated QPPs and equals $33.66 \pm 0.71$ and $29.29 \pm 0.67$ seconds for panels (b) and (c), respectively. Applying the KS-test, the $p-$values are obtained as $0.56$ and $0.62$, respectively.
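The lognormal fit and the mode of Equation (\ref{eq14}) can be reproduced with SciPy; note that \texttt{scipy.stats.lognorm} parameterizes the distribution with shape $s=\sigma$ and scale $e^{\mu}$, and the values $\mu=3.5$, $\sigma=0.4$ below are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma = 3.5, 0.4                       # illustrative parameters (assumption)
periods = rng.lognormal(mu, sigma, 2000)   # synthetic QPP periods (s)

# scipy's lognorm uses shape s = sigma and scale = exp(mu); fix loc at 0
s_hat, _, scale_hat = stats.lognorm.fit(periods, floc=0)
mu_hat = np.log(scale_hat)
mode = np.exp(mu_hat - s_hat**2)           # Eq. (14): Mode[p] = e^{mu - sigma^2}
print(f"mu = {mu_hat:.2f}, sigma = {s_hat:.2f}, mode = {mode:.1f} s")
```

The recovered mode is the quantity reported in the text as the most probable QPP period.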
It is well known that the lognormal distribution is a characteristic of stochastic systems associated with the multiplicative effect of independent varying parameters \citep{mitzenmacher2004, tokovinin2014, ruocco2017}. Therefore, the obtained lognormal distribution for the periodicities might originate in the driving mechanism, the number of unstable sites contributing to reconnections, or even the amount of released energy. One may ask how two different generative mechanisms, i.e., the power-law behavior (Figures 5 and 6) and the lognormal behavior (Figure 7), may work together in a system. \cite{pauluhn2007} developed a stochastic model for generating small-scale flaring emissions with an initial power-law distribution that evolved via random kicks as a multiplicative process. In such a system, although the time series develops due to a multiplicative generative mechanism, the inceptive power-law behavior of the event distribution is stored in the memory of the time series. Machine learning methods have been applied to determine the power-law index of such simulated and observational time series \cite[e.g.,][]{bazargan2008,taj2012,sadeghi2019,upendran2021}.
\cite{valdivia2005} argued that intermittent SOC phenomena (e.g., reconnections and substorms) and self-similar turbulent plasma sheets present power-law characteristics. Furthermore, \citeauthor{alipour2022} studied the morphological and intensity structures of the small-scale brightenings (campfires) observed by the Solar Orbiter. They discussed that, to some extent, both the power law and lognormal functions could conveniently model the heavy-tailed distributions of campfires' duration, peak intensity, and size. Moreover, they showed that small-scale brightenings mainly occur at the supergranular cell boundaries (Figure 7, therein). Therefore, these features are analogous to flaring events in the sense of an underlying generative mechanism (i.e., magnetic reconnections), except that they appear in much smaller scales. In other words, one may consider QPPs, or even campfires, related to the reconnection regimes that occur between adjacent magnetic structures as the magnetic field sweeps towards the footpoints via horizontal or turbulent flows. For further discussion on power laws and multiplicative processes see \cite{stefanthurner2018}, section 3.3.3 therein.
Evaluating the existence of any statistical relationship between the oscillatory parameters of the QPPs and the host flares could provide intuition about the underlying generative mechanism. Here, we assess the dependency of the simulated QPPs' periods on the host flare durations and energies. Table \ref{table1} presents the correlations between these variables. The obtained positive covariances indicate the dependency of the QPPs' periods on the flare parameters. However, the Pearson coefficient does not confirm a strong linear correlation. Figure \ref{Fig7} presents the scatter plots of the extracted periods versus the flare durations (top panel) and energies (bottom panel). The fan-shaped diagram of the top panel partially corresponds to the repeated values as multiple periods are detected in the PSDs of some QPPs (see e.g., black rectangles in the figure). It may also indicate the existence of another covariate influencing the dependency of pulsation periods on the flare durations. Nevertheless, no particular pattern is observed in the bottom panel.
We also calculate the non-linear correlations between the QPPs' periods and flare properties using mutual information \citep{shannon1948,kreer1957}:
\begin{eqnarray}
\mathcal{I}(p;f)=\int\int P(p,f)\log \frac{P(p,f)}{P(p)P(f)}dp df,
\label{eq15}
\end{eqnarray}
where $P(p)$ and $P(f)$ are the marginal distributions of the periodicities and flare properties, respectively, and $P(p,f)$ is the joint distribution. Following the routine of \cite{peng2005}, the mutual information is found to be positive (for both the duration and energy; see Table \ref{table1}), which implies the existence of a missing parameter responsible for the observed non-constant (heterogeneous) variances. Although the measured correlations attest to the impacts of flaring events (reconnection regimes) on the production of QPPs, further observational or theoretical investigations are required to fully understand the involved processes and determine the parameters affecting the QPPs' characteristics.
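A plug-in histogram estimate of Equation (\ref{eq15}) is sketched below on mock data; \cite{peng2005} use a refined estimator, so this is only an illustration of the quantity being computed, and the mock duration/period relation is an assumption.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    """Histogram (plug-in) estimate of Eq. (15), I(x;y) in nats.

    The bin count is a free choice; plug-in estimates carry a small
    positive bias even for independent variables.
    """
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal P(x)
    py = pxy.sum(axis=0, keepdims=True)      # marginal P(y)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())

rng = np.random.default_rng(3)
d = rng.exponential(50.0, 5000)              # mock flare durations
p = 0.5 * d + rng.normal(0.0, 10.0, 5000)    # mock periods, partly dependent on d
print(f"I(p;D)      = {mutual_information(p, d):.2f} nats")
print(f"I(p;indep.) = {mutual_information(p, rng.permutation(d)):.2f} nats")
```

Comparing against a permuted (independent) copy gives a rough baseline for the estimator's bias.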
\begin{table}
\caption{The statistical correlations between the QPPs' periods (p) and the host flare properties, i.e., duration (D) and energy (E).}
\begin{center}
\begin{tabular}{c c c c}
\hline
Variables & Covariance & Pearson Correlation & Mutual Information\\
\hline
\hline
(p,D) & $4.1$ $\times$ $10^{3}$ & $0.55$ & $5.17$ \\
(p,E) & $6.8$ $\times$ $10^{5}$ & $0.63$ & $6$ \\
\hline
\end{tabular}
\end{center}
\label{table1}
\end{table}
\section{Modeling The Time Series of QPPs}\label{sec:tsmodel}
The study of real-world data and synthetic time series has a rich history that goes back over a hundred years. Time series analysis provides statistical information that sheds light on the underlying generative mechanisms and may even enable prediction. To perform such analyses, numerous algorithms/tools have been developed, which are mainly classified into two categories: frequency-based and time-domain methods \citep{kantz2003, box2015, shumway, chatfield2019}. A well-established paradigm of the first, the Lomb-Scargle periodogram, is introduced and implemented in Section \ref{sec:stst}. The second type operates based on correlation analysis. Here, we study the stochastic time series of both simulated and observational QPPs using a parametric time-domain method. We apply an autoregressive integrated moving average (ARIMA) model to the non-stationary time series of QPPs and investigate its efficiency in characterizing the QPPs' behavior.
ARIMA models can adequately describe processes with time-varying mean values. These models first remove the non-stationarity by applying a backshift operator and then fit either an autoregressive (AR) term, a moving average (MA) term, or a combination of the two to the subject series \citep[The Econometrics Toolbox User's Guide,][]{matlab}. Both the AR and MA terms cope with the serial autocorrelations of a given time series, using past observations and past errors of the sample, respectively, to predict future values. An MA process of order $q$ is defined as:
\begin{eqnarray}
S_{t}&=&Z_{t}+\theta_{1}Z_{t-1}+\dots+\theta_{q}Z_{t-q},\nonumber \\
&=&(1+\theta_{1}B+\theta_{2}B^{2}+\dots+\theta_{q}B^{q})Z_{t},
\label{eqapp1}
\end{eqnarray}
where $S_{t}$ is the sample value at time $t$, $\theta_{i}$ ($i=1,\dots,q$) are the MA coefficients, $B$ is the backshift operator, and $Z_{t}$ is a random value generated from a Gaussian distribution (white noise). For simplicity and without loss of generality, the expectation of the noise is assumed to be zero. Similarly, an AR process of order $p$ is:
\begin{eqnarray}
S_{t}&=&Z_{t}+\phi_{1}S_{t-1}+\dots+\phi_{p}S_{t-p},\nonumber \\
&=&Z_{t}+(\phi_{1}B+\phi_{2}B^{2}+\dots+\phi_{p}B^{p})S_{t},
\label{eqapp2}
\end{eqnarray}
where $\phi_{i}$ ($i=1,\dots,p$) are the AR coefficients.
Considering Equations (\ref{eqapp1}) and (\ref{eqapp2}), an ARIMA$(p,d,q)$ process is defined as:
\begin{eqnarray}
\bigg( 1-\sum_{i=1}^{p} \phi_{i}B^{i} \bigg){(1-B)}^{d}S_{t}=\bigg( 1+\sum_{j=1}^{q} \theta_{j}B^{j} \bigg)Z_{t},
\label{eqapp3}
\end{eqnarray}
where $d$ is the difference parameter and takes integer values. ARIMA models indicate the existence of short-memory autocorrelations in a system. A more general approach is to consider an additional $1/f$-type long-memory autocorrelation in the time series, for which the autoregressive fractionally integrated moving average (FARIMA or ARFIMA) models are applicable. The FARIMA models share the same representation of Equation (\ref{eqapp3}) except that the difference parameter adopts non-integer values.
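As a concrete check of this definition, the sketch below simulates an ARIMA$(1,1,1)$ series directly from the recursion above and compares the lag-1 autocorrelation of its differenced part with the theoretical ARMA$(1,1)$ value; the parameter values are arbitrary illustrations.

```python
import numpy as np

def simulate_arima_111(n, phi, theta, seed=0):
    """Simulate ARIMA(1,1,1): (1 - phi*B)(1 - B) S_t = (1 + theta*B) Z_t,
    with Gaussian white noise Z_t, by building the differenced ARMA(1,1)
    series w_t = (1 - B) S_t and cumulatively summing it (undoing d = 1)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n)
    w = np.zeros(n)
    for t in range(1, n):
        w[t] = phi * w[t - 1] + z[t] + theta * z[t - 1]
    return np.cumsum(w)

phi, theta = 0.6, 0.3
s = simulate_arima_111(5000, phi, theta)
w = np.diff(s)                               # recover the ARMA(1,1) part
r1 = np.corrcoef(w[:-1], w[1:])[0, 1]
# theoretical lag-1 ACF of an ARMA(1,1) process
rho1 = (1 + phi * theta) * (phi + theta) / (1 + theta**2 + 2 * phi * theta)
print(f"sample lag-1 ACF = {r1:.2f}, theory = {rho1:.2f}")
```

The integrated series itself is non-stationary, which is why a non-zero difference parameter $d$ is needed before fitting the ARMA part.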
The application of the time domain methods has been of interest in astronomical studies in recent decades \cite[e.g.,][]{lazio2001dual,stanislavsky2009, templeton2009, feigelson2012modern, kelly2014, feigelson2018}. In the following, we perform a parametric analysis on the registered energies of both simulated and observational flares of Figures \ref{Fig2} and \ref{Fig3}, respectively. We use MATLAB's \textit{EconometricModeler} toolbox to examine several choices of $p, d,$ and $q$ for both ARIMA and FARIMA models and discuss the results. The main motivation for fitting an ARIMA/FARIMA model to the time series of QPPs is to investigate the functionality of these methods in describing the quasi-oscillatory patterns of solar flares.
Figure \ref{Fig9} illustrates the autocorrelation function (ACF) and partial autocorrelation function (PACF) of the simulated QPPs. The lag number and the significance range are set to a quarter of the sample points and $ \pm 2\sigma, $ respectively. As seen in the figure, the ACF tails off gradually, which is consistent with the existing trend in the time series. This indicates that a non-zero difference parameter is required for modeling. Moreover, a periodic-like pattern is observed in the ACF. The PACF also spikes strongly at lag 1. The autocorrelations provide some intuition about the order of the parameters. However, the preferred values are those minimizing the Akaike information criterion (AIC). The AIC estimates the performance of an applied model on a given data sample. In addition to this criterion, the residuals should be stationary and normally distributed. Figure \ref{Fig10} displays the AICs for the $900$ examined ARIMA models. The minimum AIC corresponds to $p=4, d=1, $ and $q=5$. Strong negative spikes are observed for models with $d=2$ as a manifestation of overdifferencing. Figure \ref{Fig11} shows the simulated time series together with the ARIMA(4,1,5) model and the residuals (left panels).
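The order-selection step can be sketched with a least-squares stand-in for the MATLAB toolbox fits: fit AR models of increasing order to the differenced series and keep the order with minimum AIC. The Gaussian least-squares AIC form $n\ln(\mathrm{RSS}/n)+2k$ and the synthetic AR(2) test series are our own choices, used only to illustrate the selection logic.

```python
import numpy as np

def ar_aic(s, p):
    """Fit AR(p) by least squares on the differenced series; return its AIC.

    d = 1 is assumed (as selected for the QPP series in the text);
    AIC = n*ln(RSS/n) + 2k is the Gaussian least-squares form.
    """
    w = np.diff(s)
    X = np.column_stack([w[p - i - 1 : len(w) - i - 1] for i in range(p)])
    y = w[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ coef) ** 2)
    n = y.size
    return n * np.log(rss / n) + 2 * (p + 1)

rng = np.random.default_rng(4)
# build a differenced AR(2) series so order selection has a known answer
w = np.zeros(4000)
z = rng.normal(size=4000)
for t in range(2, 4000):
    w[t] = 0.5 * w[t - 1] - 0.3 * w[t - 2] + z[t]
s = np.cumsum(w)
aics = {p: ar_aic(s, p) for p in range(1, 7)}
best = min(aics, key=aics.get)
print(f"minimum AIC at AR order p = {best}")
```

In practice a full $(p,d,q)$ grid is scanned, as done for the $900$ ARIMA models in the text; the penalty term $2k$ is what guards against overfitting higher orders.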
Even though the AIC has determined a decent model, an explicit investigation of the residuals is still required to avoid possible mis-specifications. Besides stationarity, the residuals of a convenient ARIMA-type model should follow a Gaussian distribution with no correlations. In order to check whether these conditions are met, various tests are available, e.g., the Ljung-Box Q-test for autocorrelations, the KS-test, the KPSS test, and the Dickey–Fuller test \citep[see the Econometrics Toolbox User's Guide,][for a detailed review]{matlab}. We performed a detrended fluctuation analysis (DFA) on the residuals and obtained a Hurst exponent of $ 0.48 \pm 0.01 $. We also investigated the existence of a unit root by applying the mentioned tests and found that the residuals have characteristics similar to white noise. The ACF of the residuals is shown in the right panel of Figure \ref{Fig11}.
Generally, quasi-periodic patterns are universal features that arise due to an underlying $1/\nu-$type process \citep{feigelson2012modern}. Such long-memory processes might be described with FARIMA models. The difference parameter of a stationary and invertible fractional ARIMA model lies in the range of $ -1/2 < d_{f} < 1/2 $, where $ d_{f} $ corresponds to the long-term dependency in the subject time series \citep{beran2013}. There are several methods to measure the difference parameter of a FARIMA model such as performing a DFA (as $H = d_{f} + 1/2$), fitting a power law to either the gradual decay of ACF ($ACF(l) \propto l^{2d_{f}-1} $) or the Fourier PSD ($ \mathcal{P}(\nu) \propto \lvert \nu \lvert ^{-2d_{f}}$), or even performing a discrete wavelet transformation \citep{Barkoulas1997, reisen2001}. On the downside, different methods might not necessarily converge to the same result \cite[see][section 11.9]{feigelson2012modern}.
Applying a DFA, the Hurst exponent of the simulated QPPs is obtained as $ 1.35 \pm 0.01 $, which means that the time series is non-stationary and leads to $ d_{f} = 0.85 $. Furthermore, we performed a power-law fit on the ACF of the QPPs and found $ d_{f} = 0.75 $. The obtained fractional differences imply that even though the process is mean reverting, the variance is infinite \citep{granger1980}. \cite{beran2013} discussed cases for which both the fractional and non-fractional difference parameters are required, namely, FARIMA($p,d_{\rm total},q$) models with $ d_{\rm total} = d + d_{f}$. One possible approach is to consider e.g., $d=1$ and take the difference of the time series through $(1-B)S_{t}$, then compute $d_{f}$. Following this perspective, we obtain $ d_{f} = -0.03$, which indicates a relatively weak long-term dependency.
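A first-order DFA can be implemented in a few lines; as a sanity check, the exponent for white noise should come out near $0.5$. The implementation details (linear detrending, the scale grid) are our own choices, not the paper's exact routine.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: slope of log F(n) vs log n.

    First-order (linear) detrending in each window; an exponent ~0.5
    indicates white noise, >1 a non-stationary series.
    """
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)
        t = np.arange(n)
        res = []
        for seg in segs:
            c = np.polyfit(t, seg, 1)        # linear trend in the window
            res.append(np.mean((seg - np.polyval(c, t)) ** 2))
        F.append(np.sqrt(np.mean(res)))      # fluctuation at scale n
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(5)
scales = np.array([16, 32, 64, 128, 256])
print(f"white-noise Hurst exponent ~ {dfa_exponent(rng.normal(size=8192), scales):.2f}")
```

The relation $H = d_{f} + 1/2$ quoted above then converts the measured exponent into the fractional difference parameter.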
We have also performed a similar analysis on the HXR emissions of the solar flare of Figure \ref{Fig3}. The ACFs and PACFs relevant to each pass band are shown in Figure \ref{Fig12}. Various choices of ARIMA parameters ($p, q = 1, \dots, 20$ and $d = 1, 2$) are examined and their AICs are measured. The minimum AICs of the L, M1, M2, and H bands are achieved for ARIMA$(2,1,2)$, ARIMA$(4,1,3)$, ARIMA$(19,1,13)$, and ARIMA$(15,1,15)$, respectively. However, the residuals do not follow a Gaussian distribution, in contrast with the initial assumption of Equation (\ref{eqapp3}). Such a violation might impact the maximum likelihood estimations performed for AIC calculation and model selection. Nonetheless, the obtained parameters might still be correct, as regression models can practically overcome the non-normality of residuals in many cases \citep[see e.g.,][]{knief2021violating}. Figure \ref{Fig14} displays the registered emissions, the ARIMA models, and the residuals for each spectral band.
It seems that ARIMA models can conveniently characterize the time series of simulated flaring events. For the observational time series, more study is required to evaluate the influence of the initial assumptions on model selection. In the next step, we would like to extend the study by considering non-Gaussian noise in ARIMA-type models to adequately investigate their practicality in flare studies.
\section{Conclusion}\label{sec:con}
The scale-free nature of solar flare energies inspired the application of SOC models to investigate these phenomena. Despite the development of numerous MHD simulation models, less attention has been paid to the pre-existing CA models and their capabilities in simulating QPPs. In the present study, we took the idea of SOC together with the self-oscillatory processes (load/unload mechanism) into consideration and discussed that CA models could technically generate QPPs due to their intrinsic ability in reproducing repetitive reconnection regimes. Then, we reappraised the well-known CA avalanche model of \citeauthor{LH1991} and investigated the productivity of this model in evaluating QPPs' characteristics.
Applying the modified Lu \& Hamilton model, we simulated $21,070$ flaring events. We found that nearly $4 \%$ of these events last over $50$ seconds (scaled with the HXT temporal resolution); these are mostly C-class flares and more energetic explosions, although some B-class flares are also included. QPPs are found in $70 \%$ of these flares ($565$ out of $813$). However, less than $1 \%$ of small-scale events ($D \le 50$ seconds) exhibited quasi-periodic patterns. We applied the Lomb-Scargle periodogram to study the quasi-periodic patterns. The distribution of extracted periodicities follows a lognormal distribution with a global maximum of $29.29 \pm 0.67$ seconds, which represents the most probable period for the simulated QPPs. The obtained lognormal behavior indicates the presence of a multiplicative mechanism (typifying the effect of independent varying parameters, e.g., driving mechanism, number of unstable sites, released energies, etc.) due to which the system evolves. However, the stochastic and scale-free nature is preserved in the memory of the time series as an essential characteristic of observational/simulated flaring events.
We observed that the CA models can practically produce a wide range of periodicities for QPPs. Moreover, multiple periods are present in nearly $50 \%$ of the simulated QPPs. According to our results, although there is a clear dependency between the QPPs' periods and the host flare durations, other covariates might also be involved that affect their relation. We found that the applied CA model adequately simulates flares, QPPs, and their statistics. Moreover, we examined ARIMA-type models on the time series of observational and simulated QPPs and observed that ARIMA models could describe the subject QPPs. However, further studies are required to address their utility in modeling flares and accompanying QPPs.
\textbf{Acknowledgments:} We acknowledge the use of data from the YLA. The authors gratefully thank Dr. Aki Takeda and Dr. Keiji Yoshimura from the YLA team for their kind response and preparation of the HXR time profiles. N.F. expresses her gratitude to the Iran National Science Foundation (INSF) for supporting this research under grant No. 99012824. The authors also gratefully acknowledge the anonymous Reviewer and statistics Editor for their constructive suggestions.
\clearpage
\bibliography{ref_2022.bib}
Title: Astrophotonic Solutions for Spectral Cross-Correlation Techniques

Abstract: Using photonic devices, we developed a new approach to traditional spectroscopy where the spectral cross-correlation with a template spectrum can be done entirely on-device. By creating photonic devices with a carefully designed, modulated transmission spectrum, the cross-correlation can be carried out optically without requiring any dispersion, vastly simplifying the instrument and reducing its cost. The measured correlation lag can be used for detecting atomic/molecular species within and determining the radial velocity of a particular astrophysical object. We present an overview of two design approaches that are currently being developed that use different photonic platforms: silicon and fibre-based photonics. The silicon photonic approach utilizes ring resonators that can be thermo-optically modulated to carry out the cross-correlation. The fibre approach uses customized fibre Bragg gratings (FBGs) with transmission spectra that can be strain-modulated. Both approaches have been able to detect molecular gas in a lab setting, and we are now in the process of on-sky testing. Lastly, we discuss the future for these types of devices as their simplicity opens up the possibility of developing low-cost, purpose-built multi-object or integral field spectroscopic instruments that could make significant contributions to scientific programs requiring stellar RV measurements and exoplanet detections.

https://export.arxiv.org/pdf/2208.05983
\keywords{Astrophotonics, Spectroscopy, Cross-correlation, Silicon Photonics, Fibre Bragg Gratings}
\section{Introduction}
\label{sec:intro} %
Spectroscopic measurements are fundamental for understanding the properties of astronomical objects. These measurements are essential for determining the kinematics, composition, and excitation states of these objects. Spectral cross-correlation techniques are often used to measure precise radial velocities and determine the composition of stars and exoplanets. These techniques require spectra from high dispersion echelle spectrographs, which are complex and costly to design and construct. Modern echelle spectrographs require sophisticated optical designs, large echelle gratings, and large format image sensors to reach their design goal of high spectral resolution (R $>20,000$) over a broad wavelength range. This is needed for both precise stellar radial velocity (RV) measurements and exoplanet atmospheric characterization\cite{2021A&A...645A..96P,2020MNRAS.498.5684D}.
For these measurements, astrophysical information is extracted by cross-correlating spectra rich in atomic and/or molecular features with models to extract radial velocities (RVs) and compositions. In the case of stellar measurements, either binary masks or spectral models are constructed to match the physical conditions of the stellar photosphere to cross-correlate with. Similarly in exoplanet transit or emission spectroscopy, a multitude of exoplanet atmospheric models are cross-correlated with the stellar transit spectrum to determine the composition of its atmosphere.
Attempts to reduce the cost and complexity of high dispersion spectrographs date back several decades, primarily driven by the lack of availability of large format image sensors\cite{1967ApJ...148..465G,1982PASP...94.1017F}. These instruments measured RVs of stellar objects using a traditional high dispersion spectrograph equipped with a specialized focal plane mask and a photodetector that measured the transmission through the mask, instead of a large format detector. The focal plane mask was designed to mimic the stellar spectral features of the object being studied. By translating the mask by known amounts and measuring the photometer output, the radial velocity of the object can be measured. This technique in effect carries out an optical cross-correlation between the stellar spectrum and the specialized mask. Despite being several decades old, these instruments were able to reach RV precisions better than 1 km s$^{-1}$\cite{1982PASP...94.1017F}. Nonetheless, they share a similar level of optical complexity with modern echelle spectrographs. Astrophotonics offers a new and more elegant solution for carrying out the optical cross-correlation at lower complexity and cost.
The field of astrophotonics has been growing dramatically over the past decade. Astrophotonic devices offer new approaches to traditional astronomical instrumentation. These include miniature spectrometers\cite{2021OExpr..2924947S,2021SPIE11819E..0IG}, customizable notch filters\cite{2015SPIE.9507E..0CE,2021OExpr..2915867P}, and multi-mode to single mode converters\cite{2010OExpr..18.8430L} (photonic lanterns). Due to their scale, these astrophotonic instruments are much more compact and cheaper than one-of-a-kind traditional instruments that use bulk optics. A recent review\cite{2021A&ARv..29....6M} presents the current state-of-the-art in this field. Astrophotonics now offers a modern solution to the decades-old cross-correlation spectrograph concept without any need for dispersion. Through the design of customizable notch filters that can be modulated, we have developed a cross-correlation spectrograph (in short, a correlation sensor) that is tailor-made for specific scientific problems. The solution is entirely non-dispersive, does not require complex optics, and can be fabricated on a multitude of platforms (e.g. silicon photonics, fibres).
In Section \ref{sec:technique}, we discuss our proposed optical cross-correlation technique. In Section \ref{sec:implementation}, we present specifics of the implementation of this technique. In Section \ref{sec:future}, we highlight future prospects for this method, which could simplify highly multiplex spectroscopy. Finally, in Section \ref{sec:summary}, we summarize our instrument concept.
\section{Optical Cross-Correlation Technique}
\label{sec:technique}
To develop the technique, the process begins with identifying the scientific problem to be solved that requires spectroscopy. Next, the photonic device needs to be customized for that program by first developing the spectral template required for the cross-correlation. As an example, we focus on a frequent scientific need: measuring stellar radial velocities. Common science cases include measuring the kinematics of resolved stars within our galaxy and nearby dwarf galaxies to constrain their mass distributions. For this program, we target the Calcium Triplet, a particularly prominent feature in the bright giant stars that are often used as targets. Located at around 0.85 $\mu$m, the calcium triplet (CaT) forms very strong absorption lines that are frequently used for RV measurements. To illustrate our technique, we have simulated a custom three-notch transmission filter that matches the CaT lines (Figure \ref{fig:correlation}, Left Panel). In our concept, we then modulate this notch filter across the CaT lines and measure the transmitted intensity (Figure \ref{fig:correlation}, Right Panel). The transmitted intensity is the cross-correlation between the notch filter and the input spectrum:
\begin{equation}
C(v) = S(\lambda) \star T(\lambda),
\end{equation}
where $v = (\Delta\lambda/\lambda_{0})\,c$, $S$ is the spectrum of the object, $T$ is the transmission profile of the notch filter, $C$ is the observed cross-correlation lag, $\Delta\lambda$ is the wavelength offset from the rest wavelength when cross-correlating, and $\lambda_{0}$ is where maximum correlation is achieved for an object at rest. $\lambda_{0}$ corresponds to the specific spectral feature being correlated with. There is a trade-off between the width of each notch of the filter and the time needed to scan across the features to obtain a sufficient signal-to-noise ratio (SNR) in the correlation measurement; this must be chosen carefully for a particular scientific problem.
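To make the measurement principle concrete, the sketch below (an illustration, not the instrument's software) builds a continuum-normalized CaT spectrum, scans a matched template of narrow passbands across it in velocity, and recovers the input RV from the dip in the integrated product. Modeling the template as passbands at the line positions (a CORAVEL-style mask, equivalent to reading the reflected output of matched notches) reproduces the dip-at-alignment behavior described in the next subsection; the line depths, widths, and the 40 km s$^{-1}$ test velocity are assumptions.

```python
import numpy as np

C_KMS = 299_792.458
wl = np.linspace(845.0, 870.0, 5000)          # wavelength grid (nm)
cat = np.array([849.8, 854.2, 866.2])         # CaT rest wavelengths (nm)

def star(wl_grid, rv, depth=0.8, fwhm=0.35):
    """Continuum-normalized spectrum with Gaussian CaT absorption lines,
    Doppler-shifted by rv (km/s); depth and width are illustrative."""
    sig = fwhm / 2.355
    prof = np.ones_like(wl_grid)
    for c in cat * (1 + rv / C_KMS):
        prof -= depth * np.exp(-0.5 * ((wl_grid - c) / sig) ** 2)
    return prof

def mask(wl_grid, rv, fwhm=0.35):
    """Narrow transmission windows at the (shifted) line positions --
    a CORAVEL-style mask, equivalent to reading the FBG reflection."""
    sig = fwhm / 2.355
    return sum(np.exp(-0.5 * ((wl_grid - c) / sig) ** 2)
               for c in cat * (1 + rv / C_KMS))

rv_true = 40.0                                 # km/s, our test value
spectrum = star(wl, rv_true)
shifts = np.linspace(-100.0, 100.0, 401)       # scanned velocity lags (km/s)
corr = np.array([(spectrum * mask(wl, v)).sum() for v in shifts])
rv_meas = shifts[np.argmin(corr)]              # dip marks the stellar RV
print(f"recovered RV = {rv_meas:.1f} km/s")
```

The minimum of the integrated transmitted light occurs where the template windows overlap the dark absorption cores, i.e., at the star's radial velocity.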
\subsection{Measuring Radial Velocity}
In this illustrative case, the spectrum of the object, a G4III giant star, was taken from the NASA IRTF SPEX spectral library\cite{2009ApJS..185..289R}. As shown in Figure \ref{fig:correlation}, when the notch filter aligns with the absorption features, there is a significant dip in intensity through the filter, indicating that we have reached the appropriate RV for the object. The width of each notch was chosen to be similar to the width of the individual CaT absorption lines in the spectrum. In this case, the spectral features are limited by the IRTF SPEX spectrograph\cite{2003PASP..115..362R}, which has a spectral resolving power of $R\sim2,000-2,500$, requiring each notch to have a full-width at half-maximum (FWHM) of 0.35 nm. This FWHM is larger than the typical line width of CaT features ($0.15-0.2$ nm) in red giant stars. In general, observations made by our cross-correlation instrument will ultimately be limited only by the intrinsic line width of the astrophysical source, and therefore the filters need to be designed accordingly. By measuring the position of the dip in the intensity after modulation, the RV of the source can be determined.
\subsection{Measuring Composition}
Another exciting direction is the measurement of exoplanet atmospheres through the detection of molecular species in the optical and infrared. At this time, this is typically accomplished by cross-correlating model atmospheric templates with spectra obtained from high dispersion spectrographs\cite{2019AJ....157..114B}. Due to their lower temperatures, molecules are abundant in exoplanet atmospheres, which consequently contain a forest of absorption lines that can be correlated with a specialized filter. This includes molecules such as O$_2$, H$_2$O, CH$_4$, CO, and CO$_2$. To illustrate this, some of the molecular features that can be observed in the infrared $H$-band for a transiting hot super-Earth, 55 Cancri e, are shown in Figure \ref{fig:exoplanet}. For reference, this planet has a temperature of 2000 K and a radius of 1.95 R$_\oplus$, and its star is 0.98 R$_\odot$\cite{2018ApJ...860..122C}. The importance of a molecule is highly wavelength dependent and will need to be specifically targeted based on the scientific program of interest. The correlation filter is particularly well-suited for molecules with pseudoperiodic features, like CO and HCN in this case, where there are a few well-resolved lines that can be easily targeted. The principle of measurement is the same as for RVs. If the planet's orbital velocity as well as its location in its orbit is known, the SNR of the planet correlation signal can be increased by shifting and stacking the correlation signal to account for the planet's changing line-of-sight velocity.
\section{Instrument Concept}
\label{sec:implementation}
As discussed, the instrument concept involves the construction of an astrophotonic correlation filter that contains customizable notch filters, which are generated through optical interference within a single mode device. These filters can be reliably and repeatably modulated, allowing our instrument to scan across the features of interest. The modulation technique is specific to the implementation, and we discuss them in more detail in the following subsections. While the modulation yields an effective loss of throughput compared to a dispersive spectrograph, the simple non-dispersive nature of these devices means that there is a significant throughput advantage over traditional echelle spectrographs (70\% for a photonic device versus 15\% for an echelle spectrograph).
The overall concept is presented in Figure \ref{fig:instrument}. Telescope light is fed via a single mode fibre (SMF) to a single device if the beam is diffraction-limited (e.g. with adaptive optics correction), or to a photonic lantern that couples the beam into multiple SMFs, each of which feeds a device. The output of the device is the integral of the reflected spectrum, the transmitted spectrum of the correlation sensor, or both. The output is then coupled into a photometer, which is subsequently read out. If multiple devices are read simultaneously, they can feed multiple photometers or a linear sensor array. Because the correlation is done optically, we only require a single photometer for each device, removing the need for large format sensors. This technique has inherent advantages over traditional methods, especially for scientific programs that require a significant amount of spectroscopy, such as massively multiplexed spectroscopy, or for lightweight purpose-built instruments required for space applications. The overall cost and footprint are substantially lower, and the devices can be easily replicated.
To demonstrate this idea, we have developed two approaches that use either silicon photonics or optical fibres to implement the correlation sensor. For an initial lab demonstration, we focused on the telecommunication band within the $1.5-1.6$ $\mu$m range. We chose a gas that is rich in pseudoperiodic lines and easy to work with in a lab environment: CO$_2$. The gas was also chosen on the basis that it can be detected in the Earth's atmosphere and at Venus in future experiments.
\subsection{Silicon Photonics}
Discussed in further detail in our previous work\cite{2020OExpr..2827951C,2021ApOpt..6010252C}, our design uses the silicon-on-insulator photonics platform to implement the filter. With the initial batch of devices, we tested a number of gases, including HCN and CO$_2$. Thanks to the pseudoperiodic nature of these gas absorption features, we used a ring resonator, which has equally spaced resonances, as our correlation filter. This approach was chosen for its simplicity of implementation. While the resonances do not fully match the CO$_2$ features, there is sufficient overlap with up to ten absorption lines at 1.58 $\mu$m to obtain a strong correlation signal. We show the device in Figure \ref{fig:siphotonics}. The device is edge coupled to an SMF and consists of multiple ring resonators designed for different gases. The modulation is achieved by a heater patterned on the ring resonator, which is also shown in the same figure. By using the thermooptic effect in silicon waveguide modes, we are able to modulate the ring resonator resonances across the CO$_2$ lines.
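A minimal sketch of why an equally spaced resonance comb correlates with a pseudoperiodic band: as the comb is shifted (thermo-optically, in the real device), the summed drop-port signal dips when several resonances align with absorption lines at once. The FSR, line spacing, and widths below are illustrative numbers only, not measured CO$_2$ or device parameters.

```python
# Correlate a shifted Lorentzian resonance comb with a pseudoperiodic band.
import numpy as np

lam = np.linspace(1578.0, 1582.0, 8000)       # wavelength [nm]
line_spacing = 0.4                             # gas line spacing [nm] (illustrative)
fsr = 0.4                                      # ring free spectral range [nm]

# pseudoperiodic gas absorption band (ten lines)
gas = np.ones_like(lam)
for k in range(10):
    centre = 1578.2 + k * line_spacing
    gas -= 0.5 * np.exp(-0.5 * ((lam - centre) / 0.03) ** 2)

def drop_port(shift_nm):
    """Summed drop-port response of the comb, shifted by the heater."""
    comb = np.zeros_like(lam)
    for k in range(10):
        centre = 1578.2 + 0.12 + k * fsr + shift_nm     # start misaligned
        comb += 0.03**2 / ((lam - centre) ** 2 + 0.03**2)  # Lorentzian notch
    return (gas * comb).sum()

shifts = np.linspace(-0.2, 0.2, 401)           # thermo-optic scan [nm]
signal = np.array([drop_port(s) for s in shifts])
best = shifts[np.argmin(signal)]               # deepest dip = aligned comb
print(f"comb aligns at shift {best:+.3f} nm (expect -0.120)")
```

Because all ten resonances cross their lines simultaneously, the correlation dip is much deeper than a single-line scan would give, which is what makes the ring resonator attractive for pseudoperiodic bands.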
The overall experimental setup is shown in Figure \ref{fig:siexp}. The photonic chip is placed on a temperature controlled stage to maintain a fixed base temperature while the current of ring resonator heaters is modulated to correlate with the gas being measured. A photodiode observes the output on the drop or through port of the resonator. As an additional method to reduce systematic errors, a lock-in amplifier is used to drive the heaters\cite{2021ApOpt..6010252C}. With this setup, we were able to measure variations in the CO$_2$ column density at the 10 parts-per-million level. The CO$_2$ column was adjusted by changing the pressure of the gas within the gas cell. As the next step, the input fibre was coupled to a fibre collimator and pointed at the Sun. We were able to successfully detect telluric CO$_2$ on-sky with this experiment.
Our overall goal is to demonstrate the application on planetary and astrophysical sources. To this end, we have two experiments underway: one that couples light from a small astronomical telescope into the fibre (Tonita et al., this conference) and another that couples light from an adaptive optics system implemented on a 1.2-meter telescope. Currently, the overall throughput of our experimental system is low, and our next step is to improve the coupling efficiency. Additionally, we are evaluating the silicon nitride (SiN) platform, which will allow us to develop devices that operate at wavelengths shortward of the silicon absorption edge at 1.1 $\mu$m, allowing us to target other interesting molecules like O$_2$ in the visible regime.
\subsection{Fibre Bragg Gratings}
Fibre Bragg gratings (FBGs) have been a well-established photonic solution for OH skyglow removal in the near-infrared\cite{2015SPIE.9507E..0CE}. This involves the creation of custom FBGs that target and suppress specific airglow lines to improve the sensitivity of low resolution near-infrared spectroscopy. Recent work has shown that similar Bragg gratings can also be made in silicon photonics\cite{2021OExpr..2915867P}. The same concept can be applied to a correlation sensor, but instead of holding the notch filters static, we are able to modulate them across the absorption features of our source by mechanically adjusting the strain of the optical fibre. The overall instrument concept is shown in Figure \ref{fig:fbg}, where we cascade individual FBGs that are each designed for a specific absorption feature. These FBGs are modulated simultaneously by stretching the fibre to obtain the correlation signal.
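The strain tuning can be estimated from the standard Bragg condition $\lambda_B = 2 n_{\rm eff} \Lambda$ and the textbook strain response $\Delta\lambda/\lambda \approx (1 - p_e)\,\varepsilon$, with $(1 - p_e) \approx 0.78$ for silica fibre. The numbers below are typical silica-fibre values, not our device parameters.

```python
# Back-of-envelope FBG strain tuning: Bragg period and the stretch needed
# to scan one notch FWHM (0.35 nm) at 1580 nm. Typical silica values only.
import math

n_eff = 1.447            # effective index of the fibre mode (typical)
lambda_b = 1580.0e-9     # target Bragg wavelength [m]
grating_period = lambda_b / (2.0 * n_eff)        # lambda_B = 2 n_eff Lambda [m]

scan_range = 0.35e-9     # scan one notch FWHM [m]
# d(lambda)/lambda ~ (1 - p_e) * strain, with (1 - p_e) ~ 0.78 for silica
strain = scan_range / (0.78 * lambda_b)          # required axial strain
stretch_mm = strain * 1.0 * 1e3                  # stretch of a 1 m fibre [mm]
print(f"grating period {grating_period * 1e9:.0f} nm, "
      f"strain {strain:.2e}, stretch {stretch_mm:.2f} mm per metre of fibre")
```

Sub-millimetre stretches per metre of fibre are therefore sufficient for the scan, which is comfortably within the range of an ordinary linear translation stage.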
Our implementation on this platform is less mature than the previous one, but we have been able to successfully write custom FBGs with shaped femtosecond laser pulses. The laser beam is shaped into a blade to write periodic structures in fibre cores (Zavyalova et al., this conference) to construct FBGs. The fibres are strained using a linear translation stage. An experimental setup similar to that shown in Figure \ref{fig:siexp} was constructed to successfully detect CO$_2$ with the gas cell. The primary advantages of this method over the previously described platform are the ease of coupling, the larger free spectral range, and notch filters that can be tailored to specific absorption features with high accuracy. The silicon photonic platform offers other advantages, such as a much more compact footprint, easier fabrication, and the ability to integrate multiple photonic devices on a single chip.
\section{Future Directions}
\label{sec:future}
While our immediate plans are to demonstrate the feasibility of this technique on key astronomical sources, namely solar system planets and bright stars, our overall goal is to make highly multiplexed and high dispersion spectroscopy much more affordable. There are two specific areas of significant potential in the next decade:
One is the field of high contrast imaging spectroscopy for the detection of exoplanets. There is a significant push in the era of extremely large telescopes to combine high contrast imaging with high dispersion spectroscopy to obtain significant improvements in the star-planet contrast ratio. Tiling an extreme AO coronagraphic focal plane with an astrophotonic chip capable of detecting molecular signals from exoplanets (Figure \ref{fig:concept}) would be very powerful, as it would not require a costly integral field, high dispersion spectrograph.
The other is the field of multi-object spectroscopy. Since the advent of Gaia, there is an ever increasing need to measure the RVs and compositions of stellar objects within our Galaxy. While there are optical and near-infrared surveys underway to do just that, our solution can offer an alternative low-cost approach, which could enable massively multiplexed multi-object spectroscopy.
Lastly, one of the challenges that remains is the effective coupling of light from the telescope to the device. While traditional AO systems are able to solve this issue, we are also exploring low-cost photonic phase correctors, implemented using silicon photonics. These devices can correct for atmospheric turbulence and effectively couple the light into a single mode waveguide (Diab et al., this conference). This could effectively mitigate the coupling problem without requiring a complex AO system implemented at the observatory.
\section{Summary}
\label{sec:summary}
We present a novel concept for carrying out astrophysical spectroscopy directly on a photonic device by performing non-dispersive cross-correlation spectroscopy with a template transmission spectrum. By tailoring the transmission spectrum accordingly, we can measure RVs and compositions, as well as detect faint objects with known spectral signatures. While designed to solve specific scientific problems, the cost of manufacturing these devices is very low, enabling them to be easily replicated and adapted for a host of science cases. Our initial devices have successfully detected molecular gases in a lab environment, and in the coming year our immediate goal is to test them on astronomical sources.
\acknowledgments %
S.S. acknowledges support for his research from the NSERC Discovery Grant program, the Canada Foundation for Innovation, the Ontario Research Fund, and the Dunlap Institute.
\bibliography{report} %
\bibliographystyle{spiebib} %
|
Title:
Circumplanetary disk ices. I. Ice formation vs. viscous evolution and grain drift |
Abstract: The large icy moons of Jupiter formed in a circumplanetary disk (CPD). CPDs
are fed by infalling circumstellar gas and dust which may be shock-heated upon
accretion or sublimated while passing through an optically thin gap. Accreted
material is then either incorporated into moons, falls into the planet, or is
lost beyond the disk edge on relatively short timescales. If ices are
sublimated during accretion onto the CPD we know there must be sufficient time
for them to recondense or moons such as Ganymede or Callisto could not form.
The chemical timescale to form sufficiently icy solids places a novel
constraint on the dynamical behaviour and properties of CPDs. We use the
radiation thermochemical code ProDiMo to analyze how the radial ice abundance
evolves in CPDs. We consider different initial chemical conditions of the disk
to explore the consequences of infalling material being inherited from the
circumstellar disk or being reset to atomic conditions by shock-heating. We
contrast the timescales of ice formation with those of viscous evolution and
radial dust drift. Water ice can form very efficiently in the CPD from
initially atomic conditions, as a significant fraction is efficiently
re-deposited on dust grains within < 1 yr. Radial grain drift timescales are in
general longer than those of ice formation on grains. Icy grains of size $a <
3$ mm retain their icy mantles while crossing an optically thin circumstellar
disk gap at 5 au for $L_* < 10 $ L$_{\odot}$. Three-body reactions play an
important role in water formation in the dense midplane condition of CPDs. The
CPD midplane must be depleted in dust relative to the circumstellar disk by a
factor 10-50 to produce solids with the ice to rock ratio of the icy Galilean
satellites. The CPD snowline is not erased by radial grain drift, which is
consistent with the compositional gradient of the Galilean satellites being
primordial.
| https://export.arxiv.org/pdf/2208.11053 |
\title{Circumplanetary disk ices}
\subtitle{I. Ice formation vs. viscous evolution and grain drift}
\author{N. Oberg
\inst{1,2}
\and
I. Kamp
\inst{1}
\and
S. Cazaux
\inst{2,3}
\and
P. Woitke
\inst{4,5}
\and
W. F. Thi
\inst{6}
}
\institute{Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands \\
\email{[email protected]}
\and
Faculty of Aerospace Engineering, Delft University of Technology, Delft, The Netherlands
\and
University of Leiden, P.O. Box 9513, 2300 RA, Leiden, The Netherlands
\and
Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, A-8042 Graz, Austria
\and
Centre for Exoplanet Science, University of St. Andrews, North Haugh, St. Andrews, KY16 9SS,UK
\and
Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse 1, 85748 Garching, Germany \\
}
\date{Received --- accepted ---}
\abstract
{The large icy moons of Jupiter formed in a circumplanetary disk (CPD). CPDs are fed by vertically infalling circumstellar gas and dust which may be shock-heated upon accretion. Accreted material is then either incorporated into moons, falls into the planet, or is lost beyond the disk edge on relatively short timescales. If ices are sublimated during accretion onto the CPD we know there must be sufficient time for them to recondense or moons such as Ganymede or Callisto could not form. The chemical timescale to form sufficiently icy solids places a novel constraint on the dynamical behaviour and properties of CPDs.}
{We aim to explore the process of ice formation in CPDs to constrain which disk properties (such as the mass, viscosity, and dust-to-gas ratio) are consistent with the formation of an icy moon system.}
{We use the radiation thermochemical code \textsc{ProDiMo} (Protoplanetary Disk Model) to analyze how the radial ice abundance evolves in CPDs. We consider different initial chemical conditions of the disk to explore the consequences of infalling material being inherited from the circumstellar disk or being reset to atomic conditions by shock-heating. We contrast the timescales of ice formation with disk viscous timescales and radial dust drift.}
{We have derived the radial ice abundance and rate of ice formation in a small grid of model CPDs. Water ice can form very efficiently in the CPD from initially atomic conditions, as a significant fraction is efficiently re-deposited on dust grains within < 1 yr. Radial grain drift timescales are in general longer than those of ice formation on grains. Icy grains of size $a < 3$ mm retain their icy mantles while crossing an optically thin circumstellar disk gap at 5 au for $L_* < 10 $ L$_{\odot}$.}
{Three-body reactions play an important role in water formation in the dense midplane condition of CPDs. The CPD midplane must be depleted in dust relative to the circumstellar disk by a factor 10-50 to produce solids with the ice to rock ratio of the icy Galilean satellites. The CPD snowline is not erased by radial grain drift, which is consistent with the compositional gradient of the Galilean satellites being primordial.}
\keywords{Planets and satellites: formation --
Planets and satellites: composition --
Accretion, accretion disks --
Protoplanetary disks --
Planets and satellites: individual: Jupiter --
Methods: numerical
}
\section{Introduction}
A general feature of regular satellite formation theory is that the circumplanetary disk (CPD) consists of circumstellar material accreted from within the vicinity of the planet \citep{Lubow1999, Canup2002, Alibert2005,Shibaike2019, Ronnet2020}. If the planet is massive enough to open a gap in the circumstellar disk, material continues to flow into the gap \citep{Kley1999, Teague2019} and falls nearly vertically onto the CPD \citep{Tanigawa2012, Morbi2014}. The CPD achieves a steady-state mass when this inflow is balanced by outflow, where gas either spirals into the planet or is decreted beyond the Hill sphere \citep{Canup2002,Batygin2020}. Independently of disk viscosity, stellar tides induce spiral waves in the CPD which transport angular momentum and promote accretion onto the planet at a rate on the order of $10^{-7}$\,M$_{\rm J}$ yr$^{-1}$ \citep{Rivier2012}, suggesting that for a CPD of mass $\sim10^{-4}$ M$_{\rm J}$ \citep{2002A&A...385..647D,Gressel2013,2014ApJ...782...65S} infalling gas spends only a limited time inside the CPD before being lost \citep{Canup2002}.
The timescale of radial dust drift in small disks is also predicted to be short \citep{Pinilla2013,Shibaike2017,Rab2019}. A CPD could lose mm-size grains within \mbox{10$^2$-10$^3$}\,yr to aerodynamic drag against highly sub-Keplerian gas due to a very steep radial pressure gradient \citep{Zhu2018}. The CPD is thus a very dynamical system, potentially with both inwards and outwards radial transport of both gas and dust on very short timescales.
The amount of time which gas and dust spend within the CPD becomes highly relevant if a chemical ``reset'' occurs. The infalling circumstellar material may pass through one or more accretion shocks \citep{Lubow1999,Tanigawa2012,Zhu2015,Szulagyi2016,Schulik2020} and can be heated to $\geq 1000$\,K during accretion onto the CPD \citep{Szulagyi2017,Szulagyi2017b,Aoyama2018}. For pre-shock velocities in excess of 90\,km\,s$^{-1}$ the shock can be sufficiently hot to leave most of the infalling gas atomic or ionized \citep{Aoyama2018}. The shock may also desorb the icy mantles of grains via sputtering and thermal desorption, which for pre-shock velocities in excess of 8-10\,km\,s$^{-1}$ can effectively strip a grain of H$_2$O ice \citep{Woitke1993,Aota2015,tielens2021}. Small icy grains passing through the gap may also lose their volatile contents to photodesorption prior to shock-heating \citep{Turner2012}. We refer to a ``reset'' scenario if shock-heating or photodesorption effectively destroys molecules in the accretion flow to the CPD. In a reset scenario, the re-formation of ices in the CPD must compete with viscous accretion and decretion of gas and radial drift of dust. Alternatively, if circumstellar disk ices survive incorporation into the CPD, we refer to an ``inheritance'' scenario.
The Galilean satellites characteristically exhibit a radial compositional gradient of decreasing density with increasing distance from Jupiter. The inner moon Io is ice-free, while Europa has an ice mass fraction of $\sim 6$-9$\%$ \citep{schubert2004, Kuskov2005} and Ganymede and Callisto have ice mass fractions of 40-55$\%$ \citep{mckinnon1999,schubert2004,Sohl2002}. Theoretically it appears challenging to reproduce the compositional gradient by tidal heating \citep{Bierson2021} or impact-driven escape of volatiles \citep{dwyer2013}. Previously it has been proposed that the gradient was imprinted during the formation of the moons by a radial temperature gradient in the CPD \citep{Lunine1982}, but the relevant chemical timescales have rarely been taken into account \citep{Mousis2006}. It is an open question whether the gradient can be produced primordially if the chemistry of infalling gas and dust is reset. By analyzing the composition and abundance of ices that are able to form within the relevant timescales we can place a lower bound on how efficiently angular momentum is transported within the CPD.
In this work we investigated the balance of the competing timescales of ice formation, dust grain drift, and viscous gas flow to seek constraints on properties of the CPD such as viscosity, ice mass fraction, and dust-to-gas ratio. We considered the two opposing extreme cases of full chemical inheritance and reset in chemically evolving disk models utilizing a rate-based modeling approach. In Sect. \ref{sec:methods} we describe our modeling set-up and the assumptions that we make. In Sect. \ref{sec:results} we analyze the CPD time-dependent ice abundances for both the reset and inheritance cases, and place novel constraints on the properties of CPDs. In Sect. \ref{sec:discussion} we discuss the implications of these constraints and place them in the context of solar system formation. We also consider the role that radial grain drift plays in competing with ice adsorption and desorption. In Sect. \ref{sec:conclusions} we summarize our key findings.
\section{Methods} \label{sec:methods}
We considered two opposing scenarios: one in which the molecular gas-phase and ice chemistry of the circumstellar disk is preserved during accretion onto the CPD (inherit), and one in which it is lost (reset). In the former case the CPD is initially populated with gas and ices extracted from the circumstellar disk. In the latter case the disk is initially populated by fully atomic gas and the dust is free of ice. We followed the build-up of ices in the CPD over time in a thermochemical disk model (see Sect. \ref{sec:methods:diskmodeling}) and extracted the molecular ice abundance and composition as a function of time.
The net flow of gas and solids into and out of the CPD is assumed to be zero, such that the disk mass is in steady-state. The rate of gas outflow then sets an upper limit on the applicable chemical evolutionary timescales. Hereafter we refer to this as the viscous timescale, defined as
\begin{equation} \label{eq:tvisc}
t_{\rm visc} = \frac{M_{\rm CPD}}{\dot M} ,
\end{equation}
\noindent
where $M_{\rm CPD}$ is the mass of the CPD and $\dot M$ is the infall rate of circumstellar material onto the CPD. The relatively short $t_{\rm visc}$ considered in this work (see Sect. \ref{sec:cpdviscosity}) may cause reactions with high activation energies to be kinetically inhibited, although it has been noted that the relatively high densities characteristic of CPDs may allow these reactions to proceed to equilibrium \citep{dePater2010}. Nevertheless we contrast the time-dependent results with the assumption of steady-state chemistry. In steady-state chemistry the disk is allowed to evolve for an indefinite time period until the rate of formation for every gas and ice species is balanced by the corresponding rate of destruction.
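As an order-of-magnitude check of Eq. (\ref{eq:tvisc}), the representative values quoted in the introduction ($M_{\rm CPD} \sim 10^{-4}$ M$_{\rm J}$ and an accretion rate $\sim 10^{-7}$ M$_{\rm J}$\,yr$^{-1}$) imply a gas residence time of order $10^3$ yr:

```python
# Order-of-magnitude evaluation of t_visc = M_CPD / Mdot with the values
# quoted in the introduction (not a model-grid result).
m_cpd = 1.0e-4         # CPD mass [M_J]
mdot = 1.0e-7          # infall/accretion rate [M_J per yr]
t_visc = m_cpd / mdot  # viscous (residence) timescale [yr]
print(f"t_visc ~ {t_visc:.0f} yr")
```

This is the window within which any post-shock ice re-formation must complete before the material is accreted or decreted.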
\subsection{Disk modeling code} \label{sec:methods:diskmodeling}
We used the radiation thermochemical disk modeling code \textsc{ProDiMo}\footnote{https://www.astro.rug.nl/~prodimo/} (\textbf{Pro}toplanetary \textbf{Di}sk \textbf{Mo}del) \citep{Woitke2009a,Woitke2016,Kamp2010,Kamp2017,Thi2011,Thi2018H2,Thi2020H2} to explore the formation rates and resulting abundances of various ices in CPDs. \textsc{ProDiMo} uses a rate equation based approach to compute the gas chemistry using either a time-dependent or steady-state solver. The model represents a 2D slice through an axisymmetric disk, extending radially in distance from the planet $r$ and vertically in distance from the disk midplane $z$. Our chemical network contains 13 elements and 235 atomic and molecular species. Where not explicitly specified we used the ``large DIANA chemical standard'' network as described in \citet{Kamp2017}. Reaction rates are mainly selected from the UMIST2012 database \citep{McElroy2013}. Important three-body collider reactions are adopted from the UMIST2006 rate file \citep{Woodall2007}. Gas-phase reactions within the CPD produce molecular species which can then freeze out on grain surfaces. The rate of ice formation is determined by the available grain surface area, dust temperature, and the rates of thermal, photo-, and cosmic-ray desorption (see Sect. \ref{sec:iceform} for a detailed description of ice formation).
We made the simplifying assumption that the chemical composition of the infalling material is distributed instantaneously and homogeneously throughout the disk (see Appendix \ref{appendix:vertical-mixing} for a discussion of the potential impact of the rate of vertical gas mixing). We assumed that the CPD inherits micrometer- to mm-sized dust grains directly from the circumstellar disk. The vertical dust stratification is calculated according to the method of \citet{Dubrulle1995} and kept fixed prior to the calculation of chemical evolution. The timescales of radial dust drift are calculated in a post-processing step described in Sect. \ref{sec:methods:drift}. The temperature and radiation structure of the CPD is solved in steady-state and then kept fixed during the chemical evolution of inherited or reset infalling material.
In Sect. \ref{sec:results_surface} we considered the implications of including grain-surface chemistry reactions. With surface chemistry \textsc{ProDiMo} models explicitly the formation of H$_2$ in the CPD for which the inclusion of additional chemical species such as hydrogenated PAH is necessitated \citep{Thi2020,Thi2020H2}. The selection of the additional species and reactions in the surface chemistry network and their role in the eventual composition of the ices will be discussed in an accompanying work focused on the composition of the ices (Oberg et al. in prep.).
\subsubsection{Ice formation} \label{sec:iceform}
Where conditions in the disk are appropriate, ices can condense onto the grains in successive layers by physical van der Waals bonding (physisorption). The adsorption rate of species $i$ onto physisorption sites is
\begin{equation}
R_{i}^{\rm ads} = 4 \pi a^2 v_{i}^{\rm th} n_{\rm d} S_{i} \, \, \rm s^{-1} ,
\end{equation}
\noindent
where $4 \pi a^2$ is the dust grain surface area, $a$ is the grain radius, $v_{i}^{\rm th}$ is the thermal speed $(k_{\rm B} T_{\rm g} / 2 \pi m_i)^{1/2}$, $k_{\rm B}$ is the Boltzmann constant, $T_{\rm g}$ is the gas temperature, $m_{i}$ is the mass of the gas-phase species, $n_{\rm d}$ is the dust number density, and $S_{i}$ is the sticking coefficient. The ice adsorption rate coefficients are then
\begin{equation}
\frac{dn_{\#,i}}{dt} = R_{i}^{\rm ads} n_i \, \, \rm cm^{-3} \rm s^{-1} ,
\end{equation}
\noindent
where $n_i$ is the number density of the gas-phase species.
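To make the adsorption rate concrete, the sketch below evaluates $R_i^{\rm ads}$ for gas-phase water under assumed midplane conditions (the density, grain size, and dust-to-gas ratio are illustrative, not values from our model grid). The resulting freeze-out timescale $1/R_i^{\rm ads}$ is of order days, comfortably shorter than the $< 1$ yr re-deposition quoted in the abstract.

```python
# Freeze-out timescale from R_ads = 4*pi*a^2 * v_th * n_d * S for H2O.
# All disk conditions below are illustrative assumptions.
import math

KB = 1.380649e-16            # Boltzmann constant [erg/K]
M_H = 1.6735575e-24          # hydrogen mass [g]

a = 1.0e-5                   # grain radius: 0.1 micron [cm]
rho_grain = 2.5              # grain material density [g/cm^3]
t_gas = 100.0                # gas temperature [K]
n_h = 1.0e12                 # hydrogen number density [cm^-3]
d2g = 1.0e-3                 # dust-to-gas mass ratio (settled midplane)
s_stick = 1.0                # sticking coefficient

m_h2o = 18.0 * M_H                                       # H2O mass [g]
v_th = math.sqrt(KB * t_gas / (2.0 * math.pi * m_h2o))   # thermal speed [cm/s]
m_grain = (4.0 / 3.0) * math.pi * a**3 * rho_grain       # one grain [g]
# grain number density from gas mass density (~1.4 m_H per H nucleus)
n_d = n_h * 1.4 * M_H * d2g / m_grain                    # [cm^-3]

r_ads = 4.0 * math.pi * a**2 * v_th * n_d * s_stick      # adsorption rate [s^-1]
t_freeze_days = 1.0 / r_ads / 86400.0
print(f"R_ads = {r_ads:.2e} s^-1 -> freeze-out in ~{t_freeze_days:.0f} days")
```

The rate scales linearly with both the dust-to-gas ratio and the gas density, so even a strongly dust-depleted midplane freezes out water quickly relative to the viscous timescale.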
Physisorbed species can desorb thermally, photodesorb, or desorb after a cosmic ray impact deposits energy in the grain. For physisorption there is no desorption barrier and the desorption energy is equal to the binding energy E$_{i}^{\mathrm{b}}$. The Arrhenius formulation for the rate of thermal desorption is then
\begin{equation}
R_{i}^{\rm des,th}=v_{0, i}\, e^{-E_{i}^{\mathrm{b}} / k_{\rm B} T_{\mathrm{d}}} \, \, \rm s^{-1} ,
\end{equation}
\noindent
where the pre-exponential (frequency) factor $v_{\rm 0, i}$ is
\begin{equation}
v_{0, i}=\sqrt{\frac{2 N_{\mathrm{surf}} E_{i}^{\mathrm{b}}}{\pi^{2} m_{i}}}.
\end{equation}
\noindent
$N_{\rm surf}$ is the density of surface binding sites 1.5$\times10^{15}$ cm$^{-2}$ \citep{Hasegawa1992}, and $T_{\rm d}$ is the dust temperature. Adsorption energies of major volatile species are listed in Appendix \ref{appendix:eads}.
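The following sketch evaluates the thermal desorption rate for water ice with an assumed binding energy of $E^{\rm b}/k_{\rm B} \approx 4800$ K (a commonly used literature value; the values adopted in this work are listed in Appendix \ref{appendix:eads}). It illustrates how steeply the rate depends on dust temperature, which is why the snowline is sharp.

```python
# Thermal desorption rate of water ice: nu0 * exp(-E_b / (kB * T_d)),
# with nu0 = sqrt(2 * N_surf * E_b / (pi^2 * m)). E_b/kB = 4800 K is an
# assumed typical literature value, not this paper's adopted number.
import math

KB = 1.380649e-16        # Boltzmann constant [erg/K]
M_H = 1.6735575e-24      # hydrogen mass [g]
N_SURF = 1.5e15          # surface binding sites [cm^-2]

e_bind = 4800.0 * KB     # H2O binding energy [erg] (assumed)
m_h2o = 18.0 * M_H       # H2O mass [g]
nu0 = math.sqrt(2.0 * N_SURF * e_bind / (math.pi**2 * m_h2o))  # [s^-1]

def r_des_th(t_dust):
    """Thermal desorption rate [s^-1] at dust temperature t_dust [K]."""
    return nu0 * math.exp(-4800.0 / t_dust)

for t_d in (100.0, 130.0, 160.0):
    print(f"T_d = {t_d:5.0f} K : R_des = {r_des_th(t_d):.2e} s^-1")
```

A change of a few tens of K around $\sim$150 K shifts the desorption timescale by many orders of magnitude, so the ice abundance transitions from stable to rapidly desorbing over a narrow radial range.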
The photodesorption rate is computed using the UV field strength calculated by the radiative transfer and photodissociation cross-sections. The photodesorption rate for species $i$ is
\begin{equation}
R_{i}^{\rm des,ph} = \pi a^2 \frac{n_{\rm d}}{n^{\rm act}_i} Y_i \, \chi F_{\rm Draine} \, \textrm{s}^{-1} ,
\end{equation}
\noindent
where $Y_{i}$ is the photodesorption yield, $n^{\rm act}_i$ is the concentration of the species in the active layers,
\begin{equation}
\begin{aligned}
n_{i}^{\text {act }} &=n_{i} & \text { if } n_{\#, \text { tot }} \leqslant 4 \pi N_{\rm surf} a^2 n_{\mathrm{d}} \\
&=n_{i}\left(N_{\text {act }} / N_{\text {layer }}\right) & \text { if } n_{\#, \text { tot }}> 4 \pi N_{\rm surf} a^2 n_{\mathrm{d}} .
\end{aligned}
\end{equation}
\noindent
The number of physisorbed layers is $N_{\rm layer} = n_{\#, \rm tot} / (4 \pi N_{\rm surf} a^2 n_{\mathrm{d}})$, where $n_{\#, \rm tot}=\sum_{i} n_{\#, i}$ is the total number density of physisorbed species, $4\pi N_{\rm surf}a^2$ is the number of binding sites per layer, and $N_{\rm act}$ is the number of chemically active layers. $\chi F_{\rm Draine}$ is a measure of the local UV energy density from the 2D continuum radiative transfer \citep{Woitke2009a}. The rate of cosmic-ray induced desorption $R_{i}^{\rm des,CR}$ is calculated according to the method of \citet{Hasegawa1993}. The total desorption rate is the sum of the thermal, photo-, and cosmic-ray induced desorption rates, $R_{i}^{\rm des} = R_{i}^{\rm des,th}+R_{i}^{\rm des,ph}+R_{i}^{\rm des,CR}$. The desorption rate of the physisorbed species $i$ is then
\begin{equation}
\frac{dn_i}{dt} = R_{i}^{\mathrm{des}} n_{i}^{\rm act} \, \textrm{cm}^{-3}\, \textrm{s}^{-1} .
\end{equation}
\subsubsection{Properties of the Disk Models}
\begin{table}
\caption{Parameters for the solar circumstellar disk.}
\centering
\renewcommand{\arraystretch}{1.1}%
\begin{tabular}{llll}
\hline \hline
Parameter & Symbol & Value & Unit \\ \hline
Stellar Mass & $M_*$ & 1.0 & M$_{\odot}$ \\
Stellar Luminosity & $L_*$ & 0.84 & L$_{\odot}$ \\
Effective Temperature & $T_{\rm eff}$ & 4395 & K \\
UV Luminosity & $L_{\rm UV,*}$ & 0.01 & L$_{\odot}$ \\
X-ray Luminosity & $L_{\rm X}$ & 10$^{30}$ & erg s$^{-1}$ \\
\hline
Disk Mass & $M_{\rm disk}$ & 0.001 & M$_{\odot}$ \\
Disk Inner Radius & $R_{\rm in} $ & 0.1 & au \\
Disk Outer Radius & $R_{\rm out} $ & 100 & au \\
Column Density Power Ind. & $\epsilon$ & 1.5 & - \\
Reference Scale Height & H$_{\rm 10 au}$ & 1 & au \\
Flaring Index & $\beta$ & 1.15 & - \\ \hline
Minimum dust size & $a_{\rm min}$ & 0.05 & \textmu m \\
Maximum dust size & $a_{\rm max}$ & 3000 & \textmu m \\
Dust size power law index & $a_{\rm pow}$ & 3.5 & - \\
Dust-to-Gas ratio & $d/g$ & $10^{-2}$ & - \\ \hline
Dust composition: \\
\hspace{0.5cm} Mg$_{0.7}$Fe$_{0.3}$SiO$_3$ & & $60\%$ \\
\hspace{0.5cm} Amorphous carbon & & $15\%$ \\
\hspace{0.5cm} Vacuum & & $25\%$ \\
\end{tabular}
\caption*{\textbf{Note}: Stellar temperature and luminosity are selected from the pre-main sequence stellar evolutionary tracks of \citet{2000A&A...358..593S} for $t = 4$ Myr. Stellar UV and X-ray luminosities for a representative Class II T Tauri star are adopted from \citet{2016A&A...586A.103W}. }
\label{tab:ppds}
\end{table}
\paragraph{The Circumstellar Disk}
To generate the chemical abundances for our ``inheritance'' scenario we considered the properties of the surrounding circumstellar disk from which material is accreted onto the CPD. We used a two-step approach to model the chemistry in our circumstellar disk. In the first step, the initial conditions are derived from a zero-dimensional ``molecular cloud'' model, the parameters of which are listed in Table \ref{tab:mol_cloud}. This stage represents 1.7$\times10^5$\,yr (the estimated lifetime of the Taurus Molecular Cloud TMC-1) of chemical evolution in a pre-collapse molecular cloud state \citep{McElroy2013}. The resulting chemical abundances of the majority of the most common species agree within a factor 10 with the observed abundances in TMC-1 (see Fig. \ref{fig:appendix-mc}). These abundances are then used as initial conditions for the 2D grid of cells in the circumstellar disk model in the second step.
In the second step the circumstellar disk model is evolved for an additional 4\,Myr to be consistent with the formation timeline of Jupiter proposed to account for the distinct isotopic populations of meteorites found in the solar system, wherein Jupiter undergoes runaway accretion $>$3.46\,Myr after the formation of calcium-aluminium rich refractory inclusions \citep{Kruijer, Weiss2021}. The surface density power law and physical extent of the circumstellar disk are based on a modified ``Minimum Mass Solar Nebula'' (MMSN) \citep{1981PThPS..70...35H}. A parametric gap has been introduced which reduces the dust and gas density at 5.2\,au, centered on the location of Jupiter. The gap dimensions are parameterized by an analytical gap scaling relation derived from hydrodynamical simulations and are consistent with a circumstellar disk viscosity of $\alpha\sim10^{-4}$ and a disk age of 4\,Myr \citep{2016PASJ...68...43K}. A detailed description of the circumstellar disk model and gap structure methodology can be found in \citet{Oberg2020}. The relevant circumstellar disk model parameters can be found in Table \ref{tab:ppds}. The dust temperature, surface density profile, and UV field strength in and around the circumstellar disk gap can be seen in Fig. \ref{fig:mmsn-gap-conditions}.
Finally we extract the chemical abundances from the circumstellar disk model at the outer edge of the gap at a radius of 8.2\,au. The gap edge is defined as the radius at which the perturbed surface density profile reaches 50$\%$ of the unperturbed profile. As material flows into the gap from above one pressure scale height \citep{Morbidelli2014}, we extract our initial conditions for the CPD model at \mbox{$z$ = $H$} (\mbox{$z$ = 0.5\,au at $r = 8.2$\,au}). The ambient conditions at this point are listed in Table \ref{tab:inherit_point}. The extracted abundances are then used as the initial chemical composition for our ``inheritance" scenario CPD.
Throughout this work we quantify the iciness of solids with the ice mass fraction,
\begin{equation}
f_{\rm ice} = \frac{m_{\rm ice}}{m_{\rm rock} + m_{\rm ice}},
\end{equation}
\noindent
where $m_{\rm ice}$ is the ice mass and $m_{\rm rock}$ is the total rock (in this case, dust) mass. The $f_{\rm ice}$ of solids in the inherited circumstellar material is 0.48. At a single pressure scale height \mbox{($z = H$)} settling reduces the initial, global dust-to-gas ratio by a factor of $\sim$20, calculated according to the method of \citet{Dubrulle1995} with $\alpha_{\rm settle} = 10^{-2}$, such that the dust-to-gas ratio at one scale height is \mbox{$d/g_{z = H} = 10^{-3.2}$}. Nevertheless, we test a range of $d/g$ values for the CPDs both above and below this value ($10^{-4}$-$10^{-2}$).
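The definition of $f_{\rm ice}$ and the settling correction can be checked with a few lines of Python. This is a minimal sketch: the factor-20 settling reduction is taken from the text, the canonical global dust-to-gas ratio of $10^{-2}$ and the ice-to-rock mass ratio of 0.92 are illustrative assumptions chosen to reproduce the quoted values.

```python
import math

def ice_mass_fraction(m_ice, m_rock):
    # f_ice = m_ice / (m_rock + m_ice), as defined in the text
    return m_ice / (m_rock + m_ice)

# Settling reduces the global dust-to-gas ratio by a factor ~20 at z = H
dg_global = 1e-2            # canonical global value (assumption)
dg_at_H = dg_global / 20.0  # ~10^-3.3, close to the quoted 10^-3.2

print(ice_mass_fraction(0.92, 1.0))   # ~0.48, the inherited f_ice
print(math.log10(dg_at_H))            # ~ -3.3
```

An ice-to-rock mass ratio of roughly 0.9:1 thus corresponds to the inherited $f_{\rm ice} = 0.48$.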
\begin{table}[]
\caption{Conditions at the gap edge ``inheritance'' point of the circumstellar disk at heliocentric distance 8.2\,au, altitude 0.5\,au (1 pressure scale height) above the midplane.}
\centering
\setlength{\tabcolsep}{1pt}
\begin{tabular}{llll}
\hline \hline
Parameter & Symbol & Value & Unit \\ \hline
Hydrogen density & $n_{\rm H,in}$ & $10^{10}$ & cm$^{-3}$ \\
Optical Extinction & A$_{\rm V,in}$ & 1.01 & - \\
Dust-to-gas ratio & $d/g\,_{\rm in}$ & $10^{-3.23}$ & - \\
Dust Temperature & $T_{\rm d,in}$ & 57.0 & K \\
Gas Temperature & $T_{\rm g,in}$ & 57.3 & K \\
Ice Mass Fraction & f$_{\rm ice, in}$ & 0.48 & - \\
\end{tabular}
\label{tab:inherit_point}
\end{table}
\subsubsubsection{Survival of icy grains passing through the gap}
Icy grains must orbit within the optically thin gap for an unknown amount of time prior to being accreted onto the CPD. We considered whether ices on grains can survive against thermal or photodesorption while crossing the circumstellar disk gap. The dust and gas temperatures in the gap are not closely coupled as a consequence of the low densities. While the gas temperature extracted from the model at the midplane is $\sim$200\,K, the corresponding dust temperature is $48\pm2$\,K. Given that pressures in the gap range from 10$^{-12}$-10$^{-10}$\,bar, water ice is stable on the relevant timescales in the absence of irradiation \citep{Lodders2003}. However, the actual ice abundance at the gap midplane is negligible in steady-state: despite the presence of a shadowing inner disk, ices within the gap sublimate as a result of the significant stellar background radiation scattered into the gap. %
To assess the longevity of ices crossing through the gap we populated the gap region with the ``inheritance'' chemical abundances found exterior to the gap and produced snapshots at regular intervals. The resulting decline in ice abundance as a function of time is shown in Fig. \ref{fig:MMSN_GAP_ICE_LOSS} for various stellar luminosities. The differing luminosities correspond to the expected properties of the Sun at stellar ages of 0.1, 0.5, 1, and 4\,Myr \citep{Siess2000}. \citet{Turner2012} suggest that grains entering the gap are generally accreted within a single orbital period, or 10\,yr. We find that for a moon formation time \mbox{> 1\,Myr} after CAI formation \mbox{(L$_{*} \leq 2.34 $L$_{\odot}$)}, grains retain $>99\%$ of their volatile content during gap-crossing if they reach the vicinity of the planet within 10\,yr. A ``full'' inheritance scenario is thus not excluded by conditions within the gap; a chemical reset would instead have to rely on shock-heating at the CPD surface.
\subsubsubsection{The circumplanetary disks}
The CPD is an actively-fed accretion disk with a steady-state mass proportional to its viscosity and mass infall rate. We considered an optically thick CPD of mass $10^{-7}$ M$_{\odot}$ as well as a lower-mass CPD ($10^{-8}$ M$_{\odot}$) which is optically thin everywhere outside the orbit of Callisto (\mbox{$r > 0.03$ R$_{\rm H}$}, where \mbox{R$_{\rm H}$ = 0.34 au} is the Hill radius). For a Jovian-mass planet these represent planet-disk mass ratios of $\sim10^{-4}$ and $10^{-5}$, respectively. The CPDs are thus of the ``gas-starved'' type, and do not instantaneously contain the mass required to form a moon system as massive as the Galilean one.
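The quoted planet-disk mass ratios follow from simple arithmetic; the short sketch below verifies them, assuming the standard value of Jupiter's mass in solar units.

```python
M_J_IN_MSUN = 9.55e-4   # Jupiter mass in solar masses (standard value)

for m_cpd in (1e-7, 1e-8):            # high- and low-mass CPDs [Msun]
    q = m_cpd / M_J_IN_MSUN           # planet-disk mass ratio for a 1 M_J planet
    print(f"M_CPD = {m_cpd:.0e} Msun -> q = {q:.1e}")
# q ~ 1e-4 and 1e-5, as quoted in the text
```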
The outer radius of the CPD is set to the planetary Hill radius R$_{\rm H}$; however, an exponential decline in the surface density profile is parameterized to begin at R$_{\rm H}$/3, corresponding to the outermost stable orbit set by tidal interaction and angular momentum considerations (the tidal truncation radius) \citep{1998ApJ...508..707Q, 10.1111/j.1365-2966.2009.15002.x, 2011MNRAS.413.1447M}. An empty inner magnetospheric cavity is assumed to exist as the result of magnetic interaction with the planet \citep{Takata1996,Shibaike2019,Batygin2018}. The parameterized radial surface density profiles of the high- and low-mass CPDs can be found in Fig. \ref{fig:CPD_SURFACE_DENSITY} together with the resulting optical extinction profiles derived from the continuum radiative transfer.
The small parameter variation grid of models exploring possible CPD mass and viscosity can be found in Table \ref{tab:cpd_grid} along with the model IDs. The format of the ID is ($x$-$y$), where $x$ is related to the CPD mass by \mbox{$M_{\rm CPD} =$ 10$^{-x}$ M$_{\odot}$} and $y$ to the mass infall rate (and thus viscosity) by \mbox{$\dot M_{\rm CPD} =$ 10$^{-y}$ M$_{\odot}$ yr$^{-1}$}. The parameters common to the reference CPDs can be found in Table \ref{tab:cpds}.
\subsubsubsection{CPD viscosity} \label{sec:cpdviscosity}
While the mechanism that produces angular momentum transport in accretion disks is not well understood, it is known that molecular viscosity alone is far too weak to explain observations \citep{1973A&A....24..337S,Pringle1981}. The efficiency of angular momentum transport is parameterized by the dimensionless $\alpha$-viscosity \citep{1973A&A....24..337S} which for a circumstellar disk may have a broad distribution ranging from $\sim10^{-5}-10^{-1}$ \citep{Rafikov2017,Ansdell2018,Villenave2022}.
Disk gas with a sufficiently high ionization fraction couples to the stellar or planetary magnetic field such that the magnetorotational instability (MRI) induced turbulence may provide the source of the effective viscosity and produce \mbox{$\alpha \geq 10^{-3}$} \citep{Balbus1991,Hawley1995,Balbus2003}. In the case of a CPD, MRI induced-turbulence may be inhibited by the short orbital periods and presence of magnetic dead-zones, limiting gas transport to a thin surface layer \citep{2014ApJ...785..101F}. Even if the CPD were effectively inviscid, tidal interaction with the star may promote a minimum rate of angular momentum transport through the excitation of spiral waves \citep{Rivier2012}. In our \textsc{ProDiMo} model the $\alpha$-viscosity of the disk is not explicitly specified. Instead, a mass accretion rate is specified which controls the heating rate of the disk through viscous dissipation.
The disk mass, accretion rate, and viscosity are highly degenerate properties. 3D hydrodynamical modeling of gas delivery into the vicinity of a Jupiter-mass planet at 5 au suggests \mbox{$\dot M = 10^{-9.3}$ M$_{\odot}$ yr$^{-1}$} of gas \citep{Szulagyi2021}. Stellar tidal perturbation may produce a minimum accretion rate of \mbox{$10^{-9.7}$ M$_{\odot}$ yr$^{-1}$} \citep{Rivier2012}. In the PDS 70 system, two massive planets are observed to be accreting gas in a large dust cavity \citep{2018ApJ...863L...8W, 2018A&A...617A..44K,2019NatAs...3..749H}. \mbox{K-band} observations of PDS 70 b with the VLT are consistent with \mbox{$\dot M = 10^{-10.8} - 10^{-10.3} $ M$_{\odot}$ yr$^{-1}$} \citep{Christiaens2019}, with similar values estimated for PDS 70 c \citep{Thanathibodee2019}. HST UV and H$\alpha$ imaging of the protoplanet PDS 70 b suggests \mbox{$\dot M = 10^{-10.9} - 10^{-10.8}$ M$_{\odot}$ yr$^{-1}$} \citep{Zhou2021}. Based on these observational and theoretical constraints we adopt \mbox{$\dot M = 10^{-10}$ M$_{\odot}$ yr$^{-1}$} (with a heating rate corresponding to \mbox{$\alpha \approx 10^{-2.7}$}) and \mbox{$\dot M = 10^{-11}$ M$_{\odot}$ yr$^{-1}$} \mbox{($\alpha \approx 10^{-3.6}$)} for the high-mass CPD, representing viscous timescales of $10^3$ and $10^4$ years, respectively, over which the majority of the CPD mass is replaced by freshly accreted material (see Eq.(\ref{eq:tvisc})). For the low-mass CPD we adopt \mbox{$\dot M = 10^{-11}$ M$_{\odot}$ yr$^{-1}$} and \mbox{$\dot M = 10^{-12}$ M$_{\odot}$ yr$^{-1}$} to represent the same $\alpha$-viscosities and $t_{\rm visc}$.
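The quoted viscous (mass-replacement) timescales follow directly from $t_{\rm visc} = M_{\rm CPD}/\dot M$; a minimal sketch for the four model combinations:

```python
# Viscous timescale t_visc = M_CPD / Mdot over which the CPD mass is replaced
models = {
    "(7-10)": (1e-7, 1e-10),   # (M_CPD [Msun], Mdot [Msun/yr])
    "(7-11)": (1e-7, 1e-11),
    "(8-11)": (1e-8, 1e-11),
    "(8-12)": (1e-8, 1e-12),
}
for model_id, (m_cpd, mdot) in models.items():
    t_visc = m_cpd / mdot      # [yr], since Mdot is per year
    print(f"{model_id}: t_visc = {t_visc:.0e} yr")
# Recovers the 1e3 and 1e4 yr timescales of Table 5
```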
The viscous heating is determined according to the method of \citet{1998ApJ...500..411D}. The half-column heating rate is
\begin{equation} \label{eq:visc_heating}
F_{\rm vis} (r) = \frac{3 G M_{\rm p} \dot{M}}{8 \pi r^3} (1-\sqrt{R_{\rm p} /r}) \hspace{0.1cm} \textrm{erg cm}^{-2} \textrm{s}^{-1},
\end{equation}
\noindent
where $G$ is the gravitational constant, $M_{\rm p}$ is the planet mass, $r$ is the distance to the planet, and $R_{\rm p}$ is the planetary radius. We convert the surface-heating to a heating rate per volume as
\begin{equation}
\Gamma_{\rm vis}(r,z) = F_{\rm vis} (r) \frac{\rho_{\rm d}(r,z)}{\int \rho_{\rm d}(r,z') dz'} \hspace{0.1cm} \textrm{erg cm}^{-3} \textrm{s}^{-1},
\end{equation}
where $\rho_{\rm d}$ is the mass density of the dust at radius $r$ and height $z$. The heating is applied directly to the dust. The resulting midplane dust temperature profile of the CPDs can be found in Fig. \ref{fig:CPD_TDUST}. The temperature profiles have been calculated using a new diffusion-approximation radiative transfer solver which is described in Appendix \ref{appendix:rtdiffusionsolver}.
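Equation (\ref{eq:visc_heating}) is straightforward to evaluate numerically. The sketch below (cgs units, with standard values for $G$, $M_{\rm J}$, and $R_{\rm J}$) is an illustration of the half-column heating rate only, not the solver used in \textsc{ProDiMo}:

```python
import math

G = 6.674e-8              # gravitational constant [cgs]
M_J = 1.898e30            # Jupiter mass [g]
R_J = 7.15e9              # Jupiter radius [cm]
MSUN_PER_YR = 6.30e25     # 1 Msun/yr in g/s

def f_vis(r_cm, m_p=M_J, mdot=1e-10 * MSUN_PER_YR, r_p=R_J):
    """Half-column viscous heating rate [erg cm^-2 s^-1], Eq. (visc_heating)."""
    return (3.0 * G * m_p * mdot / (8.0 * math.pi * r_cm**3)
            * (1.0 - math.sqrt(r_p / r_cm)))

AU = 1.496e13             # au in cm
print(f_vis(0.01 * AU))   # heating near the CPD inner region
print(f_vis(0.10 * AU))   # much weaker heating further out
```

The $r^{-3}$ dependence means viscous dissipation dominates the inner disk; the $(1-\sqrt{R_{\rm p}/r})$ factor sends the rate to zero at the planetary surface.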
\subsubsubsection{Sources of CPD external irradiation}
The rate of photodesorption plays an important role in determining the ice abundance in the outer optically-thin region of the CPDs. Potential sources of radiation include the planet, the star, and nearby massive cluster stars. For the planet we assume the runaway gas accretion phase has ended and that the luminosity has correspondingly declined to $10^{-5}$\,L$_{\odot}$ with a surface temperature of 1000\,K where it remains relatively constant for 10 Myr \citep{Hubickyj2005, 2007ApJ...655..541M, 2012ApJ...745..174S}.
We parameterize the isotropic background radiation intensity with the dimensionless parameter $\chi$. The background intensity is then the sum of a diluted 20000\,K blackbody field and the cosmic microwave background,
\begin{equation} \label{eq:chi}
I_{\nu}^{\rm bg} = \chi \cdot 1.71 \cdot W_{\rm dil} B_{\nu}(20000\,{\rm K}) + B_{\nu}(2.7\,{\rm K}),
\end{equation}
\noindent
where the dilution factor \mbox{$W_{\rm dil}$ = $9.85357\times10^{-17}$} such that a value of $\chi = 1$ corresponds to the quiescent interstellar background, or ``unit Draine field'' \citep{Draine1996, Roellig2007, Woitke2009a}. Irradiation by the young Sun provides a minimum $\chi\,\sim\,3000$ at 5\,au (see Fig. \ref{fig:mmsn-gap-conditions}) despite partial shadowing by an inner disk \citep{Oberg2020}. We adopted the same value for the strength of the FUV irradiation in the midplane at Jupiter's location, although it is contingent on our assumptions regarding the stellar UV luminosity and geometry of the inner circumstellar disk. Independently of these factors, \citet{Oberg2020} find that interstellar radiation results in a mean $\chi$ of order $10^3$ in the gap, as the Sun is believed to have formed in a relatively dense stellar cluster \citep{Adams2010,PZ2019} containing massive OB stars \citep{2013A&A...549A..82P}. External irradiation heats dust and gas on the surface and in the outer regions of the optically-thin CPD midplane. 3D dust radiative transfer modelling of gap-embedded CPDs suggests scattered stellar radiation can be the dominant heating source in the outer regions of a CPD \citep{Portilla-Revelo2022}. We assumed that the outer edge of the CPD is in thermal equilibrium with the surroundings and set a CPD background temperature lower limit of 50\,K, equal to that of dust in the circumstellar disk gap (see Fig. \ref{fig:mmsn-gap-conditions}).
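Equation (\ref{eq:chi}) can be sketched numerically with a Planck function in cgs units. This is an illustration only; the overflow guard on the Wien tail is a numerical convenience, not part of the model.

```python
import math

H_P = 6.626e-27           # Planck constant [erg s]
C = 2.998e10              # speed of light [cm/s]
KB = 1.381e-16            # Boltzmann constant [erg/K]
W_DIL = 9.85357e-17       # dilution factor from the text

def planck(nu, temp):
    """B_nu(T) [erg s^-1 cm^-2 Hz^-1 sr^-1]."""
    x = H_P * nu / (KB * temp)
    if x > 700.0:          # deep Wien tail: effectively zero, avoid overflow
        return 0.0
    return (2.0 * H_P * nu**3 / C**2) / math.expm1(x)

def i_bg(nu, chi=1.0):
    """Background intensity: diluted 20000 K field plus the CMB."""
    return chi * 1.71 * W_DIL * planck(nu, 2.0e4) + planck(nu, 2.7)

nu_fuv = 3e15              # ~1000 Angstrom, representative FUV frequency
print(i_bg(nu_fuv, chi=1.0))     # unit Draine field
print(i_bg(nu_fuv, chi=3e3))     # gap-interior field adopted in the text
```

At FUV frequencies the CMB term is negligible, so the field simply scales linearly with $\chi$.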
\subsubsubsection{CPD dust mass and grain size population}
The dust-to-gas ratio is a key parameter that regulates the eventual ratio of ice to rock in the CPD. Planet-induced pressure bumps at the gap edges of the circumstellar disk may filter out dust particles above $\sim$100 \textmu m in size \citep{2006MNRAS.373.1619R,Zhu2012}. For a dust grain size population with power-law exponent $-3.5$, minimum grain size \mbox{$a_{\rm min} = 0.05$ \textmu m}, and maximum grain size \mbox{$a_{\rm max} = 3000$ \textmu m}, filtering out grains larger than 100 \textmu m would reduce the mass of dust entering the gap by a factor of $\sim30$. In an opposing view, \citet{Szulagyi2021} find that a planet within the gap can stir dust at the circumstellar disk midplane and produce a very high rate of dust delivery to the CPD, such that the dust-to-gas ratio can even be enhanced in the CPD to $\sim\,0.1$ for a Jupiter-mass planet at 5\,au.
The dust population within the CPD may also evolve rapidly. It is possible that mm-sized grains are quickly lost to radial drift within $10^2-10^3$\,yr \citep{Zhu2018}. Similarly, \citet{Rab2019} used the dust evolution code \textit{two-pop-py} \citep{Birnstiel2012} to show that for an isolated CPD onto which new material does not accrete, an initial dust-to-gas ratio of $10^{-2}$ can be reduced to $<10^{-3}$ in only $10^4$\,yr and to $<10^{-4}$ in $10^5$\,yr. However, larger dust grains may become trapped in CPDs \citep{2018ApJ...866..142D, Batygin2020}. Additionally, we expect that in an embedded, actively-accreting CPD the dust will continually be replenished and that a higher steady-state dust-to-gas ratio will be achieved. Given these considerations we tested dust-to-gas ratios ranging from \mbox{$10^{-4}-10^{-2}$} in Appendix \ref{appendix:dusttogas}.
\begin{table}
\caption{Parameters common to the CPD models. Parameters which are not common to the CPD models are listed in Table \ref{tab:cpd_grid}. Where not specified the CPD parameters are identical to the circumstellar disk parameters listed in Table \ref{tab:ppds}. }
\centering
\renewcommand{\arraystretch}{1.1}%
\begin{tabular}{llll}
\hline \hline
Parameter & Symbol & Value & Unit \\ \hline
Planet Mass & $M_{\rm p}$ & 1.0 & M$_{\rm J}$ \\
Planetary Luminosity & $L_{\rm p}$ & $10^{-5}$ & L$_{\odot}$ \\
Effective Temperature & $T_{\rm eff,p}$ & 1000 & K \\
UV Luminosity & $L_{\rm UV,p}$ & 0.01 & L$_{\rm p}$ \\
Background UV Field & $\chi$ & $3\times10^3$ & - \\
Background Temperature & $T_{\rm back}$ & 50 & K \\
\hline
Disk Inner Radius & $R_{\rm in,CPD} $ & 0.0015 & au \\
Taper Radius & $R_{\rm tap,CPD} $ & 0.12 & au \\
Disk Outer Radius & $R_{\rm out,CPD} $ & 0.35 & au \\
Column Density Power Ind. & $\epsilon_{\rm CPD}$ & 1.0 & - \\
Flaring Index & $\beta_{\rm CPD}$ & 1.15 & - \\
Reference Scale Height & $H_{\rm 0.1 au}$ & 0.01 & au \\
\hline
Maximum dust size & $a_{\rm max,CPD}$ & 3000 & $\mu$m \\
Dust-to-Gas Ratio & $d/g$ & $10^{-3.3}$ & - \\
\end{tabular}
\label{tab:cpds}
\end{table}
\begin{table}
\caption{Variation of parameters for the circumplanetary disk model grid. Model IDs are of the format (A-B), where the CPD mass is $10^{\rm -A}$ M$_{\odot}$ and the accretion rate onto the CPD is $10^{\rm -B}$ M$_{\odot}$ yr$^{-1}$.}
\centering
\renewcommand{\arraystretch}{1.1}%
\begin{tabular}{lllll}
\hline \hline
\vspace{-2.0ex} \\
model id & $M$ [M$_{\odot}$] & $\dot{M}$ [M$_{\odot}$yr$^{-1}$] & $t_{\rm visc}$ [yr] & $\alpha$ \\ \hline
\vspace{-2ex} \\
(7-10) & $10^{\textbf{-7}}$ & $10^{\textbf{-10}}$ & $10^3$ & $10^{-2.7}$ \\
\vspace{0.2ex}
(7-11) & $10^{\textbf{-7}}$ & $10^{\textbf{-11}}$ & $10^4$ & $10^{-3.6}$ \\
\vspace{0.2ex}
(8-11) & $10^{\textbf{-8}}$ & $10^{\textbf{-11}}$ & $10^3$ & $10^{-2.7}$ \\
\vspace{0.2ex}
(8-12) & $10^{\textbf{-8}}$ & $10^{\textbf{-12}}$ & $10^4$ & $10^{-3.6}$ \\
\end{tabular}
\label{tab:cpd_grid}
\end{table}
\subsection{Dust grain drift within the CPD} \label{sec:methods:drift}
In \textsc{ProDiMo} the chemistry is solved for each grid cell without accounting for radial dust or gas transport. Instead we used properties of the \textsc{ProDiMo} model output to inform radial dust drift calculations in a post-processing step, comparing the timescales of chemical evolution and dust drift. Pressure-supported disk gas orbits at sub-Keplerian velocities, such that larger grains feel a headwind and rapidly drift inwards \citep{Weidenschilling1977}. In a circumstellar disk the degree of sub-Keplerianity can be $<1\%$, while in a CPD it can be as high as 20-80$\%$ due to significant gas pressure support \citep{Armitage2007,2018ApJ...866..142D}. We considered whether the timescale of grain drift can compete with the processes that shape grain composition. In the Epstein regime the radial grain drift velocity $v_{r,d}$ can be approximated as
\begin{equation}
v_{r,d} = \frac{ v_{r,g} T_s^{-1} - \eta v_K}{T_s + T_s^{-1}} ,
\end{equation}
\noindent
where $v_{r,g}$ is the radial velocity of the gas, $T_s$ is the dimensionless stopping time of a grain,
\begin{equation}
T_s = t_s \bigg( \frac{v_k}{r} \bigg) ,
\end{equation}
\noindent
where $t_{\rm s}$ is the stopping time, $v_{\rm k}$ is the Keplerian orbital velocity, and $r$ is the radius in the CPD. The stopping time is
\begin{equation}
t_s = \bigg( \frac{\rho_{\rm grain}}{\rho_{\rm gas}} \bigg) \bigg( \frac{a}{v_{\rm th}} \bigg) ,
\end{equation}
\noindent
where $a$ is the grain size, $\rho_{\rm gas}$ is the gas density, $\rho_{\rm grain}$ is the material density of the dust grains, and $v_{\rm th} = c_{\rm s}\,(8/\pi)^{1/2}$ is the thermal velocity of the gas, with $c_{\rm s}$ the speed of sound. The parameter $\eta$ is
\begin{equation}
\eta = n \bigg( \frac{c_s^2}{v_k^2} \bigg) ,
\end{equation}
\noindent
and $n$ is the power law exponent of the radial pressure gradient \citep{Armitage2010}. We extract the gas density $\rho_{\rm gas}$, sound speed $c_{\rm s}$, and pressure gradient from our disk models to consistently determine the grain drift velocities for a grain material density \mbox{$\rho_{\rm grain} = 3$ g cm$^{-3}$}. The Epstein regime is valid where the dust particle size $a \leq 9 \lambda / 4$ where $\lambda$ is the mean free path of the gas particles. In the inner CPD the gas density is sufficiently high that this condition can be violated. For the high-mass CPD this occurs inside of the orbit of Callisto and for the low-mass CPD inside of the orbit of Europa for grains less than 1 cm in size, in which case we transition to the Stokes regime and calculate the drift velocities accordingly \citep{Weidenschilling1977}.
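The Epstein-regime drift equations above combine into a short function. The sketch below works in cgs units; the gas density, sound speed, Keplerian velocity, and the pressure-gradient exponent $n$ passed in the example are illustrative placeholders, not values extracted from the \textsc{ProDiMo} model.

```python
import math

def drift_velocity(a, rho_gas, v_k, r, c_s, v_rg=0.0, n=2.75, rho_grain=3.0):
    """Epstein-regime radial drift speed [cm/s], following the equations above.
    a: grain size [cm]; rho_gas [g/cm^3]; v_k, c_s, v_rg [cm/s]; r [cm].
    n is the radial pressure-gradient exponent (placeholder value)."""
    v_th = c_s * math.sqrt(8.0 / math.pi)        # thermal speed of the gas
    t_s = (rho_grain / rho_gas) * (a / v_th)     # stopping time [s]
    T_s = t_s * v_k / r                          # dimensionless stopping time
    eta = n * c_s**2 / v_k**2                    # sub-Keplerianity parameter
    return (v_rg / T_s - eta * v_k) / (T_s + 1.0 / T_s)

# Illustrative CPD midplane conditions at r ~ 0.1 au around a 1 M_J planet
v_small = drift_velocity(1e-4, 1e-9, 2.9e5, 1.5e12, 1e5)  # 1 micron grain
v_big = drift_velocity(10.0, 1e-9, 2.9e5, 1.5e12, 1e5)    # 10 cm "grain"
print(v_small, v_big)   # both negative: inward drift, faster for larger T_s
```

Small grains ($T_s \ll 1$) are carried with the gas and drift slowly; the drift speed peaks near $T_s \approx 1$ and falls off again for very large bodies.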
We adopted several simplifying assumptions regarding the radial velocity structure of the gas. The centrifugal radius $r_{\rm c}$ of the CPD is the orbital radius at which the specific angular momentum is equal to the average of the infalling material, and where solid material is accreted onto the CPD \citep{Canup2002,2010ApJ...714.1052S}. In the case of a Jupiter-mass planet this lies near the orbit of Callisto. Rather than accreting at precisely this radius, infalling matter has some intrinsic spread in angular momentum \citep{2008ApJ...685.1220M}. We adopted the prescription of \citet{2011AJ....142..168M} that material accretes between 16-28 R$_{\rm J}$. The gas falling onto the CPD at $r_{\rm c}$ will viscously spread radially away from where it is accreted \citep{Pringle1981}. Hence interior to $r_{\rm c}$ gas flows towards the planet and exterior to $r_{\rm c}$ it flows outwards. Recently it has been proposed that to effectively trap dust and allow for planetesimal growth within the CPD, gas may indeed need to flow predominantly away from the planet, forming a decretion disk \citep{Batygin2020}.
In our high viscosity models, a parcel of gas that accretes onto the CPD near $r_{\rm c}$ and flows outwards must travel $\sim 0.3$ au within $10^3$ yr to be consistent with $t_{\rm visc}$ and thus has a mean outwards radial velocity on the order of \mbox{1 m s$^{-1}$}. For our low viscosity case \mbox{$v_{\rm r, gas} \sim 0.1 $ m s$^{-1}$}.
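The quoted mean outward gas velocity is a simple distance-over-time estimate, which can be verified directly:

```python
AU_CM = 1.496e13     # au in cm
YR_S = 3.156e7       # year in seconds

# Gas accreted near r_c must spread ~0.3 au outwards within t_visc = 1e3 yr
v_out = 0.3 * AU_CM / (1e3 * YR_S)   # [cm/s]
print(v_out / 100.0)                  # ~1.4 m/s, i.e. of order 1 m/s as stated
```

For the low-viscosity case ($t_{\rm visc} = 10^4$\,yr) the same distance gives a tenfold smaller velocity, matching the $\sim$0.1 m s$^{-1}$ in the text.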
\section{Results} \label{sec:results}
In our reference model grid (Table \ref{tab:cpd_grid}) we considered the case of a higher mass, optically thick CPD \mbox{($10^{-7}$\,M$_{\odot}$)}, and lower mass, optically thin CPD ($10^{-8}$\,M$_{\odot}$) both with $d/g = 10^{-3.3}$. For each unique mass we considered viscous timescales of $10^{3}$ and $10^{4}$\,yr. For each of the four resulting CPDs we tested two initial compositions: of chemical inheritance and reset, for a total of eight models.
\subsection{Timescales of ice formation} \label{sec:results_timescales}
In the following section we discuss by which pathways and at what rate water (ice) formation occurs in a chemically reset CPD. Thereafter we contrast these results with that of the CPDs which inherit an initial chemical composition from the circumstellar disk.
\subsubsection{Reset}
In our ``reset'' scenario the species initially present are exclusively atomic (with the exception of PAHs) and singly or doubly ionized. All hydrogen is initially present in the form of H$^+$. The reset case is characterized by the formation of ice where it is stable against desorption. Ice formation is suppressed in the innermost region of the CPD due to dust temperatures in excess of 160-180\,K. In the outer region of the CPD ice formation is suppressed by the low optical depths and correspondingly high intensity of background radiation, which causes ices to photodesorb. Between these two boundaries ices begin to form.
The freeze-out occurs step-wise, with two characteristic timescales on which ice formation proceeds. The rate of water ice formation at different disk radii as a function of time is shown in Fig. \ref{fig:dnice_dt}. Within \mbox{$0.01-1$\,yr} the first step is completed. This initial, rapid water formation occurs primarily via a path that begins with a three-body recombination reaction important at temperatures below 2800\,K \citep{Hidaka1982,Tsang1986},
\vspace*{-1mm}
\begin{equation} \label{eq:water1a}
\rm H + O + M \rightarrow OH + M,
\end{equation}
\noindent
where the third body M = H, H$_2$, or He, and to a lesser extent by \mbox{H + O $\rightarrow$ OH + photon}. The water then forms by radiative association of the OH with free hydrogen;
\vspace*{-1mm}
\begin{equation} \label{eq:water1c}
\rm OH + H \rightarrow H_2O + photon,
\end{equation}
\noindent
after which it adsorbs to a grain. Water formation via reactions (\ref{eq:water1a}) and (\ref{eq:water1c}) remains proportional to the declining abundance of free H and ends when it is depleted, around $10^{-1}$\,yr. Typically half of the maximum possible amount of water ice is produced during this stage.
The first stage of water ice formation proceeds inside-out as a result of the strong density-dependence of the collider reaction. The second stage proceeds outside-in due to the pathway's dependence on intermediate species produced by photo-reactions. This can be seen in Fig. \ref{fig:dnice_dt} where the inner disk formation rate (red lines) is initially higher while later the outer disk rate (blue lines) is relatively higher. At this lower rate of formation the total mass of water ice doubles at the midplane within $\sim1$\,Myr, approximately half of which forms by
\vspace*{-1mm}
\begin{equation}
\rm NH_2 + NO \rightarrow N_2 + H_2O,
\end{equation}
\noindent
near the snowline. The second stage of ice formation also involves the freezing-out of NH$_3$ and other, more volatile species. We find that the relatively unstable NH$_2$ exists in abundance at such high densities ($n_{\rm H} \sim 10^{15}$\,cm$^{-3}$) due to the three-body reaction \mbox{N + H$_2$ + M $\rightarrow$ NH$_2$ + M}. Although the reaction rate has been experimentally determined as \mbox{$k_0$ = 10$^{-26}$ cm$^6$ s$^{-1}$} \citep{Avramenko1966}, it has been noted as being in need of revisiting due to its importance in water formation via \mbox{NH$_2$ + O $\rightarrow$ NH + OH} \citep{Kamp2017}. We have adopted a rate more typical of collider reactions \mbox{(10$^{-30}$ cm$^6$ s$^{-1}$)}, which still produces enough NH$_2$ for this path to play an important role. The other half of the water ice is formed via the more ``classical'' route
\vspace*{-1mm}
\begin{equation}
\rm H_3O^+ + e^- \rightarrow H_2O + H,
\end{equation}
\noindent
and in the outermost part of the disk where water ice is stable via
\begin{equation}
\rm H_3O^+ + H_2CO \rightarrow H_3CO^+ + H_2O.
\end{equation}
\noindent
In the high-mass CPDs water ice formation is typically still ongoing at the end of the viscous timescale, and so the midplane ice abundance is not able to converge to its steady-state level within that time.
\subsubsection{Inheritance}
The evolution of the inheritance case is characterized by the desorption of ices in regions where they are not stable against thermal or photo-desorption. Where water ice is stable very little additional water formation occurs within the viscous timescale.
Icy grains interior to the snowline sublimate typically within $10^{-5}$\,yr, and a ``snowline'' (water iceline) is clearly established. In some cases the snowline can take longer to stabilize, shifting outwards over time, and can continue to evolve radially for up to $10^5$\,yr. This is notable in the (8-11) CPD, where the snowline moves from 0.01\,R$_{\rm H}$ at $10^{-5}$\,yr to 0.03\,R$_{\rm H}$ after $10^4$\,yr. In practice this implies that there exists a radial span in which the composition of radially drifting icy grains does not immediately begin to reflect the ambient conditions. This is discussed in Sect. \ref{sec:discussion_drift}.
In the outer optically thin region of the CPDs, ices are also lost to photodesorption although the process is slower than the thermal desorption occurring in the inner disk. Desorption in this area is typically complete within $10^3-10^5$\,yr and has not always reached steady-state by the end of the viscous timescale.
\subsection{The midplane ice mass fraction}
While water ice can form efficiently in chemically reset CPDs, the final $f_{\rm ice}$ of the solids depends strongly on the total dust mass in the midplane. We explore the role of the global $d/g$ ratio in the reset and inheritance cases in Appendix \ref{appendix:dusttogas}, Fig. \ref{fig:fice_dustgas_001}. A canonical dust-to-gas ratio of $10^{-2}$ produces at most grains with an ice mass fraction of $<0.1$ and is nowhere consistent with the composition of the icy Galilean satellites. In contrast a dust-to-gas ratio of $10^{-3.3}$ results in solids with $f_{\rm ice}$ both above and below the maximum Galilean $f_{\rm ice}$ for the high and low-mass CPDs, respectively. Hereafter the global dust-to-gas ratio of $10^{-3.3}$ is considered in discussions of the four reference CPDs.
In Fig. \ref{fig:fice_all4} we show the radial ice mass fraction for the CPDs with $d/g = 10^{-3.3}$ at their respective viscous timescales, allowing for direct comparison between the inheritance and reset cases. For completeness we include the results where grain-surface chemical reactions are utilized.
The midplane $f_{\rm ice}$ is also dependent on the degree of dust sedimentation (settling). A general feature of the $f_{\rm ice}$ profiles is an inner maximal peak at the snowline followed by a decline towards the outer edge of the disk. As dust settling is more efficient at larger radii, $f_{\rm ice}$ reduces accordingly with increasing radius. Settling of dust to the midplane is counteracted by stochastic advection by turbulent eddies in the gas. We assume that the value of turbulent-$\alpha$ used to determine the degree of settling is equal to the global viscous $\alpha$ used to determine the heating rate by viscous dissipation. In the low viscosity CPDs (7-11) and (8-12) dust settling towards the midplane is thus proportionally enhanced. In the low-viscosity cases the dust density is enhanced in the midplane minimally by a factor $\sim 3$ over the global \textit{d/g}, increasing to a factor $\sim$20 at R$_{\rm H}/3$. As the degree of settling is also dependent on the adopted surface density power law exponent $\epsilon$ we explore the impact of deviation from the assumed $\epsilon=1$ in Appendix \ref{appendix:surfacedensityslope}. Given that we have no reason to believe this value will depart significantly from the range 1.0-1.3, the resulting $f_{\rm ice}$ in the inner disk should differ from our reference result by no more than 25-30$\%$ interior to the ammonia iceline at $\sim$70\,K.
In general the high-mass chemically reset CPDs (7-10) and (7-11) are not able to converge entirely towards a steady-state ice abundance in either the ``fast" ($10^3$\,yr) or ``slow" ($10^4$\,yr) viscous timescales as gas-phase CO is more stable and contains a relatively larger fraction of the total oxygen budget for longer. As a result chemically reset CPDs contain on average less water ice than those which inherit their ices from the circumstellar disk. In contrast, the low-mass chemically reset CPDs (8-11) and (8-12) are able to converge towards the ice abundances seen in the inheritance cases within 100 yr.
\subsubsection*{The role of surface chemistry} \label{sec:results_surface}
The duration of the initial stage in which water formation is rapid is dependent on the availability of atomic H. When H$_2$ formation is complete this stage ends. The formation of H$_2$ is treated differently with the inclusion of grain surface chemistry. When the chemistry is limited to gas-phase reactions and direct adsorption/desorption only, H$_2$ formation proceeds via the analytic rate determined by \citet{Cazaux2002}. When surface chemistry is included H$_2$ formation is instead modeled explicitly via reactions involving hydrogenated PAHs (H-PAH) \citep{Boschman2012,Thi2020H2}.
A chemical reset poses a scenario in which H$_2$ and H$_2$O formation occur simultaneously. The analytic rate of \citet{Cazaux2002} presupposes that chemisorbed H plays a role in H$_2$ formation on silicate or carbonaceous surfaces, in which H goes through an intermediate stage of being chemically, rather than physically, bound to the grain surface. We find that prior to atomic H depletion several ($>$3) monolayers of H$_2$O have formed on the average-sized grain. The formation of H$_2$ via chemisorbed H should thus be suppressed in these regions, as the water layers prevent H atoms from chemisorbing to the grain surface \citep{WAKELAM2017B}. In the absence of chemisorbed H, H$_2$ formation on dust is strongly reduced and proceeds primarily via H-PAH. The H$_2$ formation rate under these conditions is less efficient than the analytic rate of \citet{Cazaux2002} near the inner edge of the snowline at 150-170\,K. The relatively slower formation of H$_2$ and the resulting prolonged availability of atomic H results in a $\sim 30-100\%$ increase in water ice abundance interior to the NH$_3$ iceline prior to $t_{\rm visc}$, and hence narrows the gap between the inheritance and reset cases in this region. Water ice formation in the inner disk is further enhanced by the inclusion of O$_2$H in the surface chemistry network. Gas-phase three-body reactions with O$_2$ produce O$_2$H, which in turn leads to early OH formation. The gas-phase O$_2$ reservoir is thus depleted and efficiently converted via OH into H$_2$O, starting with the three-body reaction
\begin{equation}
\rm O_2 + H + M \rightarrow O_2H + M,
\end{equation}
\noindent
with rates adopted from UMIST2006 \citep{Atkinson2004,Woodall2007}, which is highly efficient at the densities in the CPD, and thereafter
\begin{equation}
\rm O_2H + H \rightarrow OH + OH,
\end{equation}
\begin{equation}
\rm OH + H \rightarrow H_2O + photon.
\end{equation}
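The density dependence that makes these collider reactions effective at CPD midplane conditions can be sketched numerically. The snippet below uses the standard modified-Arrhenius parametrization adopted by UMIST; the coefficient values and number densities are placeholders for illustration only, not the actual UMIST2006 rates for these reactions.

```python
import math

# Modified-Arrhenius rate, k(T) = alpha * (T/300)^beta * exp(-gamma/T), the
# standard UMIST parametrization. The coefficients below are PLACEHOLDERS for
# illustration only, not the actual UMIST2006 values for O2 + H + M.

def three_body_rate(T, alpha, beta, gamma):
    """Three-body rate coefficient [cm^6 s^-1]."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

# The volumetric rate scales with THREE number densities, so it is negligible
# in molecular clouds (n ~ 1e4 cm^-3) but can dominate at CPD midplane
# densities (n ~ 1e12-1e15 cm^-3):
k = three_body_rate(150.0, alpha=1.0e-32, beta=0.0, gamma=0.0)  # placeholder
n_O2, n_H, n_M = 1.0e6, 1.0e8, 1.0e14   # illustrative densities [cm^-3]
rate = k * n_O2 * n_H * n_M             # formation rate [cm^-3 s^-1]
```

Because the rate carries the product of three densities, the same rate coefficient that is irrelevant in the diffuse interstellar medium becomes a dominant formation channel at CPD midplane densities.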
In the outer disk the ice mass fraction can be enhanced relative to the gas-phase-only chemical network, as the freeze-out of more volatile species is facilitated by grain surface reactions. CO$_2$ ice is readily formed on the surface via \mbox{OH + CO $\rightarrow$ CO$_2$ + H} \citep{Oba2010,Liu2015}, for which we adopt an effective barrier of 150\,K \citep{Fulle1996,Ruaud2016}. The formation of carbon-bearing ices begins to significantly influence the $f_{\rm ice}$ of the chemically reset CPDs only after 10$^3$\,yr, and hence the effect on the high-viscosity CPDs with $t_{\rm visc} = 10^3$\,yr is less pronounced. The formation and composition of these ice impurities will be discussed in detail in an accompanying work (Oberg et al. in prep).
\subsection{Grain drift vs. adsorption and desorption} \label{sec:grain_drift}
We calculated the velocity of radially drifting grains in the four reference CPDs and showcase the results for the high-mass high-viscosity CPD (7-10) in Fig. \ref{fig:drift_velocity}. We solved for the total time it takes grains of several sizes to reach the inner disk edge. The resulting timescale of grain drift can be seen in Fig. \ref{fig:drift_time} which shows the time for a grain deposited at radius $r$ to reach the CPD inner edge. Grains which become trapped are not included in Fig. \ref{fig:drift_time}, as they do not reach the disk inner edge. The regions where thermal desorption, ice formation, and photodesorption predominantly shape grain mantle composition, as well as their respective timescales, are indicated in the figure. These are the timescales with which grain drift competes.
The regions have been defined as follows: the ``thermal desorption region'' is found interior to the snowline, where inherited, initially icy grains lose their icy mantles within the viscous timescale. The ``snow border'' is the region where the reset and inheritance cases are not able to converge towards a common snowline within $10^6$ yr. Grains here are able to retain their icy mantles but no significant additional ice adsorption occurs. The ``ice formation region'' is where water begins to adsorb to grains after the CPD has been chemically reset. The ``photodesorption region'' is the optically thin region exterior to the snowline where inherited grains eventually lose their icy mantles within $\sim10^6$ yr at most.
We focus on grains of size $<10$ mm because the adsorption and desorption timescales in Sect. \ref{sec:results_timescales} were derived with the thermochemical disk model only for grains up to this size. In most cases the grain drift timescale $t_{\rm drift}$ is longer than the timescales of thermal desorption and of rapid ice formation. The composition of grains will thus correspond to the $f_{\rm ice}$ profiles derived in Sect. \ref{sec:results}, except in the case of the low-mass, high-viscosity CPD (8-11). In this CPD icy grains can cross into the ``thermal desorption region'' but only begin desorbing once they approach the position of Europa.
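As a rough cross-check of the drift timescales above, the classical inward-drift estimate for a grain embedded in sub-Keplerian gas can be sketched as follows. This neglects the radial gas advection of the decreting CPD that our full calculation includes, and all parameter values are illustrative rather than taken from the CPD models.

```python
import math

# Classical radial drift estimate v_r ~ -2*eta*v_K*St/(1+St^2) for a grain in
# sub-Keplerian gas (Epstein drag regime). This neglects the radial advection
# of the decreting CPD gas; all parameter values are illustrative.

def stokes_number(a, rho_grain, sigma_gas):
    """Midplane Stokes number, St = (pi/2) * rho_grain * a / Sigma_gas (cgs)."""
    return math.pi / 2.0 * rho_grain * a / sigma_gas

def drift_speed(a, rho_grain, sigma_gas, eta, v_K):
    """Radial drift speed [cm/s]; negative means inward."""
    St = stokes_number(a, rho_grain, sigma_gas)
    return -2.0 * eta * v_K * St / (1.0 + St**2)

v_K, eta = 1.0e6, 1.0e-2    # Keplerian speed [cm/s], sub-Keplerian parameter
sigma, rho = 100.0, 3.0     # gas surface density [g/cm^2], grain density [g/cm^3]
for a in (1e-4, 1e-2, 1e-1):   # grain radii [cm]: 1 micron to 1 mm
    print(a, drift_speed(a, rho, sigma, eta, v_K))
```

For all grain sizes shown the Stokes number is well below unity, so the drift speed grows linearly with grain size: millimeter grains decouple from the gas orders of magnitude faster than micron-sized grains, consistent with the size segregation discussed below.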
\subsubsection*{Dust traps}
It is clear from Fig.~\ref{fig:drift_velocity} that dust traps are present in a decreting CPD. Grains deposited near the centrifugal radius drift outwards with the gas until the force of radial advection is balanced by the loss of orbital energy from drag against gas orbiting at sub-Keplerian velocity. Trapped grains thus become radially segregated by size, with smaller grains drifting to the outer edge of the trap and larger grains remaining near the inner edge. In the high-mass high-viscosity CPD (7-10), grains 0.1-1\,mm in size become trapped near 0.1-0.2\,R$_{\rm H}$. Grains smaller than 0.05\,mm advect with the gas and are able to reach the outermost stable orbit at R$_{\rm H}/3$, where they would eventually be lost to e.g. tidal stripping.
In general the dust traps are spatially coincident with the ice formation region, and in the lower-mass CPDs also partially with the photodesorption region. Hence continued ice deposition on trapped grains could facilitate grain growth. This phenomenon is discussed further in Appendix \ref{appendix:trapped-grains}.
\section{Discussion} \label{sec:discussion}
We set out to explore the process of ice formation in CPDs with relatively short viscous timescales to constrain their physical, chemical, and dynamical properties. We find that even if infalling gas and ice is fully atomized, re-freezing proceeds quickly in the CPD and solids reach an $f_{\rm ice}$ $\sim0.5$ by $t_{\rm visc}$ for an appropriate midplane dust-to-gas ratio. The midplane ice abundance at $t_{\rm visc}$ is generally insensitive to the initial chemical conditions. Only in the inner disk ($r<0.05-0.1$\,R$_{\rm H}$) of the high-mass CPDs is ice formation too slow for the reset and inheritance cases to converge. With our standard chemical network the efficiency of water production in this region is closely tied to the availability of atomic H and thus to the H$_2$ formation rate, which is not well constrained in these conditions. The three-body reaction \mbox{H + O + M $\rightarrow$ OH + M} is also critical to the process. For this reaction we have adopted the rate coefficients listed in UMIST2006 \citep{Woodall2007}, the value of which originates from a recommendation of \citet{Tsang1986} who noted that literature values represented only rough estimates and suggested a factor 10 uncertainty \citep{Baulch1972,DAY1973}. More recent estimates of this rate at high temperatures (\mbox{> 3000 K}) suggest this value is accurate to within $\sim$40$\%$ \citep{Javoy2003}, but modern low temperature measurements remain desirable. In our expanded grain-surface chemical network, gas-phase O$_2$H plays an important role in accelerating OH formation in the inner disk, diverting atomic oxygen into water rather than O$_2$.
\subsection{Constraints on CPD properties}
For reasonable assumptions regarding the properties of the CPD the snowline can fall very near the present-day position of Europa. The CPD with mass \mbox{$10^{-7}$ M$_{\odot}$} \mbox{($10^{-4}$ M$_{\rm J}$)}, \mbox{$\alpha = 10^{-3.6}$} and \mbox{$d/g = 10^{-3.3}$} best matches the compositional gradient of the Galilean moons at their present-day orbits. While this seems like a promising outcome, we emphasize that both inwards gas-driven migration \citep{Canup2002, Madeira2021} and long-term outwards tidally-driven migration \citep{1979Natur.279..767Y,Tittemore1990, Showman1997, Lari2020} have potentially played a role in repositioning the satellites both during and post-formation. Other regions of the CPD parameter space are equally capable of producing solids with \mbox{$0.4 < f_{\rm ice} < 0.55$}, but vary in the position of their snowline. In any case, we believe that it is more meaningful to determine whether a particular CPD can form enough ice on the relevant viscous timescale, rather than at precisely which radii a particular abundance of ice can be found.
To produce solids with a minimum \mbox{$f_{\rm ice}$ $\sim 0.5$} the global dust-to-gas ratio of the CPD must be $\leq\,10^{-3}$, but need not be $< 10^{-3.7}$. We suggest that this does not represent a minimum $d/g$ limit, as imperfect accretion \citep{dwyer2013}, (hydrodynamic) proto-atmospheric escape \citep{Kuramoto1994,Bierson2020}, impact-vaporization \citep{Nimmo2012}, or tidal heating \citep{Canup2009} may all have played a role in reducing the volatile mass of the satellites either during or post-accretion. A minimum $d/g$ is instead implied by the minimum time required to accrete the total solid mass of the Galilean satellites ($\sim 10^{-7}$\,M$_{\odot}$) into the CPD. We consider the lifetime of the gaseous circumstellar disk ($\sim 10$ Myr) to be the maximum time over which the CPD can continue to accrete gas. Assuming Jupiter's runaway accretion ended $<3.94$\,Myr after CAI formation \citep{Weiss2021} and that moon formation only began at this stage, this leaves $\sim$6\,Myr to accrete the refractory material for the moons. We emphasize that this limit is very approximate, as it is possible that circumstellar gas disk lifetimes regularly exceed 10\,Myr \citep{Pfalzner2014}. Conversely, an even shorter lifetime ($<4.89$\,Myr after CAI formation) has been proposed for the solar nebula on the basis of paleomagnetic measurements \citep{Weiss2021}.
The midplane dust-to-gas ratio, and thus the $f_{\rm ice}$ in the CPD, will differ from what has been derived from our models if the grain size distribution is significantly altered in some way. The circumstellar disk gap edge may filter out larger grains from the accretion flow \citep{2006MNRAS.373.1619R,Zhu2012}. Grains larger than 100 \textmu m, which settle efficiently, are those primarily responsible for enhancement of the dust mass at the CPD midplane. Massive planets may however vertically stir circumstellar disk midplane dust outside the gap \citep{Binkert2021}. \citet{Szulagyi2021} found that significant vertical stirring of large grains occurs at the gap outer wall in the presence of planets with mass above that of Saturn, resulting in a substantial delivery of mm-sized grains to the CPD. The high-mass tail of the dust distribution could alternatively be depleted by the more rapid inwards drift of these grains in the CPD. We have shown that grains larger than $\sim$1\,mm no longer advect with the gas in the CPD. The steady-state dust grain size distribution will thus likely be truncated. In the absence of these grains the limits we have derived on the CPD mass are revised upwards by a factor $\sim\,5$.
\subsection{Does grain drift erase the radial distribution of ices?} \label{sec:discussion_drift}
We have tested only the ``gas-starved'' disk paradigm in which a CPD must over time accrete the solids to form large moons. The sequential formation, episodic growth, and potential loss of migrating moons is a characteristic of this theory \citep{Canup2002}. In such a dynamical environment the relevance of the instantaneous radial distribution of icy grains remains to be established. The simplest way in which the chemical properties of dust in the CPD could be imprinted on the final satellite system would be through in-situ formation: the satellites accrete the bulk of their material at fixed radial positions in the CPD \citep{Ronnet2017}. This might occur if the innermost proto-moon were prevented from migrating into Jupiter by the presence of a gas-free magnetically-truncated inner cavity \citep{Takata1996,Batygin2018,Shibaike2019}. Additional proto-satellites could then pile up in a resonant chain and be stabilized against further migration by the proto-moon anchored at the disk inner edge \citep{2009ApJ...699..824O, 2010ApJ...714.1052S,2017MNRAS.470.1750I,Madeira2021}. Drifting grains would still be free to cross the orbit of proto-moons, as accretion efficiency remains low while the proto-moons are only a fraction of their final mass \citep{Shibaike2019}. In this paradigm proto-moons sit at relatively fixed positions, continually accreting grains that drift into their feeding zone \citep{Ronnet2020}. We find that the ice fraction of small \mbox{(< 1\,cm)} drifting grains in the inner disk will almost always reflect ambient conditions (Sect. \ref{sec:grain_drift}) independently of whether the gas in the CPD flows radially inwards or outwards (the fate of trapped grains is discussed in Appendix \ref{appendix:trapped-grains}). If the proto-moons (resonantly anchored at fixed radii within the CPD) accrete the majority of their total mass from these drifting grains, their bulk ice fraction would reflect the temperature gradient of the CPD.
\section{Conclusions} \label{sec:conclusions}
Circumplanetary disks represent a unique chemical environment characterized by high densities and a relatively short timescale on which gas and dust are viscously or aerodynamically lost. We aimed to explore the process of ice formation in this environment from sharply contrasting initial chemical conditions, knowing that solids with ice/rock $\sim 1$ must be able to form within the viscous timescale of a Jovian CPD. We tested the paradigm in which solids are delivered directly from the circumstellar disk in the form of small grains \mbox{(< 1 cm)} to a ``gas-starved'', relatively low-mass CPD. We highlight our key conclusions as follows:
\vspace{4ex}
\textbf{If infalling material is chemically reset:}
\begin{enumerate}
\item High densities in the CPD facilitate three-body ``collider'' reactions that lead to rapid water ice formation. Roughly half of the water ice is formed within a single year by the hydrogenation of OH.
\item Solids with the ice fraction of Ganymede or Callisto are produced within $t_{\rm visc}$ for \mbox{$\alpha \approx 10^{-3}-10^{-4}$} if the midplane is depleted in dust by a factor 20-50 relative to the canonical \mbox{$d/g=10^{-2}$}.
\end{enumerate}
\textbf{If chemical inheritance occurs:}
\begin{enumerate}
\item Ices near the planet efficiently sublimate and establish a snowline at a similar location to that of the reset case within $t_{\rm visc}$. Additional ice formation is minimal.
\end{enumerate}
\textbf{In either case:}
\begin{enumerate}
\item Icy circumstellar dust grains preserve the majority of their volatile content during gap-crossing if accreted onto the CPD within 100 yr unless the stellar luminosity is \mbox{$>10$ L$_{\odot}$}.
\item The compositional imprint of the CPD temperature profile is not erased by radial dust drift for grains of size $a < 1$ cm.
\item Only the ``high-mass'' CPDs \mbox{($M_{\rm CPD}$ = 10$^{-4}$ M$_{\rm J}$)} are sensitive to the initial chemical conditions: water ice formation in the inner disk is less efficient if a chemical reset occurs, as oxygen tends to remain locked in gas-phase CO.
\end{enumerate}
In our solar system icy moons are common. Whether or not ices sublimate upon incorporation into the CPD, we have demonstrated that ices can be efficiently re-deposited onto dust grains, enabling the general ubiquity of icy moons.
\begin{acknowledgements}
The research of N.O. and I.K. is supported by grants from the Netherlands Organization for Scientific Research (NWO, grant number 614.001.552) and the Netherlands Research School for Astronomy (NOVA). This research has made use of NASA's Astrophysics Data System Bibliographic Services, and has made extensive use of Numpy \citep{numpy}, Matplotlib \citep{matplotlib}, scipy \citep{scipy}, and prodimopy \url{https://gitlab.astro.rug.nl/prodimo/prodimopy}. The authors would like to thank the anonymous referee for comments that contributed to the accuracy, clarity, and focus of this work.
\end{acknowledgements}
\bibliographystyle{aa} %
\bibliography{refs.bib} %
\begin{appendix} %
\section{The RT diffusion solver} \label{appendix:rtdiffusionsolver}
\def\kRoss{\kappa_{\rm R}}
\def\kPl{\kappa_{\rm P}}
\def\div{{\rm div}}
\def\grad{{\bf grad}}
\def\half{\hspace*{0.2pt}\sfrac{1\hspace*{-0.4pt}}{\hspace*{0.2pt}2}}
\def\ver{{\rm ver}}
\def\hor{{\rm hor}}
{\sc ProDiMo} solves the radiative transfer equation \citep[see Eq.\,(12)
in][]{Woitke2009a} together with the condition of radiative energy
conservation, which in general can be written as
\begin{equation}
\div \vec{F} ~=~ \Gamma \ ,
\label{eq:divF}
\end{equation}
where $\vec{F}=\int \vec{F}_\nu\,d\nu\rm\;[erg/cm^2/s]$ is the
bolometric radiation flux vector and $\Gamma\rm\,[erg/cm^3/s]$ is the
non-radiative heating rate per volume. In the viscous case, we use
$\Gamma\!=\!\Gamma_{\rm vis}$, see Eq.\,(\ref{eq:visc_heating}) with stellar mass $M_\star$ and
stellar radius $R_\star$ instead of $M_{\rm p}$ and $R_{\rm p}$ for circumstellar
discs. The additional non-radiative heating leads to a surplus
emission of photon energy according to
\begin{equation}
4\pi \int\kappa_\nu^{\rm abs}\,\Big(B_\nu(T)-J_\nu\Big)\,d\nu ~=~ \Gamma \ ,
\label{eq:radEq}
\end{equation}
where $\kappa_\nu^{\rm abs}$ is the dust absorption coefficient at
frequency $\nu$, $B_\nu(T)$ the Planck function, and $J_\nu$ the
mean intensity. For passive discs, we have $\Gamma\!=\!0$, in which case
Eq.\,(\ref{eq:radEq}) simplifies to the ordinary condition of
radiative equilibrium.
The numerical solution method for the radiative transfer (RT) in {\sc
ProDiMo} involves iterations where formal solutions with isotropic
scattering are computed along multiple rays to cover the full $4\pi$ solid
angle as seen from every point in the disc, see Sect.~4 in
\citet{Woitke2009}. A formal solution results in new $J_\nu(r,z)$
which are used to update the dust temperatures $T_{\rm dust}(r,z)$ and
source functions. The convergence of this $\Lambda$-iteration is
accelerated by the Ng-method \citep[see][]{Auer1984}. However, despite this
acceleration, the convergence is still slow in the midplane, which
is a serious problem for all radiative transfer codes for discs,
including the Monte-Carlo codes, see \citet{Pinte2009}.
Here we describe a method to avoid this problem.
In the diffusion approximation, the bolometric radiation flux
\begin{equation}
\vec{F}(r,z) = -\frac{4\,\pi}{3\kRoss(r,z)}\,\grad\,J(r,z)
\end{equation}
is given by the gradient of the bolometric mean intensity $J(r,z)$.
The Rosseland-mean and Planck-mean opacities are defined as
\begin{align}
\frac{1}{\kRoss(r,z)} =&~
\int \frac{1}{\kappa^{\rm ext}_\nu(r,z)} \frac{dB_\nu(T)}{dT}\,d\nu \;\;\Big/\;
\int \frac{dB_\nu(T)}{dT}\,d\nu \\
\kPl(r,z) =&~
\int \kappa^{\rm abs}_\nu(r,z) B_\nu(T)\,d\nu \;\;\Big/\;
\int B_\nu(T)\,d\nu \ ,
\end{align}
where $\kappa^{\rm ext}_\nu(r,z)$ is the extinction coefficient and
$T=T_{\rm dust}(r,z)$ the dust temperature at position $(r,z)$ in the disk.
At the beginning of a new RT iteration, the Rosseland and Planck
opacities are calculated based on the frequency-dependent disk opacity
structure and the current $T_{\rm dust}(r,z)$. Next, we compute radial
and vertical Rosseland optical depths $\tau_{\rm Ross}=\int
\kRoss\,ds$ from every point. When the radial inward, radial outward
and vertical upward Rosseland optical depths from that point are all
larger than a threshold value (we use a value of 10 here), the point
is flagged as being optically thick, and added to the subset of optically
thick points
\begin{equation}
{\cal M} = \{\,(i,j)\;|\;(r_i,z_{i,j})\rm\ is\ optically\ thick\} \ ,
\end{equation}
where $i$ and $j$ are the 2D-indices of a grid point at radius $r_i$
and height $z_{i,j}$. The following method only updates the mean
intensities $J_{i,j}$ and dust temperatures $T_{i,j}$ on the optically
thick grid points $(i,j)\in\!{\cal M}$, whereas all
other points are regarded as fixed boundary conditions for this
problem. To pick up the bolometric mean intensities on the boundary
points, we integrate Eq.\,(\ref{eq:radEq}) assuming that $J_\nu$ is
close to a Planckian, hence
\begin{equation}
J(r,z) = B(T)-\frac{\Gamma(r,z)}{4\pi\,\kPl(r,z)} \ ,
\label{eq:JJ}
\end{equation}
where $B(T)\!=\!\sigma\,T_{\rm dust}(r,z)^4/\pi$ is the
frequency-integrated Planck function. Integration of
Eq.\,(\ref{eq:divF}) over the volume associated with grid point
$(i,j)$ as sketched in Fig.~\ref{fig:grid} results in the following
numerical equation
\begin{align}
& A^\ver_{i-\half,j} D_{i-\half,j} \frac{J_{i,j}-J_{i-1,j}}{\Delta r_{i-\half}}
~+~A^\ver_{i+\half,j} D_{i+\half,j} \frac{J_{i,j}-J_{i+1,j}}{\Delta r_{i+\half}}
\label{eq:balance}\\
& +~A^\hor_{i} D_{i,j-\half} \frac{J_{i,j}-J_{i,j-1}}{\Delta z_{i,j-\half}}
~+~A^\hor_{i} D_{i,j+\half} \frac{J_{i,j}-J_{i,j+1}}{\Delta z_{i,j+\half}}
~=~ V_{i,j}\,\Gamma_{i,j} \nonumber \ ,
\end{align}
where we note that the vertical fluxes through the cell boundaries
involve a scalar product with the slanted normal vector of the surface
area, hence $A^\hor_{i}$ is the cell's horizontal area after
projection onto the vertical direction. The following abbreviations
are used for the distances, vertical and horizontal areas, and
the volume. They are given by the geometry of the {\sc ProDiMo} grid
points, which are aligned on radial rays on which $z/r$ is constant:
\begin{align}
r_{i-\half} =&~\sqrt{r_i r_{i-1}} \label{eq:first}\\
r_{i+\half} =&~\sqrt{r_i r_{i+1}} \\
\Delta r_{i-\half} =&~r_i-r_{i-1}\\
\Delta r_{i+\half} =&~r_{i+1}-r_i\\
z_{i,j-\half} =&~\frac{1}{2}(z_{i,j}+z_{i,j-1}) \\
z_{i,j+\half} =&~\frac{1}{2}(z_{i,j}+z_{i,j+1}) \\
z_{i-\half,j-\half} =&~z_{i,j-\half} \frac{r_{i-\half}}{r_i}\\
z_{i-\half,j+\half} =&~z_{i,j+\half} \frac{r_{i-\half}}{r_i}\\
z_{i+\half,j-\half} =&~z_{i,j-\half} \frac{r_{i+\half}}{r_i}\\
z_{i+\half,j+\half} =&~z_{i,j+\half} \frac{r_{i+\half}}{r_i}\\
A^\ver_{i-\half,j} =&~2\pi\,r_{i-\half}(z_{i-\half,j+\half}-z_{i-\half,j-\half})\\
A^\ver_{i+\half,j} =&~2\pi\,r_{i+\half}(z_{i+\half,j+\half}-z_{i+\half,j-\half})\\
A^\hor_{i} =&~\pi(r_{i+\half}^2-r_{i-\half}^2)\\
V_{i,j} =&~ A^\hor_{i}(z_{i,j+\half}-z_{i,j-\half})
\end{align}
The radiative diffusion coefficients are defined as
\begin{align}
D_{i,j} =&~ \frac{4\pi}{3\kRoss(r_i,z_{i,j})}\\
D_{i-\half,j} =&~ \sqrt{D_{i,j} D_{i-1,j}} \\
D_{i+\half,j} =&~ \sqrt{D_{i,j} D_{i+1,j}} \\
D_{i,j-\half} =&~ \sqrt{D_{i,j} D_{i,j-1}} \\
D_{i,j+\half} =&~ \sqrt{D_{i,j} D_{i,j+1}} \label{eq:last}
\end{align}
Equation~(\ref{eq:balance}) states a system of linear equations for the
unknown bolometric mean intensities $J_{i,j}$ on the optically thick points
$(i,j)\in{\cal M}$ of the form
\begin{equation}
{\cal A}\cdot \vec{X} = \vec{B} \ ,
\label{eq:matrix}
\end{equation}
where all quantities in Eqs.\,(\ref{eq:first}) to (\ref{eq:last}) are
constants forming the matrix $\cal A$, and the volumes $V_{i,j}$
and heating rates $\Gamma_{i,j}$ are constants forming the right-hand
side vector $\vec B$. The unknowns $\{J_{i,j}\}$ at the optically thick
points $(i,j)\in{\cal M}$ constitute the solution vector $\vec{X}$. All
terms in Eq.\,(\ref{eq:balance}) that involve $J_{i,j}$ on adjacent
boundary points are also included in $\vec{B}$.
equation to solve (Eq.\,\ref{eq:matrix}) has a typical dimension
of a few hundred to a few thousand, depending on disk mass, geometry,
and dust parameters.
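The assembly and one-shot solution of the linear system ${\cal A}\cdot\vec{X} = \vec{B}$ can be sketched on a toy grid. The example below is a 1D three-point analogue with unit areas, volumes, and diffusion coefficients; it illustrates the flux-balance structure of the equations, not the actual {\sc ProDiMo} implementation.

```python
# Schematic of the optically-thick diffusion solve: one flux-balance equation
# per optically thick point, with neighbours on the boundary layer moved to
# the right-hand side. A tiny 1D three-point toy problem stands in for the 2D
# grid; geometry factors (areas, volumes) are set to 1 for clarity.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

thick = [1, 2, 3]                 # indices of optically thick points
J_boundary = {0: 2.0, 4: 1.0}     # fixed J on the surrounding boundary layer
D = np.ones(5)                    # diffusion coefficients 4*pi/(3*kappa_R)
Gamma = np.zeros(5)               # non-radiative heating (passive disc here)

index = {p: k for k, p in enumerate(thick)}
A = lil_matrix((len(thick), len(thick)))
B = np.zeros(len(thick))
for p in thick:
    k = index[p]
    for q in (p - 1, p + 1):      # flux balance with both neighbours
        Dq = np.sqrt(D[p] * D[q]) # interface coefficient (geometric mean)
        A[k, k] += Dq
        if q in index:            # neighbour is itself unknown
            A[k, index[q]] -= Dq
        else:                     # neighbour is a boundary point -> move to B
            B[k] += Dq * J_boundary[q]
    B[k] += Gamma[p]              # volume * heating term (volume = 1 here)

J = spsolve(A.tocsr(), B)
# J -> [1.75, 1.5, 1.25]: the linear profile expected for pure diffusion
```

With unit coefficients and no heating the solution is the linear interpolation between the two boundary values, as expected for pure diffusion; the real 2D system differs only in the geometry factors and the size of ${\cal M}$.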
This way we can solve the 2D radiative diffusion problem for the
unknown mean intensities in the optically thick region as a linear
boundary value problem in one go, where a single layer of points
surrounding the optically thick region sets the boundary
values. Our method calculates how the disk transports the photon
energy through the optically thick core inside of the boundary layer.
It is applicable to both cases: passive discs without viscous
heating and active discs with $\Gamma\!>\!0$.
Once the $\{J_{i,j}\}$ on $(i,j)\in{\cal M}$ have been determined, we
revert the process described by Eq.\,(\ref{eq:JJ})
\begin{align}
B(T) =&~J(r,z) + \frac{\Gamma(r,z)}{4\pi\,\kPl(r,z)} \\
T_{\rm dust}(r,z) =&~ \left(\frac{\pi}{\sigma}B(T)\right)^{1/4}
\end{align}
and multiply the frequency-dependent mean intensities $J_\nu(r,z)$, as
they were determined prior to the application of the diffusion solver,
by a constant factor to make Eq.\,(\ref{eq:radEq}) valid again,
thereby keeping the previously calculated frequency distribution of
$J_\nu(r,z)$.
After having modified $T_{\rm dust}(r,z)$ and $J_\nu(r,z)$ this way on
all grid points $(i,j)\in{\cal M}$, the normal RT solution method
resumes, which begins by calculating the source functions on all
grid points and continues by performing a formal solution.
Figure~\ref{fig:benchmark1} shows a benchmark test against the Monte
Carlo radiative transfer program MCMax \citep{Min2011}. We
consider the disk model that is described in detail by
\citet{Woitke2016}, see their table~3. The central star is a 2\,Myr
old T\,Tauri star with a mass of 0.7\,$M_\odot$ and a luminosity of
1\,$L_\odot$, the disk has a mass of 0.01\,$M_\odot$, with a gas/dust
ratio of 100. The dust is composed of 60\% silicate, 15\% amorphous
carbon and 25\% porosity. The dust grains have sizes between
0.05\,$\mu$m and 3\,mm, with an unsettled power-law size distribution
of index $-3.5$. The dust is settled according to the prescription of
\citet{Dubrulle1995} with $\alpha_{\rm settle}\!=\!0.01$. In contrast
to this standard passive T\,Tauri model, we use here a mass accretion
rate of $\dot{M}_{\rm acc}\!=\!10^{-8}\rm\,M_\odot/yr$ to set the
viscous heating of the disk according to Eq.\,(\ref{eq:visc_heating}). We use $140$
radial $\times100$ vertical grid points, and 40 wavelength bins.
Figure~\ref{fig:benchmark1} shows good agreement.
\noindent Figure~\ref{fig:benchmark1} reveals a number of interesting
features in the disk temperature structure:
\begin{itemize}
\item The dust temperature
at the inner rim is not much affected by viscous heating.
\item From top to midplane, the temperature first decreases in the
  disk shadow; the trend then reverses and the temperature
  re-increases towards the midplane, because the viscous heat pumped
  into the disk needs to flow outward (mostly upward), which in the
  diffusion approximation requires a negative temperature gradient.
\item There is little effect of viscous heating on $T_{\rm dust}$
outside of the optically thick region which extends outward to
about 10\,au and upward to about $z/r \approx 0.1-0.15$ in this
model.
\item The temperature profile across the midplane beyond the
tapering radius \citep[$R_{\rm tap}\!=\!100\,$au, see][]{Woitke2016} shows a deep minimum around the midplane
$z\!=\!0$. This is because of the extreme dust settling that
occurs in these diluted outer disk regions, creating more optical
thickness along the midplane, which brings down the
midplane temperature to only 6\,K in this model.
\end{itemize}
\noindent Figure~\ref{fig:benchmark2} compares the midplane
temperatures calculated by {\sc ProDiMo} and MCMax. The comparison
reveals dust temperatures as high as 2800\,K, which is questionable
because at such temperatures the dust grains are expected to sublimate.
\noindent Figure~\ref{fig:benchmark3} shows the convergence of the RT
method, achieving residual relative temperature changes smaller than
$10^{-4}$ after about 150 RT iterations. The maxima occurring every
5$^{\rm th}$ iteration are due to the Ng-acceleration algorithm.
\vfill
\section{Adsorption energies} \label{appendix:eads}
\begin{table}[H]
\caption{Adsorption energies of the most prevalent molecular ices found in our model CPDs.}
\centering
\renewcommand{\arraystretch}{1.1}%
\begin{tabular}{lll}
\hline \hline
Ice & E$_{\rm ads}$ [K] & ref. \\ \hline
H$_2$O & 4800 & a \\
NH$_3$ & 5534 & b \\
NH$_2$ & 3956 & b \\
C$_2$H$_6$ & 2300 & c \\
C$_2$H$_4$ & 3487 & b \\
C$_2$H$_2$ & 2587 & d \\
CO$_2$ & 2990 & e \\
CH$_3$OH & 5534 & a \\
OH & 2850 & b \\ \hline
\label{tab:Eads}
\end{tabular}
\\
\raggedright
\footnotesize{(a) \citet{Brown2006}} \\
\footnotesize{(b) \citet{Garrod2006}} \\
\footnotesize{(c) \citet{Oberg2009}} \\
\footnotesize{(d) \citet{Collings2004}} \\
\footnotesize{(e) \citet{McElroy2013}} \\
\end{table}
The adsorption energies of our most common ices and their respective references are listed in Table \ref{tab:Eads}.
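These adsorption energies translate directly into thermal desorption timescales via the first-order rate $k_{\rm des} = \nu\,\exp(-E_{\rm ads}/T_{\rm dust})$. The sketch below assumes a generic lattice vibration frequency $\nu = 10^{12}\,{\rm s}^{-1}$; the timescales quoted in the main text come from the full thermochemical model, not from this order-of-magnitude estimate.

```python
import math

# First-order thermal desorption timescale t_des ~ exp(E_ads/T_dust) / nu,
# with an ASSUMED lattice vibration frequency nu ~ 1e12 s^-1. The full model
# uses a more detailed rate; this is only an order-of-magnitude sketch.

def t_desorb(E_ads_K, T_dust, nu=1.0e12):
    """Desorption timescale [s] for a binding energy E_ads given in Kelvin."""
    return math.exp(E_ads_K / T_dust) / nu

yr = 3.156e7  # seconds per year
# Water (E_ads = 4800 K from the table): desorption is essentially
# instantaneous at 170 K but already takes decades at 100 K.
print(t_desorb(4800.0, 170.0) / yr)   # ~6e-8 yr
print(t_desorb(4800.0, 100.0) / yr)   # ~2e1 yr
```

The exponential dependence on $E_{\rm ads}/T_{\rm dust}$ is what produces the sharp snowlines discussed in the main text: a factor ~2 change in dust temperature swings the desorption timescale by many orders of magnitude.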
\vfill
\section{Surface density slope} \label{appendix:surfacedensityslope}
Our reference CPDs have a surface density powerlaw exponent $\epsilon = 1$. The steady-state solution for a constant-$\dot M$ decretion $\alpha$-disk is $\epsilon \approx 1.25$ \citep{Batygin2020}. The midplane ice mass fraction for a variety of possible values of $\epsilon$ is shown in Fig.~\ref{fig:appendix-epsilon}. For a higher $\epsilon$ the NH$_3$ iceline responsible for the ``bump'' in the $f_{\rm ice}$ profile at 0.07 R$_{\rm H}$ moves outwards only negligibly. In the inner disk, however, the ice mass fraction increases due to a combination of the lower midplane dust-to-gas ratio and more efficient H$_2$O formation.
\vfill
\section{Background temperature}
Throughout this work we have assumed that the CPD is embedded in a radiation field in which the equilibrium dust temperature is 50 K. The midplane dust temperature at the gap center within the circumstellar disk is 50$\pm2$ K for a solar luminosity of 0.83 L$_{\odot}$, gap A$_{\rm V}$ = 0.008, and heliocentric distance of 5.2 au. For earlier formation times with correspondingly higher solar luminosities (2.34-13.6 L$_{\odot}$), we find gap midplane dust temperatures ranging from 70 to 120 K at 5.2 au.
We have assumed that the final stage of Jupiter's accretion and moon formation occurred at a radial distance of 5.2 au from the Sun. Volatile enrichment in Jupiter's atmosphere indicates it may have formed further out, at circumstellar disk temperatures < 25 K or at radii > 30 au \citep{Oberg2019}. The nitrogen abundance of Jupiter, approximately 4$\times$ solar, may suggest additional N$_2$ was accreted from the solar nebula near the N$_2$ iceline \citep{Bosman2019}. In light of this possibility we also consider lower background temperatures, down to 20 K. The midplane $f_{\rm ice}$ for the reference (7-11) CPD can be seen in Fig.~\ref{fig:appendix-tback}. Inside the optically thick region of the CPD the influence of the background temperature $T_{\rm back}$ is marginal for temperatures \mbox{$\leq$ 70 K}. Above 70 K the more volatile NH$_3$ and CO$_2$ are unstable as ices and only water ice remains. Below 40 K the outer disk is able to retain ices at radii where A$_{\rm V} < 1$, as the photodesorption timescale exceeds the viscous timescale.
\vfill
\section{Vertical mixing} \label{appendix:vertical-mixing}
We have made the simplifying assumption that material which accretes onto the CPD is instantaneously distributed vertically throughout the disk. The shock front may be found at a few ($\sim 5$) scale heights above the CPD midplane \citep{Tanigawa2012}. At 5 scale heights above the centrifugal radius the dust temperature is $T_{\rm dust}$ = 123 K (relative to 89.5 K at the midplane), and the optical extinction is A$_{\rm V}$ = 0.004 (16.4 at the midplane). The velocity of vertical mixing by turbulent diffusion can be estimated as $v_{\rm z} = \alpha c_{\rm s}$, where $c_{\rm s}$ is the local speed of sound \citep{Heinzeller2011}. We find $v_{\rm z} \sim 0.5-1$ m s$^{-1}$ in this region for the high-viscosity CPDs, assuming that the magnitude of the turbulence is constant from the midplane up to $z = 5H$. The resulting vertical diffusion mixing timescale is $0.01$ $t_{\rm visc}$ (10-100 yr). We perform a test in the (7-11) CPD in which a parcel of gas is accreted at $z = 5H$ and its chemistry is evolved iteratively as it diffuses towards the midplane over 10 yr, in order to assess the impact of the more tenuous, high-temperature conditions during the initial stages of ice formation (see Fig.~\ref{fig:downwards-diffuse}). By the end of the stage of rapid water formation (1-2 yr), the ice abundance has equalized to the fiducial case at the midplane.
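The mixing speed and timescale quoted above follow directly from $v_z \sim \alpha c_s$. A minimal sketch, assuming $\mu = 2.3$, $T \approx 100$ K, $\alpha = 10^{-3}$, and an illustrative height of $5H \sim 2\times10^{11}$ cm (these values are ours, chosen to reproduce the order of magnitude in the text):

```python
import math

# Vertical turbulent mixing speed v_z ~ alpha * c_s and the mixing timescale
# t_mix ~ z / v_z, assuming turbulence of constant strength up to z = 5H.
# All parameter values are illustrative.
k_B = 1.380649e-16   # Boltzmann constant [erg/K]
m_H = 1.6735e-24     # hydrogen atom mass [g]

def sound_speed(T, mu=2.3):
    """Isothermal sound speed [cm/s] for mean molecular weight mu."""
    return math.sqrt(k_B * T / (mu * m_H))

alpha = 1e-3                        # turbulence parameter (high-viscosity case)
v_z = alpha * sound_speed(100.0)    # ~60 cm/s, i.e. the ~0.5-1 m/s of the text
z = 2e11                            # cm, illustrative height of ~5 scale heights
t_mix_yr = z / v_z / 3.156e7        # ~1e2 yr
```

The resulting timescale of order $10^2$ yr is short compared to $t_{\rm visc}$, which is why the instantaneous-mixing assumption has little effect on the final midplane ice abundance.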
\vfill
\section{Continued ice deposition on trapped grains} \label{appendix:trapped-grains}
In each of the reference CPDs grains of a certain size range remain trapped within the CPD if we assume that gas actively decretes via the outer edge of the disk. A trapped grain will physically increase in size as ice adsorbs onto its surface, altering its aerodynamic properties. \citet{Batygin2020} proposes that grain growth and sublimation could play a role in trapping and radial cycling of grains with size 0.1-10 mm. The size range of trapped grains is $\sim0.01-1$ mm in our high-mass CPDs and $\sim10^{-3}-0.01$ mm in the low-mass CPDs, representing 2.5$\%$ and 0.1$\%$ of the infalling dust mass that reaches the midplane, respectively. A modal icy grain is typically coated in no more than 4000 monolayers of water ice. Assuming a monolayer thickness of $\sim0.5$ nm \citep{Zangi2003} and compact morphology, an icy mantle no more than 1 \textmu m thick will form. A grain of size 0.05 \textmu m can thus increase in size by a factor 20 and have a density corresponding more closely to water ice than to silicate. If the icy mantles of trapped grains continue to grow beyond several thousand monolayers, the new equilibrium trap radius drifts inwards. Realistically only a fraction (0.01-0.1$\%$) of the total CPD gas mass is accreted per year. Mantle growth for a trapped grain will thus not exceed $\sim2$ nm yr$^{-1}$ on average. The time for a grain to grow an icy mantle that allows it to drift to the trap inner edge is then $\sim10^6$ yr (high-mass CPD) or $\sim10^5$ yr (low-mass CPD) assuming a compact grain structure. Ice deposition is thus unlikely to allow grains to escape traps. This estimate does not take into consideration grain growth by coagulation or fragmentation by collision.
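The geometric part of this argument is a one-line calculation. The sketch below reproduces the factor $\sim$20 size increase and the near-ice bulk density of a fully mantled grain, assuming densities of 3 and 1 g cm$^{-3}$ for silicate and water ice (our assumed values).

```python
# Mantle geometry for a trapped grain: a 1 micron compact ice mantle on a
# 0.05 micron silicate core (numbers from the text; densities assumed).
a_core = 0.05                 # bare grain radius [micron]
mantle = 1.0                  # maximum compact ice mantle thickness [micron]
rho_sil, rho_ice = 3.0, 1.0   # assumed material densities [g/cm^3]

factor = (a_core + mantle) / a_core   # size increase, ~21 (the "factor 20")
V_core = a_core**3                    # volumes up to a common 4*pi/3 factor
V_tot = (a_core + mantle)**3
rho_eff = (rho_sil * V_core + rho_ice * (V_tot - V_core)) / V_tot
# rho_eff ~ 1.0 g/cm^3: the silicate core is a negligible volume fraction
```

Because the core occupies only $\sim10^{-4}$ of the final volume, the mantled grain is aerodynamically indistinguishable from a pure ice sphere, which is why the increased cross-section and reduced density largely cancel.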
\section{Chemical abundances of the 0D "molecular cloud" model} \label{sec:appendix:mc}
The input parameters of the 0D molecular cloud model can be found in Table \ref{tab:mol_cloud}. A comparison of the model column densities of several common species with observations of TMC-1 can be found in Fig. \ref{fig:appendix-mc}. While the column densities of most common species agree relatively well with observations, the abundances of S-bearing species hinge on the uncertainties regarding the elemental S abundance \citep{Ruffle1999}.
\begin{table}[H]
\caption{Molecular cloud parameters}
\centering
\setlength{\tabcolsep}{1pt}
\begin{tabular}{lll}
\hline \hline
Parameter & Value & Unit \\ \hline
Hydrogen density & $10^4$ & cm$^{-3}$ \\
Gas temperature & 10.0 & K \\
Dust temperature & 10.0 & K \\
Optical extinction & 10.0 & - \\
Mean grain radius & 0.1 & \textmu m \\
Cloud lifetime & $1.7\times 10^5$ & yr \\
\end{tabular}
\caption*{\textbf{Note}: Parameter values of the molecular cloud model and integration time are chosen according to the method of \citet{Helling2014} recommended for TMC-1 (Taurus Molecular Cloud) by \citet{McElroy2013}. Initial atomic abundances intended to represent typical diffuse interstellar medium conditions are also adopted from \citet{McElroy2013}.}
\label{tab:mol_cloud}
\end{table}
\section{CPD dust-to-gas ratio} \label{appendix:dusttogas}
In Fig. \ref{fig:fice_dustgas_001} we explore the resulting midplane ice mass fraction $f_{\rm ice}$ for possible values of the global dust-to-gas ratio from the canonical $10^{-2}$ down to $10^{-4}$. The maximum grain size and dust population size distribution are kept constant. A global dust-to-gas ratio of 10$^{-3.3}$ results in maximum midplane $f_{\rm ice}$ values most consistent with the Galilean satellite bulk compositions and hence is adopted as the reference value throughout this work.
\end{appendix}
|
Title:
Climate Change and Astronomy: A Look at Long-Term Trends on Maunakea |
Abstract: Maunakea is one of the world's primary sites for astronomical observing, with
multiple telescopes operating over sub-millimeter to optical wavelengths. With
its summit higher than 4200 meters above sea level, Maunakea is an ideal
location for astronomy with an historically dry, stable climate and minimal
turbulence above the summit. Under a changing climate, however, we ask how the
(above-) summit conditions may have evolved in recent decades since the site
was first selected as an observatory location, and how future-proof the site
might be to continued change. We use data from a range of sources, including
in-situ meteorological observations, radiosonde profiles, and numerical
reanalyses to construct a climatology at Maunakea over the previous 40 years.
We are interested in both the meteorological conditions (e.g., wind speed and
humidity), and the image quality (e.g., seeing). We find that meteorological
conditions were, in general, relatively stable over the period with few
statistically significant trends and with quasi-cyclical inter-annual
variability in astronomically significant parameters such as temperature and
precipitable water vapour. We do, however, find that maximum wind speeds have
increased over the past decades, with the frequency of wind speeds above
15~m~s$^{-1}$ increasing by 1--2%, which may have a significant
impact on ground-layer turbulence. Importantly, we find that the Fried
parameter has not changed in the last 40 years, suggesting there has not been
an increase in optical turbulence strength above the summit. Ultimately, more
data and data sources---including profiling instruments---are needed at the site to
ensure continued monitoring into the future and to detect changes in the summit
climate.
| https://export.arxiv.org/pdf/2208.11794 |
\section{Introduction}
With increasing global temperatures, weather around the world is changing \citep{ipcc_2019}. With more extreme weather events being attributed to climate change, research is focused on the impact of climate change on specific fields and industries. Astronomy may also be contributing to the crisis, with the CO$_2$ emissions of an astronomer being higher than those of the average adult (for example, 40\% higher in countries such as Australia)~\citep{nature_2020, Clery_2020}. The climate impact of travel to conferences and meetings, of operating observatories, and of super-computing could in turn be degrading the quality of our astronomical sites and may, in the future, limit ground-based astronomy. Recently, Cantalloube et al. (2021)~\cite{Cantalloube_2021} highlighted the need for in-depth study of the impact of climate change on astronomical observatories around the globe. Their recommendation was based on their investigation of parameters at the Paranal Observatory in Chile, where they found an increase in both temperature and surface-layer turbulence. We investigate the climatology at Maunakea, one of the world's primary sites for ground-based astronomy, in order to determine if, and how, conditions may have evolved over the previous decades.
We specifically focus on the impact weather has on performing astronomical observations in the optical and near-infrared/infrared wavelengths at night. We first focus on the summit weather itself. Conditions for opening the telescope dome require that there be no precipitation, that the wind speed be below a specific threshold, and that the temperature and relative humidity present no risk of condensation forming on the primary telescope mirror (e.g., \cite{keckwebsite}). Should the nominal behavior of these parameters change, it could have a significant impact on the amount of time that observations can be made throughout a given year.
It is also possible that the conditions, while not severe enough to prevent operation of the telescope, degrade the image quality to such an extent that the ability to further improve current high-resolution imaging becomes limited. For example, atmospheric turbulence can lead to the distortion of images through the introduction of wavefront aberrations caused by fluctuations in the index of refraction in air. Techniques---such as the use of adaptive optics (AO) and post-processing algorithms---are able to mitigate some of the distortion; however, they are limited in what they can remove and not all instruments can benefit from such corrections (e.g., not all instruments are AO-fed). Should turbulence be increasing at the site, it means that future telescopes will need to be designed with more high-order AO systems (to correct for higher spatial frequencies) that can also correct for more turbulence (i.e., large stroke of the deformable mirror to correct for greater optical path differences due to changes in index of refraction). This is particularly important when considering the building and operation of future extremely large telescopes such as the Thirty Meter Telescope (TMT); should the atmospheric turbulence be worsening, the performance requirements of future AO systems currently being designed might not be achieved. Recent work by Lee et al. (2019)~\cite{Lee_2019} shows that increasing wind shear above the North Atlantic will lead to significant increases in turbulence that could in turn impact air travel between North America and Europe. We investigate whether a similar increase in turbulence is also found above Maunakea (where a similar Jet Stream feature exists), and whether that in turn leads to an increasing optical wavefront error. An important factor when considering turbulence is the ``seeing'' (related to the Fried parameter, $r_0$) which is the full width at half maximum (FWHM) of the imaged point-spread-function (PSF) without AO correction.
We look specifically at the vertical structure function of the index of refraction, the $C_n^2$ profile, which is what ultimately determines $r_0$ and therefore the seeing. Larger values of $r_0$ at Maunakea have been shown to be correlated to higher wind speeds \citep{Chun_2009,Lyman_2020}. At the same time, slow wind speeds might increase the occurrence of the low-wind effect seen by the Subaru telescope on Maunakea \citep{Vievard_2019} which is due to slow moving wind within the dome not allowing for proper cooling. We therefore consider both trends in $r_0$ and $C_n^2$, as well as the summit wind speed.
The rest of this paper is structured as follows. In Sects.~\ref{s:metdata} and \ref{s:turbdata}, we present an overview of the data and methods used in our analysis of the meteorological and turbulent characteristics of the summit, respectively. We then present the results of our analysis in Sect.~\ref{s:results}, beginning with a general overview of the site's climatology over the previous decades, followed by an analysis of trends and impacts on observing/observable conditions. This is followed by a discussion of the results in Sect.~\ref{s:disc} and a summary of the conclusions in Sect.~\ref{s:concl}. %
\section{\label{s:metdata}Meteorological Data and Methods}
\subsection{Meteorological Data}
We use three types of meteorological data in our analysis: in situ observations made at the summit, radiosonde profiles, and a numerical re-analysis. The left panel in Fig.~\ref{f:datamap} presents a map of the area around the Island of Hawaii (also referred to as ``The Big Island''), showing the location and extent of different data sources in relation to the summit of Maunakea, while the right panel includes the layout of the summit with major telescopes shown. Figure~\ref{f:datatime} further illustrates the vertical and temporal resolution of the various datasets. We deliberately sought data that are available for roughly 30 years or more in order to be able to extract meaningful climatologies. The individual data sources are described in the following subsections.
\subsubsection{In Situ Meteorological Data}
In situ meteorological data is from the Canada-France-Hawaii Telescope (CFHT) meteorological tower located at the summit of Maunakea. These data---referred to as the METEO (meteorological) data---are available from 1991 to present day, and can be downloaded from the Maunakea Weather centre\footnote{\url{http://mkwc.ifa.hawaii.edu/archive/}}. Included in the dataset are 1-minute observations of wind speed ($U$), temperature ($T$), atmospheric pressure ($p$), and relative humidity (\emph{RH}).
We use the METEO data in order to provide an indication of the weather on top of the mountain at the location of the telescopes, as well as a ``ground truth'' benchmark for assessing the quality of the other data. Unlike the other data sources, the METEO data do not provide any vertical resolution. In order to smooth noisy data, we average the 1-minute values over 10-minute periods.
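The 10-minute smoothing of the 1-minute METEO record amounts to a simple block average. A minimal pandas sketch, with illustrative column names and synthetic values rather than the archive's actual headers and data:

```python
import numpy as np
import pandas as pd

# Two hours of synthetic 1-minute "METEO" samples (names are illustrative)
rng = np.random.default_rng(0)
idx = pd.date_range("2020-01-01 21:00", periods=120, freq="1min")
meteo = pd.DataFrame({
    "wind_speed": 7.0 + rng.normal(0, 1.5, idx.size),    # m/s
    "temperature": 0.5 + rng.normal(0, 0.3, idx.size),   # deg C
}, index=idx)

# Average each block of ten 1-minute samples into one 10-minute value
meteo_10min = meteo.resample("10min").mean()
```

Because every 10-minute bin here holds the same number of samples, the overall mean is preserved while the high-frequency noise is suppressed.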
\subsubsection{\label{s:rds}Radiosonde}
Radiosondes are instrument platforms---typically carried by balloons---that are used to profile the properties of the atmosphere from the surface through the stratosphere~\citep{wmo14}. Observations are made as the balloon ascends, with values recorded at mandatory vertical levels (that change throughout the record), or at significant (thermo-) dynamic locations in the profile~\citep{schwartz92}. A typical ascent lasts roughly two hours; since the balloons are not steered, this means they can potentially drift up to hundreds of kilometers from their release point (e.g., \cite{Seidel_2011, Laroche_2013}). Yet, work by Bely (1987) \citep{Bely_1987} demonstrates good agreement between in situ and radiosonde observations. In Sect.~\ref{s:validn}, we further compare the radiosonde measurements to summit observations in order to determine whether or not the results can be compared with confidence for Maunakea. %
Radiosonde observations have been made on Hawaii since the 1950s. With improvements to instruments, the number of levels at which an observation is made has increased from roughly 10 above-summit levels to greater than 100 in recent years (Fig.~\ref{f:datacomp}). Released twice per day at the Hilo International Airport (Fig.~\ref{f:datamap}), the radiosonde data provide a useful secondary verification for the re-analysis data set we use (ERA5). They also provide quasi-in situ vertical information, which the METEO data are unable to provide. From the radiosonde, we have vertical profiles of the temperature, humidity, and wind speed. The radiosonde data were downloaded from the NOAA/ESRL Radiosonde Database\footnote{\url{https://ruc.noaa.gov/raobs/}}. In the following, we refer to the radiosonde observations by the abbreviation RDS.
\subsubsection{ERA5 Re-Analysis}
Re-analysis datasets are constructed by running \emph{a posteriori} simulations of numerical weather models and assimilating available in situ and remote-sensed data (e.g., radiosonde, weather station, surface temperature data) in order to provide an historical estimate of conditions over the globe. In essence, they provide time-space interpolations of sparse data. The European Centre for Medium-Range Weather Forecasts (ECMWF) produces its ERA5 re-analysis with 137 vertical levels (roughly 68 above Maunakea's summit); the data are available for download as 3-hourly mean values from the Copernicus Climate Data Store\footnote{\url{https://climate.copernicus.eu/climate-reanalysis}} on a horizontal grid of $0.25^\circ \times 0.25^\circ$ and interpolated to 25 vertical levels above the summit (650--1~hPa). As with the in situ observations, the reanalysis includes values of temperature, wind speed, atmospheric pressure, and at least one metric of humidity from which other metrics can be determined. We downloaded the data at the closest model grid cell to the summit location of Maunakea, which is centred almost directly at the summit.
\subsubsection{\label{s:validn}Validation with In Situ Observations}
We present a very brief comparison of the statistics of the meteorological data used in our analysis in this section. We do this as a validation step to ensure that our data---particularly the reanalysis data---are reasonably representative of the observed conditions. While exact instantaneous values may differ, the statistics (e.g., mean and variance) of the different data sets should be similar in order to facilitate comparison between datasets. A series of histograms is presented in Fig.~\ref{f:datacomp} illustrating the distribution of summit-level temperature, humidity, and wind speed. Given that the radiosonde and ERA5 reanalysis data are reported on pressure levels, their measurements are not guaranteed to correspond precisely to the summit altitude of 4.2~km above sea level. As such, we select the observations taken at the level closest to the summit.
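Selecting the reporting level closest to the summit is a nearest-neighbour search in altitude. A minimal sketch, with a hypothetical set of level heights standing in for an actual RDS/ERA5 profile:

```python
import numpy as np

SUMMIT_KM = 4.2   # summit altitude above sea level [km]

def nearest_level(heights_km):
    """Return index and height of the reporting level closest to the summit."""
    h = np.asarray(heights_km, dtype=float)
    i = int(np.argmin(np.abs(h - SUMMIT_KM)))
    return i, h[i]

# Hypothetical level heights [km] for one profile
levels = [1.5, 3.1, 4.4, 5.9, 7.6, 9.7, 12.0]
i, h = nearest_level(levels)   # picks the 4.4 km level
```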
The METEO temperature distribution is broader than the others, with a slightly cooler peak. At the same time, while the ERA5 and in-situ METEO data have nearly identical distributions in the range from roughly 10\%--70\%, the in situ METEO contains much higher relative humidity values, approaching saturation almost 5\% of the time, while the other sources almost never reach saturation. This discrepancy could be related to local topographic effects (e.g., upslope advection of air leading to saturation) that are not resolved by the vertical profiling of the free atmosphere. The greatest deviation occurs in the wind speed, where the METEO reaches wind speeds greater than 10~m~s$^{-1}$ far more often than in the radiosonde and reanalyses (which all agree with each other). This may also be a local effect due to summit topography, as discussed by Bely (1987)~\cite{Bely_1987}. While Bely corrects for deviations in wind between radiosonde and in situ observations, we are interested in the relative change on an instrument basis and so do not make any adjustments to the reported observations. Some of the differences in distributions may be due to the much higher rate of sampling of the METEO (here, 10-minute averaged vs $\ge$~3 hours); however, resampling the METEO data (e.g., taking a 3-h mean) does not bring the distributions closer (not shown). The fact that the vertical profiles are not sampling exactly at the summit may also lead to some of the differences seen.
Ultimately, as a result of this brief comparison, we conclude that the differences in the underlying summit distributions will potentially lead to significant differences when determining in situ conditions for observing. As such, we only use the in situ METEO data for analysis of summit conditions. The ERA5 reanalysis does, however, do a good job of reproducing the observed properties of the radiosonde (both near the summit as shown here, and vertically; not shown). We can therefore use the ERA5 profiles for the turbulent parameter estimation (see Sect.~\ref{s:turbdata}) for which summit values alone are insufficient. Using the ERA5 reanalysis rather than the radiosonde profiles allows for a much higher temporal resolution on a consistent vertical grid.
\subsection{Meteorological Methods}
In order to ensure that we are looking at nocturnal conditions, we use a strict window, limiting our analyses to the data taken at times between 2100 and 0600 local Hawaii Standard Time (HST; UTC$-$10). This means we have one radiosonde profile and four ERA5 values per night.
Meteorology influences observational astronomy in two primary ways. 1) The meteorological properties affect the quality of the observations through changes in the index of refraction and turbulent properties of the atmosphere, as well as (for longer wavelengths of observation) the background emissivity of the atmosphere. 2) Whether or not observations can even take place is also dependent on the weather. A simple example is risk of condensation; water condensing on the telescope not only reduces the transfer of light throughout the system but can also damage the surface of mirrors. Many telescopes have specific operating conditions where the dome cannot be opened (or must be closed) if conditions reach certain thresholds. In our investigations, we are therefore concerned with understanding the overall trend in the meteorological variables, as well as any changes in observable conditions.
We perform our meteorological analyses by binning the data into three-month seasons as follows: spring (March, April, and May; MAM), summer (June, July, and August; JJA), autumn (September, October, and November; SON), and winter (December, January, and February; DJF). We also look at annual values. In order to better ensure reliable results, we restrict ourselves to periods where at least 80\% of the data are valid (i.e., no missing or invalid data).
We determine the long-term trends in meteorological conditions themselves based on the mean over the seasonal/annual period. We are interested in both the long-term trends at the summit, as well as in the column of atmosphere above the summit. We only consider the in-situ meteorological data to determine summit trends, while the reanalyses and radiosondes are used to provide an indication of the above-summit conditions.
Determining a season's (or year's) potential for observation requires the further step of first comparing the in-situ observations with the meteorological thresholds. At each observation time, we compare the available data to the threshold. If the data do not exceed the threshold, then the timestep is considered ``observable''. If, however, the threshold is exceeded, then it would be deemed ``unobservable''. Our thresholds are based on those listed by the Keck Observatory~\cite{keckwebsite}, which provides guidelines for observing assistants on when to close the dome. The actual operation of the telescope will, of course, depend on the experienced decision making of the Keck personnel at the summit and so these thresholds are not absolute. They do, however, suffice for the purposes of our analysis to give an indication of operational feasibility. Briefly, the primary thresholds for observable conditions that we consider here are: $U < 20$~m~s$^{-1}$, \emph{RH}~$< 95$\%, and $T-T_{dew} > 2$~K. A more detailed overview of the observing criteria can be found at \cite{keckwebsite}.
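The observable/unobservable classification can be sketched as a per-timestep check against the three thresholds; the sample timesteps below are invented for illustration, and the dewpoint depression is assumed to be precomputed.

```python
# Keck-style dome-closure thresholds from the text
U_MAX = 20.0    # wind speed [m/s]
RH_MAX = 95.0   # relative humidity [%]
DEW_MIN = 2.0   # minimum dewpoint depression T - T_dew [K]

def observable(wind, rh, dew_depression):
    """True if no dome-closure criterion is exceeded at this timestep."""
    return wind < U_MAX and rh < RH_MAX and dew_depression > DEW_MIN

# Invented sample timesteps: (wind, RH, T - T_dew)
samples = [
    (8.0, 30.0, 10.0),    # calm, dry night -> observable
    (22.0, 30.0, 10.0),   # wind above threshold -> closed
    (5.0, 97.0, 0.5),     # near saturation -> closed
]
flags = [observable(*s) for s in samples]
fraction_closed = 1 - sum(flags) / len(flags)   # exceedance fraction
```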
\subsubsection{Precipitable Water Vapour}
The total precipitable water vapour (\textit{PWV}) is the depth of liquid water that would result from condensing all of the water vapour in an atmospheric column:
\begin{equation}
\textrm{\textit{PWV}}=\frac{1}{\rho_w g}\int_{0}^{P_1}q(P)\,dP
\end{equation}
where $\rho_w$ is the density of liquid water, $g$ the acceleration due to gravity, $q$ the specific humidity, $P$ the atmospheric pressure, and $P_1$ the pressure at the summit. \textit{PWV} is an important parameter for observing in the (near-) infrared (NIR/IR) and submillimeter wavelengths because water radiates at these wavelengths, dominating the background radiation. Water also introduces phase aberrations for longer wavelengths as it becomes a source of fluctuations in the index of refraction (the fluctuations are driven by temperature for optical/NIR wavelengths)~\cite{Colavita_2004}. With large amounts of water present in the atmosphere it becomes difficult to observe in these wavelengths from the ground. \textit{PWV} values around 5-10~mm can render it impossible to make scientifically impactful observations in the K-band (central wavelength of 2.2 $\mu$m) for science cases such as the direct imaging of exoplanets. For sub-millimeter, certain bands can only be observed when the \textit{PWV} is less than 1~mm. We use the James Clerk Maxwell Telescope (JCMT) weather bands\footnote{\label{note:JCMT}\url{https://www.eaobservatory.org/jcmt/observing/weather-bands/}} to bin the calculated \textit{PWV} and study its long-term behaviour at these wavelengths.
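On discrete pressure levels the \textit{PWV} integral is evaluated numerically. A minimal sketch with an invented specific-humidity profile; the trapezoidal rule here stands in for whatever quadrature the actual pipeline uses:

```python
import numpy as np

RHO_W = 1000.0   # density of liquid water [kg/m^3]
G = 9.81         # gravitational acceleration [m/s^2]

def pwv_mm(q, p_pa):
    """PWV in mm from specific humidity q [kg/kg] on pressure levels p [Pa]."""
    order = np.argsort(p_pa)   # integrate from low to high pressure
    q, p_pa = np.asarray(q)[order], np.asarray(p_pa)[order]
    return np.trapz(q, p_pa) / (RHO_W * G) * 1e3

# Invented profile from the summit level (~600 hPa) to 100 hPa
p = np.array([600.0, 500.0, 400.0, 300.0, 200.0, 100.0]) * 100.0   # [Pa]
q = np.array([2e-3, 1e-3, 5e-4, 2e-4, 5e-5, 1e-5])                 # [kg/kg]
pwv = pwv_mm(q, p)   # ~2.8 mm, inside the 0.5--3 mm range reported later
```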
\section{\label{s:turbdata}Turbulence Data and Methods}
\subsection{MASS-DIMM data }
The CFHT (Fig.~\ref{f:datamap}) provides nightly time series of the total seeing (i.e., $r_0$) and vertical profiles of the index of refraction structure function, $C_n^2$, estimated by its Differential Image Motion Monitor (DIMM) and Multi-Aperture Scintillation Sensor (MASS). The MASS $C_n^2$ profiles are estimated for fixed altitudes of 0.5, 1, 2, 4, 8, 16 km above the telescope and are made approximately every 2 minutes. The data are available from 2009 to present. The MASS instrument has limited ability to measure the turbulence accurately for the first altitude of 0.5 km as it is blind to some of the turbulence in the layer and can only measure the seeing in the free atmosphere. The DIMM, however, measures the integrated turbulence for the entire column, allowing $r_0$ to be estimated. By combining the DIMM and MASS measurements, an estimate of the ground-layer turbulence can be made.
\subsection{\label{sec:turb_method}Estimating Turbulence Parameters}
In ground-based optical and NIR astronomy, a few key parameters are used to describe atmospheric turbulence in a way that is meaningful for observing, including: the $C_n^2$, the Fried parameter ($r_0$), and the atmospheric coherence time ($\tau _0$). These parameters are either directly related to the image quality or have meaning for the performance of an AO system. $C_n^2$ is the structure function of the index of refraction as a function of altitude. At the observed wavelengths, fluctuations in the index of refraction cause the optical path differences (phase errors) that limit image quality and resolution of larger telescopes. The Fried parameter is related to the integrated $C_n^2$ and describes the total impact of the atmosphere. While $r_0$ has units of length, it can also be expressed as an angular separation in arcseconds, giving the more commonly used value of ``seeing'' that astronomers report, as it relates to the FWHM of an aberrated PSF. Finally, $\tau _0$ describes how quickly the turbulence is changing above the telescope. It is related to the wind speed and the turbulence strength profile. With these parameters we can predict the quality of observed data and the achievable image resolution. In this section we outline the calculation of these values.
\subsubsection{Determining Structure Function of the Index of Refraction from Re-analysis Data}
The ERA5 re-analysis data contain temperature and wind values at 25 pressure levels above the Maunakea summit (Fig.~\ref{f:datatime}). This corresponds to a value every few kilometers, which is relatively coarse, though a much finer resolution than the MASS data. We match synthetic $C_n^2$ profiles generated from ERA5 to the coarse profiles as measured at CFHT. Here we do not aim for exact instantaneous matches, but rather estimate the mean $C_n^2$ in order to calibrate our ERA5-derived profiles. This provides us with rough estimates of the local $C_n^2$ (although with such coarse resolution it cannot be a 'local' estimate). Since the reanalysis has more layers than CFHT, we re-sample the $C_n^2$ data by summing the local $C_n^2\,dh$. This provides us with the layer-integrated $C_n^2$ in m$^{1/3}$, allowing us to compare to the CFHT measurements. Below we outline our methodology for calculating the $C_n^2$ values from ERA5 data.
Using the same methodology as \citet{Osborn_2018}, we use the modified Gladstone's relationship~\cite{Masciadri_2016} to write $C_n^2$ as a function of the temperature structure function, $C_T^2$.
\begin{equation}
C_n^2 =\left(80\times10^{-6}\,\frac{P}{T\,\theta}\right)^2 C_T^2
\end{equation}
where $\theta$ is the potential temperature,
\begin{equation}
\theta = T \left(\frac{P_0}{P}\right)^{R/c_P},
\end{equation}
where $P_0 = 1000$~mbar and $R/c_P = 0.289$.
Following Tatarskii (1971)~\cite{Tatarskii_1971}, $C_T^2$ as a function of altitude ($z$) is estimated using the potential temperature gradient and the scale of the largest eddies of the turbulent flow, $L$.
\begin{equation}
C_T^2= kL(z)^{4/3} \left(\frac{\delta \bar{\theta} (z) }{\delta z}\right)^2
\end{equation}
\begin{equation}
L(z)=\sqrt{\frac{2E}{\frac{g}{\theta(z)}\frac{\delta \bar{\theta} (z) }{\delta z}}}
\end{equation}
$k$ is an unknown dimensionless constant that is calibrated against $C_n^2$ data; in reality it encodes information about the stability of the atmosphere. In Osborn et al. (2018)~\cite{Osborn_2018}, the authors found a $k$ value of 6 for a global calibration. However, $k$ can be determined for not only a specific site but also be altitude dependent. In this work we calibrate $k$ using approximately 10 years of MASS data starting from 2011 and find a value for each altitude: 6.3, 10.3, 25.6, 11.8, 18.2, and 12.0 going from the lower to higher altitudes, respectively. $E$,~the turbulent kinetic energy, is given by the square of vertical wind shear as done in Osborn et al. (2018)~\cite{Osborn_2018}:
\begin{equation}
E= \left(\frac{\delta u }{\delta z}\right)^2 +\left(\frac{\delta v }{\delta z}\right)^2 .
\end{equation}
\subsubsection{Determining Fried Parameter and Atmospheric Coherence Time}
From Hardy (1998)~\cite{Hardy_1998}, $r_0$ for light at 500~nm in the zenith direction can be calculated from the $C_n^2$ profile
\begin{equation}
r_0 = \left[0.423\left ( \frac{2 \pi}{\lambda}\right )^2\int C_n^2 (h)\, dh \right]^{-3/5}
\end{equation}
From the Fried parameter $r_0$ and the effective wind speed $V_{eff}$, we can calculate the coherence time $\tau_0$ following Hardy (1998)~\cite{Hardy_1998}.
\begin{equation}
\tau_0 = 0.314 \frac{r_0}{V_{eff}}
\end{equation}
with
\begin{equation}
V_{eff}= \left[\frac{\int_{0}^{\infty}C_n^2(h) U(h)^{5/3}\, dh}{\int_{0}^{\infty}C_n^2(h)\, dh}\right]^{3/5}.
\end{equation}
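For a layered (MASS-style) profile, the integrals above reduce to sums over the layer-integrated strengths $C_n^2\,dh$. A sketch with invented layer values chosen to give realistic magnitudes:

```python
import numpy as np

LAM = 500e-9   # wavelength [m]

def fried_r0(cn2_dh):
    """r0 [m] at 500 nm, zenith, from layer-integrated Cn2 dh [m^(1/3)]."""
    return (0.423 * (2 * np.pi / LAM) ** 2 * np.sum(cn2_dh)) ** (-3.0 / 5.0)

def coherence_time(cn2_dh, wind):
    """tau0 [s] from layer strengths and layer wind speeds [m/s]."""
    v_eff = (np.sum(cn2_dh * wind ** (5.0 / 3.0)) / np.sum(cn2_dh)) ** (3.0 / 5.0)
    return 0.314 * fried_r0(cn2_dh) / v_eff

# Invented layer-integrated strengths and winds (strong ground layer, jet aloft)
cn2_dh = np.array([8e-14, 3e-14, 2e-14, 1e-14])   # [m^(1/3)]
wind = np.array([7.0, 15.0, 35.0, 20.0])          # [m/s]
r0_cm = fried_r0(cn2_dh) * 100.0   # Fried parameter [cm]
t0 = coherence_time(cn2_dh, wind)  # coherence time [s]
```

Note how $V_{eff}$ weights the wind by turbulence strength, so a fast but weakly turbulent jet-stream layer contributes less to $\tau_0$ than its raw speed would suggest.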
\section{\label{s:results}Results}
\subsection{Meteorology}
\subsubsection{Climate at Maunakea}\label{s:climate_mk}
We first present an overview of Maunakea's climate based on the assembled data, in order to provide a context for subsequent analysis. Figure~\ref{f:seasonalvariance} shows the seasonal-median summit values of temperature, wind speed, and relative humidity from the year 2000 to present as recorded at the CFHT weather station. All three parameters demonstrate a degree of seasonal variability, though their characteristics are overall relatively stable. Temperature is within a few degrees of freezing throughout the year and summit wind speeds are typically around 5--7~m~s$^{-1}$. In general, the atmosphere is dry with relative humidity around 20\%, but with significant variability in the record (illustrated by the shading), including an increase in the standard deviation with time.
Of the standard meteorological variables, wind speed is the only parameter with any statistically significant increase in median value at the summit (Fig.~\ref{f:windchange}) with 5-year averaged speeds increasing over 30 years. For example, there is a rightward shift in the peak of the wind-speed distribution, with speeds above 15~m~s$^{-1}$ increasing by 1--2\%. Overall this does not have a significant impact on the mean summit windspeed, but it does indicate a greater likelihood for increased ground-layer turbulence and, by extension, wind buffeting as the wind interacts with the dome structures themselves~\cite{tmt_wind_buffet}.
The median vertical structure of the atmosphere is shown in Fig.~\ref{f:meanrds}. The jet stream layer is visible as a maximum in wind speed between 10--15~km above sea level, with maximum wind shear at the top and bottom of the layer. Within this layer, the wind predominantly blows from west to east, with equal likelihood of small northerly or southerly deviations (not shown). In general, temperature decreases near-adiabatically with height up to roughly the top of the jet stream where it becomes roughly constant. Specific humidity also decreases within the troposphere, before increasing above 15~km.
The precipitable water vapour, related to specific humidity, shows considerable variability over the period from 1980 to present (Fig.~\ref{f:pwvenso}). \textit{PWV} varies between 0.5 and 3~mm in both the radiosonde- (RDS) and ERA5-calculated values, with no significant long-term trend in the seasonal median. We also compare the \textit{PWV} to the El Ni\~no--Southern Oscillation (ENSO) in Fig.~\ref{f:pwvenso}. Minima in the signal appear to follow peaks in El Ni\~no and subsequent transitions to La Ni\~na conditions, providing a first-order predictor of conditions. Interestingly, there is a point before 1995 where the ERA5 and RDS values disagree, after which they are aligned, although the RDS does consistently lead to lower minima than the ERA5 estimates. The seemingly abrupt change in RDS values is likely due to an increase in the number of vertical levels sampled by the radiosonde between roughly 1995 (dashed line) and 1998, rather than any relevant climatological factor.
Figure~\ref{f:seaspwv} further highlights the seasonal distribution of total atmospheric \textit{PWV} in different bins. The RDS observations indicate that, post-1990s, \textit{PWV} is less than 0.83~mm almost 50\% of the time, providing optimal conditions. The ERA5 reanalysis estimates a much lower fraction, though with \textit{PWV}~$<$~1.58~mm at least 50\% of the time. The difference in estimates may be due to the different vertical resolution of the profiles. After the mid-1990s, the RDS data have a much higher vertical resolution than the ERA5 profiles. The consistent heights of the reanalysis data, however, allow for a long-term comparison of \textit{PWV} to be made. While there are obvious variations over the previous 40 years, they are of a cyclical nature with no obvious trend.
\subsubsection{Weather-Related Dome Closures}
We next investigate the meteorological conditions that can lead to dome closures at the summit, using the Keck values as a guide. Figure~\ref{f:threshobs} plots the fraction of all METEO observations that exceed the dome closure criteria (i.e., the total number of times the criteria are exceeded, divided by the total number of observations made). This is analogous to the amount of time that the dome would need to be closed in a season, though not directly comparable due to operational considerations such as the waiting time needed before the dome can be re-opened. We plot seasonal (different panels) and annual values (black dots in each panel). In all cases, there is a significant worsening trend in the fraction of observations that exceed the criteria, with the annual fraction increasing by 0.49\% per year. The greatest increase is seen in spring (0.61\% per year) and the lowest in summer (0.3\% per year). These trends equate to a near-tripling in the fraction of conditions requiring dome closure over the 30-year period. The trends are driven primarily by increasing summit winds in winter, spring, and autumn, while increasing humidity drives the trend in the summer (not shown).
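The quoted trends are straight-line fits to the annual exceedance fractions. A sketch on a synthetic series built to mimic the reported $\sim$0.49\% yr$^{-1}$ slope (the data here are generated, not the actual METEO record):

```python
import numpy as np

# Synthetic annual exceedance fractions [%] over the 1991-2020 record,
# built with an assumed 0.49 %/yr underlying slope plus noise
years = np.arange(1991, 2021)
rng = np.random.default_rng(1)
frac = 7.0 + 0.49 * (years - years[0]) + rng.normal(0, 1.0, years.size)

# Least-squares linear trend: slope in % per year
slope, intercept = np.polyfit(years, frac, 1)
```

The fitted slope recovers the assumed trend to within the noise; a significance test (e.g., on the slope's standard error) would accompany this in a full analysis.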
While the total fraction of observations is increasing, Fig.~\ref{f:threshnights} shows that this does not necessarily equate to the same change in unique nights impacted by weather (i.e., nights where at least one criterion is exceeded at least once in the night). Winter has the most nights affected by bad weather, but only spring and autumn show significant trends, leading to a significant annual trend of around 0.65\% of nights per year. Over the 30 years, however, this does mean roughly a doubling of nights impacted by bad weather, going from approximately 15\% to over 30\% of unique nights by 2020.
\subsection{Turbulence}\label{s:results_turb}
Next we look at the behavior of turbulence above the telescope, which drives the changes in optical path difference that cause image distortion and limit resolution. We split the analysis of turbulence into two components: 1) the free atmosphere (starting from approximately 0.5~km above the summit) and 2) the ground layer, using $C_n^2$, $r_0$, and $\tau_0$.
\subsubsection{Turbulence in the free atmosphere}
From the equations outlined in Sec.~\ref{sec:turb_method}, the mean $C_n^2$ profile for the free atmosphere was calculated using the ERA5 reanalysis, re-sampled to the MASS/DIMM altitudes, and then calibrated on the overlapping data from 2011 to early 2020 by calculating the ratio of the time-mean profiles. The calibration was then applied to all the ERA5 profiles. We plot the results in Fig.~\ref{fig:ERA5_CN2}, showing the median instead of the mean in order to highlight differences in the profiles. Note this means that while the mean profiles are the same due to the calibration, the extrema are different. As expected, we have good agreement with the overlapping data, while the full ERA5 data set has a slightly smaller mean value than the most recent data, suggesting that the mean profile has increased in strength, though within the error bars of the most recent profile. We calculate $r_0$ for the free atmosphere using the $C_n^2$ profiles, with 5-year-binned statistics of $r_0$ in Fig.~\ref{fig:ERA5_r0_time}. From the PDF and CDF we see temporal variability in $r_0$ but no consistent trend toward better or worse values. These results suggest that the strength of the turbulence has not changed in the last 40 years.
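The $r_0$ computation and the ratio-of-means calibration can be sketched as follows (a minimal sketch at $\lambda = 500$~nm; the layer-by-layer calibration ratio is an assumption of this illustration):

```python
import numpy as np

def fried_parameter(cn2, dh, lam=500e-9):
    """Fried parameter r0 [m] from a discretized Cn^2 profile.
    cn2: layer strengths [m^(-2/3)]; dh: layer thicknesses [m]."""
    k = 2.0 * np.pi / lam
    J = np.sum(cn2 * dh)                       # integrated turbulence [m^(1/3)]
    return (0.423 * k**2 * J) ** (-3.0 / 5.0)

def calibrate(era5_profiles, mean_insitu, mean_era5):
    """Scale ERA5 profiles (time x layer) by the ratio of time-mean profiles
    over the overlapping period."""
    return era5_profiles * (mean_insitu / mean_era5)

# A typical free-atmosphere integral of ~1e-13 m^(1/3) gives r0 ~ 0.3 m.
r0 = fried_parameter(np.array([1e-13]), np.array([1.0]))
```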
While the strength of the turbulence has remained constant, we look to the free atmosphere $\tau_0$, which can have a significant impact on image quality for astronomical observations, especially AO-assisted imaging. Taking the effective wind speed, we calculate $\tau_0$, with Fig.~\ref{fig:ERA5_tau_time} showing the statistics of $\tau_0$ over the same temporal bins. From 1980--2014, the peak of the PDF decreases with time, and the CDF curves shift to the right, indicating that $\tau_0$ takes larger values more often. There is, however, an increase in the peak of the PDF for 2015--2020. Looking more closely at the temporal sampling of the data, we are unable to confirm whether these changes are significant, as the amount of data available within each bin varies by tens of percent, as does its distribution throughout the year. Qualitatively, we see no evidence of changes to the wind speed in the free atmosphere at the MASS altitudes that we use for the $\tau_0$ calculations.
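We assume here the standard $C_n^2$-weighted 5/3-moment definition of the effective wind speed together with $\tau_0 = 0.314\,r_0/v_{\rm eff}$; a minimal sketch:

```python
import numpy as np

def effective_wind(cn2, v, dh):
    """Cn^2-weighted effective wind speed [m/s] (5/3-moment definition)."""
    num = np.sum(cn2 * v ** (5.0 / 3.0) * dh)
    den = np.sum(cn2 * dh)
    return (num / den) ** (3.0 / 5.0)

def coherence_time(r0, v_eff):
    """Atmospheric coherence time tau_0 [s]."""
    return 0.314 * r0 / v_eff

# Example: a uniform 10 m/s wind gives v_eff = 10 m/s; with r0 = 20 cm
# this yields tau_0 = 6.28 ms.
tau0 = coherence_time(0.2, effective_wind(np.array([1e-14, 1e-14]),
                                          np.array([10.0, 10.0]),
                                          np.array([1.0, 1.0])))
```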
The free atmosphere behaviour, however, does not provide the complete story; we must also look at what the ground layer turbulence is doing. Given that the ERA5 wind speed does not agree well with the in situ observations (Sect.~\ref{s:validn}), we use the in situ MASS/DIMM observations rather than the ERA5 data used in the previous section.
From the MASS/DIMM measurements we obtain not only the full $r_0$ value covering the entire atmosphere but also the $r_0$ value of the free atmosphere. From these values we can calculate the ground layer $r_0$~\cite{Lyman_2020}. We compare the histograms of these different $r_0$ values for the complete data set in Fig.~\ref{fig:fried_all}. We see that the amounts of turbulence in the ground layer and the free atmosphere both follow log-normal distributions with slightly different mean values, as expected. We also see that the bulk of the turbulence (corresponding to smaller $r_0$ values) is found in the free atmosphere. The median value of the ground layer is 21~cm, in agreement with the 20~cm previously found through a dedicated SLODAR campaign by Chun et al. (2009)~\cite{Chun_2009}. We also see from the figure that the ERA5 free-atmosphere $r_0$ and the MASS free-atmosphere $r_0$ agree well, with median values of 19 and 22~cm, respectively. These values agree with other studies that report a 21~cm $r_0$ using MASS data~\cite{KAON303}. The mean total $r_0$ of roughly 15~cm is also in agreement with the 4-year mean of 15~cm found by Subaru~\cite{KAON303}. From Fig.~\ref{fig:fried_all}, we can thus verify that our calculations of the $C_n^2$ profile and $r_0$ are in good agreement with the literature. Taking a closer look, we plot PDFs and CDFs of the fraction of turbulence in the free atmosphere and the ground layer in Fig.~\ref{fig:ratio_fried} for every 2 years. On this short-term basis there is little evidence of a trend toward either larger or smaller values, with the peak of the PDF fluctuating. This suggests that the strength of the ground layer does vary with respect to the free atmosphere, but the mean shows no trend in time over the currently available baseline.
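The ground-layer $r_0$ follows from the $r_0^{-5/3}$ additivity of integrated turbulence strength, which underlies the total-minus-free decomposition used above. A minimal sketch (in practice the calculation is applied per measurement pair; combining the quoted median values directly gives roughly 23.5~cm rather than the 21~cm median, since medians do not combine this way):

```python
def ground_layer_r0(r0_total, r0_free):
    """Ground-layer Fried parameter [m] from the total (DIMM) and
    free-atmosphere (MASS) values, using r0^(-5/3) additivity."""
    j_gl = r0_total ** (-5.0 / 3.0) - r0_free ** (-5.0 / 3.0)
    return j_gl ** (-3.0 / 5.0)

print(round(ground_layer_r0(0.15, 0.22), 3))  # → 0.235, i.e. ~23.5 cm
```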
\section{\label{s:disc}Discussion}
In this section, we discuss some of the results above and their significance in relation to previous work and their implications for astronomy at the summit of Maunakea. We have chosen to highlight three key elements of our investigations, namely, 1) the trends in dome-closure criteria, 2) the impact of precipitable water vapor, and 3) turbulence, while also looking toward future data needs.
\subsection{Dome Closure Criteria}
The general increase in summit winds (Fig.~\ref{f:windchange}), although modest, is sufficient to increase the number of times Keck's dome closure criteria would be exceeded in a given year (Figs.~\ref{f:threshobs} and \ref{f:threshnights}). The overall increase in exceedances is significant, with roughly a doubling of unique nights (approximately 15\% to over 30\% of nights) reporting meteorological values that could lead to dome closure over the course of a year (0.65\% per year), and a corresponding significant trend in autumn of roughly 1\% per year.
As mentioned above, it is important to note that this is not the actual dome closure rate. The thresholds are guidelines used by the experienced observers and telescope operators on site who are responsible for making dome open/close decisions. It is also possible that many of the ``bad nights'' would already be lost to maintenance or other non-meteorological closures, which amount to upwards of 40 nights per year. Recent closure records\footnote{Records provided by Jim Lyke for the period spanning July 2018--October 2021} indicate that the dome is closed---for any reason---roughly 40\% of the year on average, with the lowest closure rate in May and June. Averaged historically, weather-related closures account for roughly 15\% of all closures\footnote{Q\&A at a workshop in 2021 given by Keck personnel: \url{https://www.keckobservatory.org/wp-content/uploads/2021/02/OMeara-QA.pdf?x32463}}. Our analysis in Fig.~\ref{f:threshobs} agrees with these numbers provided by Keck, with the lowest closure estimates in the spring and summer periods, and accounting for up to 20\% of total nighttime hours. Ultimately, this agreement lends confidence to our analysis.
Overall, the trend is concerning. Should conditions continue to worsen, it is possible that weather-related closure could become a significant hindrance to future astronomy. We do not, however, have a long enough time series to conclude whether this is a trend that has persisted for some time, or simply a short-term increase as part of a larger cycle, such that conditions will improve in coming decades. We also could not take into consideration other phenomena---such as changes in cloud cover or precipitation---that would also restrict observations. It could very well be that improvements in these variables offset the worsening conditions in the variables we could consider here. Continued monitoring of the site is therefore essential. This includes the need for wider---or at least more accessible---recording and reporting of seasonal and annual dome closure statistics and their causes.
\subsection{Precipitable Water Vapor}
Taking a closer look at Fig.~\ref{f:seaspwv}, we examine the variability in the ERA5 \textit{PWV} values, specifically comparing the minima and maxima. As mentioned in Sec.~\ref{s:results}, following the analysis of Fig.~\ref{f:pwvenso}, minima in the \textit{PWV} signal follow peaks in El Ni\~no and transitions to La Ni\~na conditions. Comparing the minima of Fig.~\ref{f:seaspwv} in the years 1998, 2003, and 2010, we see a large difference in the \textit{PWV} value and in how long the dry period spans. In 2010 we have a significantly longer period (twice as long) of \textit{PWV} values falling within the smallest JCMT bin. More recently, by contrast, the \textit{PWV} values have been abnormally high, providing poor conditions from 2019 until at least 2021. For astronomy such as NIR observations of exoplanets, the \textit{PWV} can significantly impact the quality of the observation and ultimately be the difference between a detection and a non-detection on a given night. Specifically, the discovery of a fourth, previously undetected planet around HR 8799 was made around 2009 and 2010 using the W.M. Keck Observatory~\cite{Marois_2010} on Maunakea, when the \textit{PWV} was abnormally low for an extended period; it is possible that these conditions favorably contributed to the detection. It would be interesting to compare the times of observation for impactful science in NIR bandpasses with values such as \textit{PWV} to determine how much the results depend on specific conditions, in order to better understand the performance of our current and future instruments. This analysis, however, is outside the scope of this paper.
\subsection{Turbulence}
We discuss the results from Sect.~\ref{s:results_turb} in more detail here. The Gladstone equation presented in Sect.~\ref{s:turbdata} depends on the shear (derivative) of the wind as a function of altitude. From the wind profiles above Maunakea (i.e., Fig.~\ref{f:meanrds}) we expect two peaks in the shear profile, near 6~km and around 15~km (approximately where the change in wind is greatest), on either side of the jet stream layer. When studying the CFHT $C_n^2$ profiles, however, we only find one peak, around 6~km, lining up with the base of the jet stream; the second peak at the top of the jet stream is not present. When resampling the ERA5 data to match the CFHT resolution, we also effectively miss this second, higher peak. Figure~\ref{fig:era5_full} shows the $C_n^2$ profile for the full-resolution ERA5 profile, which reveals both peaks as expected. We note that the lower peak in the full profile is considerably smaller than in the resampled profile. This suggests that some of the turbulence above the jet stream is being binned into the lower layer: the total amount of turbulence is not being missed, but its distribution might be incorrect.
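The expectation of two shear peaks flanking the jet can be illustrated with a synthetic Gaussian jet profile (illustrative values only, not the actual Maunakea winds):

```python
import numpy as np

z = np.linspace(0.0, 25.0, 251)                     # altitude [km]
v = 10.0 + 30.0 * np.exp(-((z - 11.0) / 3.0) ** 2)  # Gaussian jet near 11 km
shear = np.abs(np.gradient(v, z))                   # |dv/dz| [m/s per km]

# The shear is maximal on either side of the jet core: two local maxima.
is_peak = (shear > np.roll(shear, 1)) & (shear > np.roll(shear, -1))
peaks = z[is_peak]
```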
MASS/DIMM instruments are model-dependent on both the assumed altitudes and the type of turbulence, providing an accuracy of up to 10\% when properly maintained~\cite{Tokovinin_2007}. Since the methods assume thin layers of turbulence at specific heights, turbulence at other heights will be binned into a specific layer. While the overall amount of turbulence measured is correct, there is much uncertainty in how it is distributed. The data suggest that the assumed layers on Maunakea for the MASS instrument could be changed to better sample where the bulk of the turbulence is expected. This has implications not only for a better understanding of turbulence and instrument performance but also for AO methods such as multi-conjugate AO, where wavefront sensors are conjugated to different altitudes to measure the turbulence at a given height.
With the ground layer $r_0$ showing no trend over 10 years, we comment on the impact of the increase in ground layer wind speed seen in the in situ measurements presented in Sec.~\ref{s:climate_mk}. The increase in strong ground layer winds not only results in dome closure criteria being met more often but will also impact how quickly the ground layer turbulence evolves, as well as dome seeing (turbulence inside the dome itself). Air flowing more quickly over the dome structure (and other structures, including geophysical ones) could alter turbulence for one telescope and not for another (e.g., for one telescope it might increase vibrations along the support structure by a small, but still significant, amount). Due to the multifaceted impact that ground layer wind can have, we do not calculate the coherence time of the ground layer (and only look at the free atmosphere coherence time in Sec.~\ref{s:results_turb}). The full impact of the change in ground-layer wind speed must therefore be evaluated for each telescope separately.
\subsection{Future data}
As the astronomy community looks toward future telescopes such as the Thirty Meter Telescope (TMT) while continuing to use current telescopes on Maunakea, it is desirable to expand the work presented in this paper and increase the baseline so that trends can be detected early. With such work, new instruments and new operation methods (e.g., queue observing) can have the necessary tools and data to produce the best science with these ground-based telescopes.
In this work we look for changes in meteorological data as well as various turbulence parameters. It is important to keep the current facilities up to date and to increase their capabilities (e.g., improve the MASS/DIMM resolution as well as the number of operational nights). Specifically, in regard to the MASS, it will be important to understand why the distribution of turbulence differs from ERA5, as discussed above, and to make any necessary changes to the altitudes chosen by the MASS. It would also be beneficial to have more sources of similar data on the mountain, so as not to be biased toward a specific geographical feature, answering questions such as: are the winds measured at CFHT representative of winds at Keck Observatory or Subaru Telescope? Are these observatories really experiencing an increase in dome closures as a result? Finally, having observatories publish their current data on the percentage of nights with the dome open/closed would be valuable for understanding how the weather is affecting astronomy and whether there are indeed any trends in dome opening.
Beyond observations, numerical simulations are also important to consider. In particular, climate projections will be important in order to relate current observed trends to potential future scenarios, although careful consideration of the potential mismatch between numerical estimates and highly local in situ observations (e.g., Sect.~\ref{s:validn}) will need to be undertaken. Waiting until we can observe a change is too late. While such an analysis is beyond the scope of this current manuscript, it is an important next step.
\section{\label{s:concl}Conclusions}
We present a study of long-term trends on Maunakea with the primary goal of determining whether climate change is already having an impact on astronomy at the site. Specifically, we look at weather (temperature, wind speed, and relative humidity) both at the summit using in situ data as well as above the summit using radiosonde and re-analysis data (ERA5). We use in situ $C_n^2$ profile measurements to calibrate $C_n^2$ profile values extracted from ERA5 data allowing us to look at the turbulence characteristics over the last 40 years.
From the meteorological data we find:
\begin{enumerate}
\item the wind speed at the summit is increasing (Fig.~\ref{f:windchange}), with 5-year averaged speeds rising over the 30-year record,
\item there has been a doubling in nights impacted by bad weather over the last 30 years based on the Keck dome-closure criteria (driven mainly by the wind speed), and
\item there is no long-term trend in precipitable water vapour \textit{PWV} although there is significant interannual variability in \textit{PWV}, possibly related to ENSO dynamics.
\end{enumerate}
Studying the turbulence parameters we show:
\begin{enumerate}
\item that the 5-year means of $r_0$ and $\tau_0$ have not changed over the last 40 years,
\item that year to year both $r_0$ and $\tau_0$ can change noticeably, and
\item that the fraction of turbulence in the ground layer versus the free atmosphere shows no trend over the last 10 years, though it can vary greatly year to year.
\end{enumerate}
To support further monitoring of climate-change impacts and to further understand the changes we are already seeing, we stress the need to maintain an up-to-date climatology on Maunakea. We would also encourage observatories to publish available data such as local temperature, wind speed, or dome closure records, making them accessible to continue this work. An important follow-up to this work is to look toward climate projections in order to better understand how climate change could affect the site in the future, and not just how it has affected Maunakea in the past (including whether any acceleration is possible). Finally, while it does not yet appear that significant deleterious changes have occurred on Maunakea, we urge the astronomy community to consider ways to reduce our carbon footprint, which will help to maintain the scientific quality of our global astronomical sites as well as the important ecological and social settings of our observatories.
\bibliography{references}
\authorcontributions{Both authors contributed equally to the preparation of this manuscript.}
\funding{This research received no external funding.}
\acknowledgments{We acknowledge that the land on which the University of California, Santa Cruz is located is the unceded territory of the Awaswas-speaking Uypi Tribe. The Amah Mutsun Tribal Band, comprised of the descendants of indigenous people taken to missions Santa Cruz and San Juan Bautista during Spanish colonization of the Central Coast, is today working hard to restore traditional stewardship practices on these lands and heal from historical trauma.
The authors also wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
We are grateful to all---known and unknown---who collected and provided the data used in these analyses. Unless otherwise noted, the individual datasets are all openly available and can be accessed as described in Sect.~\ref{s:metdata} and Sect.~\ref{s:turbdata}.
This work was started while the authors were at the Leiden Observatory (MvK) and Delft University of Technology (JGI) in the Netherlands and has continued throughout the COVID-19 pandemic. We are grateful for the support received there as well as in our current positions at UC Santa Cruz. In particular, we thank Dr. Rebecca Jensen-Clem for her feedback on an early version of the manuscript. We are also grateful for the reviewer's time and feedback.
}
Title: Modeling Cosmic Reionization
Abstract: The transformation of cold neutral intergalactic hydrogen into a highly
ionized warm plasma marks the end of the cosmic dark ages and the beginning of
the age of galaxies. The details of this process reflect the nature of the
early sources of radiation and heat, the statistical characteristics of the
large-scale structure of the Universe, the thermodynamics and chemistry of
cosmic baryons, and the histories of star formation and black hole accretion. A
number of massive data sets from new ground- and space-based instruments and
facilities over the next decade are poised to revolutionize our understanding
of primeval galaxies, the reionization photon budget, the physics of the
intergalactic medium (IGM), and the fine-grained properties of hydrogen gas in
the "cosmic web". In this review we survey the physics and key aspects of
reionization-era modeling and describe the diverse range of computational
techniques and tools currently available in this field.
https://export.arxiv.org/pdf/2208.02260

\subsection{Partially coupled simulations}
Cosmological simulations vary widely in their setups, input physics, and numerical approaches. In this review we deliberately limit ourselves to simulations that model reionization, i.e.\ include cosmological radiative transfer as one of their main physics packages.
A number of recent computational projects have focused on modeling high redshift galaxy formation without accounting for the thermal and ionization history of their environment, such as FLARES \citep{flares1,flares2,flares3}, FIRE \citep{Ma2020,Liang2021}, or FirstLight \citep{firstlight}. Because they do not address the reionization process itself, they are somewhat outside the scope of the present review. An excellent recent review of galaxy formation simulations is given by \citet{Vogelsberger2020}.
An intermediate stage between a ``semi-numerical'' technique like DMO+SAM and a fully coupled cosmological hydrodynamic simulation is one where
not all the physics components of a simulation are actually fully coupled. The classical examples of such an approach are the C2-Ray simulations \citep{c2ray1,c2ray2}, which include full radiative transfer but assume
that the hydrogen density tracks the dark matter field of a large DMO run. In this method, the gas dynamics does not respond to heating by UV radiation, and physical effects such as the suppression of gas accretion and condensation in sufficiently low-mass halos cannot be captured directly, although they may be modeled with additional, approximate schemes. These simulations also do not resolve the scales that are relevant for star formation, and model radiation sources with a semi-analytic approach.
A different example of partially coupled simulations can be found in \citet{illust-rei}, where two different radiative transfer solvers are run in post-processing on outputs from the Illustris simulation \citep{illust}. Illustris aims at studying the processes of galaxy formation and evolution in the Universe with a comprehensive physical model that
includes feedback from supernova explosions and
accreting supermassive black holes, and radiative transfer in post-processing complements the original simulation with maps of ionized/neutral IGM gas throughout the reionization process. The limitation of this approach is that the gas dynamics does not respond to the spatially varying and time-dependent radiation field. An analogous approach, using the MassiveBlack-II simulation \citep{MBII} and the radiative transfer code CRASH, was presented by \citet{eide18,Eide2020,Ma2022}. Here, the key advantage is the adopted frequency range, which is wider than in most other approaches and extends to soft X-rays and the associated secondary ionizations. The CRASH+MassiveBlack-II set of simulations also systematically explored the impact of multiple types of radiation sources -- in addition to normal stars and quasars, they also considered binary stars, shock-heated ISM, and X-ray bursts. Radiative transfer solvers have also been used in post-processing to model, for example, the escape fraction from individual halos in cosmological simulations
\citep[cf.][for recent studies]{Ma2020,Kostyuk2022}. As we mentioned at the beginning of this subsection, such models are beyond the scope of this review.
Partially coupled simulations with a primary focus on galaxy formation at high redshift, and not on the process of reionization per se, may also include an approximate radiative transfer algorithm rather than an actual numerical solver. Poster-child examples of such schemes are BlueTides \citep{bluetides} and Astrid \citep{astrid}. These
simulations combine the physics package of the MassiveBlack-II run \citep{massiveblack} with the ``The Reionization on Large Scales'' approach from \citet{Battaglia2013} that served as a precursor to the AMBER scheme already described.
They are able to account, albeit approximately, for the ionization history of the galaxy environment, while offering a huge boost to computational efficiency.
Yet another flavor of a partially decoupled scheme can be envisioned as an efficient approach for a multi-fold extension of the simulation size. Most of the methods described here as well as the fully coupled simulations discussed below resolve the low density IGM on scales of order 100 comoving kpc or better. Such resolution, comparable to the Doppler smoothing scale of gas at $10^4\dim{K}$, is the minimum required to capture the IGM density fluctuations that give origin to the \Lya\ forest in the spectra of distant quasars.
A uniform grid simulation with, say, $50\,\dim{kpc}$ resolution in a $500\,\dim{Mpc}$ box would require $10{,}000^3$ cells, a size that is currently achievable on modern 100 petaflops-level platforms %
with, e.g., the GPU-native cosmological hydrodynamics code Cholla \citep{cholla}.
At $50\dim{kpc}$ resolution, however, no plausible model for the physics of galaxy formation can be constructed. This is similar, for example, to C2-Ray simulations that have to rely on semi-analytical schemes for including radiation sources.
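The quoted cell count follows directly:

```python
# A 500 Mpc box at 50 kpc (comoving) resolution.
n_side = 500.0 * 1000.0 / 50.0   # cells per side
n_cells = int(n_side) ** 3       # total cells
assert n_side == 10_000.0        # i.e. 10,000^3 = 10^12 cells
```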
A ``two-tier'' simulation approach may be desirable, where a high-resolution small-box simulation is used to inform a coarse-resolution large-box one. Large volume simulations -- which cannot track the interior structure of dark matter halos --
may then implement the physics of galaxy formation with an approximate model that recovers the mean trends for galaxy baryonic properties predicted by more detailed calculations. A recent example of this technique is offered by \citet{Hausen2022}, who trained the Explainable Boosting Machines (EBM) machine learning algorithm to assign stellar masses and star formation rates (SFRs) to the host dark matter halos based on nearly 6 million galaxies simulated by the fully coupled ``Cosmic Reionization On Computers'' (CROC) project. Figure \ref{fig:ebm} shows the main result from that work: a comparison of the simulated SFRs in the original CROC simulations versus the values predicted by machine learning as a function of halo virial mass
$M_{\rm vir}$. The EBM model is highly predictive, failing to capture only the most extreme outliers in SFR at a fixed $M_{\rm vir}$.
Through this approach, the physics of baryonic galaxy formation can be connected to the properties
of dark matter halos and implemented as a ``sub-grid'' prescription in cosmological hydrodynamics simulations that do not resolve the small scale details of star formation and feedback, while at the same time capturing the variations of SFR in halos of the same mass due to environmental effects and different prior accretion histories.
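The spirit of such a per-feature additive mapping from halo properties to SFR can be illustrated with a toy one-feature binned model (numpy only; a stand-in for, not a reproduction of, the actual EBM algorithm, trained here on invented halo data):

```python
import numpy as np

class BinnedAdditiveModel:
    """Toy one-feature additive model in the spirit of EBM: predict log SFR
    as the mean of training values in bins of log halo mass."""
    def fit(self, log_mvir, log_sfr, n_bins=20):
        self.edges = np.linspace(log_mvir.min(), log_mvir.max(), n_bins + 1)
        idx = np.clip(np.digitize(log_mvir, self.edges) - 1, 0, n_bins - 1)
        self.means = np.array([log_sfr[idx == i].mean() if np.any(idx == i)
                               else 0.0 for i in range(n_bins)])
        return self
    def predict(self, log_mvir):
        idx = np.clip(np.digitize(log_mvir, self.edges) - 1, 0,
                      len(self.means) - 1)
        return self.means[idx]

# Invented halo catalog: log SFR scales with log Mvir plus 0.3 dex scatter.
rng = np.random.default_rng(1)
log_m = rng.uniform(9.0, 12.0, 50_000)
log_sfr = 1.1 * (log_m - 10.0) + rng.normal(0.0, 0.3, log_m.size)
model = BinnedAdditiveModel().fit(log_m, log_sfr)
resid = log_sfr - model.predict(log_m)
print(round(resid.std(), 2))  # → 0.3 (recovers the injected scatter)
```

Such a model captures only the mean trend with mass; the environmental and accretion-history scatter that the EBM approach also models would require additional features.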
\begin{table*}[ht]
\setlength\extrarowheight{5pt}
\caption{Simulation-based models and simulations of reionization.}
\begin{minipage}{\hsize}
\begin{tabular}{l|c|c|c|c|l}
Suite Name &
N$_{\rm{box}}$\footnote{We list only the number of the largest box runs. Several projects also completed a number of simulations with 8 times smaller N$_{\rm{part}}$.} &
N$_{\rm{part}}$\footnote{Equivalent number of dark matter particles in the largest simulation.} &
Box size \footnote{In comoving units.} & Resolution\footnote{Spatial resolution is given in proper units as an effective grid code cell size; for particle codes the conversion between the gravitational softening and the effective cell size is given in \citet{Mansfield2021}. For simulations that maintained their resolution in comoving units, the resolution is quoted at $z=6$.} & Code
\footnote{AMR=Adaptive Mesh Refinement; SPH=Smooth Particle Hydrodynamics; MM=Moving Mesh; RT=Radiative Transfer.}
\\
\hline
\multicolumn{4}{c}{SAM+DMO} & & \\
ASTRAEUS & 1 & $3840^3$ & 230 Mpc& 530 pc& ART (DMO) \\
DRAGONS & 1 & $2160^3$ & 100 Mpc& 50 pc& GADGET-2 (DMO) \\
\multicolumn{4}{c}{Partially Coupled Simulations} & & \\
Astrid & 1 & $5500^3$ & 370 Mpc& 115 pc& GADGET-3 (SPH) + semi-analytical RT \\
MassiveBlack-II & 1 & $1792^3$ & 142 Mpc& 440 pc& GADGET-3 (SPH) + RT in post-processing \\
C2-Ray & 1 & $6912^3$ & 714 Mpc& 6.3 kpc& P$^3$M (DMO) + C2-Ray (RT) \\
Illustris & 1 & $1820^3$ & 107 Mpc& 72 pc& AREPO (MM) + RT in post-processing \\
\multicolumn{4}{c}{Fully Coupled Simulations} & & \\
CoDa & 1 & $8192^3$ & 94~Mpc & 1.7 kpc & RAMSES-CUDATON (uniform grid) \\
CROC & 6 & $2048^3$ & 117~Mpc & 100~pc & ART (AMR) \\
SPHINX & 1 & $1024^3$ & 20~Mpc & 10 pc & RAMSES-RT (AMR) \\
Thesan & 1 & $2100^3$ & 96~Mpc & 300 pc & AREPO-RT (MM)\\
\end{tabular}
\end{minipage}
\label{tab:sims}
\end{table*}
\subsection{Fully coupled simulations}
Self-consistent, fully coupled simulations, often considered the ultimate theoretical model for a given process, can only be trusted as long as they include all the relevant physics (such as gravity, gas dynamics, star formation, stellar and AGN feedback, and radiation transport) with sufficient precision and keep numerical effects under control.
Table~\ref{tab:sims} summarizes some of the most recent fully coupled simulations (together with several of the approximate methods discussed above).
CosmicDawn (CoDa) \citep{ocvirk16,coda1,coda2,coda3,Lewis2022}, CROC, and Thesan are similar in their computational volume ($\sim100\,\dim{Mpc}$). They differ somewhat in spatial and mass resolution, but all fall into the class of simulations that do not resolve the scale heights of galactic disks (and therefore model galaxies in ``2D''), and can be directly compared to each other. SPHINX simulations
\citep{sphinx1,sphinx2,sphinx3}, by contrast, focus on resolving the actual vertical structure of star-forming galaxies, achieving much higher spatial resolution at the expense of being unable to model the global reionization history (a 20 comoving Mpc box contains only 4 $L_\star$ galaxies on average).
The target goal of reaching a $\sim100\,$Mpc simulation box is dictated by the desire to replicate a representative region of the Universe. At $z\sim7$, the correlation length of galaxies is around $10\,$Mpc \citep{Barone-Nugent2014} and is weakly luminosity dependent. The number density of $L_\star$ galaxies is also about 1 per $(10\,\dim{Mpc})^3$. Hence, a volume of $\sim 100\,$Mpc on a side contains around 1000 $L_\star$ galaxies, and the rms correlation function at half the box size is $(50/10)^{-1.6} \approx 0.08$. Whether $\sim100\,\dim{Mpc}$ simulations actually converge on some of the key features of the reionization process, such as the size distribution of ionized bubbles, is presently unclear. Some earlier C2-Ray simulations \citep{Iliev2014} found convergence only in $\gtrsim 250\,$Mpc boxes, while in CROC simulations (which explicitly match at $z<6$ the LyC mean free path determined by the abundance of LLSs) the bubble size distribution appears to have converged by $z\gtrsim7$ \citep{Gnedin2014b}.
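The box-size arithmetic above can be checked directly (the $-1.6$ slope is the power-law index implied by the quoted estimate):

```python
box = 100.0                    # box size [comoving Mpc]
r0_corr = 10.0                 # galaxy correlation length at z ~ 7 [Mpc]
n_lstar = (box / 10.0) ** 3    # ~1 L* galaxy per (10 Mpc)^3
xi_half = (0.5 * box / r0_corr) ** (-1.6)
print(int(n_lstar), round(xi_half, 3))  # → 1000 0.076
```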
Figure \ref{fig:lfs} from \citet{Bouwens2022} shows a comparison between the galaxy UV luminosity function observed at different redshifts and several theoretical predictions. The observational data come primarily from the Hubble Frontier Fields, and are subject to systematic uncertainties from lensing modeling \citep{Bouwens2017,Bouwens2022} that rapidly increase for galaxies fainter than $M_{\rm UV}>-15$. At brighter magnitudes most theoretical models match the data well,\footnote{Notice, however, that CROC results are shown for earlier, smaller boxes. CROC underpredicts luminosities and stellar masses of super-$L_*$ galaxies in its largest, 117 Mpc boxes \citep{Zhu2020}.}\, but they differ
widely at $M_{\rm UV}\gtrsim-14$ mag,
a faint luminosity regime that will soon be probed by {\it JWST} observations. Theorists are eagerly waiting for {\it JWST} deep field data as few, if any, of the current models will survive these ground-breaking sensitive measurements.
Since the goal of fully coupled simulations is to actually model the reionization process, their performance must be measured against that metric. In Figure \ref{fig:xhz} we show the most basic prediction for any simulation of the epoch of reionization, the evolution of the globally volume-averaged neutral hydrogen fraction.\footnote{Predicting the mass-weighted neutral hydrogen fraction is much harder, as after reionization this is dominated by the residual neutral component locked in the Damped Lyman-$\alpha$ Systems.} While the latest fully coupled simulations do produce neutral fractions similar to the observed values, they do not do so at the level of precision required
by the observations. The only simulation project that does match the observations accurately is CROC, but this is \emph{by construction}, as the data measurements were actually used to calibrate the effective escape fraction from the radiation sources. The unsatisfactory level of agreement
between data and theory points to the direction where efforts in designing the next generation of cosmological simulations should be aimed -- i.e., at significantly improving the modeling of intergalactic gas. A number of recent observations have provided measurements of various properties of the post-reionization IGM, such as the distribution of mean \Lya\ opacities along skewers of fixed length \citep{becker15,bosman18,Eilers2018,Yang2020}, the distribution of ``dark gaps'' -- continuous spectral regions in distant quasar spectra where the transmitted flux is below a specified threshold \citep{Zhu2020,Zhu2021}, and the cross-correlation between quasar absorption spectra and properties of galaxies along the same sightline \citep{Meyer2019,Meyer2020}. None of the existing fully coupled simulations are able to match all of these observational constraints.
Title:
Using 3D and 2D analysis for analyzing large-scale asymmetry in galaxy spin directions |
Abstract: The nature of galaxy spin is still not fully known. Iye et al (2021) applied
a 3D analysis to a dataset of bright SDSS galaxies that was used in the past
for photometric analysis. They showed that the distribution of spin directions
of spiral galaxies is random, providing a dipole axis with low statistical
significance of 0.29$\sigma$. However, to show random distribution, two
decisions were made, each of which can lead to random distribution regardless
of the real distribution of the spin directions of galaxies. The first decision was to
limit the dataset arbitrarily to z$<$0.1, which is a redshift range in which
previous literature already showed that random distribution is expected. More
importantly, while the 3D analysis requires the redshift of each galaxy, the
analysis was done with the photometric redshift. If the asymmetry existed, its
signal is expected to be an order of magnitude weaker than the error of the
photometric redshift, and therefore the low statistical signal under these
conditions is expected. When using the exact same data without limiting to
$z_{phot}<0.1$ and without using the photometric redshift, the distribution of
the spin directions in that dataset shows a statistical signal of $>2\sigma$.
Code and data for reproducing the analysis are publicly available. These
results are in agreement with other experiments with SDSS, Pan-STARRS, HST, and
the DESI Legacy Survey. The paper also examines other previous studies that
showed random distribution in galaxy spin directions. While further research
will be required, the current evidence suggests that large-scale asymmetry
between the number of clockwise and counterclockwise galaxies cannot be ruled
out.
| https://export.arxiv.org/pdf/2208.00893 |
\title{Using 3D and 2D analysis for analyzing large-scale asymmetry in galaxy spin directions}
\author{Lior Shamir \\ Kansas State University \\ Manhattan, KS 66506 \\ email: [email protected]}
\date{}
\section{Introduction}
\label{introduction}
Spiral galaxies as seen from Earth can seem to an observer to spin clockwise (Z) or counterclockwise (S). Since the spin direction is merely a matter of the perspective of the observer, the null hypothesis would be that in a sufficiently large number of galaxies the number of galaxies spinning clockwise would be equal (within statistical error) to the number of galaxies spinning counterclockwise.
Whether the distribution of spin directions of spiral galaxies is indeed random is a question that has been the focus of several previous studies, some of which suggested that the distribution is not necessarily random \citep{macgillivray1985anisotropy,longo2011detection,shamir2012handedness,lee2019galaxy,lee2019mysterious,shamir2021large}.
One of the first studies that proposed a population-based asymmetry between galaxies with opposite spin directions was that of \cite{macgillivray1985anisotropy}. Using a relatively small dataset of just 418 annotated galaxies, they proposed an asymmetry between the number of galaxies spinning in opposite directions, with a probability of P=0.08 of occurring by chance. More recent studies used the power of digital sky surveys enabled by robotic telescopes, allowing the collection of larger datasets.
\cite{lee2019mysterious} identified links between the spin directions of spiral galaxies that are too far from each other to interact gravitationally, described these links as ``mysterious'', and cautiously proposed the possible existence of cosmological-scale links between galaxy spin directions. These claims are aligned with certain evidence using several different datasets from SDSS \citep{shamir2020patterns}, HST \citep{shamir2020pasa}, Pan-STARRS \citep{shamir2020patterns}, and the DESI Legacy Survey \citep{shamir2021large}. All of these telescopes show very similar profiles of non-random distribution of galaxy spin directions \citep{shamir2021large,shamir2022large}. The analysis shows a dipole axis in galaxy spin directions observed in all of these telescopes, and the locations of the axes computed from the different telescopes are well within 1$\sigma$ of each other. A statistically significant correlation was also found between the spin directions of galaxies and cosmic initial conditions, suggesting that the galaxy spin direction can be used as a probe to study the early Universe \citep{motloch2021observed}.
A study with a dataset of $\sim6.4\cdot10^4$ Sloan Digital Sky Survey (SDSS) galaxies with spectra showed non-random distribution that can be fitted into cosine dependence with probability of 4.6$\sigma$ \citep{shamir2020patterns}. The profile of distribution was nearly identical to a similar analysis done with Pan-STARRS galaxies, when the redshift distribution of the galaxies was similar \citep{shamir2020patterns}. These results are also in excellent agreement with the distribution of $\sim8\cdot10^5$ galaxies from DESI Legacy Survey \citep{shamir2021large}. Galaxies imaged by Hubble Space Telescope annotated manually also show non-random distribution, and a dipole axis very close to the dipole formed by SDSS galaxies with higher redshifts \citep{shamir2020pasa}.
On the other hand, several studies suggested that the distribution of the spin directions of spiral galaxies is random. One of the early studies that suggested random distribution was that of \cite{iye1991catalog}, who compiled a catalog of $\sim6.5\cdot10^3$ galaxies from the Southern hemisphere and found a random distribution of their spin directions. Another notable attempt to characterize the distribution of galaxy spin directions was made by using anonymous volunteers who annotated a large number of galaxies manually through a web-based user interface \citep{land2008galaxy}. After correcting the data for the bias driven by human perception, the study concluded that the spin directions of the galaxies were distributed randomly \citep{land2008galaxy}. A study that applied computer annotation confirmed that the distribution of the galaxies in SDSS was indeed random \citep{hayes2017nature}. These studies will be discussed in detail in Section~\ref{previous_studies} of this paper.
\cite{iye2020spin} proposed that the possible observed asymmetry between galaxies with opposite spin directions is the result of photometric objects that are part of the same galaxies in the dataset. Unlike the analysis shown in \citep{shamir2012handedness,shamir2020large,shamir2020patterns,shamir2020pasa,shamir2021particles}, in which the spin directions of the galaxies were fitted to cosine dependence based on their RA and declination, \cite{iye2020spin} performed a 3D analysis that used the RA, declination, and redshift. While their analysis showed that the distribution of galaxy spin directions was random, a follow-up analysis by the National Astronomical Observatory of Japan using the exact same data showed that, when using basic statistics, the distribution of the galaxy spin directions in that dataset is not random \citep{Fukumoto2021}.
This paper analyzes the reasons for the differences between the initial analysis published in \citep{iye2020spin} and the follow-up analysis that used the exact same data but showed a non-random distribution. The paper also shows an analysis of two different datasets that were not analyzed in that manner in the past, and investigates the reasons for the differences between the conclusions of \cite{iye2020spin} and the results shown in \citep{macgillivray1985anisotropy,longo2011detection,shamir2012handedness,shamir2020patterns,shamir2021particles,shamir2021large,shamir2022new,shamir2022large,shamir2022analysis}. The paper also discusses previous studies that showed random distribution of the galaxy spin directions, and analyzes the experimental design that led to these conclusions.
\section{The datasets}
\label{dataset}
The analysis of the possible asymmetry in the distribution of galaxy spin directions has been studied using relatively large datasets for over a decade. During that time, multiple datasets were prepared and studied. The datasets are different in the type of objects, telescopes, redshift limit, magnitude limit, and galaxy annotation methods. Table~\ref{datasets} shows the datasets used in each study. It also shows the purpose for which each dataset was designed. More details about each dataset can be found in the cited papers. The table only shows datasets used by this author in previous experiments. Data collected or used by others will be described in Sections~\ref{other_datasets} and~\ref{previous_studies}.
\begin{table*}
\scriptsize
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
Number & Instrument & Reference & Object & Object & Annotation & Purpose \\
& & & type & count & method & \\
\hline
1 & SDSS & \citep{shamir2012handedness} & Spectroscopic & 126,501 & Automatic & S/Z distribution and dipole axis \\
\hline
2 & SDSS & \citep{shamir2016asymmetry} & Spectroscopic & 13,440 & Hybrid & S/Z photometric asymmetry \\
\hline
3 & SDSS & \citep{shamir2017photometric} & Photometric & 162,514 & Automatic & S/Z photometric asymmetry \\
\hline
4 & Pan-STARRS & \citep{shamir2017large} & Photometric & 29,013 & Automatic & S/Z photometric asymmetry \\
\hline
5 & SDSS & \citep{shamir2017large} & Spectroscopic & 40,739 & Manual & S/Z photometric asymmetry \\
\hline
6 & HST & \citep{shamir2020asymmetry} & Photometric & 5,122 & Hybrid & S/Z photometric asymmetry \\
\hline
7 & SDSS & \citep{shamir2020patterns} & Spectroscopic & 63,693 & Automatic & S/Z distribution and dipole axis \\
\hline
8 & Pan-STARRS & \citep{shamir2020patterns} & Photometric & 38,998 & Automatic & S/Z distribution and dipole axis \\
\hline
9 & SDSS & \citep{shamir2020large} & Photometric & 172,883 & Automatic & S/Z distribution and dipole axis \\
\hline
10 & HST & \citep{shamir2020pasa} & Photometric & 8,690 & Manual & S/Z distribution and dipole axis \\
\hline
11 & SDSS & \citep{shamir2020pasa} & Spectroscopic & 15,863 & Automatic & S/Z distribution and dipole axis \\
\hline
12 & SDSS & \citep{shamir2021particles} & Photometric & 77,840 & Automatic & S/Z distribution and dipole axis \\
\hline
13 & DESI Legacy Survey & \citep{shamir2021large} & Photometric & 807,898 & Automatic & S/Z distribution and dipole axis \\
\hline
\end{tabular}
\caption{The different datasets of galaxies separated by their apparent spin direction. The table includes datasets used by this author.}
\label{datasets}
\end{table*}
For their analysis, \cite{iye2020spin} used Dataset 3 in Table~\ref{datasets}, which is a dataset of photometric objects used for the purpose of photometric analysis of objects rotating in opposite directions. In the absence of literature that used that dataset to analyze a dipole axis in the S/Z galaxy distribution, \cite{iye2020spin} compared their results to the results shown in \citep{shamir2020large}, which were based on Dataset 9 in Table~\ref{datasets}. For the comparison, \cite{iye2020spin} argue that the dataset used in \citep{shamir2020large} was the result of combining Dataset 3 in Table~\ref{datasets} with 33,028 galaxies imaged by Pan-STARRS. That statement is made twice in Section 3.1 in the journal version of \citep{iye2020spin}. However, in \citep{shamir2020large} no Pan-STARRS galaxies were used. Moreover, no galaxies from two different telescopes were combined into a single dataset in any other previous study, and consequently no statement about combining data from two different telescopes can be found in any previous paper. It is therefore unclear what led \cite{iye2020spin} to believe that the dataset they used for comparing their results contained Pan-STARRS galaxies. %
The two statements about combining the SDSS and Pan-STARRS galaxies appear only in the journal version of \citep{iye2020spin}, but not in the arXiv version of that paper.
\section{3D analysis of cosmological-scale anisotropy using the photometric redshift}
\label{photometric_redshift}
The analysis done in \citep{shamir2012handedness,shamir2020large,shamir2020patterns,shamir2020pasa,shamir2021particles,shamir2021large} was two-dimensional, and therefore the position of each galaxy was determined by its RA and declination. The RA and declination are considered accurate. \cite{iye2020spin}, however, applied a 3D analysis. Unlike the 2D analysis done in \citep{shamir2012handedness,shamir2020large,shamir2020patterns,shamir2020pasa,shamir2021particles,shamir2021large,shamir2022new,shamir2022large}, the location of each galaxy in the 3D analysis was determined by its $(l,b,d)$, where $d$ is the distance. As explained in Section 2.1 of \citep{iye2020spin}, the distance $d_i$ of each galaxy {\it i} was determined by $d_i=cz_i/H_{0}$, where {\it c} is the speed of light, $H_{0}$ is the Hubble constant, and $z_i$ is the redshift. Because the vast majority of the galaxies in Dataset 3 in Table~\ref{datasets} do not have spectra, the spectroscopic redshift could not be used for the analysis of that dataset. The redshifts used in \citep{iye2020spin} are the photometric redshifts from the catalog of \citep{paul2018catalog}.
The dataset of \citep{shamir2020patterns} is based on spectroscopic objects identified as galaxies, and therefore all galaxies in that dataset had redshift. \cite{iye2020spin}, however, used a dataset of photometric objects used in \citep{shamir2017photometric}. \cite{iye2020spin} report on two experiments of 3D analysis, one with 162,516 objects and another with 111,867 objects with ``measured redshift'' \citep{iye2020spin}. As discussed in \citep{shamir2017photometric}, fewer than 11K galaxies in that dataset had spectra, and therefore the vast majority of the galaxies did not have a spectroscopic redshift. As shown in Section 3.1 of \citep{iye2020spin}, the source of the redshift was \citep{paul2018catalog}, which is a catalog of photometric redshifts. %
Unlike the redshift measured from the spectra, the photometric redshift is highly inaccurate. The average error of the photometric redshift method used in \citep{paul2018catalog} is $\sim$18.5\%. That error is far greater than the possible signal of less than 1\% shown in \citep{shamir2020large,shamir2020patterns}. When the error of the data is an order of magnitude greater than the expected signal, it is expected that analysis of these data would lead to loss of the signal, and therefore random distribution. %
If the photometric redshift is not systematically biased, it is expected that the error in one direction would balance the error in the opposite direction, and therefore any 3D axis that exists in the data will also exist at approximately the same location when the photometric redshift is used. But because the distances between the galaxies when using the photometric redshift are not the accurate distances, even if the error of the photometric redshift is symmetric, such 3D analysis would lead to a weaker signal due to the inaccurate distribution of the galaxies in the 3D space.
In addition to the relatively high error of the photometric redshift, the photometric redshift was determined by using machine learning, and therefore was based on complex data-driven rules that are very difficult to characterize in an intuitive manner. Because it is based on machine learning output of multiple parameters that are not directly related to the spectra, it can also be systematically biased. The general nature of the systematic bias of photometric redshift has been discussed and analyzed in multiple previous studies such as \citep{wittman2009lies,bernstein2010catastrophic,dahlen2013critical,rau2015accurate,tanaka2018photometric}.
The catalog of photometric redshifts \citep{paul2018catalog} that was used by \cite{iye2020spin} is also systematically biased. For instance, according to that catalog \citep{paul2018catalog}, the mean photometric redshift of the 289,793 galaxies in the RA range $(180^o-210^o)$ is 0.1606$\pm$0.0002, and the mean photometric redshift of the 289,710 galaxies in the RA range $(210^o-240^o)$ is 0.1578$\pm$0.0002. The two-tailed probability of having such a difference by chance is $\sim$0.0001. Even after applying a Bonferroni correction to all of the 12 30$^o$ RA slices, the probability is still less than 0.0012. In the RA range $(330^o-360^o)$, the mean photometric redshift of the 323,028 galaxies is 0.1705$\pm$0.0002, which is significantly different (P$<$0.001) from the mean photometric redshift in the two other RA slices.
Since the magnitude of the galaxies in that catalog was limited to $i<18$ in all RA ranges, and no criteria were used to select the galaxies, the difference in the mean distance of the galaxies can be viewed as evidence of cosmological-scale anisotropy. However, these results are more likely driven by the systematic bias of the redshift catalog rather than a true reflection of the large-scale structure. It is possible that such differences in redshift are the result of differences in the cosmic voids in different parts of the sky. However, according to a catalog of cosmic voids in SDSS \citep{mao2017cosmic}, the mean spectroscopic redshifts of the voids in the three RA ranges are 0.4724$\pm$0.0084, 0.4615$\pm$0.0088, and 0.4525$\pm$0.012, for the RA ranges $(180^o-210^o)$, $(210^o-240^o)$, and $(330^o-360^o)$, respectively. The differences between the mean spectroscopic redshifts of the cosmic voids are not statistically significant. Also, in the RA range $(330^o-360^o)$ the mean photometric redshift is higher than in the RA range $(210^o-240^o)$, while the mean spectroscopic redshift of the voids is lower. In the RA range $(180^o-210^o)$ the mean photometric redshift is lower than in the RA range $(330^o-360^o)$, while the mean spectroscopic redshift of the voids is higher. That inconsistency suggests that the differences in the mean photometric redshift are not necessarily driven by cosmic voids, but more likely by the systematic bias of the photometric redshift. The photometric redshift is a complex probe for studying subtle cosmological-scale anisotropies. Since it is based on non-intuitive data-driven machine learning rules, it is also difficult to fully profile at a precision level.
The combination of high error and systematic bias makes the 3D analysis with the photometric redshift difficult to predict, and therefore an unreliable probe for studying possible subtle violations of the cosmological isotropy assumption. Statistical observations of cosmological-scale anisotropy that are based on the photometric redshift must therefore be analyzed with extreme caution. As the analysis of \citep{iye2020spin} requires the redshift of each galaxy, the absence or presence of non-random distribution must be analyzed with the spectroscopic redshift rather than the photometric redshift.
\section{Limiting the redshift range}
\label{limiting_redshift}
By applying the 3D analysis using the photometric redshift, \cite{iye2020spin} identified a dipole axis at $(\alpha=26^o,\delta=23^o)$ with statistical significance of $4.00\sigma$. However, the 4.00$\sigma$ was shown when the redshift of the galaxies in the dataset was limited to $z_{phot}<0.1$. As shown in \citep{shamir2020patterns}, there is no statistically significant asymmetry when using galaxies with redshift range of $z<0.1$. Therefore, the dipole axis shown in \citep{iye2020spin} is in strong disagreement with the dipole axis shown in \citep{shamir2020patterns}. %
As shown in previous experiments, if galaxy spin directions are not distributed fully randomly, such distribution might be related to the redshift range \citep{shamir2020patterns,shamir2020pasa}. For instance, it has been shown that when normalizing the redshift distribution, SDSS and Pan-STARRS show a near-identical profile of the S/Z distribution \citep{shamir2020patterns}. Similarly, using SDSS galaxies with relatively high redshift (z$>$0.15) provides a profile of asymmetry that is very similar to the asymmetry of galaxies imaged by HST \citep{shamir2020pasa}.
As shown in \citep{shamir2020patterns}, if indeed there is an asymmetry between the number of Z and S galaxies, that asymmetry might not be present at lower redshifts. Tables 3, 5, 6, and 7 in \citep{shamir2020patterns} show random distribution at z$<$0.15 \citep{shamir2020patterns}. An analysis of a dipole axis when limiting the redshift to $z<0.15$ showed no statistically significant dipole axis. Therefore, the random distribution at $z_{phot}<0.1$ reported in \citep{iye2020spin} is in full agreement with \citep{shamir2020patterns}.
When not limiting the redshift, \cite{iye2020spin} report on a dipole axis of 1.29$\sigma$ at $(\alpha=181^o,\delta=31^o)$. The stronger signal when the redshift range is higher is in agreement with \citep{shamir2020patterns}. However, \cite{iye2020spin} used the photometric redshift for a 3D analysis. The reported asymmetry between the number of galaxies with opposite spin directions is $\sim$1\%, and therefore the error of the data was far greater than the expected signal. When analyzing data with error greater than the signal, random distribution is expected. %
\section{Reanalysis of the Iye catalog}
\label{reanalysis}
As discussed in Sections~\ref{photometric_redshift} and~\ref{limiting_redshift}, to show a statistical signal of 0.29$\sigma$, \cite{iye2020spin} used the photometric redshift in a 3D analysis, and also limited the dataset to z$<$0.1. Each of these decisions can eliminate the presence of a possible signal. Therefore, evidence of random or non-random distribution of Z and S spiral galaxies can be studied by using the exact same 72,888 galaxies used in \citep{iye2020spin}, but without limiting the dataset by redshift, and without using a 3D analysis that is based on the photometric redshift. The data are available at \url{https://people.cs.ksu.edu/~lshamir/data/assym_72k/}.
A dipole axis in the S/Z distribution means that in one hemisphere there are more Z spiral galaxies than S spiral galaxies, while in the opposite hemisphere there is a higher number of S spiral galaxies than Z galaxies. Table~\ref{hemispheres} shows the number of S-wise and Z-wise galaxies in the hemisphere centered at RA=160$^o$, and in the opposite hemisphere.
\begin{table*}
\scriptsize
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Hemisphere & \# Z-wise & \# S-wise & $\frac{\#Z}{\#S}$ & P & P \\
(RA) & & & & (one-tailed) & (two-tailed) \\
\hline
$70^o-250^o$ & 23,037 & 22,442 & 1.0265 & 0.0026 & 0.0052 \\
$>250^o \cup <70^o$ & 13,660 & 13,749 & 0.9935 & 0.29 & 0.58 \\
\hline
\end{tabular}
\caption{The number of Z-wise and S-wise galaxies in the \cite{iye2020spin} catalog in the RA hemisphere centered at 160$^o$, and in the opposite hemisphere (centered at RA=340$^o$). The dataset is used without redshift limit. The P values are based on binomial distribution such that the probability of a galaxy to have Z-wise or S-wise spin is 0.5.}
\label{hemispheres}
\end{table*}
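The one-tailed P values in the table follow from the binomial model stated in the caption. A short sketch (normal approximation to the binomial, counts taken from the table) reproduces the value for the hemisphere centered at RA=160$^o$:

```python
import math

# Reproducing the one-tailed P value in the table with a normal
# approximation to the binomial: under the null hypothesis, each galaxy
# is Z-wise with probability 0.5.
def binomial_tail(k, n, p=0.5):
    """One-tailed P(X >= k) for X ~ Binomial(n, p), normal approximation."""
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    z = (k - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

n_z, n_s = 23037, 22442            # hemisphere centered at RA=160
p_one = binomial_tail(n_z, n_z + n_s)
print(round(p_one, 4))             # ~0.0026, as in the table
```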
Statistically significant signal is observed in one hemisphere. The asymmetry in the opposite, less populated, hemisphere is not statistically significant. But because it has more S-wise galaxies than Z-wise galaxies, it is also not in conflict with the distribution in the hemisphere centered at (RA=160$^o$) for forming a dipole axis in the dataset. That simple analysis provides evidence that the distribution in the specific dataset used in \citep{iye2020spin} might not be random. Due to the deterministic nature of the algorithm, repeating the same experiment after mirroring the galaxy images using the ``flip'' command of {\it ImageMagick} led to identically inverse results. For instance, after mirroring the galaxy images the hemisphere $70^o-250^o$ had 22,442 galaxies spinning clockwise, and 23,037 galaxies spinning counterclockwise.
One of the analyses in \citep{iye2020spin} is a simple difference between the number of galaxies that spin clockwise and the number of galaxies that spin counterclockwise. That analysis, however, is done for the entire sky, without separating the sky into two opposite hemispheres. When just counting galaxies in the entire sky, the asymmetry in one hemisphere offsets the asymmetry in the opposite hemisphere. The asymmetry of 1.8$\sigma$ observed in \citep{iye2020spin} can be attributed to the higher number of galaxies in one hemisphere, leading to asymmetry in the total number of galaxies. As shown in Table~\ref{hemispheres}, when separating the sky into two opposite hemispheres the signal becomes statistically significant. As shown in \citep{shamir2021large}, when the number of galaxies is high, the asymmetry in one hemisphere is nearly exactly inverse to the asymmetry in the opposite hemisphere.
As was done in \citep{shamir2012handedness,shamir2020pasa,shamir2020patterns}, from each possible $(\alpha,\delta)$ combination in the sky, the angular distance between that point and all galaxies in the dataset was computed. That was done by using the standard angular distance between two points on a sphere. The angular distance $\phi$ between $(\alpha,\delta)$ and galaxy $\Psi$ is determined simply by $\phi=\arccos\left( \sin\delta \sin\Psi_{\delta} + \cos\delta \cos\Psi_{\delta} \cos(\alpha-\Psi_{\alpha}) \right)$. That analysis does not require the redshift, and therefore can be done with galaxies that do not have spectra.
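The angular-distance computation described above can be sketched as follows (a standard great-circle formula; the clamping of the cosine to $[-1,1]$ is an implementation detail added here to guard against floating-point round-off):

```python
import math

# Great-circle angular distance between a candidate axis (alpha, delta)
# and a galaxy position, both given in degrees.
def angular_distance(alpha1, delta1, alpha2, delta2):
    """Angular distance in degrees between two sky positions in degrees."""
    a1, d1, a2, d2 = map(math.radians, (alpha1, delta1, alpha2, delta2))
    cos_phi = (math.sin(d1) * math.sin(d2)
               + math.cos(d1) * math.cos(d2) * math.cos(a1 - a2))
    # Clamp to [-1, 1] so acos never receives an out-of-domain value.
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_phi))))

print(angular_distance(10, 20, 10, 20))   # 0.0 (same point)
print(angular_distance(0, 90, 0, -90))    # 180.0 (pole to pole)
```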
Then, a $\chi^2$ statistic was used to fit the spin direction distribution to cosine dependence. That was done by fitting $d\cdot|\cos(\phi)|$ to $\cos(\phi)$, where $\phi$ is the angular distance between the galaxy and $(\alpha,\delta)$, and {\it d} is a number within the set $\{-1,1\}$, such that $d$ is 1 if the galaxy spins clockwise, and -1 if the galaxy spins counterclockwise. The $\chi^2$ was compared to the average of the $\chi^2$ computed in $10^3$ runs such that the $d$ of each galaxy was assigned a random number within \{-1,1\}. The standard deviation of the $\chi^2$ of the $10^3$ runs was also computed. Then, the $\sigma$ difference between the $\chi^2$ computed with the actual spin directions and the $\chi^2$ computed with the random spin directions provided the probability of having a dipole axis at that $(\alpha,\delta)$ combination by chance.
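The fitting procedure can be sketched as follows, for a single candidate axis and with synthetic galaxy data. The exact $\chi^2$ normalization is not spelled out in the text, so dividing each term by $|\cos\phi|+\epsilon$ is an assumption made here; the qualitative behavior (aligned spins lower the $\chi^2$ relative to randomized spins) does not depend on it.

```python
import math, random

# Minimal sketch of the dipole-fit procedure above, run for one candidate
# axis with synthetic galaxy data. The chi^2 normalization (|cos phi|+eps)
# is an assumption; the text does not specify the exact form.
random.seed(1)

def chi2_for_axis(phis, spins, eps=1e-9):
    """Fit d*|cos(phi)| to cos(phi); perfectly aligned spins give chi^2 = 0."""
    return sum((d * abs(math.cos(p)) - math.cos(p)) ** 2 / (abs(math.cos(p)) + eps)
               for p, d in zip(phis, spins))

# Synthetic galaxies: angular distances (radians) to the candidate axis,
# with a 5% dipole-like modulation of the spin directions.
n = 2000
phis = [random.uniform(0, math.pi) for _ in range(n)]
spins = [1 if random.random() < 0.5 + 0.05 * math.cos(p) else -1 for p in phis]
aligned = [1 if math.cos(p) >= 0 else -1 for p in phis]   # perfect dipole

chi2_obs = chi2_for_axis(phis, spins)
randoms = [chi2_for_axis(phis, [random.choice((1, -1)) for _ in range(n)])
           for _ in range(500)]
mean = sum(randoms) / len(randoms)
sd = math.sqrt(sum((x - mean) ** 2 for x in randoms) / len(randoms))
sigma = (mean - chi2_obs) / sd   # excess alignment lowers the chi^2
print(chi2_for_axis(phis, aligned))   # 0.0 for a perfect dipole
print(round(sigma, 2))
```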
Figure~\ref{dipole1} shows the statistical significance of the dipole axis for each possible pair of ($\alpha$,$\delta$) in increments of five degrees. The most likely location of the dipole axis was identified at $(\alpha=165^o,\delta=40^o)$, and the probability of the axis is $\sim$2.1$\sigma$. The 1$\sigma$ error range of the position of the axis is $(84^o, 245^o)$ for the right ascension, and $(-41^o, 90^o)$ for the declination. Figure~\ref{dipole_random} shows the likelihood of the dipole axis when the galaxies are assigned random spin directions, showing a much lower probability of $<1\sigma$.
Figure~\ref{dipole1_high} shows the results of the same analysis when using just the 24,799 galaxies with $z_{phot}>0.1$. When using the galaxies with the higher redshift, the most likely dipole axis is identified at $(\alpha=145^o,\delta=-10^o)$, and the statistical signal increases to $\sim$2.42$\sigma$. Figure~\ref{dipole1_low} shows the same analysis when the galaxies are limited to $z_{phot}<0.1$, with a signal that is not statistically significant. The increased signal when the redshift gets higher is in agreement with the results reported in \citep{shamir2020patterns,shamir2022new,shamir2022large}. As discussed in \citep{shamir2022new,shamir2022large}, one explanation for the change in location and strength of the signal is that such an axis, if it exists, does not necessarily go directly through Earth.
Another experiment tested the impact of the inaccuracy of the locations of the galaxies on the results. For that purpose, an error of 18.5\% was added to the RA and Dec of each galaxy in the dataset. To ensure that the error is added in a symmetric manner, the direction of the error was random. Figure~\ref{with_18prcnt_error} shows the analysis of the dipole axis after adding the error to the locations of the galaxies. While the change in the location of the most likely axis at $(\alpha=155^o,\delta=15^o)$ is minor, the statistical strength of the axis drops to 1.41$\sigma$. That shows that an error added randomly to the locations of the galaxies might not change the location of the dipole axis, but it reduces the signal.
\section{Error in the galaxy annotation}
\label{error}
The statistical signal observed in the analysis of Section~\ref{reanalysis} can also be the result of the annotation of the galaxies, as even a subtle bias in the annotation process leads to a very strong statistical signal \citep{shamir2021particles}. The algorithm used for annotating the galaxies \citep{shamir2011ganalyzer} is fully symmetric. It works by using clearly defined rules that are completely symmetric. It is a model-driven algorithm that is not based on machine learning or deep learning algorithms, which cannot be fully symmetric due to their sensitivity to the data they are trained by, the initial weights, and even the order of the training samples. Deep learning algorithms might also be driven by biases that are very difficult to characterize \citep{dhar2021evaluation}, specifically in the case of galaxy images \citep{dhar2022systematic}. Because the algorithm is symmetric, inaccurate annotations are expected to impact galaxies that spin clockwise in the same way they impact galaxies that spin counterclockwise.
Several experiments were performed in previous work by repeating the analyses after mirroring the galaxy images \citep{shamir2012handedness,shamir2020large,shamir2020patterns,shamir2020pasa,shamir2021large,shamir2022new,shamir2022large}. In all cases, the results were identically inverse compared to the results with the original images. For instance, Section 2.2 in \citep{shamir2022large} discusses an experiment in which all galaxy images were mirrored, and the results were exactly inverse compared to the results with the original images. The same was done with a far larger dataset as discussed in Section 3 in \citep{shamir2021large}. Figure~\ref{by_ra_mirror} shows the asymmetry in the distribution of 807,898 galaxies imaged by DECam. The experiment is described in \citep{shamir2021large}. The top graph shows the asymmetry observed when analyzing the original images, while the bottom graph shows the asymmetry observed after mirroring the images using the {\it flip} command of the {\it ImageMagick} software tool, and using the lossless TIF image file format. The asymmetry observed with the mirrored galaxies is inverse to the asymmetry observed with the original images.
Additional empirical evidence that the observed asymmetry is not the result of annotation bias is the inverse asymmetry observed in opposite hemispheres. The inverse asymmetry in opposite hemispheres has been described in previous reports \citep{shamir2020patterns,shamir2021large,shamir2022new,shamir2022large,shamir2022analysis}, and can also be observed in Figure~\ref{by_ra_mirror}. If the algorithm were systematically biased, the same asymmetry should have been observed in all parts of the sky, and would not be expected to invert in opposite hemispheres as shown in Table~\ref{hemispheres}. That evidence adds to the multiple experiments of mirroring the galaxy images, which led to exactly inverse results \citep{shamir2020patterns,shamir2021large,shamir2022new,shamir2022large}.
If the galaxy annotation algorithm had a certain error in the annotation of the galaxies, the asymmetry {\it A} can be defined by Equation~\ref{asymmetry}.
\begin{equation}
A=\frac{(N_{cw}+E_{cw})-(N_{ccw}+E_{ccw})}{N_{cw}+E_{cw}+N_{ccw}+E_{ccw}},
\label{asymmetry}
\end{equation}
where $E_{cw}$ is the number of Z galaxies incorrectly annotated as S galaxies, and $E_{ccw}$ is the number of S galaxies incorrectly annotated as Z galaxies. Because the algorithm is symmetric, the number of S galaxies incorrectly annotated as Z is expected to be roughly the same as the number of Z galaxies misclassified as S, and therefore $E_{cw} \simeq E_{ccw}$ \citep{shamir2021particles}. The asymmetry {\it A} can therefore be written as Equation~\ref{asymmetry2}.
\begin{equation}
A=\frac{N_{cw}-N_{ccw}}{N_{cw}+E_{cw}+N_{ccw}+E_{ccw}}
\label{asymmetry2}
\end{equation}
Since $E_{cw}$ and $E_{ccw}$ cannot be negative, a higher rate of incorrectly annotated galaxies is expected to make {\it A} lower. Therefore, incorrect annotation of galaxies is not expected to lead to asymmetry, and can only make the asymmetry lower rather than higher.
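The dilution effect described by Equation~\ref{asymmetry2} can be illustrated with a short numerical sketch. The counts below are hypothetical and serve only to show that a symmetric error leaves the numerator unchanged while inflating the denominator:

```python
# Illustrative sketch of Equation (asymmetry2); all counts are hypothetical.
def observed_asymmetry(n_cw, n_ccw, e_cw, e_ccw):
    """Asymmetry A: the N counts are correct annotations,
    the E counts are annotation errors."""
    return ((n_cw + e_cw) - (n_ccw + e_ccw)) / (n_cw + e_cw + n_ccw + e_ccw)

# A clean dataset with a real excess of clockwise galaxies.
a_clean = observed_asymmetry(50_700, 50_000, 0, 0)

# The same dataset with a symmetric annotation error (E_cw == E_ccw):
# the numerator is unchanged, the denominator grows, so |A| shrinks.
a_noisy = observed_asymmetry(50_700, 50_000, 5_000, 5_000)

print(a_clean, a_noisy)
```

With these numbers the clean asymmetry is $700/100{,}700$, while the noisy one drops to $700/110{,}700$; symmetric errors can only weaken the signal, never create it.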
An experiment of intentionally annotating some of the galaxies incorrectly showed that the results do not change significantly even when as many as 25\% of the galaxies are assigned incorrect spin directions, as long as the error is added to both Z and S galaxies \citep{shamir2021particles}. But if the error is added in an asymmetric manner, even a small asymmetry of 2\% leads to a very strong statistical signal, and a dipole axis that peaks exactly at the celestial pole \citep{shamir2021particles}.
As described in \citep{shamir2021large}, not all galaxies are spiral, and not all spiral galaxies have a spin direction that can be identified visually. An example of spiral galaxies whose spin direction cannot be determined is edge-on spiral galaxies. Therefore, the spin directions of many of the galaxies cannot be identified, and these galaxies cannot be used in the analysis. A potential source of bias could arise if the galaxies whose spin direction is not identified are not distributed evenly between galaxies that spin clockwise and galaxies that spin counterclockwise. That is, the number of galaxies that in reality spin clockwise and are also identified by the algorithm as spinning clockwise could be higher than the number of galaxies that in reality spin counterclockwise and are identified as spinning counterclockwise. In that case, the dataset of annotated galaxies can be completely clean, but still exhibit a bias due to the selection of a higher number of galaxies that spin clockwise.
If a galaxy has obvious visible spin patterns, it can be determined that the galaxy spins. However, if a galaxy does not have a clearly identifiable spin pattern, that does not necessarily mean that the galaxy has no clear spin direction. For instance, Figure~\ref{sdss_vs_hst} shows an example of a galaxy at $(\alpha=150.329^o, \delta=1.603^o)$ imaged by HST and by SDSS. The SDSS image of the galaxy does not have an identifiable spin direction, and the galaxy in that image does not necessarily seem spiral. The more powerful HST shows that the galaxy is spiral, with very clear counterclockwise patterns. That example shows that it is practically impossible to separate galaxies that have spin patterns from galaxies that do not seem to spin. Therefore, such analysis has to rely on the characterization of the data, and the algorithm used for a study of this kind must be fully symmetric, as described above.
Assuming that an annotation algorithm selects a higher number of galaxies of a certain spin direction, it is possible to predict the expected distribution of the spin directions. Given that some galaxies spin in a certain direction but are determined by the algorithm to have an unidentifiable spin direction, the observed asymmetry A can be defined by Equation~\ref{asymmetry3}.
\begin{equation}
A= \frac{ R_{cw} \cdot u_{cw} - R_{ccw} \cdot u_{ccw} } { R_{cw} \cdot u_{cw} + R_{ccw} \cdot u_{ccw} } ,
\label{asymmetry3}
\end{equation}
where $R_{cw}$ and $R_{ccw}$ are the numbers of galaxies in the dataset that indeed spin clockwise and counterclockwise, respectively. $u_{cw}$ is the fraction of galaxies in the dataset that spin clockwise, and their spin direction is also identified correctly by the algorithm and therefore these galaxies were used in the analysis. Similarly, $u_{ccw}$ is the fraction of galaxies in the dataset that spin counterclockwise, and their spin directions were identified by the annotation algorithm.
If the real distribution of the spin directions in the dataset is fully symmetric, we can assume that $R_{cw} \simeq R_{ccw}$. In that case, the observed asymmetry $A$ is defined by Equation~\ref{asymmetry4}.
\begin{equation}
A= \frac{ (u_{cw} - u_{ccw}) } { (u_{cw} + u_{ccw}) } ,
\label{asymmetry4}
\end{equation}
Because the algorithm does not change during the analysis, the asymmetry $A$ is expected to be a constant. As shown in Figure~\ref{by_ra_mirror} and in several previous experiments \citep{shamir2020patterns,shamir2021large,shamir2022new,shamir2022large}, the asymmetry changes consistently in different parts of the sky. Moreover, the sign of the asymmetry flips in opposite hemispheres. For instance, Table~\ref{hemispheres2} shows the number of clockwise and counterclockwise galaxies imaged by DECam in opposite hemispheres, as described in \citep{shamir2021large}. The table shows statistically significant inverse asymmetry observed in opposite hemispheres. Because the algorithm does not change during the analysis, a selection bias would have been expected to show a consistent asymmetry $A$ in all parts of the sky. It is difficult to think of an algorithm that does not change during the analysis, but selects a higher number of clockwise galaxies in one hemisphere, and a higher number of counterclockwise galaxies in the opposite hemisphere.
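The implication of Equation~\ref{asymmetry4} can be illustrated with a short numerical sketch. The identification rates below are hypothetical; the point is that a fixed selection bias produces the same asymmetry regardless of the size of the sky region, and therefore cannot flip sign between hemispheres:

```python
# Sketch of Equations (asymmetry3)-(asymmetry4); the rates are hypothetical.
def observed_asymmetry(r_cw, r_ccw, u_cw, u_ccw):
    """Observed asymmetry A when the algorithm identifies the spin of a
    fraction u_cw of clockwise and u_ccw of counterclockwise galaxies."""
    return (r_cw * u_cw - r_ccw * u_ccw) / (r_cw * u_cw + r_ccw * u_ccw)

# A biased algorithm that identifies 31% of clockwise galaxies
# but only 30% of counterclockwise galaxies.
u_cw, u_ccw = 0.31, 0.30

# Two sky regions of very different sizes, each truly symmetric (R_cw == R_ccw).
a_small_region = observed_asymmetry(10_000, 10_000, u_cw, u_ccw)
a_large_region = observed_asymmetry(400_000, 400_000, u_cw, u_ccw)

print(a_small_region, a_large_region)
```

Both regions yield the identical asymmetry $(u_{cw}-u_{ccw})/(u_{cw}+u_{ccw})$, with the same sign everywhere in the sky, in contrast to the sign flip observed in the data.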
\begin{table*}
\caption{The number of DECam galaxies spinning clockwise and counterclockwise in opposite hemispheres. The P values are the binomial probabilities of observing such asymmetry or stronger when the probability of a galaxy spinning in each direction is 0.5 \citep{shamir2021large}. The vast majority of the galaxies are in the Southern hemisphere.}
\label{hemispheres2}
\centering
\begin{tabular}{lcccc}
\hline
\hline
Hemisphere (degrees) & \# cw galaxies & \# ccw galaxies & $\frac{cw-ccw}{cw+ccw}$ & P \\
\hline
$(0^o-150^o \cup 330^o-360^o)$ & 264,707 & 262,559 & 0.004 & 0.0015 \\
$(150^o-330^o)$ & 139,719 & 140,913 & -0.004 & 0.0121 \\
\hline
\hline
\end{tabular}
\end{table*}
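The P values in Table~\ref{hemispheres2} can be reproduced with a short calculation. The following is a sketch using the normal approximation to the binomial distribution, which is accurate for counts of this magnitude (the original analysis may have used an exact method):

```python
import math

def one_tailed_p(n_cw, n_ccw):
    """One-tailed binomial probability of a split at least this uneven
    under p = 0.5, using the normal approximation."""
    n = n_cw + n_ccw
    z = abs(n_cw - n_ccw) / math.sqrt(n)  # excess over the mean, in sigmas
    return 0.5 * math.erfc(z / math.sqrt(2))

p_first = one_tailed_p(264_707, 262_559)   # first hemisphere row
p_second = one_tailed_p(139_719, 140_913)  # second hemisphere row
print(round(p_first, 4), round(p_second, 4))
```

The approximation recovers the tabulated values of $\sim$0.0015 and $\sim$0.0121.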
\subsection{Other biases in the selection of the galaxies}
\label{selection_bias}
A possible source of error or bias can be the selection of the galaxies. Using certain selection criteria or previous catalogs of galaxy morphology can lead to bias carried over from the catalog and affecting the rest of the analysis. For instance, a galaxy morphology catalog can be biased due to human or machine learning bias. Both of these possible biases are very difficult to detect and profile: human perception is complex to understand or reproduce, and machine learning is based on complex, non-intuitive, data-driven rules that are difficult to control and to formally verify as unbiased. For instance, it has been shown that an uneven distribution of the sky locations of the training samples can lead to a mild but consistent bias in deep neural networks \citep{dhar2022systematic}.
To annotate the galaxies by their spin directions, one might apply a first step of using machine learning or crowdsourcing to identify spiral galaxies, and then apply an algorithm to the galaxies classified as spiral to identify their spin direction \citep{hayes2017nature}. While this approach does not necessarily lead to a biased dataset, it also introduces a risk that bias in the machine learning algorithm that is used to select spiral galaxies might be carried over to the rest of the analysis. That is, if the selection of the spiral galaxies is biased in some way due to machine learning or human bias, that bias can affect the entire analysis. Such bias might be unintuitive and difficult to identify.
Because machine learning algorithms use all possible information that can differentiate between the different classes in the training set, these algorithms can be biased by aspects that are not intuitive or obvious to the user \citep{dhar2021evaluation}. For instance, a machine learning algorithm trained to classify between elliptical and spiral galaxies has been shown to also ``learn'' the background sky \citep{dhar2022systematic}. As a result, when the same set of galaxies was annotated by versions of the algorithm trained with spiral and elliptical galaxies taken from different parts of the sky, the resulting distributions of elliptical and spiral galaxies were significantly different, and corresponded to the part of the sky from which the training galaxies were taken. The experiment showed that merely selecting training samples from different parts of the sky led to two significantly different distributions when annotating the exact same set of galaxies.
To avoid such possible biases, the galaxies should be selected without prior analysis, and in particular without complex and unexplainable analyses such as deep neural networks or other forms of machine learning. In \citep{shamir2020patterns}, which is Dataset 7 in Table~\ref{datasets}, all galaxies with spectra in SDSS DR 14 were used. The only selection criterion was the Petrosian radius: all SDSS galaxies with spectra and Petrosian radius greater than 5.5'' were selected and analyzed. Assuming that galaxies that spin clockwise are, on average, as large as galaxies that spin counterclockwise, this simple criterion is not expected to lead to bias in the data. Then, the Ganalyzer algorithm was applied to all galaxies without any prior selection. Galaxies without an identified spin direction, such as elliptical galaxies, were rejected from the analysis. As shown in \citep{shamir2011ganalyzer}, Ganalyzer can differentiate between spiral and elliptical galaxies, and does so in a fully symmetric manner, without using any form of machine learning or pattern recognition.
In the dataset of DESI Legacy Survey objects used in \citep{shamir2021large}, which is Dataset 13 in Table~\ref{datasets}, the galaxies did not have spectra. In that case, the only selection criterion was to select extended objects with r magnitude brighter than 19.5. That criterion was necessary due to the extreme size of the data. Assuming that galaxies that spin clockwise are, on average, as bright as galaxies that spin counterclockwise, such simple selection is not expected to lead to a preference for galaxies that spin in a certain direction. That selection does not involve machine learning or human selection that can be biased in a manner that is difficult to notice and profile. A similar selection criterion was used in Dataset 8 of Pan-STARRS galaxies: the galaxies were selected from the entire Pan-STARRS dataset as extended objects with r magnitude of less than 19 \citep{timmis2017catalog}. No other selections, such as morphological catalogs, were used.
As also mentioned in Section~\ref{dataset}, the dataset used by \citep{iye2020spin} was not designed or used to analyze the asymmetry in the population of galaxies spinning in opposite directions. The paper from which the data were taken \citep{shamir2017photometric} makes no attempt to identify any kind of axis in galaxies with opposite spin directions, and no attempt to show a dipole axis in that dataset was made in any other paper. The dataset used in \citep{iye2020spin} is a dataset of galaxies that were selected after a first step of applying machine learning to separate spiral galaxies from elliptical galaxies \citep{shamir2017photometric}. That step makes the dataset different from the datasets mentioned above.
The machine learning algorithm used to classify the galaxies \citep{orlov2008wnd} was initially developed for classifying cells in microscopy images. Therefore, the features it uses are rotationally invariant. Unlike deep neural networks, where full rotational invariance is more difficult to control, substantial work has been done in the past to develop specific mathematical descriptors that can reflect the visual content and are not affected when the image is rotated or mirrored. Therefore, the machine learning algorithm is not expected to be sensitive to the spin direction of the galaxy. However, machine learning works by complex rules that are not intuitive to understand, and can therefore lead to biases that are not expected. It is therefore preferred not to use machine learning in any step of such analysis. As mentioned above, the dataset used by \cite{iye2020spin} to show a dipole axis in the distribution of galaxy spin directions was not used for that purpose in the paper from which it was taken \citep{shamir2017photometric}, or in any other paper. Experiments that show asymmetry in the distribution of galaxy spin directions \citep{shamir2020patterns,shamir2020pasa, shamir2021large,shamir2022new,shamir2022large} did not use any kind of machine learning, and the selection of the galaxies was done by using very simple rules. In all of these cases there was no first step of selecting spiral galaxies, which is a step that in itself might lead to bias if done by human annotation or machine learning.
\section{Comparing to other datasets}
\label{other_datasets}
The results shown in Section~\ref{reanalysis} can be compared to results published in previous studies. One of the first attempts to use a large number of galaxies to study the distribution of galaxy spin directions was Galaxy Zoo \citep{land2008galaxy}, in which galaxies were annotated manually by several hundred thousand non-expert volunteers. That effort showed a non-significant distribution that peaked at $(\alpha=161^o,\delta=11^o)$, as also mentioned in \citep{longo2011detection}. That location is close to the location of the most probable axis shown in Section~\ref{reanalysis}. The axis reported with Galaxy Zoo data is not statistically significant. The absence of statistical signal can also be attributed to the strong human bias \citep{land2008galaxy}, which led to substantial corrections during which most of the data could not be used. The average redshift of the galaxies in that dataset was $\sim$0.07.
The main downside of the Galaxy Zoo annotations was that the galaxies were not mirrored randomly, and therefore the perceptional bias of the volunteers was not offset. Section~\ref{previous_studies} provides an additional discussion of the Galaxy Zoo experiment. Another attempt to study the possible asymmetry was made by \cite{longo2011detection}, who used 15,158 manually annotated galaxies. These galaxies were mirrored randomly to offset the possible human bias. The study resulted in a dipole axis in the galaxy spin directions with the most probable axis at $(\alpha=217^o,\delta=32^o)$, with a very strong signal of $>5\sigma$. That location falls within the 1$\sigma$ error range of the most likely axis shown in Section \ref{reanalysis}, which spans $(84^o, 245^o)$ in RA. The declinations of the two axes are nearly identical. The downside of that study was that the galaxies were annotated manually by five undergraduate students, and therefore the annotations are subject to possible biases related to human labor.
\cite{longo2011detection} does not specify the exact average redshift of the galaxies, but specifies that the redshift was limited to 0.085. The average redshift of that dataset was therefore approximated by the average redshift of the galaxies within the range $(z<0.085)$ in the ``superclean'' Galaxy Zoo dataset and in \citep{shamir2020patterns}. In both cases the average redshift was $\sim$0.05, and therefore 0.05 was used as the average redshift of the galaxies in that dataset.
While all the datasets mentioned above were taken from SDSS, a comparison should be made across more than a single telescope to show that if such asymmetry exists, it is not a feature of a single instrument or photometric pipeline. Relevant sky surveys other than SDSS that collect large datasets covering a sufficiently large footprint are Pan-STARRS \citep{shamir2020patterns} and the DESI Legacy Survey \citep{shamir2021large}. As shown in previous work, the location of the most likely dipole axis changes consistently as the redshift of the galaxies gets higher \citep{shamir2020patterns,shamir2022new}. That is, datasets that have similar redshift distributions tend to provide similar profiles of asymmetry, while the location of the peak of the axis changes when the redshift of the galaxies changes \citep{shamir2020patterns,shamir2022new}. That can be viewed as an indication that if such an axis exists, it does not necessarily go directly through Earth, as discussed in \citep{shamir2022new} and briefly in Section~\ref{conclusion}.
Table~\ref{comparison} summarizes the most likely peak shown in Section~\ref{reanalysis} for the dataset used in \citep{iye2020spin}, and the peaks of the axes detected in the other datasets mentioned above. While the locations of the peaks are not identical, they are within the 1$\sigma$ error of the peak of the axis analyzed in Section~\ref{reanalysis}. The table also shows the number of galaxies in each dataset. The Pan-STARRS dataset is the smallest, with merely $\sim3.3\cdot10^4$ galaxies, and its small size can explain the lower statistical signal, slightly below 2$\sigma$. The largest dataset is the galaxies of the DESI Legacy Survey imaged by DECam, with over $8\cdot10^5$ galaxies. The table also shows the number of galaxies annotated as clockwise and as counterclockwise in each dataset. As discussed in Section~\ref{error}, the sign of the asymmetry flips in opposite hemispheres, and therefore the asymmetry in one hemisphere is offset by the asymmetry in the opposite hemisphere. The total number of galaxies in a dataset that spin in opposite directions is therefore not an informative measurement of its asymmetry, as these numbers are heavily dependent on the dataset footprint. In the dataset of \cite{longo2011detection}, the number of galaxies spinning in each direction was determined from Figure 2 of that paper.
\begin{table*}[h]
\scriptsize
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Telescope & Reference & RA & Dec & Significance & \# galaxies \\
& & (degrees) & (degrees) & ($\sigma$) & (cw | ccw) \\
\hline
SDSS & This paper & 165 & 40 & 2.1 & 72,888 (36,607 | 36,181) \\
SDSS & \citep{land2008galaxy} & 161 & 11 & $<2\sigma$ & 13,796 (6,902 | 6,894) \\
SDSS & \citep{longo2011detection} & 217 & 32 & 5.15 & 15,258 (7,442 | 7,816) \\
DECam & \citep{shamir2021large} & 237 & 10 & 4.7 & 807,898 (404,426 | 403,472) \\
Pan-STARRS & \citep{shamir2020patterns} & 227 & 1 & 1.9 & 33,028 (16,508 | 16,520) \\
\hline
\end{tabular}
\caption{Most likely dipole axes from previous results using several different telescopes.}
\label{comparison}
\end{table*}
\section{Previous work that showed different conclusions}
\label{previous_studies}
Section~\ref{introduction} mentions briefly several studies that agree with the contention that the distribution of the spin directions of spiral galaxies is not necessarily random. That section also mentions some previous studies that proposed opposite conclusions. The purpose of this section is to analyze these studies and identify possible reasons for these conclusions.
One of the first studies to specifically claim that the spin directions of spiral galaxies are randomly distributed was done by \cite{iye1991catalog}. The conclusion was based on a catalog of 6,525 galaxies (3,257 clockwise and 3,268 counterclockwise) from the Southern hemisphere. To observe a two-tailed statistical significance of P$\simeq$0.05 with that dataset, the distribution needs to be 3,184:3,341 or stronger, which is a $\sim$5\% difference between the number of galaxies that spin clockwise and galaxies that spin counterclockwise. The asymmetry reported here is far smaller, at $\sim$1.4\%, which is comparable to or greater than the asymmetry reported in previous studies \citep{shamir2020patterns,shamir2021large}. Therefore, just $6.5\cdot10^3$ galaxies are not sufficient to identify the magnitude of asymmetry reported here or in previous studies. Based on the binomial distribution, when assuming the probability of a galaxy spinning in a certain direction is 0.5, showing a one-tailed statistically significant difference of P=0.05 when the difference between the number of clockwise and counterclockwise galaxies is 1.4\% requires a minimum population of 55,818 galaxies: a 27,715:28,103 split of such a dataset provides a $\sim$1.4\% difference and P$\simeq$0.05. Therefore, just a few thousand galaxies might not be sufficient to show statistically significant asymmetry.
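This sample-size argument can be checked with a short calculation; the sketch below uses the normal approximation to the binomial distribution (the exact method used to derive the quoted numbers is not specified in the text):

```python
import math

def one_tailed_p(n_big, n_small):
    """One-tailed binomial probability of a split at least this uneven
    under p = 0.5, using the normal approximation."""
    n = n_big + n_small
    z = (n_big - n_small) / math.sqrt(n)
    return 0.5 * math.erfc(z / math.sqrt(2))

# The 6,525-galaxy catalog: a 3,268:3,257 split is nowhere near significance.
p_small_catalog = one_tailed_p(3_268, 3_257)

# A 28,103:27,715 split (one count ~1.4% larger than the other) out of
# 55,818 galaxies sits right at the P ~ 0.05 threshold.
p_minimum_population = one_tailed_p(28_103, 27_715)

print(round(p_small_catalog, 2), round(p_minimum_population, 3))
```

The small catalog yields P$\simeq$0.45, while the 55,818-galaxy split yields P$\simeq$0.05, consistent with the minimum population quoted above.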
Another study that aimed at addressing the same question and concluded that the distribution is random was done by using anonymous volunteers who annotated galaxy images through a web-based user interface \citep{land2008galaxy}. While the annotation of a single anonymous untrained volunteer is not necessarily a reliable piece of information, a group of volunteers who annotate the same galaxy can provide a large number of annotations, and analysis of these annotations can provide meaningful information regarding the spin direction of the galaxy \citep{land2008galaxy}. One of the downsides of the study was that even a group of annotators does not necessarily guarantee accurate annotation. While each galaxy was annotated by multiple volunteers, the different annotations of the same galaxy do not always agree, making it difficult to know which annotations are the correct ones. That requires reducing the data by defining a certain threshold majority as a criterion for the correctness of the annotation \citep{lintott2010}. For instance, for the separation between spiral and elliptical galaxies in Galaxy Zoo 1, just $\sim$40\% of the galaxies had agreement of at least 80\% of the annotators. That reduction led to a smaller dataset, but with a relatively small error of $\sim$3\% \citep{lintott2010}. To reduce the error rate to virtually 0\%, a ``superclean'' criterion of 95\% agreement was used, but just $\sim$12\% of the galaxies met that criterion \citep{lintott2010}. More importantly, the study also showed that human perception is systematically biased \citep{land2008galaxy}.
The initial ``superclean'' dataset used in \citep{land2008galaxy} contained 6,106 galaxies spinning clockwise and 7,034 galaxies spinning counterclockwise. That difference of $\sim$15\% was attributed to the perceptional bias of the human annotators \citep{land2008galaxy}. That bias was not known at the time the study was designed, and therefore the galaxy images were initially not mirrored randomly to correct for it. When the bias was noticed, an experiment with mirrored galaxies was performed to profile the bias, using just a small subset of the data. In that experiment, 5.525\% of the original galaxies were annotated as spinning clockwise and 6.032\% as spinning counterclockwise, while among the mirrored galaxies, 5.942\% were annotated as clockwise and 5.646\% as counterclockwise. While the experiment provided clear evidence of human annotation bias, it also showed that the results with the mirrored images were not identical to the results with the original images, as shown in Table 2 of \citep{land2008galaxy}: after mirroring the galaxies, the number of counterclockwise galaxies was reduced by $\sim$1.5\%, and the number of clockwise galaxies increased by $\sim$2\%. That asymmetry is similar in direction and magnitude to the asymmetry shown in \citep{shamir2020patterns}. The observation reported in \citep{shamir2020patterns} is the most suitable comparison since it also analyzes SDSS galaxies with spectra, and therefore the footprint and distribution of the galaxies are similar.
The size of the entire set of galaxies that were mirrored in \citep{land2008galaxy} is 91,303. Based on that information, the number of original galaxies annotated as clockwise was 5,044, and the number of mirrored galaxies classified as clockwise was 5,155. The one-tailed binomial probability of such asymmetry or stronger occurring by chance is P$<$0.13. While that P value is not considered statistically significant, it corresponds to a certainty of $\sim$87\% in agreement with other reports of a higher number of counterclockwise galaxies among SDSS galaxies with spectra. For the galaxies classified as counterclockwise, the distribution is 5,507:5,425, which provides a P value of 0.21 for such asymmetry or stronger occurring by chance. Because the direction of asymmetry agrees in both clockwise and counterclockwise galaxies, the aggregated P value of having both results by chance is $\sim$0.03. That, however, is an invalid statistical analysis, since the two experiments are not independent: while the annotations in each experiment are independent, it is likely that many of the galaxies annotated in both experiments were in fact the mirrored versions of the same galaxies.
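The two one-tailed P values can be reproduced approximately with a short calculation. The sketch below uses the normal approximation to the binomial distribution and yields values in the same range as those quoted above:

```python
import math

def one_tailed_p(n_big, n_small):
    """One-tailed binomial probability of a split at least this uneven
    under p = 0.5, using the normal approximation."""
    n = n_big + n_small
    z = (n_big - n_small) / math.sqrt(n)
    return 0.5 * math.erfc(z / math.sqrt(2))

# Clockwise annotations: 5,044 in the original images vs. 5,155 mirrored.
p_cw = one_tailed_p(5_155, 5_044)
# Counterclockwise annotations: 5,507 original vs. 5,425 mirrored.
p_ccw = one_tailed_p(5_507, 5_425)

print(round(p_cw, 2), round(p_ccw, 2))
```

The approximation gives P$\simeq$0.14 and P$\simeq$0.22 for the two splits, close to the values of $\sim$0.13 and $\sim$0.21 quoted in the text.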
As mentioned above, the asymmetry observed in \citep{land2008galaxy} agrees in both direction and magnitude with the asymmetry reported in \citep{shamir2020patterns} using SDSS galaxies with spectra, which means similar distribution of the objects within the SDSS footprint. The difference between the two experiments is the much larger size of the data used in \citep{shamir2020patterns}, which naturally provides a stronger statistical signal. It is therefore possible that a larger dataset of objects in \citep{land2008galaxy} would have shown a statistically significant signal. That is obviously an assumption that cannot be verified with the existing dataset of \citep{land2008galaxy}. But in any case, the two studies show results that are in agreement, and do not conflict with each other.
Another study that suggested that the distribution of spin directions of galaxies is random annotated the spin directions using computer analysis \citep{hayes2017nature}. The initial dataset was the same initial dataset of Galaxy Zoo, but instead of annotating the data manually, the researchers used the {\it SPARCFIRE} algorithm and software to identify the spin direction of the galaxies. The clear advantage of using a computer algorithm is that it is not subjected to the human perceptual bias that was dominant in the annotations made by the Galaxy Zoo volunteers. {\it SPARCFIRE} is also a model-driven algorithm that is not based on machine learning, and therefore suitable for the task. A possible limitation of the approach was that the process started with a first step of separating the spiral galaxies from the elliptical galaxies, which was done either by using Galaxy Zoo human annotations of galaxies that were not mirrored randomly, or by machine learning.
When applying the computer analysis to galaxies annotated as spirals by humans, the results showed asymmetry between the number of clockwise and counterclockwise galaxies with statistical significance as high as 2.84$\sigma$, as shown in Table 2 of \citep{hayes2017nature}. That is, when using the galaxy images without making any prior assumptions about their distribution, the asymmetry in the distribution of clockwise and counterclockwise galaxies was statistically significant. That statistically significant observation was described by the authors at the end of Section 3 of \citep{hayes2017nature} as ``surprising''.
A possible explanation would be that the humans who annotated the data were biased towards counterclockwise galaxies when they selected galaxies as spirals. That means that a spiral galaxy that spins counterclockwise is more likely to be selected as spiral compared to a spiral galaxy that spins clockwise. That is a new type of bias that was not known before. To correct for that possible bias, a machine learning algorithm was designed to select the galaxies in a manner that does not depend on its spin direction. The algorithm selected spiral galaxies from a much larger set of galaxies that contained both elliptical and spiral galaxies. The galaxies were selected by training a machine learning algorithm to differentiate between spiral and elliptical galaxies based on morphological descriptors computed by SPARCFIRE \citep{hayes2014}, as well as photometric attributes that allow separation between elliptical and spiral galaxies \citep{banerji2010}. To make the machine learning algorithm symmetric in its selection of spiral galaxies, the machine learning algorithm was trained after rejecting all attributes that could identify between clockwise and counterclockwise galaxies reported in \citep{shamir2016asymmetry}, as well as several other attributes used by {\it SPARCFIRE} that were noticed to differentiate between galaxies with opposite spin directions.
Removing these attributes affects which galaxies are included in the dataset, since features that differ between clockwise and counterclockwise galaxies would otherwise discriminate between them. Regardless of whether the source of the asymmetry is the real sky or a certain bias, giving higher priority to attributes that are known not to be linked to the spin direction will reduce the asymmetry in the predictions compared to the original dataset.
The random forest algorithm has an integrated feature selection process \citep{breiman2001}, and therefore it selects the informative features even when no explicit feature selection algorithm is used. A simple experiment was done by creating a random forest classifier that can differentiate between spiral and elliptical galaxies. For that purpose, a dataset of 19,500 spiral galaxies was taken from the dataset described in Section~\ref{reanalysis}, such that half of them spin clockwise and the other half spin counterclockwise. That set of galaxies made up the training samples of the spiral galaxies. The training samples of the elliptical galaxies were taken from the catalog of \citep{kuminski2016computer}, and included 19,500 galaxies annotated as elliptical with certainty of 0.9 or higher. Using the full set of photometric attributes of each galaxy taken from SDSS DR 8, a classifier $C_1$ was trained using random forest with the default settings of the {\it Scikit-learn} library.
The classifier $C_1$ was then applied to annotate a dataset of 10,000 clockwise galaxies and 9,500 counterclockwise galaxies used in Section~\ref{reanalysis}. After applying classifier $C_1$ to these galaxies, 17,169 galaxies were classified as spiral, and the rest of the galaxies were incorrectly classified by the algorithm as elliptical. The galaxies classified as spiral were distributed such that 8,814 galaxies had clockwise spin patterns, and 8,355 galaxies had counterclockwise spin patterns. That shows that among the galaxies classified by $C_1$ as spiral, there were $\sim$5.2\% fewer galaxies that spin counterclockwise. That asymmetry is very close to the asymmetry in the original dataset of 10,000 clockwise galaxies and 9,500 counterclockwise galaxies, which has exactly 5\% fewer galaxies that spin counterclockwise than galaxies that spin clockwise. The classifier $C_1$ therefore did not lead to annotations that changed the distribution of the spin directions of the galaxies in the dataset that it classified.
Then, a classifier $C_2$ was trained with the same training set that was used to train classifier $C_1$. But for training classifier $C_2$, the 18 attributes from Table 8 in \citep{shamir2016asymmetry} were removed, in addition to isoPhiGrad attributes in all bands, isoPhi, petroR50Err attributes in all bands, petroR90Err attributes in all bands, u and q attributes in all bands, LnDeV attributes in all bands, lnLStar in all bands, and the magnitude attributes. After training $C_2$ with the same random forest using {\it Scikit-learn}, the classifier $C_2$ was applied to annotate the same 10,000 clockwise galaxies and 9,500 counterclockwise galaxies that were annotated by classifier $C_1$. The $C_2$ classifier annotated 14,541 galaxies as spiral, and the rest of the galaxies were incorrectly annotated by $C_2$ as elliptical. The set of galaxies that were classified as spiral by $C_2$ included 7,378 clockwise galaxies and 7,163 counterclockwise galaxies. Additionally, the resulting dataset of galaxies classified as spirals contained 3,814 elliptical galaxies that were incorrectly classified by the algorithm as spiral. While the resulting dataset still had a statistically significant higher number of galaxies spinning clockwise, the difference became smaller compared to the original dataset, from 5\% to $\sim$3\%. The two-tailed P-value of the binomial distribution of the asymmetry in the annotations of classifier $C_1$ (8,814 clockwise, 8,355 counterclockwise) is 0.00047. That probability increases to 0.076 with the annotations done by classifier $C_2$ (7,378 clockwise, 7,163 counterclockwise). That shows that when removing the attributes that differentiate between clockwise and counterclockwise galaxies, a dataset that was originally asymmetric became less asymmetric after applying a classifier to identify spiral galaxies.
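The two-tailed binomial p-values quoted above can be reproduced directly with SciPy's exact binomial test:

```python
from scipy.stats import binomtest

# Classifier C1: 8,814 clockwise among the 17,169 selected spirals.
p1 = binomtest(8814, 8814 + 8355, p=0.5).pvalue
# Classifier C2: 7,378 clockwise among the 14,541 selected spirals.
p2 = binomtest(7378, 7378 + 7163, p=0.5).pvalue
print(f"C1: P = {p1:.5f}, C2: P = {p2:.3f}")  # ~0.00047 and ~0.076
```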
Because the machine learning algorithm used by \cite{hayes2017nature} was trained with an equal number of clockwise and counterclockwise spiral galaxies, there is no obvious reason for a machine learning algorithm to prefer one spin direction over the other, even without removing certain attributes that correlate with the spin direction. The balanced training set is expected to ensure that the selection of spiral galaxies is not biased towards a certain spin direction. But since machine learning is often not fully explainable, it is difficult to prove mathematically whether a certain machine learning algorithm is biased. That is often done by empirical analysis, which can also be challenging and non-intuitive, as shown in \citep{dhar2022systematic}. Selecting the spiral galaxies using a machine learning algorithm provided a symmetric dataset after removing some of the attributes that correlate with the spin direction. As shown in the experiment above, the removal of such attributes can make a dataset with an asymmetric distribution of galaxy spin directions become less asymmetric, such that the asymmetry becomes statistically insignificant. While this experiment cannot be considered proof that the removal of attributes led to the disappearance of the asymmetry in \citep{hayes2017nature}, it shows that such removal can reduce the asymmetry, and it is therefore a possible explanation.
\cite{hayes2017nature} do not show the results after selecting the galaxies with their machine learning algorithm but without removing attributes. It is therefore possible that the asymmetry shown in this paper was also observed by \cite{hayes2017nature}, but was not included in their paper. Because the selection of spiral galaxies was done by training a machine learning system in a symmetric manner, the analysis makes correct use of machine learning, within the known limitations of machine learning systems.
It should be mentioned that the attributes identified in \citep{shamir2016asymmetry} were reported as attributes that exhibit certain differences between clockwise and counterclockwise galaxies in SDSS. These attributes were never used for any analysis of the distribution of galaxies with opposite spin directions. The only known use of these attributes for that purpose is \citep{hayes2017nature}. As described in Section~\ref{selection_bias}, previous experiments were done by selecting all objects that satisfied simple criteria (e.g., maximum radius), without making prior assumptions for the selection. By avoiding a first step of separating the galaxies into elliptical and spiral, machine learning is not used, and its potential biases therefore cannot impact any stage of the analysis.
\section{Conclusion}
\label{conclusion}
As claimed in \citep{iye2020spin}, the distribution of spin directions of spiral galaxies is still unknown. Multiple experiments using several different telescopes suggest that the distribution might not be random. These include SDSS \citep{shamir2020large}, Pan-STARRS \citep{shamir2020patterns}, HST \citep{shamir2020pasa}, and DESI Legacy Survey \citep{shamir2021large,shamir2022new}. \cite{iye2020spin} used a dataset of photometric objects \citep{shamir2017photometric} that was originally compiled for profiling photometric differences between objects spinning in opposite directions, and applied it to identify a cosmological-scale dipole in the S/Z galaxy distribution. They found that after removing photometric objects that are part of the same galaxy, the dataset provided a random distribution (0.29$\sigma$) of the galaxy spin directions.
However, the random distribution of 0.29$\sigma$ was reported by limiting the dataset to $Z_{phot}<0.1$. The random distribution of galaxies in that redshift range agrees with previous literature showing random distribution of the spin directions in that redshift range \citep{shamir2020patterns}. More importantly,
the analysis of \cite{iye2020spin} is three-dimensional, which requires the redshift of the galaxies. The source of the redshift is the catalog of photometric redshifts of \citet{paul2018catalog}, which \cite{iye2020spin} used for their analysis.
Analysis of the exact same catalog used by \citet{iye2020spin}, but without limiting to low redshifts and without using the photometric redshift, shows a different statistical signal compared to the signal reported in \citep{iye2020spin}. That shows that by avoiding the use of the photometric redshift, the statistical signal of the same dataset is different. Because the analysis done in \citep{iye2020spin} is completely different from the analysis used here or in previous work \citep{shamir2012handedness,shamir2020patterns,shamir2021large}, it is not certain that limiting the redshift of the galaxies or using the photometric redshift are the reasons for the low statistical signal observed by \citep{iye2020spin}. Given that standard statistical methodology such as the binomial distribution and $\chi^2$ shows that the spin directions are not distributed randomly, the redshift limit and the use of photometric redshift are both possible reasons for the low statistical signal observed by \cite{iye2020spin}. Simple analyses show that limiting the data by the photometric redshift, or adding error to the position of each galaxy, leads to the loss of the statistical signal of the dipole axis, even if the error is added in a symmetric manner.
The analysis shown here shows a peak that agrees with several previous experiments by several different researchers. The statistical significance, although greater than 2$\sigma$, is still not exceptionally high, and cannot prove or disprove the existence of a dipole axis. That is, the dataset used here does not necessarily prove that the distribution of the spin directions of SDSS galaxies forms a dipole axis. But it also does not show that the distribution of spin directions in SDSS is random. These results add to previous work with other datasets, or with new analyses of datasets published in the past. That leads to the possibility that spin directions of objects in SDSS identified as galaxies are not necessarily random.
Previous work showed that the asymmetry in the spin directions of spiral galaxies can be observed in different telescopes, and the locations of the dipole axes observed with different telescopes are well within statistical error from each other. In addition to SDSS, the asymmetry was also observed in HST \citep{shamir2020pasa}, Pan-STARRS \citep{shamir2020patterns,shamir2021large,shamir2022new}, and DESI Legacy Survey \citep{shamir2021large,shamir2022new}.
Another interesting observation is that the location of the most likely axis depends on the redshift of the galaxies in the dataset \citep{shamir2020patterns,shamir2020pasa,shamir2022new}. That can be viewed as an indication that if such axis indeed exists, it might not necessarily go directly through Earth. When using datasets with similar redshift distribution of the galaxies, the location of the dipole axes observed with the different telescopes become closer, and the profile of the distribution of the galaxy spin directions becomes more similar \citep{shamir2020patterns,shamir2020pasa,shamir2022new}.
Figure~\ref{decam_sdss_panstarrs_normalized} shows the results of the analysis described in Section~\ref{reanalysis} when using data from SDSS, Pan-STARRS \citep{shamir2020patterns}, and a relatively large dataset of $8.08\cdot10^5$ galaxies from the DESI Legacy Survey \citep{shamir2021large}. As described in \citep{shamir2022new}, the SDSS dataset was compiled by selecting $3.8\cdot10^4$ galaxies from the dataset used in \citep{shamir2020patterns} such that their redshift distribution is similar to the redshift distribution of the galaxies in the DESI Legacy Survey. Because most galaxies in the DESI Legacy Survey do not yet have spectra, the redshift distribution of these galaxies was determined by a subset of $1.7\cdot10^4$ galaxies that had spectra through the 2dF survey \citep{cole20052df}. A full description of the normalization of the redshift distribution and the full-sky analysis can be found in \citep{shamir2022new}.
The figure shows very similar profiles of asymmetry across the different digital sky surveys, with similar locations of the most likely dipole axes \citep{shamir2022new}. The Pan-STARRS dataset of $\sim3.3\cdot10^4$ galaxies provided a dipole axis with a statistical signal of 1.9$\sigma$, which might not necessarily be considered statistically significant, but also does not conflict with the other datasets. The $3.8\cdot10^4$ SDSS galaxies provided a statistical significance of 2.2$\sigma$ for the existence of the dipole axis, while the far larger dataset from the DESI Legacy Survey of $8.08\cdot10^5$ galaxies showed a dipole axis with a significance of 4.7$\sigma$ \citep{shamir2022new}.
When not normalizing the redshift distribution of the SDSS galaxies, the most probable location of the dipole axis is still within 1$\sigma$ statistical error from the axes identified in Pan-STARRS and DESI Legacy Survey \citep{shamir2022new}. As described in \citep{shamir2020patterns,shamir2021large,shamir2022new}, assigning the spin directions with random distribution provided no statistically significant dipole axis, and a distribution profile similar to Figure~\ref{dipole_random}.
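This kind of full-sky analysis can be illustrated with a simplified sketch (a least-squares dipole amplitude on synthetic data, not the exact fitting procedure of the cited papers): trial axes are scanned over a sky grid, and for each axis a dipole amplitude is fitted to the binary spin labels.

```python
import numpy as np

def unit_vectors(ra_deg, dec_deg):
    """Cartesian unit vectors from equatorial coordinates in degrees."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def dipole_amplitude(gal_vec, spin, axis_vec):
    """Least-squares amplitude d of the model spin = d * cos(phi),
    where phi is the angle between each galaxy and the trial axis."""
    cosphi = gal_vec @ axis_vec
    return np.sum(spin * cosphi) / np.sum(cosphi ** 2)

rng = np.random.default_rng(1)
n = 50_000
ra = rng.uniform(0.0, 360.0, n)
dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, n)))  # uniform on the sphere
gal = unit_vectors(ra, dec)

# Synthetic spins (+1 clockwise, -1 counterclockwise) with a dipole
# injected along an arbitrary test axis at (RA, Dec) = (57, -10).
true_axis = unit_vectors(57.0, -10.0)
p_cw = 0.5 + 0.15 * (gal @ true_axis)
spin = np.where(rng.uniform(0.0, 1.0, n) < p_cw, 1.0, -1.0)

# Scan a coarse grid of trial axes for the maximum fitted amplitude.
best_axis, best_d = None, -np.inf
for a in range(0, 360, 10):
    for d in range(-80, 81, 10):
        amp = abs(dipole_amplitude(gal, spin, unit_vectors(float(a), float(d))))
        if amp > best_d:
            best_d, best_axis = amp, (a, d)
print(best_axis, best_d)
```

The statistical significance of a recovered amplitude is then estimated by repeating the scan with randomly assigned spin directions, which yields profiles like the one in Figure~\ref{dipole_random}.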
Interestingly, the locations of the most likely axes are very close to the CMB Cold Spot. For instance, the most likely axis observed in the DESI Legacy Survey data peaks at $(\alpha=57^\circ,\delta=-10^\circ)$, which is close to the CMB Cold Spot at $(\alpha=49^\circ,\delta=-19^\circ)$. That can obviously be considered a coincidence. The consistent change of the location of the most likely axis with the redshift can be viewed as an indication of an axis that does not necessarily go directly through Earth \citep{shamir2022new}.
The contention that the Universe is oriented around a major axis departs from the standard cosmological models, but agrees with several other previously proposed cosmological theories. These include a rotating universe \citep{godel1949example,ozsvath1962finite,ozsvath2001approaches,sivaram2012primordial,chechin2016rotation,seshavatharam2020integrated,camp2021}, an ellipsoidal universe \citep{campanelli2006ellipsoidal,campanelli2007cosmic,campanelli2011cosmic,gruppuso2007complete,cea2014ellipsoidal}, or geometric inflation \citep{arciniega2020geometric,edelstein2020aspects,arciniega2020towards,jaime2021viability}.
Another cosmological theory that could be relevant to the observation is Black Hole Cosmology \citep{pathria1972universe,stuckey1994observable,easson2001universe,poplawski2010radial,tatum2018clues,chakrabarty2020toy}, suggesting that the Universe is the interior of a black hole in another Universe. Black hole cosmology was motivated by the agreement between the Hubble radius and the Schwarzschild radius, but can also explain the accelerated expansion of the Universe without the assumption of dark energy. Black hole cosmology is also closely related to the theory of holographic universe \citep{susskind1995world,bak2000holographic,bousso2002holographic,myung2005holographic,hu2006interacting,sivaram2013holography,shor2021representation,rinaldi2022matrix}, which is related to black hole thermodynamics, and can explain space as seen in our Universe as the interior of a black hole in another universe. Because black holes spin \citep{gammie2004black,takahashi2004shapes,volonteri2005distribution,mcclintock2006spin,mudambi2020estimation,reynolds2021observational}, a universe hosted in a black hole should have an axis and a preferred direction inherited from the spin of its host black hole \citep{poplawski2010cosmology,seshavatharam2010physics,seshavatharam2014understanding,christillin2014machian,seshavatharam2020integrated}.
Large-scale anisotropy and a cosmological-scale axis were also proposed and discussed in the light of the cosmic microwave background radiation \citep{abramo2006anomalies,mariano2013cmb,land2005examination,ade2014planck,santos2015influence,dong2015inflation,gruppuso2018evens,yeung2022directional}, and the acceleration rates \citep{perivolaropoulos2014large}. \cite{luongo2022larger} proposed a link between the $H_0$ expansion rates and the CMB dipole. Other messengers that were used to show cosmological anisotropy in addition to the cosmic microwave background \citep{eriksen2004asymmetries,cline2003does,gordon2004low,campanelli2007cosmic,zhe2015quadrupole} include short gamma-ray bursts \citep{meszaros2019oppositeness}, $L_X$--$T$ scaling \citep{migkas2020probing}, Type Ia supernovae \citep{javanmardi2015probing,lin2016significance}, radio sources \citep{ghosh2016probing,tiwari2015dipole}, galaxy morphology types \citep{javanmardi2017anisotropy}, dark energy \citep{adhav2011kantowski,adhav2011lrs,perivolaropoulos2014large,colin2019evidence}, high-energy cosmic rays \citep{aab2017observation}, polarization of quasars \citep{hutsemekers2005mapping,secrest2021test}, and very large cosmological-scale structures \citep{deng2006super}. These observations challenge the standard cosmological models, and mandate further research to fully profile and understand their nature in the light of the large-scale structure of the Universe.
\section*{Acknowledgments}
I would like to thank the very knowledgeable anonymous reviewer for the sincere efforts to help improve the manuscript and the research. This study was supported in part by NSF grants AST-1903823 and IIS-1546079.
\bibliographystyle{apalike}
\bibliography{main_archive}
Title: (3200) Phaethon Polarimetry in the Negative Branch: New Evidence for the Anhydrous Nature of the DESTINY+ Target Asteroid

Abstract: We report on the first polarimetric study of (3200) Phaethon, the target of JAXA's DESTINY$^+$ mission, in the negative branch to ensure its anhydrous nature and to derive an accurate geometric albedo. We conducted observations at low phase angles (Sun-target-observer angle, alpha = 8.8-32.4 deg) from 2021 October to 2022 January and found that Phaethon has a minimum polarization degree $P_{min}$ = -1.3 +- 0.1 %, a polarimetric slope h = 0.22 +- 0.02 % deg$^{-1}$, and an inversion angle alpha$_0$ = 19.9 +- 0.3 deg. The derived geometric albedo is $p_V$ = 0.11 (in the range of 0.08-0.13). These polarimetric properties are consistent with anhydrous chondrites, and contradict hydrous chondrites and typical cometary nuclei.

https://export.arxiv.org/pdf/2208.11912
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
techniques: polarimetric -- minor planets, asteroids: individual: (3200) Phaethon.
\end{keywords}
\section{Introduction}
C-complex asteroids are particularly important for revealing the aqueous activity that might have occurred $<$ 10\, Myr after the beginning of the solar system formation \citep{2012NatCo...3..627F}. Most of them are rich in volatile components, maintaining the primordial information since the formation epoch \citep{2015aste.book..635K}. Accordingly, recent asteroid explorations targeted carbonaceous asteroids. The {\it OSIRIS-REx} mission investigated its target asteroid (101955) Bennu and revealed unambiguous evidence for widespread hydrated minerals \citep{Hamilton+2019}. On the other hand, (162173) Ryugu, the target asteroid of the {\it Hayabusa2} mission, indicated a weak signature of the hydrated minerals that might have experienced a mild heating process at $>$ 300\,$^\circ$C in the parent body \citep{Kitazato+2021}. Therefore, hydrated silicate abundance is an important tracer for the thermal history of C-complex asteroids \citep{Hiroi+1996}.
(3200) Phaethon (F- or B-type, a subclass of C-complex, \citealt{Tholen+1989,Bus+2002}) is the target of JAXA's {\it DESTINY$^+$} mission \citep{Arai+2018}, and is known to have unique properties. It has an asteroid-like orbit (the Tisserand parameter with respect to Jupiter, $T_\mathrm{J}>3$) that likely originates in the main asteroid belt \citep{deLeon+2010,MacLennan+2021}. It has shown evidence for dust ejection reminiscent of comets \citep{Jewitt+2010}. Phaethon's albedo has not been determined well, making it difficult to identify whether this object has a comet-like or asteroid-like composition (see Section \ref{sec:results}).
There is a large discrepancy in the interpretation of Phaethon's spectrum. \citet{Licandro+2007} argued that Phaethon's spectrum is similar to those of aqueously altered CI/CM meteorites and hydrated minerals. \citet{Licandro+2007} further suggested that Phaethon is likely an activated asteroid similar to the main-belt comets rather than typical comets of outer solar system origins. On the other hand, \citet{Clark+2010} reported that Phaethon's spectrum matches CK meteorites or an experimental mixture of chlorite and carbon lampblack. Later, \citet{Takir+2020} reported that this asteroid shows no hydrated mineral absorption near 3\,$\mu$m, supporting the idea of anhydrous material. Note that the interpretation of anhydrous material conflicts with \citet{Licandro+2007}. Such a large discrepancy raises the need to examine the nature of Phaethon by a method independent of spectroscopy.
Recently, \citet{Ishiguro2022} proposed that polarimetry at low phase angles (Sun--target--observer angle, $\alpha \lesssim 20 \degr$) is a useful diagnostic tool for conjecturing if C-complex asteroids are hydrous or anhydrous. However, due to the unfavorable observational conditions until recently, Phaethon's polarimetric property at low phase angles ($\alpha < 19.1 \degr$) has not been investigated. Taking advantage of the opportunity in late 2021 and early 2022, we obtained polarimetry at low phase angles ($\alpha= 8.8$--$32.4\degr$) and found that Phaethon's surface is anhydrous. In addition, we narrowed down the albedo estimate range with our polarimetry.
In this paper, we describe our observations in Section \ref{sec:observations} and the derivation of polarimetric parameters in Section \ref{sec:polpara}. We provide two major findings (the composition and geometric albedo) in Section \ref{sec:results}. We discuss these results in Section \ref{sec:discussion}, focusing on the significance of the albedo determination and hydrous/anhydrous nature.
\section{Observations and data analysis}
\label{sec:observations}
We made polarimetric observations using three instruments: the Hiroshima Optical and Near-InfraRed camera (HONIR; \citealt{Akitaya+2014}) on the 1.5-m Kanata Telescope at the Higashi-Hiroshima Observatory, the Wide Field Grism Spectrograph 2 (WFGS2; \citealt{Uehara+2004,Kawakami+2021}) on the 2.0-m Nayuta telescope at the Nishi-Harima Astronomical Observatory, and the Andalucia Faint Object Spectrograph and Camera (ALFOSC) with the FAPOL polarimeter on the 2.56-m Nordic Optical Telescope at the Observatorio del Roque de los Muchachos, La Palma. These instruments are equipped with a polarizer and a rotatable half-wave plate mounted at the Cassegrain focus of each telescope. We acquired HONIR and WFGS2 data at four different angles of the half-wave plate (0\degr, 45\degr, 22.5\degr, and 67.5\degr, in that order) and FAPOL data at 16 different angles (0\degr, 22.5\degr, 45\degr, 67.5\degr, 90\degr, 112.5\degr, 135\degr, 157.5\degr, 180\degr, 202.5\degr, 225\degr, 247.5\degr, 270\degr, 292.5\degr, 315\degr, and 337.5\degr, in that order). We used only the $R_\mathrm{C}$-band filter. In addition to these new observations, we reanalyzed the $R_\mathrm{C}$-band polarimetric data published in \citet{Shinnaka+2018}. Because these data were taken near the inversion angle with a good signal-to-noise (S/N) ratio (i.e., small random errors), we reanalyzed them, paying particular attention to the systematic errors.
An outline of the data analysis consists of five major steps: (1) preprocessing the raw observed images, (2) extraction of source signals by aperture photometry, (3) correction for systematic errors, (4) derivation of the Stokes parameters ($q$ and $u$), polarization degree ($P$), and polarization position angle ($\theta_\mathrm{P}$), and (5) obtaining the nightly weighted mean of $q$ and $u$. Because we strictly followed the reduction processes (1), (2), (4), and (5) described in \citet{Ishiguro2022}, we skip their detailed explanation in this paper. The reduction process (3) is particularly important for this work, not only because the polarization degrees at these phase angles are small ($P \lesssim 1-2$ \%) and comparable to the instrumental polarization (an artifact inherent to the instrument) of some instruments, but also because we need to compare data taken with different instruments.
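Step (4) can be sketched schematically. The difference form below is the simplest estimator; the actual dual-beam reduction uses the mathematically equivalent ratio method, but the combination of half-wave plate angles is the same.

```python
import numpy as np

def stokes_from_hwp(i0, i45, i225, i675):
    """Schematic step (4): Stokes q, u, polarization degree P, and
    position angle theta_P (deg) from intensities measured at the
    half-wave plate angles 0, 45, 22.5 and 67.5 deg."""
    q = (i0 - i45) / (i0 + i45)
    u = (i225 - i675) / (i225 + i675)
    P = np.hypot(q, u)
    theta_P = 0.5 * np.degrees(np.arctan2(u, q)) % 180.0
    return q, u, P, theta_P

# Round-trip check with a simulated P = 1.4 %, theta_P = 100 deg source;
# the modulation is i(phi) = 1 + q cos(4 phi) + u sin(4 phi).
q_t = 0.014 * np.cos(np.radians(200.0))
u_t = 0.014 * np.sin(np.radians(200.0))
i = lambda phi: 1.0 + q_t * np.cos(np.radians(4 * phi)) + u_t * np.sin(np.radians(4 * phi))
q, u, P, theta_P = stokes_from_hwp(i(0.0), i(45.0), i(22.5), i(67.5))
print(f"P = {P * 100:.2f} %, theta_P = {theta_P:.1f} deg")  # P = 1.40 %, theta_P = 100.0 deg
```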
In the HONIR data analysis, we examined the polarization efficiency ($P_\mathrm{eff}$) by observing a star (HD 14069) through a wire-grid filter. We investigated the instrumental polarization parameters ($q_\mathrm{inst}$ and $u_\mathrm{inst}$) and position angle offset ($\theta_\mathrm{off}$) through observations of unpolarized stars (G191B2B, HD 212311, and BD +32 3739) and strongly polarized stars (HD 29333, BD +59 389, BD +64 106, and HD 204827). We determined $P_\mathrm{eff}$ = $97.58 \pm 0.08$ \%, $q_\mathrm{inst} = -0.0097 \pm 0.0498$ \%, $u_\mathrm{inst} = -0.0077 \pm 0.0371$ \%, and $\theta_\mathrm{off} = 36.08 \pm 0.13\degr$. These parameters are consistent with \citet{Akitaya+2014}, ensuring the long-term stability of the polarimetric performance of HONIR.
In the WFGS2 data analysis, it was reported that $q_\mathrm{inst}$ and $u_\mathrm{inst}$ depended on the instrument rotator angle ($\theta_\mathrm{rot}$). To eliminate this effect, we observed unpolarized stars (HD 212311 and HD 21447) at four different instrument rotator angles and derived two equations: $q_\mathrm{inst}(\theta_\mathrm{rot}) = q_{0}\cos{2\theta_\mathrm{rot}} - u_{0}\sin{2\theta_\mathrm{rot}}$ and $u_\mathrm{inst}(\theta_\mathrm{rot}) = q_{0}\sin{2\theta_\mathrm{rot}} + u_{0}\cos{2\theta_\mathrm{rot}}$, where $q_\mathrm{0} = -0.042 \pm 0.016$ \% and $u_{0} = 0.178 \pm 0.011$ \% for the 2021 October observation and $q_{0} = -0.043 \pm 0.012$ \% and $u_{0} = 0.273 \pm 0.012$ \% for the 2021 November observation. We determined $\theta_\mathrm{off} = -5.19 \pm 0.15\degr$ from the observations of strongly polarized stars (HD 204827, HD 25443, BD +59 389, and HD 19820). We assumed $P_\mathrm{eff}$ $=1$.
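The rotator-angle correction above amounts to a rotation of a constant instrumental Stokes vector $(q_0, u_0)$; a minimal sketch implementing the two equations with the 2021 October constants:

```python
import math

# 2021 October WFGS2 calibration constants from the text (in per cent).
Q0, U0 = -0.042, 0.178

def wfgs2_inst_pol(theta_rot_deg):
    """Return (q_inst, u_inst) in per cent at rotator angle theta_rot."""
    t = math.radians(2.0 * theta_rot_deg)
    q_inst = Q0 * math.cos(t) - U0 * math.sin(t)
    u_inst = Q0 * math.sin(t) + U0 * math.cos(t)
    return q_inst, u_inst

# The correction subtracts q_inst, u_inst from the measured Stokes
# parameters before applying the position-angle offset.
q0deg, u0deg = wfgs2_inst_pol(0.0)
print(f"{q0deg:+.3f}, {u0deg:+.3f}")  # -0.042, +0.178 at theta_rot = 0
```

Note that the rotation preserves the amplitude of the instrumental polarization, $\sqrt{q_0^2 + u_0^2}$, at every rotator angle.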
In the FAPOL data analysis, we divided each set of data (consisting of 16 different half-wave plate angles data) into four subgroups. The procedure for deriving the Stokes parameters from each subgroup (the process (4)) is the same as the procedure for HONIR and WFGS2. To investigate $q_\mathrm{inst}$, $u_\mathrm{inst}$, and $\theta_\mathrm{off}$, two unpolarized stars (G191B2B and HD 14069) and one strongly polarized star (BD +59 389) were observed. We determined $q_\mathrm{inst} = -0.05 \pm 0.07$ \%, $u_\mathrm{inst} = -0.04 \pm 0.11$ \%, and $\theta_\mathrm{off}=-92.30 \pm 0.06\degr$. These values are in good agreement with previous observations \citep{Ishiguro2022}.
We analyzed the PICO data (i.e., the reanalyzed \citealt{Shinnaka+2018} data mentioned above) following \citet{Ikeda+2007}. However, it should be noted that the $q_\mathrm{inst}$ and $u_\mathrm{inst}$ errors in our analysis are different from those described in \citet{Ikeda+2007}. They estimated the errors of the instrumental polarization to be $\sim 0.3\, \% $ over the entire field of view (5\arcmin$\times$10\arcmin). After analyzing standard star data taken during the Phaethon observations, we found that the instrumental polarization of PICO was significantly smaller than $0.3\, \%$. Phaethon's images were taken in the central part of the PICO field of view, where the polarization performance is best \citep{Ikeda+2007}. Accordingly, we considered 0.1\,\% errors for $q_\mathrm{inst}$ and $u_\mathrm{inst}$ and derived Phaethon's polarization degrees. We also updated the errors of $P_\mathrm{eff}$ and $\theta_\mathrm{off}$ to 0.02\,\% and 0.18$\degr$ based on the measurement of calibration data taken during Phaethon's run. Although we only use data at low phase angles ($\alpha < 30^\circ$), we confirm that our results show good agreement with \citet{Shinnaka+2018} within their $3\sigma$-uncertainty over the whole phase angle range. Only the errors are slightly different, because we considered the systematic errors comprehensively, following the data reduction processes in \citet{Ikeda+2007}.
\begin{table*}
\caption{Observation Circumstance and Polarimetric Result}
\label{table:obs&result}
\begin{tabular}{lccccccccccccc}
\hline
Date in UT$^a$ & Inst$^b$ & Exp$^c$ &N$^d$ & $ r^e $ & $ \Delta^f $ & $ \phi^g $& $ \alpha^h$&$P^i$&$\sigma \,P^j$&$\theta_{P}^k$&$\sigma \, \theta_{P}^l$ &${P_\mathrm{r}}^m$&${\theta_\mathrm{r}}^n$ \\
& & (s)& & ($ \mathrm{au} $) & ($ \mathrm{au} $) & ($\degr$) & ($\degr$)&($\%$)&($\%$)&($\degr$)&($\degr$)&($\%$)&($\degr$)\\
\hline
2021 Oct 27 18:53--19:35 & WFGS2 & 300 &8 & 2.31 & 1.46 & 235.4 & 15.9 & 0.51 & 0.58 & 69.9 & 47.2 & -0.45 & 104.5 \\
2021 Oct 28 14:41--18:14 & WFGS2 & 300 &12 & 2.31 & 1.45 & 234.1 & 15.5 & 1.09 & 0.35 & 65.3 & 16.3 & -1.01 & 101.2 \\
2021 Nov 14 12:35--20:31 & WFGS2 & 300 &40 & 2.25 & 1.30 & 188.3 & 9.0 & 1.47 & 0.15 & 21.1 & 10.8 & -1.33 & 102.8 \\
2021 Nov 15 10:24--15:23 & WFGS2 & 300 &44 & 2.25 & 1.30 & 184.0 & 8.8 & 1.29 & 0.16 & 12.4 & 14.3 & -1.23 & 98.4 \\
2021 Nov 02 17:22--19:50 & HONIR & 120 &56 & 2.29 & 1.39 & 225.3 & 13.4 & 1.00 & 0.25 & 41.0 & 7.3 & -0.99 & 85.7 \\
2021 Dec 22 10:22--13:17 & HONIR & 120 &64 & 2.07 & 1.33 & 80.5 & 22.6 & 0.57 & 0.41 &-0.29 & 20.3 & 0.55 & 9.2 \\
2021 Dec 23 09:12--12:41 & HONIR & 120 &80 & 2.06 & 1.34 & 79.8 & 23.0 & 0.96 & 0.32 & 3.6 & 9.6 & 0.85 & 13.8 \\
2021 Nov 10 00:57--02:49 & FAPOL & 180 & 16 & 2.27 & 1.33 & 206.2 & 10.5 & 1.37 & 0.23 & 26.5 & 14.9 & -1.37 & 90.4 \\
2021 Nov 13 01:18--01:33 & FAPOL & 180 & 8 & 2.26 & 1.31 & 195.2 & 9.5 & 1.33 & 0.25 & 10.7 & 7.9 & -1.31 & 85.5 \\
2021 Nov 15 01:55--02:23 & FAPOL & 180 & 8 & 2.25 & 1.30 & 186.4 & 9.0 & 1.47 & 0.20 & 176.7& 5.5 & -1.39 & 80.3 \\
2021 Nov 30 22:15--23:33 & FAPOL & 180 & 20 & 2.18 & 1.26 & 114.3 & 11.5 & 1.29 & 0.13 & 113.3& 4.9 & -1.29 & 89.0 \\
2021 Dec 11 21:15--21:43 & FAPOL & 180 & 8 & 2.13 & 1.28 & 91.5 & 17.2 & 0.67 & 0.20 & 86.0 & 11.5 & -0.66 & 84.6 \\
2021 Dec 23 19:27--21:39 & FAPOL & 180 & 28 & 2.06 & 1.34 & 79.5 & 23.2 & 0.70 & 0.11 & 159.8& 7.1 & 0.66 & -9.7 \\
2022 Jan 24 20:08--20:11 & FAPOL & 180 & 8 & 1.84 & 1.59 & 68.4 & 32.4 & 3.55 & 0.32 & 151.36& 3.6 & 3.55 & -7.07 \\
2017 Dec 09 12:16--17:39 & PICO & 30 &188 & 1.13 & 0.15 & 201.8 & 19.3 & 0.40 & 0.11 & -4.2 & 7.8 & -0.24 & 64.0 \\
2017 Dec 10 10:58--16:53 & PICO & 30 &144 & 1.11 & 0.14 & 187.9 & 19.2 & 0.17 & 0.11 & 15.4 & 18.3 & -0.16 & 97.5 \\
2017 Dec 11 10:46--16:18 & PICO & 30 &424 & 1.10 & 0.12 & 170.4 & 20.0 & 0.02 & 0.10 & 31.5 & 52.0 & 0.00 & 131.2 \\
2017 Dec 12 12:39--16:32 & PICO & 30 &180 & 1.08 & 0.11 & 149.0 & 22.6 & 0.94 & 0.11 & 67.5 & 7.9 & 0.90 & 8.5 \\
2017 Dec 13 10:24--15:08 & PICO & 30 &172 & 1.07 & 0.09 & 129.7 & 27.1 & 1.92 & 0.10 & 34.5 & 3.1 & 1.89 & -5.1 \\
\hline
\multicolumn{14}{l}{$ ^a $ UT at exposure start,$ ^b $ Instrument, $ ^c $Exposure time, $ ^d $ Number of images used to the analysis, $^e$ Median heliocentric distance,}\\
\multicolumn{14}{l}{$ ^f $ Median geocentric distance, $ ^g $ Position angle of the scattering plane, $ ^h $ Median solar phase angle, $^i$ Nightly averaged polarization degree,}\\
\multicolumn{14}{l}{$^j$ Uncertainty of $P$, $ ^k $ Position angle of the strongest electric vector, $^l$ Uncertainty of $\theta_\mathrm{P}$, $^m$ Polarization degree referring to the scattering plane,}\\
\multicolumn{14}{l}{$^n$ Position angle referring to the scattering plane. }\\
\multicolumn{14}{l}{We note that the PICO data in this table is the result of reanalysis of data published by \citet{Shinnaka+2018}.}\\
\multicolumn{14}{l}{The web-based JPL Horizon system (\url{http://ssd.jpl.nasa.gov/?horizons}) was used to obtain $r$, $ \Delta$, $ \phi$, and $ \alpha$ in the table.}\\
\end{tabular}
\end{table*}
\section{Derivation of polarimetric parameters at low phase angles}
\label{sec:polpara}
Table \ref{table:obs&result} summarizes the weighted means of nightly data. We computed the polarization degree and the position angle referring to the scattering plane ($P_\mathrm{r}$ and $\theta_\mathrm{r}$). Fig. \ref{fig:phase-plot} indicates the phase angle dependence of $P_\mathrm{r}$.
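The conversion to the scattering-plane frame follows the standard convention, $\theta_\mathrm{r} = \theta_\mathrm{P} - (\phi \pm 90\degr)$ (with the sign chosen so that $\theta_\mathrm{r}$ falls in $[0\degr, 180\degr)$) and $P_\mathrm{r} = P\cos 2\theta_\mathrm{r}$. A sketch, checked against the first WFGS2 row of Table~\ref{table:obs&result}:

```python
import numpy as np

def refer_to_scattering_plane(P, theta_P, phi):
    """Polarization referred to the scattering plane (angles in deg,
    P in per cent): theta_r = theta_P - (phi +/- 90), folded into
    [0, 180), and P_r = P cos(2 theta_r)."""
    theta_r = (theta_P - phi + 90.0) % 180.0
    P_r = P * np.cos(np.radians(2.0 * theta_r))
    return P_r, theta_r

# First WFGS2 row: P = 0.51 %, theta_P = 69.9 deg, phi = 235.4 deg.
P_r, theta_r = refer_to_scattering_plane(0.51, 69.9, 235.4)
print(f"P_r = {P_r:.2f} %, theta_r = {theta_r:.1f} deg")  # P_r = -0.45 %, theta_r = 104.5 deg
```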
In Fig. \ref{fig:phase-plot}, the data taken with different instruments agree well, indicating that the data reduction processes described in Section \ref{sec:observations} worked well to eliminate the instrumental effects. Moreover, Phaethon's profile is in good agreement with that of (155140) 2005 UD (dynamically associated with Phaethon, \citealt{Ohtsuka+2006}), supporting previous results \citep{Ishiguro2022}.
We fit the data of Phaethon at low phase angles ($\alpha < 30$\degr) using the Lumme--Muinonen function (L/M, \citealt{Lumme+1993}) and the linear-exponential function (L/E, \citealt{Muinonen+2009}). We use the same notation as \citet{Cellino+2015}.
The Markov chain Monte Carlo method implemented in PyMC3 \citep{Salvatier+2016} is employed.
We set boundary conditions of $h\in [0, 1]\,\%\,\mathrm{deg}^{-1}$, $\alpha_{0} \in [10\degr, 30\degr]$, $c_1 \in [0, 10]$, and $c_2 \in [0, 10]$ for L/M, and $A\in[10, 20]$, $B\in[15, 25]$, and $C\in[0, 1]$ for L/E. The uncertainties of the optimal parameters are derived in the same manner as in \citet{Geem2022}. The fitting results and their uncertainties obtained with L/E fall within those obtained with L/M.
\section{Results}
\label{sec:results}
As a result of the data fitting, we obtained the minimum polarization degree $ P_\mathrm{min}=-1.3 ^{+ 0.1 }_{- 0.1 }$\,\% at the phase angle $\alpha_\mathrm{min}= 9.0^{+ 0.7 }_{- 0.8 }\degr$, the polarimetric slope $h= 0.22^{+ 0.01 }_{- 0.02 } $\,\%\,$\mathrm{deg}^{-1}$, and the inversion angle $\alpha_0= 19.9^{+ 0.3 }_{- 0.3 }\degr$. With this result, we further examine the composition and geometric albedo below.
\subsection{Comparison with meteorites and other asteroids}
Fig. \ref{fig:h-Pmin} compares $P_\mathrm{min}$ and $h$ of Phaethon with those of carbonaceous chondrites and other C-complex asteroids. As described in \citet{Ishiguro2022}, anhydrous meteoritic samples (CK, CO, and CV) are distributed in the upper left, while hydrous ones (CM and CI) are in the lower right. Because the distribution of Ch-type asteroids (defined by the presence of an absorption near 0.7\,$\mu$m due to Fe-bearing phyllosilicates) mostly matches the hydrous meteorite samples, this $P_\mathrm{min}$--$h$ plot is applicable to actual asteroids. Since the low albedo B-type asteroids (the so-called Themis group, \citealt{Clark+2010}) are distributed between hydrous and anhydrous, their surfaces likely experienced some degree of dehydration. Both Phaethon and 2005 UD are located near the concentration of anhydrous samples and (2) Pallas (B-type with a moderately high albedo, \citealt{Clark+2010}) but deviate significantly from the concentration of hydrous samples. Therefore, we conclude that the surface of Phaethon is likely composed of anhydrous carbonaceous material. Although the anhydrous nature was already suggested by spectral studies \citep{Clark+2010,Clark+2011,2012Icar..218..196D,Takir+2020}, it is significant that the independent polarimetric approach corroborates it.
\subsection{Geometric albedo}
It is known that the geometric albedo in $V$-band ($p_\mathrm{V}$) has a tight correlation with $h$ \citep{Geake+1986}. This relationship is expressed as $\log_{10} \left( p_\mathrm{V} \right) = C_1 \log_{10} \left( h \right) + C_2 $, where $ C_\mathrm{1} $ and $C_\mathrm{2} $ are constants.
These constants have been derived using databases of asteroid polarimetry and albedos. We employed constant values from two recent works \citep{Cellino+2015,Lupishko+2018}: \citet{Cellino+2015} derived the constants both without an albedo constraint and for asteroids with $p_\mathrm{V}>0.08$, while \citet{Lupishko+2018} derived them without an albedo constraint. Using these three sets of constants, we estimated an $R_\mathrm{C}$-band geometric albedo of $p_\mathrm{R_\mathrm{C}}=0.09 \pm 0.01$ for the constants of \citet{Cellino+2015} (without the albedo constraint) and \citet{Lupishko+2018}, and $p_\mathrm{R_\mathrm{C}}=0.11 \pm 0.02$ for the constants of \citet{Cellino+2015} (with the albedo constraint). We regard $p_{R_\mathrm{C}}=p_\mathrm{V}$ because Phaethon has a nearly flat spectrum over these wavelengths.
With these $p_\mathrm{V}$ values and errors, the median, minimum, and maximum values are $p_\mathrm{V}=0.11$, 0.08, and 0.13.
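As a numerical illustration of the slope--albedo conversion, the uncertainty in $h$ can be propagated by Monte Carlo. The constants $C_1$ and $C_2$ below are illustrative values of the right order, not necessarily those adopted in the cited calibrations.

```python
import numpy as np

def albedo_from_slope(h, c1, c2):
    """Slope-albedo law: log10(pV) = c1 * log10(h) + c2, with h in %/deg."""
    return 10.0 ** (c1 * np.log10(h) + c2)

# Illustrative calibration constants (assumed, not the published values)
c1, c2 = -1.111, -1.781

# Propagate h = 0.22 (+0.01/-0.02) %/deg with a symmetrized Gaussian
rng = np.random.default_rng(0)
h_samples = rng.normal(0.22, 0.015, 100_000)
p_v = albedo_from_slope(h_samples, c1, c2)
print(np.median(p_v))  # ~0.09 for these illustrative constants
```

The median and percentile spread of `p_v` then give the albedo and its range directly, with the uncertainties of $C_1$ and $C_2$ added by sampling them as well.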
\section{Discussion}
\label{sec:discussion}
Phaethon's geometric albedo has been derived by various methods, yet the reported values, with errors considered, span 0.037 to 0.220 \citep{Green+1985, Harris+1998, Tedesco+2004, Usui+2011, Hanus+2018, McAdam+2018, Ali-Lagoa+2018,Masiero+2019}. This factor of $\sim 6$ difference made it difficult to establish the fly-by observation plan for the {\it DESTINY$^+$} mission. This large discrepancy may be caused by different thermal models with different absolute magnitudes. Polarimetry has the advantage of converting directly from $h$ to $p_\mathrm{V}$ without any thermal-model assumptions. It is worth noting that we considered all possible uncertainties (i.e., in $h$, $C_\mathrm{1}$, and $C_\mathrm{2}$) to derive a reliable $p_\mathrm{V}$ and its range. Although the median albedo value is not so different from previous works, narrowing the possible range to one-third of the previous estimate is significant for preparing the {\it DESTINY$^+$} fly-by observation. The updated albedo value is also valuable for considering the nature of the asteroid.
The association of Phaethon with comet nuclei has been discussed. From the visible spectrum, Phaethon is classified as either B- (based on \citealt{Bus+2002}) or F-type (based on \citealt{Tholen+1989}). Although F-type asteroids account for only 3\,\% of all asteroids in the Tholen classification, they show an interesting polarization property. \citet{Belskaya2005} noticed that three F-type asteroids exhibited unique $\alpha_0$ values (14--16\degr), which are predominantly smaller than those of asteroids in general ($\alpha_0\sim20 \degr$). The small $\alpha_{0}$ values of F-types may be linked to two comets, (7968) Elst-Pizarro (a main-belt comet) and 2P/Encke ($\alpha_0$=17.6$\pm$2.1\degr\ in $R$-band and $\sim 13$\degr, respectively, \citealt{Bagnulo2010,Boehnhardt2008}). Although only two comets have been sampled, \citet{Cellino2018} suggested a connection between F-types and comet nuclei. \citet{Belskaya2005} suggested a possible interpretation that an optical homogeneity of regolith microstructure at scales of the order of visible light wavelengths may be responsible for the small $\alpha_0$. %
However, Phaethon's $\alpha_0$ differs from those of F-type asteroids and the two comet nuclei but is consistent with asteroids in general. The geometric albedo determined in this study ($p_{V}\sim 0.11$) is significantly higher than that of comets (including Elst-Pizarro, $p_{V}=0.06$--$0.07$, \citealt{Boehnhardt2008}) and F-type asteroids ($p_{V}=0.058 \pm 0.011$, \citealt{Belskaya+2017}). Comets generally have red spectra, while Phaethon has a flat or even blue spectrum. Accordingly, Phaethon's surface materials are likely different from those of ordinary comets.
How can we explain Phaethon's recent activity \citep{Jewitt+2010} and its anhydrous nature found in our study? From polarimetry at large phase angles, \citet{Ito+2018} suggested that (1) Phaethon's actual albedo could be much lower than the estimate at the time, or (2) the asteroid was covered with large grains (probably produced via a sintering effect at perihelion). With the updated albedo, we estimated the particle size using the same method as \citet{Ito+2018} and found $\sim 300\,\mu$m (with an error of $\sim 70\,\mu$m). This particle size is larger than on other asteroids such as Ryugu \citep{Kuroda+2021}, strengthening confidence in the sintering hypothesis.
Dust ejection under such a high-temperature condition has also been studied recently. \cite{Masiero2021} devised a mechanism for dust ejection by sodium sublimation. \citet{Bach2021} built on an idea of \citet{Jewitt+2010} and proposed that dust production and ejection would happen through the combination of thermal fatigue, thermal radiation pressure from the surface, and solar radiation pressure. These recent studies considered a cometary activity in the high-temperature environment ($\sim 1000$\,K) near the Sun, completely different from general comets, whose activities are driven by ice sublimation. In such a high-temperature environment, dehydration \citep{Hiroi+1996} and subsequent sintering would happen near perihelion. Summing up our findings and other recent research on Phaethon, the surface is unlikely to be primordial but has experienced a high degree of thermal alteration.
\section{Summary}
We conducted polarimetric observations of Phaethon at low phase angles and found that this asteroid has polarimetric properties similar to anhydrous chondrites. Phaethon's albedo and inversion angle are significantly different from those of comet nuclei. Although the interior composition is still unknown, we conjecture that the surface material shows considerably evolved features, having experienced thermal metamorphism and dehydration, rather than the primitive features of comets and hydrous asteroids.
\section*{Acknowledgments}
Research activity at Seoul National University was supported by the NRF funded by the Korean Government (MEST) grant No. 2018R1D1A1A09084105. This research was partially supported by the Optical \& Near-Infrared Astronomy Inter-University Cooperation Program, MEXT, of Japan. The observations at NHAO were conducted as an open-use program. SH was supported by the Hypervelocity Impact Facility, ISAS, JAXA. Partly based on observations made with the Nordic Optical Telescope, owned in collaboration by the University of Turku and Aarhus University, and operated jointly by Aarhus University, the University of Turku and the University of Oslo, representing Denmark, Finland and Norway, the University of Iceland and Stockholm University at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias, and the data was obtained with ALFOSC, which is provided by the Instituto de Astrofisica de Andalucia (IAA) under a joint agreement with the University of Copenhagen and NOT. This research was partially supported by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant-in-Aid for Scientific Research (Early-Career Scientists), 20K14541. We appreciate Dr. Irina Belskaya for providing $ \alpha_0 $--$ P_\mathrm{min} $ values of asteroids.
\section*{Data Availability}\label{dataave}
The observational data are available in Zenodo\footnote{\url{https://doi.org/10.5281/zenodo.6791884}}. The source codes and scripts for the data analyses, plots and resultant data tables are available via the GitHub service\footnote{\url{https://github.com/Geemjy/Geem_etal_MNRAS_2022.git}}.
\section*{NOTE ADDED IN PROOF}
We calculated the geometric albedo using recently published results in \citet{Kiselev+2022} with ours and updated it to $p_{V} = 0.10$ (in the range of $0.08$--$0.12$).
\bibliographystyle{mnras}
\bibliography{references.bib} %
\bsp %
\label{lastpage} |
Title:
Misaligned circumbinary disks as efficient progenitors of interstellar asteroids |
Abstract: Gaseous circumbinary disks (CBDs) that are highly inclined to the binary
orbit are commonly observed in nature. These disks harbor particles that can
reach large mutual inclinations as a result of nodal precession once the gas
disk has dissipated. With n-body simulations that include fragmentation we
demonstrate that misaligned disks of particles can be efficient progenitors of
interstellar asteroids (ISAs). Collisions that take place between particles
with large mutual inclinations have large impact velocities which can result in
mass ejection, with a wide range of fragment sizes and ejection velocities. We
explore the binary parameters for which the majority of the terrestrial planet
forming material is ejected rather than accreted into planets. The misalignment
required to eject significant material decreases with binary eccentricity. If
the distribution of binary eccentricity is uniform and the initial particle CBD
orientation relative to the binary orbit is isotropic, about 59% of binaries
are more likely to eject the majority of their CBD terrestrial planet disk mass
through high velocity body-body collisions rather than retain this material and
build terrestrial planets. However, binary--disk interactions during the gas
disk phase with non-zero disk viscosity will reduce this fraction. The
composition, small size, highly elongated shape, and tumbling motion of
`Oumuamua is consistent with ISAs generated by misaligned CBDs.
PDF: https://export.arxiv.org/pdf/2208.05874
\usepackage{chngpage}
\usepackage{booktabs}
\usepackage{amsmath}
\usepackage{bm}
\newcommand{\vdag}{(v)^\dagger}
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}
\newcommand{\uvec}[1]{\boldsymbol{\hat{\textbf{\textit{#1}}}}}
\newcommand{\RGM}[1]{\textcolor{cyan}{#1}}
\newcommand\RGMX{\bgroup\markoverwith{\textcolor{cyan}{\rule[0.5ex]{4pt}{1pt}}}\ULon}
\newcommand{\ACC}[1]{\textcolor{red}{#1}}
\newcommand\ACCX{\bgroup\markoverwith{\textcolor{red}{\rule[0.5ex]{4pt}{1pt}}}\ULon}
\shorttitle{CBDs and interstellar asteroids}
\shortauthors{Childs \& Martin}
\graphicspath{{./}{figures/}}
\begin{document}
\title{Misaligned circumbinary disks as efficient progenitors of interstellar asteroids}
\author[0000-0002-9343-8612]{Anna C. Childs}
\author[0000-0003-2401-7168]{Rebecca G. Martin}
\affiliation{Nevada Center for Astrophysics, University of Nevada, Las Vegas, NV 89154, USA}
\affiliation{Department of Physics and Astronomy, University of Nevada, Las Vegas, 4505 South Maryland Parkway,
Las Vegas, NV 89154, USA}
\keywords{Binary stars (154), Asteroids (72), Extrasolar rocky planets (511), Interstellar objects (52), Exoplanet formation (492)}
\section{Introduction} \label{sec:intro}
Circumbinary gas disks (CBDs) with large misalignments relative to the binary orbital plane are commonly observed in nature \citep[e.g.][]{Chiang2004, Kohler2011, Andrews2014, Brinch2016, Fang2019, Takakuwa2017, Kennedy2019,Zhu2022,Kenworthy2022}. The degree of CBD misalignment often increases with binary separation and eccentricity \citep{Czekala2019}. Misalignments may initially arise as a result of turbulence in the molecular gas cloud \citep{Offner2010, Tokuda2014, Bate2012}, later accretion of material by the young binary \citep{Bates2010, Bate2018}, warping by a tertiary companion or a stellar flyby \citep{Nealon2020}, or formation of the binary from a cloud whose elongated axis is misaligned to its rotation axis \citep{Bonnell1992}.
The misaligned disk may precess as a solid body if the communication timescale is shorter than the precession timescale \citep{PT1995,Larwood1996}. As a result of dissipation, a viscous disk evolves towards either a coplanar or polar ($90^{\circ}$) alignment to the binary orbital plane \citep{Martin2017, Lubow2018, Zanazzi2018,Cuello2019} although depending on the binary and disk parameters, the timescale for alignment may be longer than the disk lifetime meaning that planet formation can take place in misaligned disks \citep[e.g.][]{Martin2018}.
The late stage of terrestrial planet formation takes place after Moon-size planetesimals and Mars-size embryos have formed and the gas disk has dispersed. These solid bodies interact with one another through purely gravitational interactions to form terrestrial planets through core accretion \citep{Artymowicz1987,Lissauer1993, Pollack1996}. Coplanar and polar circumbinary orbits are stationary states in which the particles do not undergo significant nodal precession. As a result, terrestrial planets can efficiently form in coplanar \citep{Quintana2006,Childs2021} and polar aligned \citep{Childs2021ApJ} circumbinary disks through core accretion. However, such terrestrial circumbinary planets (CBPs) have yet to be observed. While this may be attributed to observational bias against such small planets in a circumbinary orbit \citep{Windemuth2019,MartinDV2021,Standing2022}, it may also indicate that terrestrial planets do not form through core accretion in a circumbinary disk or that current core accretion models are missing key physics.
In a disk that is misaligned from a stationary state, nodal precession can lead to large mutual misalignments and collisions with large impact velocities that may result in ejection from the system.
Whether planets can form or not depends upon the misalignment and the binary eccentricity.
Terrestrial CBPs that do form end up either coplanar or polar to the binary orbital plane since mergers between bodies with random nodal phase angles lead to lower inclinations relative to the stationary states \citep{ChildsMartin2022}. While collisions were resolved with only perfect merging in \cite{ChildsMartin2022}, in this work we consider fragmentation as a more realistic outcome of such high-energy collisions.
`Oumuamua was the first confirmed interstellar asteroid (ISA) to be observed \citep{ChambersK2016}. `Oumuamua does not exhibit comet-like features, indicating its composition is more consistent with a refractory planetoid \citep{Jewitt2017, Ye2017, Meech2017}. This ISA has an unexpectedly low velocity relative to the local standard of rest (LSR), $\sim10 \, \rm km \, s^{-1}$ \citep{Mamajek2017}, and has a highly elongated shape \citep{Meech2017, Bolin2018}. The elongated shape and tumbling motion of this body suggest that it was involved in a violent collision in its past and was sent tumbling in its parent planetary system, indicating that collisions of solid bodies in other planetary systems are not uncommon \citep{Drahus2017, Fraser2018}. The currently observed mass of `Oumuamua is estimated to be about $10^{-17}\,M_{\oplus}$; however, if `Oumuamua is composed entirely of N$_{2}$ ice, it could have lost up to 92\% of its initial mass upon entering the solar system \citep{Desch2021, Jackson_2021}. \cite{Seligman_2020} proposed that if `Oumuamua contained a significant amount of H$_{2}$ ice, it was likely pancake shaped when it was near periapsis, and \cite{Mashchenko_2019} found that the light curve is consistent with such a shape. In these cases, the observed properties of `Oumuamua are not representative of its origins.
Various formation scenarios for `Oumuamua and other ISAs have been proposed such as ejections from a system as a result of tidal interactions with a white dwarf \citep{Rafikov2018}, ejections of fragments from tidally disrupted planets by a dense member of a binary system \citep{Cuk2018}, and ejections of a comet-like planetesimal from giant planet interactions \citep{Raymond2018}.
Binary stars have been suggested to dominate rocky body ejections from planetary systems over that from single stars \citep{Jackson2018}. Planetesimals are ejected when they migrate inside the stability limit of the binary although this may require the presence of other planets \citep{Fitzmaurice2022}.
In this letter, we propose that \textit{shortly after a highly misaligned circumbinary gas disk dissipates, solid bodies undergo violent collisions and become a source for ISAs such as `Oumuamua}. The highly inclined particles have no requirement to migrate close to the binary as ejections occur over a wide radial range. In Section~\ref{sec:dynamics} we first conduct three-body simulations to show how the initial disk misalignment and the binary eccentricity affect the particle mutual inclinations, and thus impact velocities. In Section~\ref{sec:N-body} we then conduct $n$-body studies of terrestrial CBP formation in highly misaligned CBDs and resolve collisions with fragmentation. We closely follow the collisions and the fate of the ejected material to better understand the nature of ISAs that are generated from misaligned CBDs. In Section~\ref{sec:ISA_formation} we discuss the implication of these results for ISAs. Lastly, we conclude with a summary of our findings in Section \ref{sec:Conlusions}.
\section{Circumbinary particle dynamics}\label{sec:dynamics}
A particle in a circumbinary orbit around an eccentric binary can undergo two types of nodal precession depending upon its initial inclination. For low initial tilt, the orbit is circulating, meaning that the particle orbit precesses around the binary angular momentum vector. If the initial inclination is above the critical value, the orbit will be librating, meaning that it precesses about the binary eccentricity vector \citep{Verrier2009,Farago2010,Doolin2011,Aly2015}.
The critical inclination depends upon the binary eccentricity and the angular momentum of the particle \citep{Martin2019,Chen2019}. In the test particle limit, the minimum critical inclination that separates circulating and librating orbits is given by,
\begin{equation}\label{eq:i_crit}
i_{\rm crit}=\sin^{-1} \sqrt{\frac{1-e_{\rm b}^2}{1+4e_{\rm b}^2}}
\end{equation}
\citep{Farago2010}. This critical inclination occurs for longitude of ascending node of $\phi=90^\circ$ measured in the frame of the binary \citep[see equation~(3) in][]{Chen2019}.
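Equation~(\ref{eq:i_crit}) is straightforward to evaluate; a minimal sketch:

```python
import numpy as np

def i_crit(e_b):
    """Minimum critical inclination (deg) separating circulating from
    librating circumbinary test-particle orbits, Equation (1)."""
    e_b = np.asarray(e_b, dtype=float)
    return np.degrees(np.arcsin(np.sqrt((1.0 - e_b**2) / (1.0 + 4.0 * e_b**2))))

print(i_crit(0.0))  # 90 deg: around a circular binary only exactly polar orbits librate
print(i_crit(0.8))  # ~18.5 deg: a 30 deg disk tilt already librates about the polar state
```

The steep drop of $i_{\rm crit}$ with $e_{\rm b}$ is why modest disk tilts around eccentric binaries can reach large mutual inclinations.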
We measure the particle misalignment with respect to one of the two stable configurations, coplanar or polar. The inclination of the particle orbit relative to the binary orbit is given by
\begin{equation}
i_{\rm b} =\textrm{cos}^{-1} (\uvec{l}_{\rm b} \cdot \uvec{l}_{\rm p}),
\end{equation}
and the inclination of the particle orbit relative the binary eccentricity vector is given by
\begin{equation}
i_{\rm e}=\textrm{cos}^{-1} (\uvec{e}_{\rm b} \cdot \uvec{l}_{\rm p}),
\end{equation}
where $\bm{l}_{\rm b}$ and $\bm{l}_{\rm p}$ are the angular momentum vectors of the binary and particle, respectively, $\bm{e}_{\rm b}$ is the eccentricity vector of the binary, and $\uvec{}$ denotes a unit vector. For orbits with $\phi=90^\circ$ initially, if the initial particle inclination is smaller than the critical inclination (circulating orbit), we measure $i_{\rm b}$ and if the particle inclination is larger than the critical inclination (librating orbits) we measure $i_{\rm e}$.
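Both tilt measures are dot products of unit vectors. The sketch below uses a hypothetical particle orientation chosen purely for illustration (binary angular momentum along $z$, eccentricity vector along $x$):

```python
import numpy as np

def tilt_deg(u, v):
    """Angle in degrees between two unit vectors, with a clipped dot product."""
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

# Hypothetical geometry: particle orbit tilted 60 deg from the binary plane
l_b_hat = np.array([0.0, 0.0, 1.0])   # binary angular momentum direction
e_b_hat = np.array([1.0, 0.0, 0.0])   # binary eccentricity vector direction
l_p_hat = np.array([np.sin(np.radians(60.0)), 0.0, np.cos(np.radians(60.0))])

i_b = tilt_deg(l_b_hat, l_p_hat)  # 60 deg from coplanar
i_e = tilt_deg(e_b_hat, l_p_hat)  # 30 deg from polar
```

The clip guards against round-off pushing the dot product marginally outside $[-1, 1]$.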
The maximum impact velocity for a collision between two particles orbiting at the same semi-major axis in circular orbits with Keplerian velocity, $v_{\rm K}$, can be estimated with
\begin{equation}\label{eq:vel}
v_{\rm max}=2 v_{\rm K} \sin \left ( i_{\rm max}/2 \right ) ,
\end{equation}
where $i_{\rm max}$ is the maximum value, over a nodal precession period, of the mutual inclination between the two particles, $i_{\rm m}$. In a colliding system with particles of different nodal precession rates, $i_{\rm max}$ is twice the maximum inclination a particle reaches over its nodal precession period, measured with respect to the stationary inclination about which it precesses (either coplanar or polar).
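Equation~(\ref{eq:vel}) gives a quick estimate of the impact speeds available in the simulated disks. The sketch below evaluates it for particles at $5\,a_{\rm b}=2.5$ au around a $1\,M_{\odot}$ binary with $i_{\rm max}=120^{\circ}$ (the C60-like case discussed later); the au/yr to km/s conversion factor is approximate.

```python
import numpy as np

G = 4.0 * np.pi**2        # gravitational constant in au^3 yr^-2 Msun^-1
AU_YR_TO_KMS = 4.74       # approximate conversion from au/yr to km/s

def v_max_kms(a_au, m_sun, i_max_deg):
    """Maximum impact speed, Equation (4), for circular orbits sharing a."""
    v_k = np.sqrt(G * m_sun / a_au) * AU_YR_TO_KMS
    return 2.0 * v_k * np.sin(np.radians(i_max_deg) / 2.0)

# Particles at 5 a_b = 2.5 au around a 1 Msun binary reaching i_m = 120 deg
print(v_max_kms(2.5, 1.0, 120.0))  # ~33 km/s, well above the ~15 km/s ejection regime
```
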
To probe the maximum mutual inclinations expected in a circumbinary disk as a function of binary eccentricity we conduct three-body simulations of a very low mass particle at an orbital radius of $5 \, a_{\rm b}$ from the barycenter of the three-body system. We change the eccentricity and initial inclination of the particle to the binary, $i_{\rm b0}$, and integrate for two full nodal precession periods (see equations 6--10 of \citealt{ChildsMartin2022}) using the WHFAST integrator in \textsc{rebound} \citep{Rein2012}. Initially the longitude of ascending node is $\phi=90^\circ$ in all cases.
Figure \ref{fig:countour_eb} shows the results from our three-body simulations. We plot the maximum inclination a single particle experiences relative to the axis about which it precesses (coplanar or polar), which is equivalent to $i_{\rm m}/2$, as a function of the particle's initial inclination to the binary, $i_{\rm b0}$, and the binary eccentricity, $e_{\rm b}$. We plot the critical inclination (Equation~\ref{eq:i_crit}) as a solid black line. We see that as the binary eccentricity increases, a particle requires less initial inclination to reach the maximum inclination away from a stable configuration. This is expected since the maximum inclination a particle experiences occurs near the critical inclination, which decreases as the binary eccentricity increases. This indicates that CBDs with relatively modest inclinations around highly eccentric binaries can harbor particles with high mutual inclinations and thus, colliding particles can experience high impact velocities. We do not expect this trend to change for binary systems with different mass ratios or different separations since these effects change only the timescale of the particle dynamics. The critical inclination and the particle dynamics depend only on the binary eccentricity and the angular momentum ratio of the particle to the binary \citep{Farago2010,Martin2019,Chen2019}.
\begin{table*}
\caption{The model name, binary eccentricity ($e_{\rm b}$), initial inclination above the binary plane ($i_{\rm b0}$), initial inclination away from a polar configuration ($i_{\rm e0}$), and the particle surface density fit ($\Sigma$) \citep[from Figure 2 in][]{Childs2021ApJ} used for our $n$-body simulations. We denote whether the particle orbits in the disk are initially circulating (C) or librating (L). We list the multiplicity of the terrestrial planetary system and the average and standard deviation of the planet properties. A planet is defined as a body with mass $M_{\rm p}\ge M_{\oplus}$. In the last column, we list the mean and standard deviation for the fraction of disk mass that is ejected in each run.
}
\begin{adjustwidth}[]{.5cm}{}
\resizebox{1\linewidth}{!}{
\hskip-4.0cm
\begin{tabular}{c|c|cccc|ccccc|c}
\hline
{Model} & {$e_{\rm b}$} & {$i_{\rm b0}^{\circ}$} & {$i_{\rm e0}^{\circ}$} & {$\Sigma$} & {C}/{L} & {\#} & {$M_{\rm p}/M_\oplus$} & {$a_{\rm p}/ \rm au$} & {$e$} & {$i_{b/ \rm e}^{\circ}$} & {$M_{\rm e}/M_{\rm d}$} \\
\hline
C30 & 0.0 & 30.0 & 60.0 & CC & C & 1.82 $\pm$ 0.48
& 1.76 $\pm$ 0.55
& 2.25 $\pm$ 0.38
& 0.05 $\pm$ 0.03
& 4.97 $\pm$ 2.64 & 0.07 $\pm$ 0.05\\
C60 & 0.0 & 60.0 & 30.0 & CC & C & 0 & - &- &- &-& 0.81 $\pm$ 0.21\\
E30 & 0.8 & 30.0 & 60.0 & EP & L & 0 & - &- &- &-& 0.79 $\pm$ 0.22\\
E60 & 0.8 & 60.0 & 30.0 & EP & L & 1.62 $\pm$ 0.57
& 1.93 $\pm$ 0.66
& 2.32$\pm$ 0.48
& 0.07 $\pm$0.04
& 4.51 $\pm$2.73& 0.13 $\pm$ 0.06\\
\hline
\end{tabular}
}
\end{adjustwidth}
\label{tab:systems}
\end{table*}
\section{Terrestrial circumbinary planet formation}\label{sec:N-body}
We now explore simulations of terrestrial planet formation in the inner parts of an initially misaligned circumbinary disk including the effects of fragmentation. The fragmentation code we use is detailed in \cite{ChildsSteffen2022}. We distribute 26 Mars-sized embryos ($m \approx 0.1 \, M_{\oplus}$) and 260 Moon-sized planetesimals ($m \approx 0.01 \, M_{\oplus}$) along the fits from SPH simulations detailed in \cite{Childs2021}. This bimodal mass distribution is adopted from $n$-body simulations that were successfully able to recover the masses of the solar system terrestrial planets \citep{Chambers2001}.
Unlike WHFAST, the IAS15 integrator in \textsc{rebound} is a high-precision non-symplectic integrator that is able to resolve close encounters and collisions \citep{Rein2015}. This feature is necessary for modeling core accretion of the terrestrial planets. To overcome the excessive CPU time associated with a non-symplectic integrator, we apply an expansion factor of $f=25$ to all the particles after integrating the system for $100 \, \rm kyr$, by which time the phase angles of the particles have randomized. \cite{Childs2021} performed convergence tests with $f=25$ and smaller expansion factors in $n$-body simulations of CBP terrestrial planet formation using the IAS15 integrator. They found that while larger expansion factors lead to some differences in system architecture, the general planet formation trends that emerge as a function of binary eccentricity and separation remain. Furthermore, \cite{ChildsSteffen2022} studied the effects of expansion factors with fragmentation. Their findings indicate that our use of an expansion factor will lead to shorter collision timescales and more damped orbits, which are more likely to underestimate impact velocities in collisions.
We set the minimum fragment mass to half the size of the Moon ($m \approx 0.005 \, M_{\oplus}$). Fragments are expected to be much smaller, but we choose this value to reduce the CPU time of the simulations. While this fragment mass is orders of magnitude larger than `Oumuamua, it should be viewed as an upper limit as fragment producing collisions will produce a wide distribution of fragment sizes \citep{Leinhardt2012}.
The orbital elements for each body are randomly chosen in each run. We consider perfectly circular binaries with $e=0.0$ and highly eccentric binaries with $e=0.8$ with particle disks that are initially inclined $30^{\circ}$ and $60^{\circ}$ above the binary orbital plane. A particle is considered ejected from the system once its distance from the barycenter of the system exceeds $100 \, \rm au$. The binary consists of equal mass stars with a total binary mass of $1 \, M_{\odot}$ separated by $0.5 \, \rm au$. We perform 50 runs for each setup and integrate for a total of $7 \, \rm Myr$.
The different binary models and their corresponding binary eccentricities are listed in Table \ref{tab:systems} and are marked by red triangles in Fig.~\ref{fig:countour_eb}. The initial surface density profile is taken to be that of steady circumbinary gas disk as described in \cite{Childs2021ApJ}. At least initially, both particle disks around the circular orbit binary are in circulating orbits while around the eccentric orbit binary they are librating.
Our simulations reveal that the C60 and E30 systems eject the most material and do so throughout the entirety of the simulation. The sustained ejection rates indicate that particles are not just being quickly ejected in the inner, unstable region of the disk close to the binary as a result of strong binary-particle interactions \citep[e.g.][]{Holman1999,Chen2020}. Particles found at larger, initially more stable, orbits also get ejected on longer timescales as a result of particle-particle interactions. On average, the C60 and E30 systems eject $\sim80\%$ of their disk mass and the C30 and E60 systems eject $\sim1\%$ of their disk mass by the end of our simulations. The large difference between these ejection percentages is the result of the different mutual inclinations the particles reach in the runs. In Figure \ref{fig:countour_eb} we see that the C60 and E30 particles can reach maximum mutual inclinations of $i_{\rm m} = 120^{\circ}$ and $i_{\rm m} = 132^{\circ}$, respectively, for circular orbits. Such large mutual inclinations will result in high impact velocities when a collision takes place, which is likely to result in mass ejection. The C30 and E60 particles reach maximum mutual inclinations of $i_{\rm m} = 60^{\circ}$ and $i_{\rm m} = 64^{\circ}$, respectively, which will result in much lower impact velocities and less mass being ejected from the system.
Because of the high ejection rates in C60 and E30, no terrestrial planets are formed in these systems. On average, the C30 and E60 systems form at least one terrestrial planet with a mass greater than $1 \, M_{\oplus}$ that is nearly circular and coplanar to the circular binary or polar to the eccentric binary.
The final planetary systems with fragmentation are similar to the final planetary systems of \cite{ChildsMartin2022}, who modeled planet formation in similar misaligned CBDs but resolved collisions with only perfect merging. They did not consider a system analogous to E60, but our E60 planetary systems closely resemble those formed in the C30 runs since both are inclined by $30^\circ$ to a stationary inclination. The planet eccentricities and inclinations are slightly damped, relative to the planets formed with only perfect merging, due to dynamical damping from the fragments. A notable difference between the planetary systems formed with perfect merging and with fragmentation is the formation timescale. In agreement with \cite{Quintana2016}, who compared planet formation in systems with and without fragmentation, we find that fragmentation approximately doubles the formation and CPU time but results in planetary systems similar to those formed with only perfect merging.
\section{Formation of interstellar asteroids}\label{sec:ISA_formation}
The left column of Figure \ref{fig:v_semi} shows the impact velocity versus the semi-major axis for all the collisions that took place in one run of each model. The pluses mark collisions that lead to ejections. We also plot curves that represent the circular orbit maximum impact velocity, Equation \ref{eq:vel}, for each model. We use the maximum inclination the particle reached in our three-body simulations as $i_{\rm m}/2$ to predict the maximum impact velocity in systems with the same binary setup. We see that the majority of the collisions in each model lie near the circular orbit impact velocity curves. Particles on eccentric orbits can produce impact velocities larger than $v_{\rm max}$. We see that collisions which lead to ejections are found over a wide range of velocities but typically occur when the impact velocity is greater than $\sim 15 \, \rm km \, s^{-1}$.
When the kinetic energy (KE) of a body is less than or equal to its potential energy (PE), it will remain gravitationally bound to the binary. The right column of Figure \ref{fig:v_semi} shows the KE=PE line for a range of masses and radii with a black dashed line. We expect collisions on and below this line to remain bound to the binary, and collisions above this line to lead to ejection from the system. This line also corresponds to $v_{\rm i}=2v_{\rm k} \rm sin 45^{\circ}$. Using data from one run in each setup, we plot the PE and KE of each collision using the total mass of the colliding system, the impact velocity, and the last recorded semi-major axis of the target. Collisions that involve bodies that are eventually ejected from the system are marked with a plus. We also plot the line with slope $\frac{KE}{PE}$ using the circular orbit maximum impact velocity for each system, over a range of radii and masses. CBDs with particles at $45^{\circ} \leq i_{\rm m}/2$ will experience collisions with $1 \leq \frac{KE}{PE}$, which lead to ejection from the system unless dynamics of the multi-body system prevent this. The KE versus PE lines for the C60 and E30 systems, which have circular orbit maximum impact velocities of $v_{\rm i}=2 v_{\rm k} \rm sin 60^{\circ}$ and $v_{\rm i}=2 v_{\rm k} \rm sin 66^{\circ}$ respectively, lie above the KE=PE line, where $\sim 80 \%$ of the ejections are found. The lines for the C30 and E60 systems, which have circular orbit maximum impact velocities of $v_{\rm i}=2 v_{\rm k} \rm sin 30^{\circ}$ and $v_{\rm i}=2 v_{\rm k} \rm sin 32^{\circ}$ respectively, lie below the KE=PE line, explaining the low number of ejections we observe in these systems. These $i_{\rm m}/2$ values are taken from the corresponding binary systems in Figure \ref{fig:countour_eb}, which are marked by red triangles.
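The ejection criterion can be illustrated numerically. A minimal sketch, assuming circular Keplerian orbits around a binary of total mass $2\,{\rm M}_\odot$; the 3 au example semi-major axis and the function names are illustrative assumptions, not values from the simulations:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

def impact_velocity(a_m, i_m_deg, m_binary=2.0 * M_SUN):
    """Max impact velocity of two circular Keplerian orbits with
    mutual inclination i_m: v_i = 2 v_k sin(i_m / 2)."""
    v_k = math.sqrt(G * m_binary / a_m)
    return 2.0 * v_k * math.sin(math.radians(i_m_deg) / 2.0)

def is_ejected(a_m, i_m_deg, m_binary=2.0 * M_SUN):
    """KE > PE criterion: ejection requires v_i > sqrt(2) v_k,
    i.e. i_m > 90 deg (the v_i = 2 v_k sin 45 deg line)."""
    v_i = impact_velocity(a_m, i_m_deg, m_binary)
    v_esc = math.sqrt(2.0 * G * m_binary / a_m)
    return v_i > v_esc

a = 3.0 * AU
print(f"{impact_velocity(a, 60.0) / 1e3:.1f} km/s")  # roughly 24 km/s here
print(is_ejected(a, 60.0), is_ejected(a, 120.0))     # bound vs ejected
```

For these parameters a mutual inclination of $60^{\circ}$ leaves the colliding pair bound, while $120^{\circ}$ exceeds the $90^{\circ}$ threshold and leads to ejection, consistent with the KE=PE line described above.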
The largest remnant of a collision that results in fragmentation is $M_{\rm lr}$. The formula for calculating $M_{\rm lr}$ is taken from \cite{Leinhardt2012} and is a function of the impact energy and the impact angle. As expected, the systems that are inclined $60^{\circ}$ away from a stable configuration (C60 and E30) experience collisions with the highest impact velocities, which result in smaller values of $M_{\rm lr}$. Although the resolution of our simulations is limited by the minimum fragment mass we define, we calculate the true $M_{\rm lr}$ for all collisions. The smallest mass of the largest remnant calculated in our simulations (although not included in the simulation due to the lower limit on the fragment mass) is $2 \times 10^{-6} \, M_{\oplus}$. While this is still orders of magnitude larger than the expected mass of `Oumuamua, it is the largest remnant expected from a collision that will also produce a distribution of smaller fragments. The most massive body ejected was $0.91 \, M_{\oplus}$, in an E60 run, and so we expect a large distribution of fragment masses in misaligned CBDs.
Using our results, we can place an upper limit on the fraction of binaries in the galaxy that are likely to eject the majority of their terrestrial planet building material.
\cite{Moe2017} compiled observations of early-type binaries to quantify the distributions of binary eccentricities. Close-in binaries with separations $\leqslant 0.4 \, \rm au$ have small eccentricities $\leqslant0.4$ due to tidal circularization, while more widely separated binary eccentricities are weighted to larger values. However, here we assume that binary eccentricity is uniformly distributed in the range 0.0 - 0.8 \citep{Raghavan2010} and that the initial orientation of the CBD is uniformly distributed for simplicity.
A CBD ejects most of its solid material when the mutual inclination between colliding bodies becomes greater than $90^{\circ}$, corresponding to $v_{\rm i}=2v_{\rm k} \rm sin 45^{\circ}$, the PE=KE black line in Figure \ref{fig:v_semi}. The probability, $p$, that a binary ejects most of its solid material can be estimated as the fraction of orbits for which a particle in a CBD around the binary reaches a maximum inclination, relative to either the coplanar or the polar configuration, greater than $45^{\circ}$.
Figure \ref{fig:phase_plots} shows precession paths of a particle angular momentum vector in the frame of the binary from our three-body simulations, for nine different binary eccentricities.
To find $p$, we compute the fraction of the sphere's surface area corresponding to paths along which the particle inclination is at some point greater than $45^{\circ}$, given by
\begin{equation}
p = 1 - \frac{(2A_{\rm e1} + 2A_{\rm e2})}{4\pi},
\end{equation}
where $A_{\rm e1}$ is the surface area of a bold green curved ellipse,
which corresponds to the phase space where the particle's inclination is always less than $45^{\circ}$ from coplanar, and $A_{\rm e2}$ is the surface area enclosed by the bold and dashed purple curved ellipses, which corresponds to the phase space where the particle's inclination is always less than $45^{\circ}$ from polar. To calculate the curved area of the ellipses we consider the edge as the cross section of an elliptical cylinder with the sphere. The surface area of the curved ellipse is then given by
\begin{equation}
A_{\mathrm e} = 2 \pi a^2 - 4 a^2 \textrm{sin}^{-1} \left ( \frac{\sqrt{ a^2 - b^2}}{a} \right ),
\end{equation}
where $a$ and $b$ are the semi-major and minor axes, respectively, of the elliptical cylinder.
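A minimal numerical sketch of the two equations above (function names are illustrative; the arcsine argument is taken as $\sqrt{a^2-b^2}/a$, which gives the correct limits $A_{\rm e}=2\pi a^2$ for $b=a$, a hemisphere, and $A_{\rm e}=0$ for $b=0$):

```python
import math

def curved_ellipse_area(a, b):
    """Surface area on a sphere of radius a bounded by an elliptical
    cylinder with semi-axes a and b (b <= a):
    A_e = 2 pi a^2 - 4 a^2 asin(sqrt(a^2 - b^2) / a)."""
    return 2.0 * math.pi * a**2 - 4.0 * a**2 * math.asin(math.sqrt(a**2 - b**2) / a)

def ejection_probability(b1, b2, a=1.0):
    """p = 1 - (2 A_e1 + 2 A_e2) / (4 pi a^2), on the sphere of particle
    angular-momentum directions; b1 and b2 are the semi-minor axes of the
    coplanar and polar 45-degree regions (illustrative inputs)."""
    A1 = curved_ellipse_area(a, b1)
    A2 = curved_ellipse_area(a, b2)
    return 1.0 - (2.0 * A1 + 2.0 * A2) / (4.0 * math.pi * a**2)

# Limiting cases: regions of zero width eject everything (p = 1);
# wider bound regions shrink p.
print(ejection_probability(0.0, 0.0))   # 1.0
print(ejection_probability(0.5, 0.5))   # smaller, between 0 and 1
```

The factor of two on each area accounts for the two antipodal copies of each bound region (prograde/retrograde sides of the sphere).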
In Figure \ref{fig:phase_plots} we list the $p$ value for each binary eccentricity. Since we assume a uniform distribution of binary eccentricity, we take the average of all the $p$ values to find the fraction of binaries in the galaxy that are likely to eject most of their disk material. We find $\Bar{p}=0.59$, meaning that under these assumptions more than half of the binaries in the galaxy are more likely to eject their terrestrial planet disk mass, and produce ISAs, than to form terrestrial planets. Additionally, we find that binaries with eccentricities greater than 0.4 have the same probability, $p=0.54$, of ejecting most of their disk mass.
This is a strict upper limit to the fraction because we have assumed an initially isotropic orientation of the particle disk, equal to the gas disk orientation at the time of disk dispersal. However, the gas disk may evolve towards either coplanar or polar alignment for non-zero disk viscosity. Depending on the binary and disk parameters, the alignment timescale can be longer than the disk lifetime and so the planetesimal disk may form in a misaligned disk. We note that the two currently observed polar circumbinary gas disks have external companions \citep{Kennedy2019,Kenworthy2022} that truncate the outer part of the circumbinary disk, leading to a radially narrow disk and a short alignment timescale \citep{Martin2022}. The timescale for gas disk alignment also depends upon a number of other binary and disk parameters, including the binary semi-major axis, binary eccentricity, disk viscosity and disk temperature \citep[e.g.][]{Lubow2018}. Gas disk alignment does not proceed in a strictly linear fashion, and even a small initial misalignment can lead to very large misalignments during the disk evolution \citep{Smallwood2019}. Because of the complexity of this problem, we leave a more detailed investigation to future work.
\section{Conclusions}\label{sec:Conlusions}
We conducted a suite of $n$-body simulations around circular and highly eccentric, equal mass binaries separated by $0.5 \, \rm au$. The circumbinary particle disks were inclined by either $30^{\circ}$ or $60^{\circ}$ above the binary orbital plane. We resolved all collisions with fragmentation, which allowed us to analyze the collisions and better understand the post-collision mass and velocity distributions.
We found that around highly eccentric binaries, CBDs with mild initial misalignment can develop large mutual inclinations between the particles. The impact velocity between two bodies on circular Keplerian orbits with mutual inclination $i_{\rm m}$ is given by $v_{\rm i}=2 v_{\rm k} \mathrm{sin} (i_{\rm m}/2)$. CBDs that harbor particles with mutual inclinations greater than $90^{\circ}$ produce particle collisions with kinetic energies greater than the potential energy binding the colliding system to the central binary, which results in ejection from the system unless dynamics of the multi-particle disk inhibit this. This mechanism is an efficient source of ISAs with a wide range of sizes and velocities. These ISAs will have characteristics consistent with a violent past, such as the small size, elongated shape, and tumbling motion of `Oumuamua. ISAs formed in this way will be mostly rocky in composition, since terrestrial planets are expected to form inside the snow line radius.
Assuming a uniform distribution of binary eccentricity and an isotropic distribution of the disk orientation relative to the binary, we find that 59\% of binaries in the galaxy are more likely to eject their CBD terrestrial material through high velocity particle-particle collisions than to retain this material and build terrestrial planets. This is an upper limit to the fraction, since a non-zero disk viscosity during the gas disk phase drives the disk towards either coplanar or polar alignment depending upon the initial misalignment. These findings help place constraints on occurrence rates for both ISAs and terrestrial CBPs.
\begin{acknowledgements}
We thank the anonymous referee for useful comments that improved
the manuscript. We thank Stephen Lepp, Charlie Abod and Ian Rabago for help with Figure 3. We acknowledge support from NASA through grant 80NSSC21K0395. Computer support was provided by UNLV’s National Supercomputing Center.
\end{acknowledgements}
\bibliography{ref}{}
\bibliographystyle{aasjournal}
Title:
RR Lyrae stars in the globular cluster Palomar 2 |
Abstract: A CCD VI imaging time-series spanning 11 years is employed to explore the light
curves of stars in the field of Palomar 2. We discovered 20 RRab and 1 RRc
variables. A revision of Gaia-DR3 data enabled us to identify 10 more variables
and to confirm the RRab nature of 6 of them and one RGB. The cluster membership is
discussed; 18 variables are most likely cluster members. The Fourier light
curve decomposition of the 11 best quality light curves of cluster member
stars leads to independent estimates of the cluster distance, 27.2 +- 1.8 kpc,
and [Fe/H]ZW = -1.39 +- 0.55. We confirm the cluster to be of Oo I type.
https://export.arxiv.org/pdf/2208.07849
\section{Introduction}
\label{intro}
The globular cluster Palomar 2 is a distant ($\sim$30 kpc) stellar system in the direction of the Galactic anticenter and close to the Galactic plane ($l = 170.53^{\circ}$, $b = -9.07^{\circ}$). It is buried in dust with $E(B-V) \sim 0.93$ and shows evidence of differential reddening \citep{Bonatto2020}. It is therefore a faint cluster, with the HB at about $V \sim 21.5$ \citep{Harris1996}. Most likely due to its faintness, no variables in the cluster have ever been reported.
In the present paper we take advantage of an 11-year long time-series of CCD \emph{VI} data, analyzed with the standard Difference Imaging Analysis (DIA), to explore the light curves of nearly 500 stars in the field of view (FoV) of the cluster. We have found 21 new RR Lyrae stars (V1-V14 and SV1-SV7 in Table \ref{variables}). In conjunction with the $Gaia$-DR3 variability index, we confirm the RRab nature of 6 more stars (G3, G11, G12, G13, G18 and G23), plus 1 RGB (G17), for a total of 28 variables in the field of view of our images. In what follows, we argue in favour of the membership of 18 of them and present their light curves and ephemerides. The mean distance and [Fe/H] of the cluster are calculated via the Fourier decomposition of the RRab stars with the best quality light curves.
\section{Observations and Data Reductions}
\label{observations}
The data were obtained between December 12, 2010 and February 12, 2021 with the 2.0-m telescope at the Indian Astronomical Observatory (IAO), Hanle, India. The detector used was a SITe ST-002 2Kx4K with a scale of
0.296 arcsec/pix, for a field of view of approximately 10.1$\times$10.1 arcmin$^2$.
Between October 14, 2018 and February 17, 2020 the detector used was a Thompson
grade 0 E2V CCD44-82-0-E93 2Kx4K, also with a scale of 0.296 arcsec/pix and a FoV of approximately 10.1$\times$10.1~arcmin$^2$.
A total of 197 and 240 images were obtained in $V$ and $I$ filters, respectively.
\subsection{Difference imaging analysis}
The image reductions were performed employing the Difference Imaging Analysis (DIA) software, with its pipeline implementation DanDIA (\citealt{Bramich2008}; \citealt{Bramich2013,Bramich2015}), to obtain high-precision photometry of all the point sources in the field of view (FoV) of our CCD. This allowed us to construct an instrumental light curve for each star. For a detailed explanation of the use of this technique, the reader is referred to the work by \citet{Bramich2011}.
\subsection{Transformation to the standard system}
Since two different detectors were used for the observations described in the previous section, we treated the transformation to the standard system independently for the two instruments. Otherwise, the procedure was the standard one described in detail in previous publications. In summary, we used local standard stars taken from the catalog of Photometric Standard Fields \citep{Stetson2000} to set our photometry onto the \emph{VI} Johnson-Kron-Cousins standard photometric system \citep{Landolt1992}.
The transformation equations carry a small but mildly significant colour term and are of the form: $V-v = A (v-i)+B$ and $I-i = C (v-i)+D$ for each filter respectively. The interested reader can find the details of this transformation approach in \citet{Yepez2022}.
\section{Star membership using $Gaia$-eDR3}
\label{gaia}
We have made use of the latest data release $Gaia$-DR3 \citep{Gaia_edr32021} to perform a membership analysis of the stars in the field of Pal 2. To this end, we employed the method of \citet{Bustos2019}, which is based on
the Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) algorithm developed by \citet{Zhang1996}. The method and our approach to it have been described in a recent paper by \citet{Deras2022}. We recall here that our method is based on a clustering algorithm at a first stage and a detailed analysis of the residual overdensity at a second stage; member stars extracted in the first stage are labeled M1, and those extracted in the second stage are labeled M2. Stars without proper motions were retained labeled as unknown membership status or UN.
The analysis was carried out for a 10 arcmin radius field centered on the cluster. We considered 1806 stars with measured proper motions, of which 407 were found to be likely members. Out of these, only 288 were in the FoV of our images, for which we could produce light curves.
From the distribution of the field stars in the phase space we estimated the number that would be expected to lie in the same region of the sky and of the VPD as the extracted members, and that could therefore have been erroneously labelled as members. Within the M1 stars the resulting expected contamination is 36 (11\%) and within the M2 stars it is 87 (7\%); therefore, for a given extracted star the probability of being a cluster member is 89\% if it is labeled M1, or 93\% if it is labeled M2.
\section{Differential Reddening and the CMD}
\label{CMD}
Palomar 2 is a heavily reddened cluster subject to substantial differential reddening, as is evident in the crowded and deep HST Color Magnitude Diagram (CMD) shown by \citet{Sarajedini2007}. A thorough treatment of the differential reddening in the cluster enabled \citet{Bonatto2020} to produce a reddening map, which these authors have kindly made available to us. In Figure \ref{CMD_Pal2} the observed CMD and its dereddened version are shown. To deredden the CMD, the differential reddening map was added to a foreground reddening of $E(B-V)=0.93$.
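The dereddening step can be sketched as follows, assuming a generic $R_V=3.1$ extinction law with $A_V = 3.1\,E(B-V)$ and $E(V-I) \approx 1.28\,E(B-V)$; these coefficients are standard assumptions for illustration, not necessarily the exact values adopted with the reddening map:

```python
def deredden(V, VI, ebv_foreground=0.93, delta_ebv=0.0):
    """Return the dereddened V magnitude and (V-I) colour for one star.

    delta_ebv is the star's differential reddening from the map,
    added to the cluster's foreground E(B-V) = 0.93.
    """
    ebv = ebv_foreground + delta_ebv
    V0 = V - 3.1 * ebv       # A_V = R_V * E(B-V), R_V = 3.1 assumed
    VI0 = VI - 1.28 * ebv    # E(V-I) ~ 1.28 E(B-V) assumed
    return V0, VI0

# A star at the HB level, V ~ 21.5, brightens to V0 ~ 18.6
# with the foreground value alone:
V0, VI0 = deredden(21.5, 1.5)
```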
\begin{table*}
\scriptsize
\begin{center}
\caption{Data of variable stars in the FoV of our Pal 2 images.}
\label{variables}
\begin{turn}{90}
\begin{tabular}{cccccccccccc}
\hline
ID& Gaia & Type & P &E$_0$&$V$ & $V$ Amp& RA & DEC & $P_{\rm Gaia}$ & Membership& Gaia \\
&variable & & (d)& (+2450000)& (mag)&(mag)& (J2000.0)&(J2000.0)& (d)& status& number \\
\hline
V1& & RRab& 0.542848 &6312.3363&20.534&0.805& 4:46:03.57 &+31:22:45.8& & M1 &159504640014524672\\
V2& G5& RRab& 0.551396& 5542.2114 &21.342 &1.056 & 4:46:04.60& +31:23:41.5& 0.5513624 & M1& 159504747388302336\\
V3& & RRab& 0.554363 &6948.4976&21.792&0.951&4:46:05.53 &+31:23:29.0& & M1& 159504747388520064\\
V4& G14& RRab& 0.651889 &5912.2228&21.413 &0.814 & 4:46:05.61 &+31:23:43.2 &0.6518656 & M1& 159504747387726464\\
V5 & G4 &RRab& 0.511639 &8896.2470&21.382&0.997 &4:46:07.02& +31:23:13.5 &0.5067667 & M2& 159504678667943552\\
V6 & G21&RRab& 0.553259& 9258.3356&21.461&1.168 &4:46:07.82& +31:23:07.7& 0.5532034 & M2& 159504678668831872\\
V7 & G16&RRab& 0.655812& 8407.3827&20.925&0.914 &4:46:08.11 &+31:23:37.1 &-- & M1& 159504678667937024\\
V8 & G7&RRc& 0.373408 &5542.2114&20.757 &0.548 &4:46:08.06 &+31:22:21.7& --& M1& 159501689370744192\\
V9 & G6 &RRab& 0.629619 &8896.1493&21.521&0.787 &4:46:08.24 &+31:23:09.3& 0.6129630 & M1& 159504674373384320\\
V10& G8&RRab& 0.685890&5912.3072&20.700 &0.512 &4:46:09.11& +31:22:38.0 &0.6858277& M1& 159501723731340288\\
V11& G19&RRab& 0.575280& 6222.3870&20.673 &0.842 & 4:46:10.58 &+31:22:35.0& 0.5752915 & M1&159501719435472896\\
V12 &G9&RRab& 0.583630& 6633.3246&20.894 &0.603 & 4:46:12.82 &+31:22:26.3& 0.5953860 & M1& 159501650715992064\\
V13 & & RRab& 0.546972& 6948.4441&21.327 &0.887 &4:46:07.17 &+31:23:15.5& & M2& 159504678668829184\\
V14 &G1 &RRab& 0.574697 &6948.4591&21.842 &1.610 &4:46:07.21 &+31:22:47.2 &0.5513435 & M2 & 159504678667961856\\
V15&G12 &RRab& 0.508471& 8781.4301 &20.918 &0.323 & 4:46:05.00 &+31:22:52.9 &-- & M1& 159504644308236672\\
V16 &G13 &RR?&0.490213 &5912.1144&19.179 &0.330 &4:46:04.64 &+31:22:42.0& -- & M1& 159504644308250624\\
V17 & G17&RGB& & &19.0 &0.9 &4:46:02.96& +31:23:09.2& -- & M1 &159504708733123200\\
V18 & G11 &RR?& 0.510211 &5912.1144&18.876 &0.768 &4:46:05.85 &+31:23:03.3& --& M1& 159504644308215808\\
SV1 &&RRab& 0.588566& 6634.1554&21.267 &1.024 & 4:46:04.22 &+31:22:34.8& & UN &159504644309111808\\
SV2 &&RRab &0.537325& 8406.4629&21.876 &1.299 & 4:46:06.39 &+31:23:54.0 & & UN &159504747388298112\\
SV3& & RRab & 0.661914& 5868.4136&21.517 &1.077 & 4:46:03.96 &+31:23:16.2& & FS&159504713028573696\\
SV4& & RRab & 0.587210 &8407.3175&21.585&1.363 & 4:46:06.56 &+31:23:27.2 & &FS& 159504674373556992\\
SV5& G15&RRab & 0.490941 &6221.4206&21.312&1.391 & 4:46:09.04 &+31:23:12.8 &0.4909349& FS& 159504678668828160\\
SV6& G10&RRab & 0.570669 &6946.4683 &20.840&0.960 & 4:46:12.31& +31:22:45.3 &0.5706582& FS& 159501723731332480\\
SV7& &RRab&0.551215& 6634.1714&19.274&1.371 &4:46:13.65 &+31:24:11.5& & FS& 159506190497880832\\
&G3 & RRab& 0.531512 &6633.3479&20.486 &0.769& 4:45:57.72 &+31:24:19.0& 0.5242873& FS& 159504987906469248\\
&G18& RRab& 0.562320& 6223.3662&20.700 &0.576&4:46:09.50 &+31:23:01.9& 0.5623196&FS &159501723732926208 \\
&G23&RRab& 0.595453& 6633.3810&21.178 &1.104 & 4:45:59.23 &+31:22:53.4& 0.56065912& FS& 159504609949939072 \\
&G2$^1$ &&&&&& & & &&159501655012584064\\
&G20$^2$& & & &21.513 & &4:45:56.77& +31:21:09.0&0.59148323& FS &159504128913233536\\
&G22$^1$ &&&&&&&\\
\hline
\hline
\end{tabular}
\end{turn}
\end{center}
\raggedright
\center{
1. Out of our FoV. 2. Not measured by our photometry.
}
\end{table*}
\section{The variable stars in Pal 2}
\label{var_star}
No variable stars in Pal 2 have been reported thus far. The case of Pal 2 is a particularly challenging one since
the cluster is not only distant but also lies behind a heavy dust curtain, with its horizontal branch (HB) located below 21 mag. We have occasionally taken CCD \emph{VI} images of Pal 2 between 2010 and 2021, and we have taken advantage of this image collection to search for variables in the FoV of the cluster. We were able to measure 400-500 point sources in the $V$ and $I$ images, spanning the range in magnitude and colour shown in the left panel of Fig. \ref{CMD_Pal2}. With the HB located at the bottom of the stellar distribution, we are in fact working at the very limit of our photometry in order to detect cluster member RR Lyrae.
To search for variability we proceeded as follows.
By using the string-length method (\citealt{Burke1970}, \citealt{Dworetsky1983}), we phased each light curve in our data with trial periods between 0.2 d and 1.0 d, a range adequate for RR Lyrae stars, in steps of $10^{-6}$ d. For each phased light curve, the length of the line joining consecutive points, called the string-length and represented by the parameter $S_{Q}$, was calculated. The best phasing occurs when $S_{Q}$ is minimum, and corresponds to the best period that our data can provide. A detailed visual inspection of the best phased light curve helped to confirm the variability of some stars.
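The string-length search can be sketched as follows; the period step here is much coarser than the $10^{-6}$ d used on the real data to keep the example fast, and the noiseless sinusoidal test light curve is purely illustrative:

```python
import numpy as np

def string_length(t, mag, period):
    """S_Q for one trial period: phase the light curve and sum the
    lengths of the segments joining consecutive (phase, mag) points."""
    phase = (t / period) % 1.0
    order = np.argsort(phase)
    p, m = phase[order], mag[order]
    return float(np.sum(np.hypot(np.diff(p), np.diff(m))))

def best_period(t, mag, pmin=0.2, pmax=1.0, step=1e-4):
    """Return the trial period that minimizes S_Q."""
    periods = np.arange(pmin, pmax, step)
    sq = np.array([string_length(t, mag, p) for p in periods])
    return float(periods[np.argmin(sq)])

# Illustrative check on a synthetic light curve with a 0.55 d period,
# randomly sampled over a 100 d baseline:
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 200))
mag = 0.5 * np.sin(2.0 * np.pi * t / 0.55)
p_best = best_period(t, mag)
```

At the true period the phased points trace a smooth curve and the string is short; at wrong trial periods the points scatter and the string lengthens, so the minimum of $S_Q$ recovers the period.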
We noticed, however, that the seasonal scatter of a light curve could vary considerably, depending mainly on the prevailing seeing conditions and the crowding around a particular star, a situation that worsens near the core of the cluster. Therefore, it may happen that in some seasons the light curve variation is dubious but extremely clear in the runs of best quality, which turned out to be those from the 2013 and 2018-2020 seasons.
With the above method we discovered 21 RR Lyrae variables, mostly of the RRab type. Comparing with the membership analysis described in $\S$ \ref{gaia}, we concluded that 14 of them are likely cluster members.
The latest $Gaia$-DR3 enables us to search for stellar variability flags in the field of Pal 2; $Gaia$ flags 22 variables there. A cross-match with our list of variables shows 12 matches; we found some variables not flagged by $Gaia$ and, {\it a posteriori}, we confirmed the variability of a few $Gaia$ sources not previously detected by us.
In Table \ref{variables} we list the 32 variables in the field of Pal 2. The table is organized as follows.
We have given the name with a prefix "V" only to those stars that seem likely cluster members (status M1 or M2), 18 in total, V1-V18.
Arbitrarily, we identified the $Gaia$ variables as G1-G22. This identification is listed in column 2.
In the bottom 14 rows of Table \ref{variables} we list the likely non-members (status FS). For non-member variables detected by us, we used the nomenclature with the prefix "SV".
\subsection{Variables in the CMD}
In the CMD in the right panel of Fig. \ref{CMD_Pal2}, all variable stars are circled in red if they are cluster members, or in black otherwise. As a reference, we include two isochrones from the models of VandenBerg et al. (2014)
for [Fe/H]=$-1.6$ and $-2.0$, and a theoretical horizontal branch built by \citet{Yepez2022}. The isochrones and HB were
placed at a distance of 26.1 kpc \citep{Bonatto2020}. It is heartening to see that nearly all the RR Lyrae stars fall in the vicinity of the HB.
In the following section we address some peculiar individual cases.
\subsection{Individual cases}
V1. Its position on the CMD above the HB and in the mid-RGB is intriguing since the light curve and period suggest this star to be a member RRab star. An alternative possibility is that the star is a binary. Our data are not sufficient to explore this possibility.
V16, V17 and V18. Their positions on the CMD near the tip of the RGB suggest that these stars are red giant variables. However, our photometry was not intensive enough to confirm long-term variability. Alternatively, we were able to identify short-term variations in V16 and V18 (see Fig. \ref{membervars}). The light curve of V17 is in fact consistent with that of a long-term RGB variable.
SV1. It is a clear RRab star falling well to the red of the HB. The star is not a cluster member.
SV7. We have detected clear RRab-like variations in our $V$ data. However, no variation is seen in the $I$ data. While the variations might be spurious, we retain the star as a candidate variable to be confirmed.
SV4, SV5, SV6 and G23. These are the four non-member stars, hence identified by black circles or squares in the CMD. However, they lie very near the HB.
Their non-membership status was assigned by the statistical approach to their proper motions, but they might be cluster members.
G3 and G20. G3 is a clear RRab star that is not a cluster member. For G20 we obtained a very noisy light curve that makes its classification very difficult; the star is, however, likely a non-member.
\section{{The Oosterhoff type of Pal 2}}
\label{Oosterhoff}
The average period of the member RRab stars listed in Table \ref{variables} is 0.55 days, which indicates that Pal 2 is of the Oo I type. We can further confirm this from the distribution of the RRab stars in the amplitude-period, or Bailey, diagram shown in Fig. \ref{bailey}. Given the dispersion of the light curves, the amplitude distribution is also scattered; however, it is clear that the RRab stars follow the sequence expected for the unevolved stars typical of Oo I clusters \citep{Cacciari2005}, in both the $V$ and $I$ bands. The upper sequence corresponds to the evolved stars of Oo II clusters \citep{Kunder2013}. Hence, Pal 2 is an Oo I type cluster.
We note that the stars V16 and V18, whose nature is not clear given their position on the RGB and their short periods ($\S$ \ref{var_star}), do not follow the general trend, confirming that they are not field RR Lyrae stars. Alternatively, they may be binary stars. Further observations may be required to provide a proper classification.
\section{Cluster distance and metallicity from member RR Lyrae stars}\label{fourier}
Although the scatter for all these faint cluster member stars may be large, we attempted an estimation of the mean distance and [Fe/H] via the Fourier light curve decomposition. This approach has been amply described in previous papers. Both the method details and the specific calibrations of $M_V$ and [Fe/H] for RRab stars can be found in a recent paper by \citet{Arellano2022}.
We selected the RRab members with the best quality light curves and restricted the Fourier approach to this sample; these are the variables V2-V13 shown in Fig. \ref{membervars}. The mean values of the distance modulus, $(V-M_V)_o$=17.18, and [Fe/H]$_{\rm ZW}$= $-1.39 \pm 0.55$ were found. The corresponding distance is $27.2 \pm 1.8$ kpc for a foreground reddening of $E(B-V)=0.93$ plus the differential values according to the reddening map of \citet{Bonatto2020}. The quoted errors are the standard deviations of the mean; they are rather large but, given the faintness of the stars and their consequent photometric scatter, the results are in remarkably good agreement with independent determinations: $(V-M_V)_o$=$17.1 \pm 0.1$ and [Fe/H]=$-1.3$ \citep{Harris1997}; [Fe/H]$_{\rm ZW}$=$-1.68\pm 0.04$ \citep{Ferraro1999}; or $d=27.2$ kpc and [Fe/H]=$-1.42$ listed by \citet{Harris1996} (2010 update).
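As a consistency check, the distance follows from the true distance modulus via $d\,[{\rm pc}] = 10^{((V-M_V)_o + 5)/5}$:

```python
def modulus_to_kpc(mu0):
    """Distance in kpc from a true (dereddened) distance modulus (V - M_V)_0."""
    return 10.0 ** ((mu0 + 5.0) / 5.0) / 1000.0

d = modulus_to_kpc(17.18)   # ~27.3 kpc
```

This reproduces the quoted $27.2 \pm 1.8$ kpc to within rounding of the mean modulus.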
\section{Summary}
We have found and identified 32 variables in the field of the globular cluster Palomar 2. A membership analysis based on $Gaia$-DR3 proper motions and the positioning of the variables in the corresponding intrinsic CMD, demonstrates that at least 18 of these variables are cluster members. Most of the detected variables are of the RRab type but one RRc and at least one RGB were identified.
The mean cluster distance and metallicity, estimated from the Fourier light curve decomposition of 11 cluster member RRab stars with the best quality available data, are $d= 27.2 \pm 1.8$ kpc and [Fe/H]$_{\rm ZW}=-1.39 \pm 0.55$, in reasonable agreement with previous estimates. A detailed finding chart of all these variables is provided.
\label{summary}
\section{ACKNOWLEDGMENTS}
This project was partially supported by DGAPA-UNAM (Mexico) via grant IG100620. AAF is thankful to Mr. G.A. Garc\'ia P\'erez and Mr. G. R\'ios Segura for computational help. The
facilities at IAO and CREST are operated by the Indian Institute
of Astrophysics, Bangalore, we are grateful for the observing time allocated and for the valuable help of the support staff.
\bibliographystyle{rmaa}
\bibliography{Pal2}
Title:
The chemical abundance pattern of the extremely metal-poor thin disk star 2MASS J1808-5104 and its origins |
Abstract: We present a high-resolution ($R\sim35,000$), high signal-to-noise
($S/N=350$) Magellan/MIKE spectrum of the bright extremely metal-poor star
2MASS~J1808$-$5104. We find [Fe/H] = $-$4.01 (spectroscopic LTE stellar
parameters), [Fe/H] = $-$3.8 (photometric stellar parameters), [Fe/H] = $-$3.7
(spectroscopic NLTE stellar parameters). We measured a carbon-to-iron ratio of
$\mbox{[C/Fe]}= 0.38$ from the CH G-band. J1808$-$5104 is thus not
carbon-enhanced, contrary to many other stars with similarly low iron
abundances. We also determine, for the first time, a barium abundance
($\mbox{[Ba/Fe]} =-0.78$), and obtain a significantly reduced upper limit for
the nitrogen abundance ([N/Fe]$ < - 0.2$). J1808$-$5104 has low ratio of
$\mbox{[Sr/Ba]}=-0.17$, which is consistent with that of stars in ultra-faint
dwarf galaxies. We also fit the abundance pattern of J1808$-$5104 with
nucleosynthesis yields from a grid of Population\,III supernova models. There
is a good fit to the abundance pattern which suggests J1808$-$5104 originated
from gas enriched by a single massive supernova with a high explosion energy of
E $=10\times10^{51}$\,erg and a progenitor stellar mass of
M$=29.5$\,M$_{\odot}$. Interestingly, J1808$-$5104 is a member of the Galactic
thin disk, as confirmed by our detailed kinematic analysis and calculated
stellar actions and velocities. Finally, we also established the orbital
history of J1808$-$5104 using our time-dependent Galactic potential the
\texttt{ORIENT}. J1808$-$5104 appears to have a stable quasi-circular orbit and
been largely confined to the thin disk. This unique orbital history, the star's
very old age ($\sim13.5$\,Gyr), and the low [C/Fe] and [Sr/Ba] ratios suggest
that J1808$-$5104 may have formed at the earliest epoch of the hierarchical
assembly of the Milky Way, and it is most likely associated with the primordial
thin disk.
https://export.arxiv.org/pdf/2208.03891
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
Early universe --- Galaxy: disk --- stars: abundances ---
stars: Population II --- stars: individual (2MASS~J18082002$-$5104378)
\end{keywords}
\section{Introduction}
The chemical abundances of the most metal-poor stars trace the earliest nucleosynthesis events of
elements heavier than H and He, which took place within the first billion years after the Big
Bang \citep{alvarez06,Becker2012}. Stars with ${\metal} \sim -4.0$ and below (also known as ultra metal-poor, or UMP, stars) are best suited for this, as they are likely second-generation stars, thus enabling the study of their massive and short-lived progenitor Population\,III (Pop\,III)
stars \citep{fn15}.
Measuring as many chemical elements as possible in these stars thus helps to constrain models of Pop\,III nucleosynthesis. Carbon, in particular, has played an important role in learning about progenitor properties. Large observed carbon abundances have been interpreted as a signature of mixing and fallback supernovae \citep[e.g.,][]{UmedaNomotoNature,heger10,cooke14}.
On the contrary, other types of supernovae must have been responsible for the abundance patterns observed in ultra-metal-poor stars that are not carbon enhanced \citep[e.g.,][]{Caffau2011,placco2021,skuladottir2021}. Their formation mechanism might also have been entirely different, and e.g., not driven by carbon and oxygen-cooled gas \citep{brommnature, dtrans} but through dust cooling \citep{chiaki15,debennassuti14,ji14}.
Indeed, for the ${\metal} \lesssim -4.0$ metallicity regime, about 81\% of the $\sim40$ stars observed to date are carbon enhanced, i.e., have \cfe$>0.7$ \citep{placco14,arentsen2022}. Restricting the sample to stars with halo kinematics increases the carbon fraction to $\sim 90\%$. This difference can be attributed to a handful of stars being in fact associated with the metal-poor Atari disk population \citep[for more details, see][]{Mardini2022}. Interestingly, these stars are all non-carbon-enhanced. One further star is associated with the thin disk; it is also not carbon-enhanced (see Section~\ref{sec:analysis}) and is the subject of this paper.
\citet{melendez16} reported the discovery of 2MASS~J18082002$-$5104378 (hereafter J1808$-$5104), with ${\metal} =-4.07$ and upper limits on the carbon abundance of $\mbox{[C/Fe]}<0.94$ and the barium abundance $\mbox{[Ba/Fe]}<-0.31$, based on an ESO/UVES high-resolution spectrum. Their upper limit on the carbon abundance indicated this star might fall into the category of stars
without carbon-enhancement, but firm conclusions could not be drawn. More recently, \citet{Spite2019} confirmed a mild enhancement of J1808$-$5104 of $\mbox{[C/Fe]}=0.49$ and also reported oxygen and beryllium measurements. Both studies were able to detect strontium (Sr\,{II}, Z=38), but no other neutron-capture elements were reported.
Here, we report on results of our new high-resolution, high signal-to-noise spectroscopic observations with the Magellan telescope, which confirmed the mild carbon enhancement and enabled a barium (Ba\,{II}, Z=56) detection for the first time. A low upper limit on zinc (Zn\,{I}, Z=30) is also found, which is unusual at this low metallicity. J1808$-$5104 has a low barium abundance and is thus not a neutron-capture element enhanced star. Our observations also produced upper limits on the additional neutron-capture elements yttrium (Y\,{II}, Z=39) and europium (Eu\,{II}, Z=63), as well as on nitrogen, all of which help constrain the main characteristics of the Pop\,III stellar progenitors \citep{placco16}.
Dynamically, and based on Z$_{max}$ values, \citet{Schlaufman2018} speculated that J1808$-$5104 is confined within the thin disk. The disk-like kinematics were later confirmed by \citet{Sestito2019}. \citet{Schlaufman2018} also confirmed the binarity of J1808$-$5104 by investigating its radial velocity using 14 Magellan/MIKE spectra, the three VLT/UVES spectra observed by \citet{melendez16}, and 31 GMOS/Gemini South spectra. \citet{Spite2019} provided additional radial velocity measurements from the UVES R760 spectrum, and confirmed velocity variations for J1808$-$5104.
Chronologically, \citet{Schlaufman2018} used isochrones to estimate an age of 13.5\,Gyr for J1808$-$5104.
In this paper, we report on a further abundance analysis. Details of our spectroscopic observations are provided in Section~\ref{sec:obs}, and stellar parameters and chemical abundances are described in Sections~\ref{sec:chem} and \ref{sec:discussion}, respectively. Our conclusion, that J1808$-$5104 is an extremely metal-poor star with an abundance signature typical of metal-poor stars formed from well-mixed gas and that it belongs to the thin disk, is presented in Section~\ref{sec:Conclusions}.
\section{Observations and Radial Velocity Measurements}\label{sec:obs}
We observed J1808$-$5104 (R.A. = 18:08:20.02, Dec. = $-$51:04:37.8,
$V=11.9$) with the MIKE spectrograph on the Magellan-Clay
telescope at Las Campanas Observatory on April 15, 16, and 17, 2016,
for a total of 2\,h during clear weather and seeing conditions varying
from 0\farcs5 to 0\farcs7. The employed $0\farcs7$ slit yields a high
spectral resolution of $\sim30,000$ in the red and $\sim35,000$ in the
blue wavelength regime of our spectrum, covering 3300\,{\AA} to
9400\,{\AA}. Data reductions were carried out with the MIKE Carnegie
Python pipeline \citep{kelson03}\footnote{Available at \url{http://obs.carnegiescience.edu/Code/python}}. The resulting $S/N$
per pixel is $\sim65$ at $\sim3500$\,{\AA}, $\sim200$ at
$\sim4000$\,{\AA}, $\sim350$ at $\sim4700$\,{\AA}, $\sim270$ at
$\sim5200$\,{\AA}, and $\sim420$ at $\sim6000$\,{\AA}. In
Figure~\ref{specs} we show several representative portions of the
J1808$-$5104 spectrum, around the Ca\,{II}\,K line at 3933\,{\AA}, the Mg\,b lines at
5170\,{\AA}, the G-bandhead at 4313\,{\AA}, and the Ba line at
4554\,{\AA}, compared with the spectra of SD~1313$-$0019 \citep{frebel15b} and
HE~1300$+$0157 \citep{frebel07}.
We measured the radial velocity in our three individual spectra taken on consecutive nights, and confirm
velocity variations reported by \citet{Schlaufman2018} and \citet{Spite2019}. Our heliocentric values are 21.2\,km\,s$^{-1}$
(2016 April 15), 22.7\,km\,s$^{-1}$ (2016 April 16), and
24.9\,km\,s$^{-1}$ (2016 April 17). We furthermore obtained followup
observations to make additional radial velocity measurements. We
find 26.5\,km\,s$^{-1}$ (2017 May 7) and 22.6\,km\,s$^{-1}$ (2017 Aug
15). We fit a Keplerian orbit to all available radial velocity measurements from the literature based on high-resolution spectra (see table~2 in \citealt{Schlaufman2018} and \citealt{Spite2019}), using the program \texttt{BinaryStarSolver} \citep{Milson2020}. Figure~\ref{orbital_period} shows the fitted Keplerian orbit overplotted with all the data.
We find an orbital period of $P = 34.7385_{-0.2}^{+0.2}$\,days, a system velocity $\gamma = 16.745_{-0.2}^{+0.1}$\,km s$^{-1}$, a velocity semi-amplitude $K = 9.53_{-0.2}^{+0.3}$\,km s$^{-1}$, an eccentricity $e = 0.039_{-0.03}^{+0.03}$, a longitude
of periastron $\omega = 271.42_{-52}^{+51}$ deg, and a time of periastron
$t_{0} = 57873.2_{-4.9}^{+4.6}$ HJD. We also calculated the projected
semimajor axis $a_{1} \sin{i} = 4.55_{-0.2}^{+0.2}$\,R$_{\odot}$ and the mass function $f(M) = 0.0031_{-0.00034}^{+0.00034}$\,M$_{\odot}$. In general, these orbital parameters are in good agreement with the ones reported by \citet{Schlaufman2018} and \citet{Spite2019}. Thus, J1808$-$5104 is the oldest confirmed thin-disk binary at the lowest metallicities. Overall, the fraction of binaries among metal-poor stars with ${\metal} \lesssim -2.0$ is close to 20\% \citep{hansen16}.
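As a consistency check, the quoted mass function follows directly from $P$, $K$, and $e$ via the standard spectroscopic relation $f(M) = P K^3 (1-e^2)^{3/2} / (2\pi G)$. A minimal sketch (constants in cgs, orbital values from the fit above):

```python
import math

# Fitted orbital parameters for J1808-5104 (values from the text)
P_days = 34.7385   # orbital period [days]
K_kms  = 9.53      # velocity semi-amplitude [km/s]
e      = 0.039     # eccentricity

G      = 6.674e-8  # gravitational constant [cgs]
Msun_g = 1.989e33  # solar mass [g]

# Spectroscopic mass function: f(M) = P K^3 (1 - e^2)^(3/2) / (2 pi G)
P_s   = P_days * 86400.0
K_cms = K_kms * 1.0e5
f_M   = P_s * K_cms**3 * (1.0 - e**2)**1.5 / (2.0 * math.pi * G) / Msun_g

print(f"f(M) = {f_M:.4f} Msun")  # ~0.0031 Msun, matching the quoted value
```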
\section{Chemical abundance analysis}\label{sec:chem}
\subsection{Stellar parameters}\label{sec:stellpar}
We measured equivalent widths by fitting Gaussian profiles to absorption features to obtain chemical abundances. Results are listed
in Table~\ref{Tab:Eqw}. To perform the abundance determination, we used a 1D plane-parallel
model atmosphere with $\alpha$-enhancement \citep{castelli_kurucz} and the latest version of MOOG\footnote{\url{http://www.as.utexas.edu/~chris/moog.html}} \citep{moog, sobeck11}. Abundances of blended features were
determined with spectrum synthesis, using the Spectroscopy Made Hard (SMH) software package \citep{casey14}. The line lists for the atomic and molecular features were generated by the \texttt{linemake}\footnote{\url{https://github.com/vmplacco/linemake}} code \citep{placco2021b}. The isochrone fitting done by \citet{Schlaufman2018} suggested a slightly warmer {\teff} and more metal-rich {\metal} star than what was reported by \citet{melendez16}. \citet{Spite2019} also estimated their stellar parameters using Gaia\,DR2 photometry \citep{Gaia_dr2} and 3D maps of interstellar reddening \citep{Lallement2018} to be warmer and more metal-rich. We then employed three different methods to obtain
stellar parameters from Fe\,{I} and Fe\,{II} lines: (1) The commonly used technique of calculating line formation under the assumption of local thermodynamic
equilibrium (LTE). (2) The quantum fitting method (QFM) that invokes
non-local thermodynamic equilibrium (NLTE). (3) The procedure outlined in \citet{roederer2018}.
\input{tables/tab1}
\subsubsection{LTE Stellar Parameters}\label{lte_stellpar}
We initially derived the stellar parameters ({\teff} = 5070\,K, {\logg} = 2.40\,dex, {\metal} = $-4.20$\,dex, and $v_{\text{micro}}$ = 1.30\,km s$^{-1}$) by (1) minimizing the trend of Fe\,{I} line abundances with excitation potential, and (2) forcing agreement between the Fe\,{I} and Fe\,{II} abundances. The initial {\teff} was then adjusted following the photometric correction presented in \citet{Frebel2013}. The remaining stellar parameters were adjusted to produce no trend of Fe\,{I} abundances with reduced equivalent width, and the Fe\,{II} abundance was again brought into agreement with Fe\,{I}. This yields {\teff} = 5233\,K, {\logg} = 2.80\,dex, {\metal} = $-4.01$\,dex, and $v_{\text{micro}}$ = 1.35\,km s$^{-1}$.
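The excitation-balance step amounts to fitting and zeroing the slope of line abundance versus excitation potential. A schematic illustration with toy numbers (not our actual Fe\,{I} line list):

```python
import numpy as np

# Toy Fe I line list: excitation potential chi [eV] vs. derived abundance.
# These numbers are illustrative only, not the measured line list.
chi   = np.array([0.1, 0.9, 1.5, 2.2, 2.9, 3.6, 4.3])
abund = np.array([3.52, 3.49, 3.46, 3.43, 3.40, 3.36, 3.33])

# Linear fit of abundance against excitation potential
slope, intercept = np.polyfit(chi, abund, 1)
print(f"slope = {slope:+.4f} dex/eV")

# A significantly nonzero slope signals an incorrect Teff; the model
# temperature is adjusted and abundances re-derived until the slope
# is consistent with zero.
```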
\subsubsection{NLTE Stellar Parameters}\label{nlte_stellpar}
We use the QFM developed by \citet{ezzeddine16} for the iron abundance to spectroscopically determine a second set of stellar parameters. The Fe\,{I}/Fe\,{II}/Fe\,{III} model atom used in \citet{ezzeddine16} is compiled from a large number of energy levels taken from the NIST\footnote{\url{http://www.nist.gov/}} database (846 Fe\,{I}, 1027 Fe\,{II}
and the Fe\,{III} continuum) and theoretical levels from
\citet{petkur15}, and reduced into a super-atom of 424 total
levels. Levels are coupled radiatively via a large number of
bound-bound transitions ($\sim$25,000 lines from the VALD3
database\footnote{\url{http://vald.astro.uu.se/}}) and photoionization
tables. Additionally, all levels are coupled collisionally via
inelastic electron and hydrogen collisions. Hydrogen collisional rates
are estimated using the new semi-empirical QFM. It includes the
dominating ion-pair production process, which avoids the large
uncertainties usually obtained when using the classical
\citet{drawin1968,drawin1969a,drawin1969b} approximation. We refer the
interested reader to \citet{ezzeddine16} for a more detailed
description of the atom.
Departures from LTE for each individual Fe\,{I} and Fe\,{II} line were
calculated to iteratively determine the stellar parameters
spectroscopically, just as in the LTE case. We follow the procedure
described in \citet{ezzeddine17} who applied it to the 20 most
iron-poor stars. As for these stars, the scatter among line abundances of J1808$-$5104 is reduced compared to the LTE analysis. The standard deviation of Fe\,{I} line abundances is 0.05\,dex for the NLTE analysis, compared to the already low $0.07$\,dex LTE result. This further supports applying quantum mechanically based NLTE corrections to individual lines, as it leads to improved overall results, independent of the data quality.
Based on the NLTE Fe\,{I} line analysis, we spectroscopically obtain T$_{\rm
eff}=5250$\,K. This yields an Fe\,{I} abundance of
${\metal}=-3.69$, and $-3.65$ for Fe\,{II}. The NLTE Fe\,{I} abundance is higher by 0.46\,dex compared to the LTE case. Fe\,{II} lines are hardly affected by NLTE, at the level of 0.02\,dex, which we neglect. This difference (i.e., $\Delta\mbox{[FeI/H]} = 0.46$) is in line with what is obtained from the relation $\Delta\mbox{[FeI/H]} = -0.14 \times
{\metal}_{\rm{LTE}} - 0.15$ developed by \citet{ezzeddine16}, which yields $\Delta{\metal} =0.43$. It illustrates the strong metallicity dependence of departures from LTE. As a consequence of the differential NLTE effect for Fe\,{I} and Fe\,{II}, the surface gravity is somewhat higher, $\log g=3.2\pm0.2$, since our spectroscopic analysis aims for ionization balance of Fe\,{I} and Fe\,{II} in NLTE. The microturbulence is also somewhat higher, $v_{\text{micro}}=1.8\pm0.2$\,km\,s$^{-1}$.
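The quoted correction can be reproduced directly from the \citet{ezzeddine16} relation; evaluating it at the LTE metallicity of $-4.20$ (using the uncorrected LTE value is our assumption) gives a number consistent with the quoted $\Delta{\metal}$ within rounding:

```python
# NLTE iron correction relation from Ezzeddine et al. (2016):
#   Delta[FeI/H] = -0.14 * [Fe/H]_LTE - 0.15
feh_lte = -4.20  # initial LTE metallicity from the excitation analysis

delta = -0.14 * feh_lte - 0.15
print(f"predicted Delta[FeI/H] = +{delta:.2f} dex")
# close to the directly measured NLTE-LTE difference of +0.46 dex
```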
\subsubsection{Photometric Stellar Parameters}\label{rpa_stellpar}
Photometric stellar parameters were also obtained for J1808$-$5104 based on a procedure described in detail by \citet{roederer2018}. In summary, the effective temperature (\teff) was calculated with the color--\metal--\teff\ relations of \citet{casagrande2010}, using the $JHK$ magnitudes from 2MASS \citep{cutri2003}, the Johnson $BV$ magnitudes from APASS DR9 \citep{henden2014}, and the corrected \metal\ value from Section~\ref{lte_stellpar} as a first-pass estimate. The \logg\ was calculated from the fundamental relation described in \citet{roederer2018}, using the 3D reddening values, $E(B-V)$, from the \texttt{bayestar2017} version of the \texttt{\href{https://dustmaps.readthedocs.io/en/latest/}{dustmaps}} application \citep{green2018}. The distance was taken from \citet{bailer-jones2021} and the bolometric correction in the $V$ band (BC$_V$) from \citet{casagrande2014}. These initial \teff\ and \logg\ values are used to derive the \metal\ and microturbulent velocity ($v_{\text{micro}}$). Then, the first-pass \metal\ estimate is updated and \teff\ and \logg\ are recalculated. This yields {\teff} = 5665\,K, {\logg} = 3.34\,dex, {\metal} = $-3.85$\,dex, and $v_{\text{micro}}$ = 1.52\,km s$^{-1}$. Since the \teff\, and \logg\, are determined independently from the model atmospheres, we have decided to adopt the parameters above for the remainder of the analysis presented in this paper.
We estimate random uncertainties for the stellar parameters by varying one parameter at a time until $1\sigma$ scatter in the previous procedure is achieved. In general, our stellar parameters agree well with those of \citet{Spite2019}, who derive {\teff}$\sim5600$\,K, $\log g=3.4$, $v_{\text{micro}}=1.6\pm0.2$\,km\,s$^{-1}$, and ${\metal}=-3.84\pm0.07$.
\subsection{Chemical abundances}\label{sec:analysis}
We determined chemical abundances of 19 elements, as well as five upper limits,
for J1808$-$5104. The final abundance ratios [X/Fe] are calculated
using the photometric stellar parameters and solar abundances of \citet{asplund09}, and listed in Table~\ref{Tab:abund}. They are also shown in Figure~\ref{fig:abundplot} where we compare them with literature data. In the following, we comment on the various observed element abundances and measurement details.
J1808$-$5104 is a warm metal-poor star. In accordance with its stellar evolutionary status at the bottom of the giant branch, we find a lithium abundance below the Spite plateau \citep{spite82} of A(Li) = 1.38, as measured from the lithium doublet at 6707\,{\AA}. For comparison, A(Li) = 1.5 and 1.78 were found by
\citet{melendez16} and \citet{Spite2019}, respectively.
Since our spectrum has a very high S/N ratio, a carbon abundance could be
clearly measured from the CH G-bandhead at 4313\,{\AA}, yielding
$\mbox{[C/Fe]}=0.38 \pm 0.10$. Our [C/Fe] is in agreement with the one reported by \citet{Spite2019} of $\mbox{[C/Fe]}=0.49$. The detection is shown in Figure~\ref{specs},
together with the best-fit synthetic spectrum. To achieve the best possible
detection, we note that we co-added the 2016 spectrum with the two
radial-velocity spectra taken in 2017. Adding the extra data somewhat increased the S/N and aided the measurement. Given the relatively unevolved nature of J1808$-$5104, there is no need to correct the carbon abundance for the star's evolutionary status \citep{placco14} to obtain its true birth carbon abundance. As already noted in \citet{Spite2019}, 1D LTE abundances of
molecular species, such as CH, potentially suffer from strong effects
from not employing 3D model atmospheres. We thus note here as well that the 3D LTE abundance would be even lower. From Table~2 in \citet{gallagher16}, we estimate a potential correction of $-0.5$\,dex (by coarsely extrapolating their 5900\,K/4.0 model to our values). However, it remains to be seen what any 3D-NLTE abundances derived from CH might be, if available.
We obtained an upper limit for nitrogen from the non-detection of the NH band at 3360\,{\AA}. The still reasonable $S/N \sim 30$ yields a subsolar limit of [N/Fe]$<-0.2$, which shows J1808$-$5104 to be deficient in N. We also obtain a tentative oxygen abundance of $\mbox{[O/Fe]} = 1.25 \pm{0.50}$ from the two stronger lines of the near-infrared O triplet at 7771 and 7774\,{\AA}, as they are only very weakly detected. This {$\mbox{[O/Fe]}$} value agrees in principle with the one reported by \citet{Spite2019} of ${\mbox{[O/Fe]}}=1.36$, based on the UV OH lines. However, we caution that 3D corrections would affect the OH abundance, whereas NLTE significantly affects the triplet lines. While we do not further assess these corrections, we conservatively conclude that J1808$-$5104 is at least mildly oxygen enhanced, very similar to other metal-poor stars \citep{garciaperez_primas2006_O}.
There is some interstellar absorption by Na and Ca, as seen in the spectrum of J1808$-$5104. Both stellar Na D lines are very close to the strong interstellar lines but their equivalent widths could still be measured. Figure~\ref{specs} shows that the Ca\,II K line appears somewhat distorted by an interstellar component. However, we measure the Ca abundance from seven Ca\,I lines, so the blending of the Ca\,II\,K line is a purely cosmetic effect. As for abundances of other light elements, lines are generally weak but measurable given the good $S/N$. Seven Mg lines across the spectrum could be used to determine the Mg abundance. Given the low C abundance, the Al line at 3944\,{\AA}, which is blended with a CH feature, could be easily used in addition to the $\lambda$3961\,{\AA} line. For our star, the derived Al abundances for both lines are in good agreement.
The Si abundance was obtained from the Si\,I lines at 3905\,{\AA} and 4102\,{\AA}. Five Sc lines, six Ti\,I lines, and 25 Ti\,II lines were employed to derive the respective abundances with good precision. Ti\,I and II agree within $0.08$\,dex. Three Cr\,I lines, three weak Mn\,I lines, 15 Co\,I lines, and 14 Ni\,I lines were also detected. Finally, we estimated a surprisingly low upper limit for the Zn abundance from the Zn\,{I} line at 4810\,{\AA} of $\mbox{[Zn/Fe]}<0.23$. This places J1808$-$5104 at the very bottom of the halo-star Zn abundance range at its [Fe/H] (see Figure~\ref{fig:abundplot}).
Abundances of neutron-capture elements were determined for Sr and Ba from two Sr\,II lines and one weak Ba\,II line at 4554\,{\AA}. They are both significantly below solar ratios, very similar to values found for other halo and dwarf galaxy stars at these metallicities. Upper limits for Y, Zr, and Eu were obtained from the lines at 4900\,{\AA}, 4149\,{\AA}, and 4129\,{\AA}, respectively. The Y upper limit of $\mbox{[Y/Fe]}<-0.07$ indicates a low abundance; the Zr and Eu upper limits are too high to be very meaningful. However, even mild $r$-process enhancement can be excluded given the low Sr and Ba abundances.
Standard deviations of individual line measurements for each element are taken as random abundance uncertainties.
We assign a nominal minimum uncertainty of 0.05\,dex to all species with a standard deviation of $<0.05$\,dex. Further, we assign a nominal minimum uncertainty of 0.1\,dex in two cases: (i) elements with just one measured line; and (ii) elements with a standard deviation of less than 0.1\,dex and three or fewer measured lines. The uncertainties are given in Table~\ref{Tab:abund}. Systematic uncertainties can be assessed by varying each stellar parameter by its uncertainty and re-determining the abundances. Typical final uncertainties ($\sigma_{rand} + \sigma_{sys}$) are about 0.15--0.25\,dex.
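The adopted uncertainty rules can be written compactly as a small helper function (a sketch; the name and interface are ours, not from any analysis package):

```python
def adopted_uncertainty(stdev, n_lines):
    """Apply the nominal minimum-uncertainty rules described in the text.

    stdev   -- standard deviation of the individual line abundances [dex]
    n_lines -- number of measured lines for the species
    """
    if n_lines == 1:
        return max(stdev, 0.10)   # single-line measurements get >= 0.1 dex
    if n_lines <= 3 and stdev < 0.10:
        return 0.10               # few lines with small scatter: 0.1 dex
    return max(stdev, 0.05)       # global 0.05 dex floor otherwise


print(adopted_uncertainty(0.02, 1))   # 0.1
print(adopted_uncertainty(0.06, 3))   # 0.1
print(adopted_uncertainty(0.03, 10))  # 0.05
```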
\input{tables/tab2}
\section{Constraints on the progenitor star of J1808$-$5104 and its birth environment}\label{sec:discussion}
The abundance signature of light elements produced in fusion processes observed in J1808$-$5104 agrees very well with the established abundance trends of other metal-poor stars down to about ${\metal}\sim-4.2$. This agreement can be seen in Figure~\ref{fig:abundplot}.
Overall, these abundance trends are thought to reflect chemical enrichment by early core-collapse supernova(e) that exploded prior to the births of J1808$-$5104 and other, similar metal-poor stars. Moreover, as can be seen from Figure~\ref{fig:abundplot}, the gas from which these objects formed must have been very well mixed to help erase any local abundance anomalies or potential variations of the supernova yields that would have enriched these birth gas clouds \citep{cayrel2004}.
Interestingly, the same trends are found for metal-poor stars in dwarf galaxies, both in the ultra-faint dwarfs and classical dwarfs \citep[e.g.,][]{cohen09,frebel10,gilmore13,Simon2019}. For those stars, we know that they formed in distinct places, namely their respective host galaxies. For halo stars, we do not know their origins, but considering hierarchical assembly and the formation process of the Galactic halo, as well as their chemically primitive nature, we can assume that these stars also formed in accreted systems a long time ago. This would imply that all these metal-poor stars represent a large number of birth places of which each was chemically enriched by local supernovae. Yet, all these stars show near-identical abundance patterns. This points to a universal enrichment history, well mixed gas with mixing driven by robust processes, and/or similar supernova yields across all these environments \citep{frebel10, chan17}.
There are several exceptions to these well-behaved abundance trends of light elements. Na shows much more scatter than other elements, but the reasons for that remain unclear. Carbon is known to display extreme variations, particularly at the lowest metallicities \citep{yoon16}. Nitrogen also varies significantly. About 60\% of metal-poor stars with ${\metal}\lesssim -3.5$ exhibit strong overabundances of carbon, with \cfe$>0.7$ \citep{placco14}. The carbon-enhanced stars have been suggested to point to a specific birth environment, e.g., to minihalos \citep{cooke14} enriched by individual faint supernovae undergoing a mixing and fallback episode that would lead to large carbon and low iron yields \citep{UmedaNomotoNature}. In contrast, metal-poor stars with abundances closer to the solar ratio, such as J1808$-$5104, more likely formed from well-mixed gas, perhaps in somewhat larger systems that hosted more than one progenitor supernova. The low N abundance would principally support this scenario as well. Similarly, most stars in dwarf galaxies are not carbon enhanced. In particular, the larger classical dwarf galaxies show little evidence for a significant population of carbon-rich stars \citep{tafelmeyer10, simon15,Chiti2020}, perhaps because they formed from larger building blocks or accreted gas quickly enough to grow large enough for efficient mixing to take effect. The ultra-faint dwarfs contain a small fraction of carbon-rich stars (e.g., \citealt{norris10_seg}) but primarily contain stars with abundance signatures very similar to that of J1808$-$5104 \citep{norris10booseg}. These systems might have formed from multiple building blocks, which could represent a mix of progenitor systems that produced some (or even no) carbon-enhanced stars.
Assuming then that J1808$-$5104 formed in one of the earliest systems in the early universe, the progenitor supernova must thus have either not undergone a mixing and fallback mechanism, or there were not enough fallback supernovae to dominate the resulting chemical composition of the gas over the enrichment by ordinary core-collapse supernovae. After all, metal mixing processes were likely very efficient following the energy injection by supernovae and subsequent recovery time of the system, before forming the next generation of stars \citep{greif10}.
To test this idea, we attempted to model the light element abundance signature of J1808$-$5104 with theoretical Pop\,III supernova nucleosynthesis yields from \citet{heger10}. Employing their $\chi^2$ matching algorithm\footnote{\url{http://starfit.org}} provides insights into the putative progenitor(s) (e.g., stellar mass and supernova explosion energy) of metal-poor stars with {\metal} $<-3.0$.
This approach has been applied to a variety of metal-poor stars in the literature \citep[e.g.,][]{placco15b,placco16b,placco16,roederer2016,Placco2020}.
We note that the nucleosynthesis models are all (S4) fallback models with masses from 10 to 100\,M$_\odot$, and explosion energies from $0.3 \times 10^{51}$\,erg to $10 \times 10^{51}$\,erg. However, the mass ejected during the explosion is also given and not all models have significant amounts of fallback.
Running the {\sc{starfit}} algorithm for J1808$-$5104 using the chemical abundances from Table~\ref{Tab:abund} suggests a series of best-fit models (all with $\chi^{2} \sim 6$) with a progenitor stellar mass of M~=~29.5\,M$_\odot$, a high explosion energy of E~=~$10.0\times10^{51}$\,erg, and little mixing, with a range of $\log (f_{mix}) \sim-3.0$ to $-0.6$. With an estimated ejecta mass of 27.8\,M$_\odot$, essentially no fallback appears to occur. We note that we used a 3D-corrected carbon abundance of $\mbox{[C/H]}=-4.02$, based on the \citet{gallagher16} estimate made in Section~\ref{sec:chem}. For comparison, \citet{Spite2019} suggested a $-0.4~\pm~0.1$\,dex correction. Similarly, for O, we utilized a 3D-corrected value. Since our abundance is uncertain, we opted to adopt $\mbox{[O/H]}=-3.08$, which includes a $-$0.6\,dex correction of the [O/H] reported in \citet{Spite2019}, for our fitting procedure. Other abundances were also corrected before fitting the abundance pattern. We applied a Na correction of $-$0.04\,dex, using our measured EW and the online calculator {\sc{INSPECT}}\footnote{\url{http://www.inspect-stars.com}} \citep{Lind_nlte_Na}, an Al correction of $+$0.7\,dex \citep{Nordlander2017_Al_NLTE,Roederer2021_Al_NLTE}, a Cr correction of 0.2\,dex \citep{Bergemann2010_Cr_NLTE,Cowan2020_cr_NLTE}, and a Mn correction of 0.4\,dex \citep{Bergemann2008_Mn_NLTE,Sneden2016_Mn_NLTE}. Finally, Zn generally constrains the explosion energy of the progenitor, but we only have an upper limit. Given the low upper limit, and considering the bulk of halo stars at $\mbox{[Fe/H]}\sim-3.5$, it could be argued that the true Zn abundance is not much below the numerical value of our upper limit. As such, for the fitting, we treat our upper limit as a measured abundance, albeit with a larger error bar of 0.3\,dex.
The top panel in Figure~\ref{sn_fit} shows our abundances overlaid with the nucleosynthesis yields of the 29.5\,M$_\odot$ and $10.0\times10^{51}$\,erg model.
Next, we statistically checked the robustness of the fitting results by generating 10,000 abundance patterns for J1808$-$5104, re-sampling the $\log\epsilon (\mbox{X})$ values from Table~\ref{Tab:abund} with fixed uncertainties of $\sigma = 0.2$\,dex for all species except those that already have larger uncertainties (i.e., C, O, Zn). Running the {\sc{starfit}} code for each re-sampled pattern (and determining its respective best-fit model) results in 20 best-fitting models with a range of parameters. For simplicity, we ignore any best-fit models favored by only 1--10 realizations. Overall, 62\% of the 10,000 patterns are matched best by the model with 29.5\,M$_\odot$ and $10.0\times10^{51}$\,erg (i.e., the same no-fallback model as found for the original abundance pattern above). The remaining realizations ($\sim 38\%$) are best fit by a variety of different models, with stellar masses ranging from M~=~10.2 to 38\,M$_\odot$ and explosion energies of E~=~0.6 to $10.0\times10^{51}$\,erg.
The results are shown in the bottom panel of Figure~\ref{sn_fit}, where we show the abundances with the $\sigma = 0.2$\,dex uncertainties overlaid with the 20 best fitting models, many of which yield very similar patterns.
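The resampling exercise can be sketched as follows. Here \texttt{best\_fit\_model} is a hypothetical stand-in for a full {\sc{starfit}} $\chi^2$ search over the \citet{heger10} yield grid, and the abundances shown are placeholders, not the values from Table~\ref{Tab:abund}:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2022)

# Placeholder log eps(X) values and resampling sigmas (0.2 dex except for
# the more uncertain species, as in the text)
logeps = {"Mg": 4.03, "Ca": 2.84, "C": 4.79}
sigma  = {"Mg": 0.20, "Ca": 0.20, "C": 0.30}

def best_fit_model(pattern):
    """Hypothetical stand-in: the real analysis minimizes chi^2 over the
    Pop III supernova yield grid and returns the best (mass, energy)."""
    return (29.5, 10.0)  # M [Msun], E [10^51 erg]

tally = Counter()
for _ in range(10_000):
    # Draw one realization of the full abundance pattern
    pattern = {el: rng.normal(mu, sigma[el]) for el, mu in logeps.items()}
    tally[best_fit_model(pattern)] += 1

# As in the text, drop models favored by only 1-10 realizations
tally = {model: n for model, n in tally.items() if n > 10}
print(tally)
```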
We conclude that J1808$-$5104 most likely formed in an environment that experienced enrichment by a massive Population\,III hypernova with a high explosion energy and little to no fallback. This is similar to what has been found from analyses of many other similar metal-poor stars, adding to the body of evidence that the first stars were predominantly massive in nature.
This origin scenario is supported by two additional lines of evidence. First, the Sr and Ba abundances of J1808$-$5104 are extremely low, $\mbox{[Sr/H]}\sim-4.7$ and $\mbox{[Ba/H]}\sim-4.5$. These low values are typical for stars found in ultra-faint dwarf galaxies and also some of the classical dwarfs. Furthermore, the star has $\mbox{[Sr/Ba]}= -0.17$. This value is not far removed from what is typical for the (main) $r$-process, $\mbox{[Sr/Ba]}\sim-0.4$. A limited $r$-process, with $\mbox{[Sr/Ba]}>0.5$ \citep{Frebel2018}, can clearly be ruled out, as can the $s$-process ($\mbox{[Sr/Ba]}<-1$).
Figure~\ref{fig:sr_ba} shows [Sr/Ba] as a function of [Ba/Fe] for stars in ultra-faint dwarf galaxies, overplotted with metal-poor halo stars (red squares and diamonds) adopted from \citet{Yong2013} and \citet{barklem05}, respectively\footnote{Literature data collection of ultra-faint dwarf galaxy stars taken from \url{https://github.com/alexji/alexmods}}. Black points represent ultra-faint dwarf galaxy stars with Sr and Ba measurements; downward arrows represent stars with upper limits on Sr and/or Ba abundances. In this plane, J1808$-$5104 falls below the main trend set by metal-poor halo stars, in a region that is characteristically populated by stars in the dwarf galaxies. This all suggests that the neutron-capture elements of J1808$-$5104 were possibly provided by only one supernova or explosive event. Following arguments laid out in \citet{ji16b}, a level of $\mbox{[Ba/H]}\sim-5$ is reached by the yields of one supernova if the gas mass into which the yield is diluted is 10$^6$\,M$_\odot$. This adds confidence to the scenario that the star formed in a sparse system with only one or a few SNe progenitors. This is also indicated by the low [Fe/H] of the star, which is suggestive of a 10$^6$\,M$_\odot$ birth cloud (assuming a canonical Fe yield of 0.1\,M$_\odot$). It thus appears that stars with $-4.5\lesssim{\metal}\lesssim-4.0$ and $0\lesssim$~[C/Fe]~$\lesssim0.7$ all formed in similar environments that already experienced some degree of chemical homogeneity, while also showing clear signs, in the form of low neutron-capture abundances, of only one, or at most a small number of, progenitor stars.
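The dilution estimate can be made explicit. Assuming a canonical 0.1\,M$_\odot$ Fe yield mixed into a $10^6$\,M$_\odot$ gas cloud, and a solar Fe mass fraction of $\sim1.3\times10^{-3}$ (the latter is our assumed value), the resulting metallicity is:

```python
import math

M_Fe     = 0.1     # canonical Fe yield of one core-collapse SN [Msun]
M_gas    = 1.0e6   # birth-cloud gas mass [Msun]
X_Fe_sun = 1.3e-3  # approximate solar Fe mass fraction (assumption)

# [Fe/H] ~ log10 of the Fe mass fraction relative to solar
feh = math.log10((M_Fe / M_gas) / X_Fe_sun)
print(f"[Fe/H] ~ {feh:.1f}")  # ~ -4.1, consistent with J1808-5104
```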
\subsection{Kinematic Signature}
Investigating the long-term orbital history of J1808$-$5104 adds a new dimension to comprehensively probing its origin. The detailed space motion of J1808$-$5104 can be derived by combining astrometric information from Gaia DR3 \citep{Gaia_DR3} with the systemic radial velocity (RV = 14.8\,km\,s$^{-1}$ at phase = 0; see Figure~\ref{orbital_period}). To perform this investigation statistically, we generate 10,000 realizations of the celestial positions ($\alpha$, $\delta$), proper motions ($\mu_{\alpha}$, $\mu_{\delta}$), and the systemic RV value using normal distributions with the associated uncertainties.
We then assume that the Sun is located at R$_\odot =8.178 \pm 0.013$\,kpc from the Galactic center \citep{Gravity_Collaboration2019}, $z_{\odot}= 20.8 \pm 0.3$\,pc above the Galactic plane, and has peculiar motion $U_{\odot} =11.1 \pm 0.72$\,km\,s$^{-1}$ \citep{Bennett2019}, $V_{\odot}= 12.24 \pm 0.47$\,km\,s$^{-1}$, and $W_{\odot}= 7.25 \pm 0.36$\,km\,s$^{-1}$ \citep{Schonrich2010}. We take V$_{LSR}$ = $220$ km\,s$^{-1}$ \citep{Kerr1986}. For each realization, we calculate Galactocentric coordinates ($X,Y,Z$), rectangular Galactic velocities ($U,V,W$), and cylindrical Galactocentric velocities ($V_{R}, V_{\phi}, V_{z}$), as described in \citet{Mardini2022}. We also compute orbital parameters (Z$_{max}$, r$_{apo}$, r$_{peri}$, eccentricity) for the past 8\,Gyr using our time-varying Galactic potentials from \texttt{ORIENT}\footnote{\url{https://github.com/Mohammad-Mardini/The-ORIENT}} \citep[for more details, we refer the readers to][]{Mardini_2020}.
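The Monte Carlo step can be sketched as below. The proper-motion numbers are illustrative placeholders (the actual Gaia DR3 catalog entries should be substituted), and each realization would then be passed through the coordinate transforms and the \texttt{ORIENT} integrator:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

# (value, 1-sigma) pairs; proper motions below are illustrative
# placeholders, NOT the Gaia DR3 catalog entries
observables = {
    "pm_ra":  (2.0, 0.03),    # mas/yr
    "pm_dec": (-3.0, 0.03),   # mas/yr
    "rv":     (14.8, 0.2),    # systemic RV used in the text [km/s]
}

# Draw N normal realizations per observable
samples = {key: rng.normal(mu, sig, N) for key, (mu, sig) in observables.items()}

# Each of the N draws defines one realization of the phase-space vector;
# Galactocentric velocities and orbital parameters are computed per draw.
print({key: round(vals.mean(), 2) for key, vals in samples.items()})
```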
Figure~\ref{fig:orient_orbits} shows projections of the long-term orbital evolution of J1808$-$5104 in various planes, for one of the 10,000 realizations, using \texttt{ORIENT} potential~$\#483868$. The X-Y plane suggests that J1808$-$5104 is on a quasi-circular orbit (e~=~$0.22 \pm 0.01$). Furthermore, the R-$|Z|$ plane suggests that J1808$-$5104 currently resides in the Galactic thin disk (cycles where the colour tends toward blue, i.e., short lookback times), orbiting at a radius of $\sim 8$\,kpc from the Galactic center, while at long lookback times the star might travel 3\,kpc above and below the plane of the present-day Galactic disk (cycles where the colour tends toward red).
The Galactic model used to evolve the orbit has a time-varying potential derived by best-fitting a subhalo in a large-scale cosmological simulation that is similar in some metrics to the present-day Milky Way \citep{Mardini_2020}. While the model is a composition of an NFW sphere and a Miyamoto-Nagai disk at every given time, its seven free parameters are time-varying. That includes the orientation of the disk, which has two free angles and is oriented such that it coincides with the X-Y plane at the present day. This particular \texttt{ORIENT} model has a disk that is inclined by up to $22^\circ$ relative to the present day; this largest inclination occurs at a look-back time of 6\,Gyr, or a redshift of $z \sim 0.6$. The large $Z$-values seen in Figure~\ref{fig:orient_orbits} around that time therefore reflect mostly the inclination of the disk itself: even if the star were on a perfectly planar orbit, its $Z$ ordinate would reach values as large as 3\,kpc (the approximate Galactocentric distance of 8\,kpc times the sine of the maximum inclination angle)\footnote{Consider a right triangle with one vertex at the centre of the Galaxy, another at the star at its apocentre, and the third in the present-day Galactic plane directly below the star. In this convention, Z$_{max}$ is the side opposite the disk's inclination angle ($\alpha$) and r$_{apo}$ is the hypotenuse, so $\sin (\alpha)$ = Z$_{max}$/r$_{apo}$ by definition.}.
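The geometric relation in the footnote is easy to verify numerically: with the model's maximum disk inclination of $22^\circ$ and an apocentric distance of $\sim 8$\,kpc, a perfectly planar orbit already reaches $Z \approx$ r$_{apo}\sin\alpha \approx 3$\,kpc.

```python
import math

alpha_deg = 22.0  # maximum disk inclination of this ORIENT model, in degrees
r_apo = 8.0       # approximate Galactocentric distance of the star, in kpc

# Z reached by a planar orbit purely because of the disk's tilt.
z_max = r_apo * math.sin(math.radians(alpha_deg))
print(f"{z_max:.2f} kpc")  # ~3.0 kpc
```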
In addition, for comparison purposes, we performed another backward orbital integration for J1808$-$5104 using \texttt{galpy}\footnote{\url{https://docs.galpy.org/en/v1.8.0/}} and its static Galactic potential \texttt{MWPotential2014} \citep{Bovy2015}.
It is important to recall here that the sizes and masses of the Milky Way components in \texttt{MWPotential2014} are assumed to remain constant with time. Such an idealized potential cannot mimic the realistic formation and evolution history of the Milky Way, which was itself built up from smaller accreted satellites \citep[e.g.,][]{hierarchical1,hierarchical2,hierarchical3}. Figure~\ref{fig:orbits} shows the same projections as in Figure~\ref{fig:orient_orbits}. As expected, the static-potential orbits do not change significantly with redshift, and the phase-space coordinates of J1808$-$5104 at high redshift do not extend to larger Galactocentric distances either.
The main difference between Figures~\ref{fig:orient_orbits} and \ref{fig:orbits}, i.e., between integration in a time-varying and a time-static potential, is that no conserved quantities exist in the former case. In addition to the variation in the model's disk orientation mentioned above, other quantities vary as well, some quite significantly over the integration period. For example, the disk's mass increases at an approximately constant rate from $\sim 4\times 10^{10} \,\mathrm{M}_\odot$ at a look-back time of 8\,Gyr to $\sim 7\times 10^{10} \,\mathrm{M}_\odot$ at a look-back time of 1.5\,Gyr (after which it does not vary significantly). Orbits integrated with \texttt{ORIENT} models may therefore exhibit irregular behaviour compared to those integrated with \texttt{galpy}, depending on the model's specific cosmic history.
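As an illustration, the quoted disk-mass growth of this \texttt{ORIENT} model can be approximated by a piecewise-linear function of look-back time (a sketch of the numbers in the text, not the model's actual mass function):

```python
import numpy as np

# Piecewise-linear sketch of the disk mass [M_sun] vs look-back time [Gyr]:
# ~4e10 at t_lb = 8 Gyr, growing at a roughly constant rate to ~7e10 at
# t_lb = 1.5 Gyr, then approximately constant to the present day.
t_lb = np.array([0.0, 1.5, 8.0])
m_disk = np.array([7e10, 7e10, 4e10])

def disk_mass(t):
    """Interpolated disk mass at look-back time t (Gyr)."""
    return np.interp(t, t_lb, m_disk)

print(disk_mass(8.0))  # 4e10 at the earliest integrated epoch
print(disk_mass(0.0))  # 7e10 at the present day
```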
Finally, we probe the thin-disk membership of J1808$-$5104 using the diagnostic tool developed in \citet{Mardini2022}, which is based on stellar actions and velocities and qualitatively assigns individual stars to one of the traditional Galactic components. All 10,000 realizations of J1808$-$5104 suggest that the star is well confined to the thin disk.
To assess the overall origin of J1808$-$5104, two possible pathways can be considered, both based on the fact that the star has been located in the thin disk for billions of years. (1) J1808$-$5104 could have originated in a satellite galaxy that was accreted onto the disk and fully disrupted at early times in the formation history of the Milky Way. (2) J1808$-$5104 could have been one of the earliest in-situ stellar births, as part of the formation of the primordial thin disk, which came together from small building blocks.
The fact that the star has a very old age of $\sim13.5$\,Gyr \citep{Schlaufman2018} needs to be taken into account when addressing these scenarios.
Regarding scenario (1), some theoretical models of the formation of the Galactic disk predict an old thin-disk population to be built up from satellite debris \citep[for example, see figure~8 in][]{Abadi2003}. However, the contribution of a satellite depends heavily on its orbit and the level of dynamical friction \citep{Statler1988}. In this picture, the core of the satellite must be dense enough to survive tidal disruption up to the time its orbit circularizes within the disk (i.e., it interacts strongly with the disk and deposits a significant fraction of its stars). The still small number of metal-poor ({\metal} $<-3$) stars discovered with thin-disk-like kinematics \citep[e.g.,][]{Bensby2014,Sestito2019,Carter2020,Cordoni+2020,Matteo2020,Venn2020, Mardini2022} furthermore suggests that this scenario likely does not explain the origin of J1808$-$5104.
The second scenario then requires the absence of dynamical interactions with the spiral arms and/or merging satellite(s) that would heat up the orbit of J1808$-$5104 (i.e., increase e and Z as a function of time). We investigated this scenario by tracing back the orbital history of each of the 10,000 realizations obtained with \texttt{ORIENT}, to check whether J1808$-$5104 can maintain its thin-disk kinematics during the past 8\,Gyr. Figure~\ref{fig:orient_test} shows e, r$_{apo}$, and Z$_{max}$ for 200 realizations (each data point represents one realization)\footnote{A version containing the results of all 10,000 realizations is very crowded; we therefore show the results for 200 randomly selected realizations.}, for five different \texttt{ORIENT} potentials (each symbol represents one model), at four different cosmic times (each color represents one cosmic time). Note that Z$_{max}$ is calculated with respect to the model's disk orientation at the given cosmic time. All of these data points retain thin-disk kinematics. Thus, it appears physically quite possible for J1808$-$5104 to survive these dynamical interactions and maintain its thin-disk kinematics over billions of years.
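The consistency check across realizations can be illustrated schematically as below. The orbital parameters here are mock values loosely matching the ranges quoted in the text, and the simple cuts on e and Z$_{max}$ are hypothetical stand-ins for the action-based diagnostic of \citet{Mardini2022}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock orbital parameters for 200 realizations (illustrative values only,
# loosely matching the quoted ranges: e ~ 0.22 +/- 0.01, Z_max < 1 kpc).
e = rng.normal(0.22, 0.01, 200)
z_max = rng.normal(0.5, 0.1, 200)  # kpc

# Simplified thin-disk criterion (hypothetical thresholds; the actual
# diagnostic of Mardini et al. 2022 uses stellar actions and velocities).
is_thin_disk = (e < 0.4) & (np.abs(z_max) < 1.0)

print(is_thin_disk.mean())  # fraction of realizations classified as thin disk
```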
In summary, our chemical-abundance and kinematic results, paired with the old age of the system \citep[$\approx 13$\,Gyr,][]{Schlaufman2018}, suggest that J1808$-$5104 is the most primitive thin-disk star known. It likely formed at the earliest epoch of the hierarchical assembly of the Milky Way, and would thus be an ancient member of the primordial thin disk.
\section{Conclusions}\label{sec:Conclusions}
In this paper, we present a comprehensive chemo-dynamical analysis of the most metal-poor thin-disk star, 2MASS~J18082002$-$5104378. We provide five additional radial-velocity (RV) measurements based on Magellan/MIKE high-resolution spectra. These RV measurements suggest that J1808$-$5104 is in a binary system. The system has an orbital period of $P = 34.7385_{-0.2}^{+0.2}$\,days and a systemic line-of-sight velocity of RV = 14.8\,km\,s$^{-1}$.
We report the first detection of the Ba\,{II} line at 4554\,\AA\ in J1808$-$5104. The observed chemical pattern suggests that J1808$-$5104 exhibits mild enhancements in the $\alpha$-elements, and no enhancements in either carbon ($\mbox{[C/Fe]} = 0.38 \pm 0.10$) or the neutron-capture elements ([Sr/Fe]~=~$-0.87 \pm 0.10$ and [Ba/Fe]~=~$-0.70 \pm 0.10$), indicating that J1808$-$5104 formed in a chemically primitive cloud that experienced relatively few enrichment events. We compare the light-element abundance pattern to theoretical yields of Population\,III stars adopted from \cite{heger_woosley10}. The best-fit model suggests a progenitor with a stellar mass of 29.5\,M$_\odot$ and an explosion energy of $10\times 10^{51}$\,erg. Overall, the comparison suggests that a fallback supernova with no mixing was responsible for the chemical enrichment of J1808$-$5104.
We also perform a comprehensive study of the possible orbital evolution of 10,000 J1808$-$5104-like stars using our time-dependent Galactic potentials from \texttt{ORIENT} and the diagnostic tool developed in \citet{Mardini2022}. The results show that all of the J1808$-$5104-like stars maintain quasi-circular orbits with Z$_{max} < 1$\,kpc and remain bound to the Galaxy. Taken together, these orbits exclude the possibility that J1808$-$5104 has an accretion origin, and suggest that it is a member of the primordial thin disk.
\section*{Acknowledgements}
This work is supported by a Basic Research Grant (Super AI) of the Institute for AI and Beyond of the University of Tokyo. M.k.M. acknowledges partial support from NSF grant OISE 1927130 (International Research Network for Nuclear Astrophysics/IReNA). A.F. acknowledges support from NSF CAREER grant AST-1255160 and NSF grant AST-1716251. A.C. is supported by a Brinson Prize Fellowship at the University of Chicago/KICP. The work of V.M.P. is supported by NOIRLab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. I.U.R.\ acknowledges support from NSF grant AST~1815403/1815767 and the NASA Astrophysics Data Analysis Program, grant 80NSSC21K0627. This work made use of NASA's Astrophysics Data System Bibliographic Services.
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC,
\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
\bibliographystyle{mnras}
\bibliography{references}
\bsp %
\label{lastpage}
Title:
A Unified Catalog-level Reanalysis of Stage-III Cosmic Shear Surveys
Abstract: Cosmological parameter constraints from recent galaxy imaging surveys are
reaching $2-3\%$-level accuracy. The upcoming Legacy Survey of Space and Time
(LSST) of the Vera C. Rubin Observatory will produce sub-percent level
measurements of cosmological parameters, providing a milestone test of the
$\Lambda$CDM model. To supply guidance to the upcoming LSST analysis, it is
important to understand thoroughly the results from different recent galaxy
imaging surveys and assess their consistencies. In this work we perform a
unified catalog-level reanalysis of three cosmic shear datasets: the first year
data from the Dark Energy Survey (DES-Y1), the 1,000 deg$^{2}$ dataset from the
Kilo-Degree Survey (KiDS-1000), and the first year data from the Hyper
Suprime-Cam Subaru Strategic Program (HSC-Y1). We utilize a pipeline developed
and rigorously tested by the LSST Dark Energy Science Collaboration to perform
the reanalysis and assess the robustness of the results to analysis choices. We
find the $S_{8}$ constraint to be robust to two different small-scale modeling
approaches, and varying choices of cosmological priors. Our unified analysis
allows the consistency of the surveys to be rigorously tested and we find the
three surveys to be statistically consistent. Due to the partially overlapping
footprint, we model the cross-covariance between KiDS-1000 and HSC-Y1
approximately when combining all three datasets, resulting in a $1.6-1.9\%$
constraint on $S_8$ given different assumptions on the cross-covariance.
https://export.arxiv.org/pdf/2208.07179
\hbadness=10002
\vbadness=10002
\author[Emily P. Longley et al.]{
Emily P. Longley,$^{1}$
Chihway Chang,$^{2,3}$
Christopher W. Walter,$^{1}$
Joe Zuntz,$^{4}$
\newauthor
Mustapha Ishak,$^{5}$
Rachel Mandelbaum,$^{6}$
Hironao Miyatake,$^{7,8}$ \newauthor
Andrina Nicola,$^{9}$
Eske M. Pedersen,$^{10}$
Maria E.\ S.\ Pereira,$^{11}$
Judit Prat,$^{2,3}$ \newauthor
J. S\'{a}nchez,$^{12}$
Lucas F. Secco,$^{3}$
Tilman Tr{\"o}ster,$^{13}$
Michael Troxel,$^{1}$
Angus Wright,$^{14}$\newauthor
The LSST Dark Energy Science Collaboration
\\
$^{1}$Department of Physics, Duke University, Durham NC 27708, USA\\
$^{2}$Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL 60637, USA \\
$^{3}$Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637, USA \\
$^{4}$Institute for Astronomy, University of Edinburgh, Edinburgh EH9 3HJ, United Kingdom \\
$^{5}$Department of Physics, The University of Texas at Dallas, Richardson, TX 75080, USA \\
$^{6}$Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213, USA \\
$^{7}$Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI),
Nagoya University, Nagoya, 464-8602, Japan \\
$^{8}$Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo Institutes for Advanced Study (UTIAS), \\ The University of Tokyo, Chiba 277-8583, Japan \\
$^{9}$Department of Astrophysical Sciences, Princeton University, Peyton Hall, Princeton, NJ
08544, USA \\
$^{10}$Department of Physics, Harvard University, 17 Oxford street, Cambridge, MA 02138, USA \\
$^{11}$Hamburger Sternwarte, Universit{\"a}t Hamburg, Gojenbergsweg 112, 21029 Hamburg, Germany \\
$^{12}$Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA \\
$^{13}$Institute for Particle Physics and Astrophysics, ETH Z{\"u}rich, Wolfgang-Pauli-Strasse 27, 8093 Z{\"u}rich, Switzerland \\
$^{14}$Ruhr-University Bochum, Faculty of Physics and Astronomy, Astronomical Institute (AIRUB),\\ German Centre for Cosmological Lensing, 44780 Bochum, Germany
}
\section{Introduction}
Weak (gravitational) lensing refers to the subtle coherent distortion of galaxy shapes due to the bending of light in the gravitational fields sourced by the mass distribution between the galaxy and the observer. Tomographic cosmic shear measures these distortions in bins of redshift and gives a picture of the Universe's growth of structure and expansion with time. Cosmic shear is particularly sensitive to the total matter density today, $\Omega_{\rm m}$, and the normalization of the matter fluctuations on $8h^{-1}$Mpc scales, $\sigma_{8}$. It is usually quoted via the quantity $S_{8}\equiv \sigma_{8}\sqrt{\Omega_{\rm m}/0.3}$, which approximates the most constrained direction in this parameter space using the weak lensing power spectrum \citep{Jain1997}.
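As a minimal worked example of the definition above, $S_8$ can be computed with a one-line helper (the parameter values in the calls are arbitrary illustrations, not survey results):

```python
import math

def s8(sigma8: float, omega_m: float) -> float:
    """S_8 = sigma_8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

print(s8(0.8, 0.3))   # 0.8 by construction when Omega_m = 0.3
print(s8(0.81, 0.26))
```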
Cosmology from weak lensing with galaxy imaging surveys has reached an exciting milestone. The ``Stage-III'' and ``Stage-IV'' classification was introduced in the Dark Energy Task Force report \citep{Albrecht2006} to denote different phases of dark energy experiments: Stage-III refers to the dark energy experiments that started in the 2010s, and Stage-IV to those that start in the 2020s, taking full advantage of further technological advances. Current surveys have been publishing their intermediate results, showing that the cosmological constraint on the $S_{8}$ parameter from weak lensing is approaching the precision achieved by cosmic microwave background (CMB) experiments, and will reach it in Stage-IV. In particular, the three largest surveys today -- the Dark Energy Survey \citep[DES,][]{Flaugher2015}, the Kilo-Degree Survey \citep[KiDS,][]{deJong2015} and the Hyper Suprime-Cam Subaru Strategic Program \citep[HSC-SSP,][]{Aihara2018a} -- have all delivered exquisite measurements and cosmological constraints from just a subset of their final data, obtained through blinded analyses. The results have revealed an intriguing difference in the parameter $S_{8}$ between the galaxy surveys and the CMB experiments -- galaxy imaging surveys tend to prefer a lower $S_{8}$ value than CMB experiments \citep{Hikage2019,Hamana2018,Planck2018,Asgari2021,Amon2021,Secco2021,HamanaRevision}.\footnote{We note that in recent work by \cite{Galli2022} the experiments are less discrepant when adopting the extension model $\Lambda\textrm{CDM}+b$, which includes modeling of baryon clumping.} However, this difference is not at a level of significance that indicates a distinct tension.
Thus far, the commonly adopted $\Lambda$CDM cosmological model, in which the universe is dominated by cold dark matter (CDM) and the accelerated expansion is driven by a cosmological constant $\Lambda$, has been successful in explaining the growth of structure. However, if the tension between low- and high-redshift experiments is found to be statistically significant, it could be an indication of exciting new physics and evidence of a breakdown of the model. On the other hand, it could also be a sign of unknown systematic errors in either the galaxy or the CMB results.
Given the complementarity of the DES, KiDS and HSC-SSP data characteristics, and the nearly independent analysis pipelines, comparing the three surveys under unified code and analysis choices is an extremely strong test for systematic effects. The redundancy of data products and analysis approaches provides one of the most powerful ways to check the robustness of a cosmology result. For example, \citet{Joudaki2020} showed that the different approaches used in KiDS and DES for photometric redshift calibration could shift either survey's $S_{8}$ constraints by a small amount. \citet{Asgari2020} explored mitigating baryon feedback uncertainty in a joint KiDS and DES analysis. Also, \citet{Troxel2018} showed that correcting for the survey geometry in the covariance matrix could improve the goodness-of-fit for both KiDS and DES. Finally, as shown in \citet{Doux2021} and \citet{Asgari2021}, the use of different cosmic shear estimators and scale cuts could explain the difference seen between two analyses of the same dataset in both HSC and KiDS. Most of these analyses focus on a few specific effects and datasets, making it rather challenging to state conclusively whether cosmic shear results are consistent with CMB constraints from {\it Planck} \citep{Planck2018}.
The LSST Dark Energy Science Collaboration (DESC) has developed a program over the years of re-analyzing Stage-III data with DESC pipelines. \citet{Chang2019} was the pilot work, reanalyzing published weak-lensing catalogs using a common pipeline and unified analysis choices. The authors reanalyzed the cosmic shear studies from four galaxy imaging surveys: the Deep Lens Survey \citep[DLS,][]{Jee2016}, the Canada-France-Hawaii Telescope Lensing Survey \citep[CFHTLenS,][]{Joudaki2017}, the DES Science Verification (SV) data \citep[DES-SV,][]{Abbott2015}, and the 450 deg$^{2}$ KiDS data \citep[KiDS-450,][]{Hildebrandt2017}. First, they attempted to reproduce the published results and, through that process, discovered subtle issues in each pipeline. Some examples include being overly aggressive in including small scales, an error in defining the angular bins, and an outdated choice of model for the nonlinear power spectrum. They next studied the impact on the cosmological constraints of unifying three specific analysis choices: angular scale cuts, model parameters/priors, and the covariance model. They showed that once these analysis choices were unified, the cosmological constraints appeared much less consistent than the published results, and the relative constraining power appeared to change significantly too. This work was a sobering reminder of how sensitive cosmological constraints are to the various analysis decisions we make, even after the galaxy catalogs are generated (which is on its own an extremely challenging task). It also highlights the importance of transparent, independent cross-checks amongst different experiments, in order to identify issues in the pipelines and analysis choices of previous results. Additionally, unifying analysis choices and priors allows for a correct comparison between the surveys' results and allows us to quantify their agreement, which cannot be properly computed with the current disparate choices.
Since \citet{Chang2019}, a longer-term pipeline framework has been established in DESC for a range of analyses using the software package \textsc{Ceci},\footnote{\url{https://github.com/LSSTDESC/ceci}} which is built on the modular workflow engine \textsc{Parsl}\footnote{\url{https://parsl-project.org}} \citep{Babuji2018}. In particular, \textsc{TXPipe}\footnote{\url{https://github.com/LSSTDESC/TXPipe}} (Prat, Zuntz et al. {\em in prep}) is the measurement pipeline designed for measuring various two-point functions for large-scale structure cosmology. \textsc{TXPipe} is designed to be a single code base that collects all functionalities under a common structure. The code is modular, transparent, and well-tested. \textsc{TXPipe} is under constant active development and is used as the main measurement code in this work. We expect the pipeline to remain the primary ``catalog to two-point measurement'' code up to and into LSST operations. In the coming years, this type of reanalysis work will be extremely valuable in preparation for science with LSST.
The main goal of this paper is to reanalyze the cosmic shear studies carried out by three Stage-III surveys: the first year of DES data \citep[hereafter DES-Y1,][]{Troxel2017}, the first year of HSC-SSP data \citep[hereafter HSC-Y1,][]{Hamana2018,HamanaRevision}, and the 1000 deg$^2$ KiDS data \citep[hereafter KiDS-1000,][]{Asgari2021}. Ultimately, we would like to understand whether the three datasets are consistent and, if so, what the combined constraint is. The shear catalogs used in all three studies are public and serve as the input to our analysis. We do not attempt to reevaluate the photometric redshift estimates and redshift distributions, but encourage this as future work: it would be a valuable test of DESC photo-z infrastructure as well as a chance to compare photometric redshift approaches across surveys. Similarly, we do not re-measure the shapes of galaxies, but encourage future analyses to perform an image-level reanalysis, which would give valuable insight into shear measurement and calibration. We adopt the real-space two-point correlation functions as our fiducial statistic, which at the time of this analysis is the statistic available for comparison to public results from all three surveys. An assessment of the consistency of the surveys with different statistics would be an interesting exercise for future work. Following a similar logic as \citet{Chang2019}, we first attempt to reproduce the published results, both in terms of the data vector and the cosmological constraints. Next, we test the sensitivity of the cosmological constraints to various analysis choices made by each survey. We then present results from the three datasets using a set of unified analysis choices and evaluate the consistency between them. The final step is to combine the datasets that are consistent with each other and evaluate the consistency of the combined result with CMB constraints from {\it Planck}.
This work represents the first use of \textsc{TXPipe} on real data, which poses an opportunity to stress-test the infrastructure. We aim for this process to provide guidance for LSST and highlight any areas of the ``catalog to cosmology'' software and methodology that needs to be further developed.
The paper is organized as follows. In Section~\ref{sec:data} we describe the basic characteristics of the three datasets that we use in this work, as well as the basic information of the three cosmic shear analyses we will be investigating. In Section~\ref{sec:analysis} we provide a brief overview of the different elements of the analysis: the theoretical model, the measurement pipeline, and the inference code. In Section~\ref{sec:fiducial} we compare our reanalysis with published results, including comparison of the data vector and the cosmological constraints.
In Section~\ref{sec:priors} we study the sensitivity of each survey result to the choice of priors on the cosmological parameters. Similarly in Section~\ref{sec:scale_cuts} we study their sensitivity to the treatment of small-scale baryonic model uncertainties. In Section~\ref{sec:unify} we perform a unified analysis with all three datasets assuming the same analysis choices in terms of priors on the cosmological parameters, intrinsic alignment model and small-scale treatment. We evaluate the consistency between the different datasets and combine them to compare with CMB constraints from {\it Planck}. Finally, we conclude in Section~\ref{sec:conclusion}.
\section{Stage-III Cosmic Shear Analyses}
\label{sec:data}
Since the first detection of cosmic shear \citep{Kaiser2000,Bacon2000,Wittman2000,VanWaerbeke2000}, the field has seen a rapid growth. In particular, a number of large surveys have delivered cosmic shear results with competitive cosmological constraints in the past few years \citep{Heymans2013,Becker2015,Jee2016,Hildebrandt2017,Joudaki2017,Troxel2017,Hikage2019,Hamana2018,Asgari2021,Amon2021,Secco2021}, while recent and future surveys will deliver data in much larger volumes and better quality.
We focus on the reanalysis of three cosmic shear studies using three independent Stage-III surveys: \citet{Troxel2017,Hamana2018,Asgari2021}. These correspond to the latest versions of the publicly available shear catalogs from DES, HSC-SSP and KiDS at the time our analysis began. We show in Figures~\ref{fig:footprints}\footnote{This plot was made using the code \textsc{cartosky} (\url{https://github.com/kadrlica/cartosky}).} and \ref{fig:nz} the footprints and redshift distributions of the three datasets. Overall, the footprints of the three datasets are largely non-overlapping, except for three of the six HSC-Y1 fields (GAMA09H, GAMA15H and WIDE12H), which overlap KiDS-1000 North. The redshift distributions span a similar range in DES-Y1 and KiDS-1000, while the HSC-Y1 survey extends roughly $0.5$ deeper in redshift.
We now describe in more detail below the characteristics of each dataset as well as an overview of the cosmic shear analyses carried out in \citet{Troxel2017,Hamana2018} and \citet{Asgari2021}. Some of the important information from each survey is summarized in Table~\ref{survey_summary}.
\subsection{DES-Y1}
The Dark Energy Survey (DES) first-year dataset consists of observations from DES between 2013 and 2014 as described in \citet{Drlica-Wagner2017}. The survey used
the Dark Energy Camera \citep{Flaugher2015} on the Cerro Tololo Blanco 4m telescope with five filter bands ($grizY$).
The DES-Y1 cosmology analysis from weak lensing was presented in \citet{Troxel2017}. That work showed consistent results between two independent galaxy shape catalogs, whose details and validation are recorded in \cite{Zuntz2017}. In this work we use the catalog produced by the \textsc{Metacalibration} \citep{Huff2017,Sheldon2017} shear measurement pipeline run on the $riz$ bands, which contains 26.1 million galaxies and covers an area of 1321 deg$^{2}$. The $riz$ $5\sigma$ limiting magnitude and seeing of this dataset are $\sim24.0$ and $0.96\arcsec$, respectively. The algorithm was run in the fast Bayesian fitting framework \textsc{ngmix} \citep{Sheldon2014}, fitting each galaxy to a Gaussian (convolved with a PSF model) to estimate its ellipticity. Images are then artificially sheared and the ellipticity is re-measured to calculate a response matrix that is used to calibrate the shear measurement. The self-calibration scheme additionally accounts for selection effects by performing a similar calculation with and without a given selection applied.
We use the same redshift distribution estimates as \citet{Troxel2017}; the tomographic binning and the $n_{i}(z)$ distributions are described in detail in \cite{Hoyle2017}. Galaxies are assigned to a tomographic bin based on the mean of the photo-z posterior derived with the Bayesian photometric redshift code \citep[BPZ,][]{Benitez2000}, and redshift distributions are estimated by stacking Monte Carlo (MC) draws from the photo-z posteriors.\footnote{In \citet{Malz2018} a more mathematically robust method for treating photo-z posteriors is introduced, which accounts for the assumptions made when stacking photo-z posteriors in the methods used by the surveys considered in this paper. We do not re-derive the $n(z)$'s in this work.} All catalogs are publicly available.\footnote{\url{https://des.ncsa.illinois.edu/releases/y1a1}}
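The stacking of MC draws described above can be sketched as follows. Each galaxy's photo-z posterior is idealized here as a Gaussian with made-up parameters, a simplification of the actual BPZ posteriors:

```python
import numpy as np

rng = np.random.default_rng(1)
n_gal = 5_000

# Idealized per-galaxy photo-z posteriors: Gaussians with random means and
# a width growing with redshift (illustrative numbers, not BPZ output).
z_mean = rng.uniform(0.2, 1.3, n_gal)
z_sigma = 0.05 * (1.0 + z_mean)

# One Monte Carlo draw per galaxy from its posterior, then stack into n(z).
draws = rng.normal(z_mean, z_sigma)
bins = np.linspace(0.0, 2.0, 41)
n_z, _ = np.histogram(draws, bins=bins, density=True)

# density=True normalizes n(z) to integrate to unity over the binned range.
dz = np.diff(bins)
print(np.sum(n_z * dz))  # 1.0 by construction
```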
The analysis pipeline used in \citet{Troxel2017} is based on the software package \textsc{CosmoSIS} \citep{Zuntz2014}, which is the same cosmology inference framework we use in this paper.
\begin{table*}
\caption{Summary of the basic characteristics of the three cosmic shear analyses that we reanalyze in this paper. We quote each survey's effective number density as $n_{\rm eff} = \frac{1}{A_{\rm eff}}\frac{(\sum{w_{i}})^{2}}{\sum{w_{i}^{2}}}$ for each tomographic bin. Similarly, we list the standard deviation of the galaxy shapes, $\sigma_{e}$, for each bin, defined as the quadrature sum of the measurement error and the shape noise \citep[following the definition in][]{Heymans2012}. We note there are alternative definitions of these quantities, namely that of \citet{Chang2012}, which includes purely the shape-noise component, and that of \citet{Joachimi2020}, which includes the contribution from the shape calibration. Redshift ranges are listed as the tomographic bin edges. The last row lists the final data vector length after scale cuts.}
\label{survey_summary}
\small\centering
\begin{tabular}{||l l l l||}
\hline
& \multicolumn{1}{c}{DES-Y1} & HSC-Y1 & KiDS-1000 \\ [0.5ex]
\hline
Reference & \citet{Troxel2017} & \citet{Hamana2018} & \citet{Asgari2021} \\
\hline
Area (deg$^{2}$) & 1321.0 & 136.9 & 1006.0 \\
\hline
Tomographic bins & [0.20, 0.43, 0.63, 0.90, 1.30] & [0.30, 0.60, 0.90, 1.20, 1.50] & [0.1, 0.3, 0.5, 0.7, 0.9, 1.2] \\
\hline
$\sigma_{e}$ & [0.26, 0.29, 0.27, 0.29] & [0.27, 0.27, 0.29, 0.32]$^{a}$ & [0.27, 0.26, 0.27, 0.25, 0.27] \\
\hline
$n_{\rm eff}$ (arcmin$^{-2}$) & [1.52, 1.55, 1.63, 0.83] & [5.78, 5.85, 4.46, 2.61] & [0.62, 1.18, 1.85, 1.26, 1.31] \\
\hline
$[\theta_{\rm min}, \theta_{\rm max}]$ (arcmin) & [2.5, 250] & [0.316, 316] & [0.5, 300] \\
\hline
Number of angular bins & 20 & 31 & 9 \\
\hline
Data vector length & 227 & 170 & 225 \\
\hline
\multicolumn{4}{l}{\footnotesize $^{a}$ We note that we quote $\sigma_{e}$ for HSC using the shear definition $(|e|=(a-b)/(a+b))$, whereas the original analysis} \\
\multicolumn{4}{l}{adopted the distortion $(|e|=(a^2-b^2)/(a^2+b^2))$. In that definition the values are [0.41, 0.42, 0.43, 0.45].} \\
\label{table:survey_properties}
\end{tabular}
\end{table*}
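The effective number density defined in the caption of Table~\ref{survey_summary} can be computed directly from the per-galaxy lensing weights; a minimal sketch (the helper name and example numbers are ours, for illustration):

```python
import numpy as np

def n_eff(weights, area_arcmin2):
    """Effective number density (arcmin^-2), following Heymans et al. (2012):
    n_eff = (1 / A_eff) * (sum w)^2 / sum w^2."""
    w = np.asarray(weights, dtype=float)
    return (w.sum() ** 2 / (w ** 2).sum()) / area_arcmin2

# For uniform weights, n_eff reduces to the raw number density N / A.
print(n_eff(np.ones(1000), 100.0))  # 10.0
```

Non-uniform weights always give $n_{\rm eff} \le N/A_{\rm eff}$, reflecting the loss of statistical power from down-weighted galaxies.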
\subsection{HSC-Y1}
The Hyper Suprime-Cam Subaru Strategic Program (HSC SSP) first-year dataset contains observations from between March 2014 and April 2016 taken in 6 disjoint regions (named XMM, GAMA09H, WIDE12H, GAMA15H, VVDS, and HECTOMAP) with the Subaru telescope \citep{Aihara2018b}. The data processing pipeline used by HSC \citep{Bosch2018} is a customized prototype version of
the Rubin Observatory’s LSST Science Pipelines
and thus provides valuable insight into the future data products of LSST.
The HSC-Y1 cosmic shear analysis was performed with both a power spectrum measurement, described in \citet{Hikage2019}, and real-space measurements, described in \citet{Hamana2018} and \citet{HamanaRevision}. For both studies, the shape catalog measurements were validated through an extensive set of tests described in \citet{Mandelbaum2018b}. The catalog covers an area of $136.9$ deg$^{2}$ and contains $13.1$ million galaxies. These catalogs are made public through the S16A public data release described in \citet{Aihara2018a}.\footnote{\hfill\url{https://hsc-release.mtk.nao.ac.jp/doc/index.php/s16a-shape-catalog-pdr2/}}
Shapes were measured on the $i$-band images using the re-Gaussianization PSF correction method \citep{Hirata2004}. The mean $i$-band seeing and $5\sigma$ limiting magnitude are $0.58\arcsec$ and $\sim 26$, respectively. For the weak-lensing catalog, a magnitude cut of $i<24.5$ is applied.
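Re-Gaussianization natively works with distortions, $|e| = (a^2-b^2)/(a^2+b^2)$, whereas Table~\ref{survey_summary} quotes $\sigma_e$ in the shear-style convention $|e| = (a-b)/(a+b)$. The two definitions are related by $d = 2e/(1+e^{2})$, which a quick numerical check confirms:

```python
a, b = 1.5, 1.0  # semi-major and semi-minor axes of an example galaxy image

e = (a - b) / (a + b)              # shear-style ellipticity
d = (a**2 - b**2) / (a**2 + b**2)  # distortion

# The two definitions satisfy d = 2e / (1 + e^2) identically.
print(abs(d - 2 * e / (1 + e**2)) < 1e-12)  # True
```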
Our catalog corresponds to the same weak-lensing cuts that were applied in \citet{Hikage2019} and \citet{Hamana2018}.\footnote{We note that the numbers of galaxies listed in Table~\ref{table:survey_properties} differ from those in \citet{Hamana2018} and \cite{Hikage2019}. We have clarified with the authors that the numbers listed here are correct and the sample used in this paper is identical to that in their work.} Six photo-z codes were tested for the \cite{Mandelbaum2018a} catalog; their details are described in \cite{Tanaka2018}. Following their recommendation we perform tomographic binning using the \texttt{Ephor AB best} photo-z point estimate. We use the same redshift distribution estimates as \citet{Hikage2019} and \citet{Hamana2018}, described in \citet{Tanaka2018}, which were estimated as a histogram of COSMOS $30$-band photo-z's, re-weighted such that the distributions in a self-organizing map constructed from $grizy$ colors reflect that of the source sample. The shapes are calibrated using the values and procedure described in \citet{Mandelbaum2018b}.
\subsection{KiDS-1000}
The Kilo-Degree Survey (KiDS) 1000 deg$^{2}$ dataset represents the most recent data of the three used in this work and was made available in the fourth data release by the KiDS collaboration\footnote{\url{http://kids.strw.leidenuniv.nl/DR4/lensing.php}} \citep{Kuijken2019}. The images were taken on the OmegaCAM CCD Mosaic of the VLT Survey Telescope \citep[VST,][]{Kuijken2015,deJong2015,Kuijken2019}. The data release additionally includes nine-band near-infrared photometry ($ugriZYJHK_{s}$) based on imaging from the fully overlapping VISTA Kilo degree INfrared Galaxy Survey \citep[VIKING;][]{Edge2013}.
Cosmic shear constraints from KiDS-1000 are found in \citet{Asgari2021}. The weak lensing sample consists of two fields (North and South) that total an area of 1006 deg$^{2}$. The galaxy shape measurement process and validation tests are described in \citet{Giblin2021}. The self-calibrating $lens$fit software \citep{Miller2013,FenechConti2017} was used for shape measurements from the $r$-band images, which have a mean seeing of $0.7\arcsec$ and a $5\sigma$ limiting magnitude of $\sim25.0$. Photo-z point estimates for tomographic binning are determined from the peak of the posterior produced by running BPZ on the nine-band photometry. The redshift distribution estimation is described in detail in \citet{Hildebrandt2021} and is based on a re-weighted spectroscopic reference catalog using a self-organizing map (SOM). The SOM process determines the ``gold'' sample, where sources that do not lie in the reference color space are cut, resulting in a final sample of 21.2 million galaxies.
In \citet{Asgari2021} the cosmic shear constraints from three statistics are considered, namely two-point correlation functions (used in this work), complete orthogonal sets of E/B-integrals (COSEBIs), and band power spectra. Each is a linear transformation of the observed cosmic shear two-point correlation function. They adopt the COSEBIs analysis as their fiducial result, since COSEBIs have the advantage of separating the E- and B-modes of the cosmic shear signal. COSEBIs can be calculated from two-point correlation functions computed in a large number of finely spaced angular bins. In their work they show that the COSEBIs, band power, and two-point correlation function results are consistent. There have also been two analyses of the KiDS-1000 data using the pseudo-$C_\ell$ approach \citep{Loureiro2021,Troster2022}.
\section{Analysis}
\label{sec:analysis}
\subsection{Theoretical Background}
The two-point correlation function of galaxy shapes, $\xi_{\pm}(\theta)$ \citep{Bartelmann2001}, is a common statistic used to extract weak lensing information. Assuming the flat-sky approximation, these two-point functions are connected to the lensing power spectrum $C(\ell)$ via
\begin{equation}
\xi^{ij}_{\pm}(\theta) = \frac{1}{2\pi}\int d\ell \, \ell J_{0/4}(\theta \ell) \, C^{ij}(\ell),
\label{eq:xipm}
\end{equation}
where $J_{0/4}$ are the 0th/4th-order Bessel functions of the first kind. The $i$ and $j$ indices specify the two samples of galaxies (or in the case of $i=j$, the same galaxy sample) from which the correlation function is calculated. Usually these samples are defined by a certain redshift selection. Under the Limber approximation \citep{Limber1953,Loverde2008} and in a spatially flat universe,\footnote{For a non-flat universe, one would replace $\chi$ by $f_{K}(\chi)$ in the following equations, where $K$ is the universe's curvature, $f_{K}(\chi)=K^{-1/2}\sin(K^{1/2}\chi)$ for $K>0$ and $f_{K}(\chi)=(-K)^{-1/2}\sinh((-K)^{1/2}\chi)$ for $K<0$.} the lensing power spectrum encodes cosmological information through
\begin{equation}
C^{ij}(\ell) = \int_{0}^{\chi_{H}} d\chi \frac{q^{i}(\chi)q^{j}(\chi)}{\chi^2} P_{\rm NL}\left( \frac{\ell + 1/2}{\chi}, \chi \right),
\label{eq:Cl}
\end{equation}
where $\chi$ is the radial comoving distance, $\chi_{H}$ is the distance to the horizon, $P_\text{NL}$ is the nonlinear matter power spectrum, and $q(\chi)$ is the lensing efficiency defined via
\begin{equation}
q^{i}(\chi) = \frac{3}{2} \Omega_{\rm m} \left( \frac{H_{0}}{c}\right)^{2} \frac{\chi}{a(\chi)} \int_{\chi}^{\chi_{H}}d\chi ' n_{i}(\chi') \frac{dz}{d\chi'} \frac{\chi' - \chi}{\chi'},
\label{eq:lensing_efficiency}
\end{equation}
where $\Omega_{\rm m}$ is the matter density today, $H_{0}$ is the Hubble parameter today, $a$ is the scale factor, and $n_{i}(\chi)$ is the redshift distribution of the galaxy sample $i$.
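The geometry of Equation~\eqref{eq:lensing_efficiency} can be sketched numerically. In the toy example below the source distribution is a narrow Gaussian, the scale factor is held fixed, and all numerical values are illustrative rather than survey inputs.

```python
import math

def q_kernel(chi, p_src, chi_h, steps=2000):
    # Geometric part of the lensing efficiency: the integral over chi' of
    # p_src(chi') * (chi' - chi) / chi' from chi to chi_h (midpoint rule),
    # where p_src is the source distribution per unit comoving distance.
    if chi >= chi_h:
        return 0.0
    h = (chi_h - chi) / steps
    total = 0.0
    for i in range(steps):
        cp = chi + (i + 0.5) * h
        total += p_src(cp) * (cp - chi) / cp
    return total * h

def lensing_efficiency(chi, p_src, chi_h, omega_m=0.3,
                       h0_over_c=1.0 / 2997.9, a=1.0):
    # q(chi) with the prefactor (3/2) Omega_m (H0/c)^2 chi / a(chi);
    # a(chi) is held fixed here purely to keep the sketch short.
    return 1.5 * omega_m * h0_over_c ** 2 * chi / a * q_kernel(chi, p_src, chi_h)

# Sources concentrated at chi_s = 2000: the kernel approaches (chi_s - chi) / chi_s.
chi_s, sigma = 2000.0, 10.0
p = lambda c: math.exp(-0.5 * ((c - chi_s) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
```

For a single source plane the kernel reduces to the familiar $(\chi_s-\chi)/\chi_s$ weighting, which the narrow Gaussian above reproduces.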
We note that although we focus on the $\xi_{\pm}$ statistics, several alternative cosmic shear statistics have been used in the literature aside from Equation~\eqref{eq:xipm}. These include the Fourier space lensing power spectrum \citep[i.e. Equation~\eqref{eq:Cl}, see][]{Nicola2021,Camacho2021} and the Complete Orthogonal Sets of E/B-Integrals \citep[COSEBIs, see][]{Schneider2010, Asgari2017}. A comprehensive analysis of the different two-point estimators can be found in \citet{Asgari2021}.
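As a concrete numerical illustration of Equation~\eqref{eq:xipm}, the sketch below evaluates $\xi_+$ for a toy spectrum $C(\ell)=e^{-\ell/\ell_0}$ by direct quadrature; the Bessel function is computed from its integral representation to keep the example dependency-free, and the spectrum and scales are purely illustrative.

```python
import math

def bessel_j(n, x, steps=400):
    # Integer-order Bessel function of the first kind via
    # J_n(x) = (1/pi) * integral_0^pi cos(n t - x sin t) dt  (midpoint rule).
    h = math.pi / steps
    return sum(math.cos(n * (i + 0.5) * h - x * math.sin((i + 0.5) * h))
               for i in range(steps)) * h / math.pi

def xi_plus(theta, c_of_ell, ell_max=2000.0, steps=500):
    # xi_+(theta) = (1/2pi) * integral dl l J_0(theta l) C(l); theta in radians.
    h = ell_max / steps
    total = 0.0
    for i in range(steps):
        ell = (i + 0.5) * h
        total += ell * bessel_j(0, theta * ell) * c_of_ell(ell)
    return total * h / (2.0 * math.pi)

# Toy spectrum: C(l) = exp(-l / l0).
l0 = 100.0
c_toy = lambda ell: math.exp(-ell / l0)
```

At $\theta=0$ the toy integral has the closed form $\ell_0^{2}/2\pi$, which makes the quadrature easy to check.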
\subsection{Modeling Systematic Effects}
In addition to the background model, we account for a number of observational and astrophysical systematic effects described below.
\subsubsection{Intrinsic Alignment}
When galaxies form near the same large-scale structure, their shapes can be coherently distorted by its gravitational field. Background galaxies lensed by that structure can likewise have shapes correlated with those of the galaxies that formed within it. In addition, evolutionary processes and galaxy mergers can induce intrinsic alignments between objects.
This intrinsic alignment (IA) effect causes galaxy shapes to be correlated as a function of proximity, systematically affecting the weak lensing signal. A commonly used IA model in cosmic shear analyses is the nonlinear alignment model \citep[NLA,][]{Hirata2004,Bridle2007,Joachimi2011}. The model assumes that the IA power spectrum scales with the nonlinear matter power spectrum, and it includes two contributions: the ``intrinsic shear -- intrinsic shear (II)'' term, from correlations of intrinsic galaxy shapes that evolved in the same local field, and the ``gravitational shear -- intrinsic shear (GI)'' term, which accounts for the correlation between galaxies that are lensed by a structure and galaxies that are intrinsically aligned with that same structure; see, e.g., the review by \cite{TroxelIshak2014} and references therein. In our adopted model, the IA power spectrum scales with the nonlinear matter power spectrum by $F[\chi (z)]$, given by
\begin{equation}
F[\chi (z)] = A_{\rm IA}C_{1}\rho_{\rm crit} \frac{\Omega_{\rm m}}{D_{+}(z)} \left( \frac{1+z}{1+z_0} \right) ^{\eta},
\end{equation}
where $A_{\rm IA}$ is the amplitude parameter, $C_{1} = 5 \times 10^{-14} h^{-2}M_{\odot}^{-1} \text{Mpc}^{3}$, $\rho_{\rm crit}$
is the critical density at $z = 0$, and $D_{+}(z)$ is the linear growth factor normalized to unity at $z = 0$ \citep{Bridle2007}. For HSC-Y1 and DES-Y1 the redshift term was allowed to vary. For DES-Y1, the pivot redshift of $z_0 = 0.62$ was adopted, which corresponds to the mean of the redshift sample, the typical choice for NLA. HSC-Y1 also adopted $z_0 = 0.62$. We check that changing this to the mean of their redshift distribution (deeper than DES-Y1) does not change the results.
The KiDS-1000 analysis did not include the redshift-dependent power-law in their fiducial analysis.
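The $F[\chi(z)]$ scaling above amounts to a few arithmetic operations once the growth factor is known. In the sketch below, $D_{+}(z)$ is supplied by the caller (computing it requires a cosmology code), and the commonly quoted combination $C_{1}\rho_{\rm crit} \approx 0.0134$ is assumed.

```python
def nla_amplitude(z, a_ia, eta, d_plus, omega_m=0.3, z0=0.62, c1_rhocrit=0.0134):
    # F[chi(z)]: the factor by which the NLA intrinsic-alignment power spectrum
    # scales relative to the nonlinear matter power spectrum.
    # d_plus is the linear growth factor D_+(z), normalized to 1 at z = 0,
    # evaluated at z by the caller.
    return a_ia * c1_rhocrit * omega_m / d_plus * ((1.0 + z) / (1.0 + z0)) ** eta
```

With $\eta=0$ the pivot redshift drops out entirely, which is why the fiducial KiDS-1000 choice of omitting the power law leaves the rest of the model unchanged.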
\subsubsection{Photo-z Systematics}
The weak lensing signal is most sensitive to the mean redshift of the galaxy sample, and thus a suitable approximation for our purposes is to adopt a model for the mean value as a nuisance parameter, as was done for previous surveys. We model the uncertainty on this quantity with the nuisance parameter $\Delta z_{i}$ that shifts the measured $n(z)$ for each bin $i$ such that
\begin{equation}
n_{i}(z) = n_{\rm obs,i}\left(z - \Delta z_{i}\right).
\end{equation}
It is possible that accounting for uncertainty in the width of these distributions, at the precision level of e.g. LSST, could have an impact on results, but this is beyond the scope of this paper. For the KiDS-1000 paper the shift parameters are correlated (due to the SOM formalism of their redshift calibration) so the priors for these quantities are correlated. We keep this fiducial choice for the analysis for the KiDS-1000 priors, and keep the HSC-Y1 and DES-Y1 priors uncorrelated.\footnote{In theory, the $\Delta z_{i}$ redshift parameter shifts could be correlated between surveys if the redshift calibration was done with overlapping spectroscopic samples. In our unified analysis we assume this is independent, but Garc\'ia-Garc\'ia ({\em in prep}) suggests that the effect on cosmological constraints is minimal.}
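The shift model is a simple remapping of the tabulated distribution. A minimal sketch, assuming a uniformly gridded $n_{\rm obs}(z)$ and linear interpolation (the Gaussian shape and shift value are illustrative):

```python
import bisect
import math

def shift_nz(z_grid, n_obs, dz):
    # Implements n_i(z) = n_obs(z - dz) by linear interpolation of the
    # tabulated distribution; values outside the tabulated range are set to 0.
    def interp(x):
        if x <= z_grid[0] or x >= z_grid[-1]:
            return 0.0
        j = bisect.bisect_right(z_grid, x)
        w = (x - z_grid[j - 1]) / (z_grid[j] - z_grid[j - 1])
        return (1 - w) * n_obs[j - 1] + w * n_obs[j]
    return [interp(z - dz) for z in z_grid]

def mean_z(z_grid, n):
    # Mean redshift of a tabulated distribution on a uniform grid.
    return sum(z * v for z, v in zip(z_grid, n)) / sum(n)

# Demo: a Gaussian n(z) centered at z = 0.8, shifted by Delta z = 0.05.
z_grid = [0.01 * i for i in range(201)]            # z in [0, 2]
n_obs = [math.exp(-0.5 * ((z - 0.8) / 0.1) ** 2) for z in z_grid]
n_shifted = shift_nz(z_grid, n_obs, 0.05)
```

The mean of the shifted histogram moves by $\approx \Delta z$, the quantity to which the lensing signal is most sensitive.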
\subsubsection{Shear Calibration Uncertainty}
The calibration procedure for shear measurement has an associated uncertainty. The residual impact on the observed shear $\gamma_{\textrm{observed}}$ within the weak regime is often modeled with both multiplicative $m_{i}$ and additive components $c_{i}$ \citep{Huterer2006,Heymans2006,Bridle2007}, that scale the true shear $\gamma_{\textrm{true}}$ as
\begin{equation}
\gamma_{\textrm{observed}} = \gamma_{\textrm{true}} (1+m_{i}) + c_{i},
\end{equation}
where $m_i$ and $c_i$ are constant terms within the $i$th redshift bin. Physically, systematics such as residual effects from PSF modeling can depend on galaxy properties and therefore not be constant within a tomographic bin, in which case additional terms can be used to model the uncertainty. However, the dominant terms are often modelled in this convention. In our analysis, we adopt the approaches used by each survey to account for this uncertainty, as the individual surveys have extensively validated their shear calibration scheme and derived these models for their uncertainty:
\begin{itemize}
\item In DES-Y1 each redshift bin $i$ is assigned a multiplicative shear calibration nuisance variable $m_{i}$ that is independent for each bin and marginalized over in the parameter estimation stage. The Gaussian priors for each bin are identical but each variable is allowed to vary independently. Additionally, in their systematics analysis, DES-Y1 found a nonzero residual additive shear component, which they correct for by subtracting the mean shear in each tomographic bin.
\item In HSC-Y1, a single redshift-independent free parameter $m_0$ is used to account for the multiplicative shear calibration uncertainty for all redshift bins (so the multiplicative shear calibration is $100\%$ correlated between the redshift bins). The parameter is marginalized over in the parameter estimation stage. In an additional chain, they account for the residual additive shear by marginalizing over an additive $c_{0}$ term that is consistent across bins, however this was not found to affect the final constraints.
\item In KiDS-1000, the multiplicative shear calibration uncertainty is accounted for in the covariance matrix according to \citet{Asgari2021}. Additionally, they subtract the weighted mean ellipticity from each tomographic bin. A constant term $c_0$ models the residual additive shear uncertainty (which is assumed to be redshift independent).
\end{itemize}
In the HSC analysis, two additional nuisance parameters were introduced to account for residual uncertainty in the PSF calibration, that can additively bias the observed shear \citep[see][for details]{Mandelbaum2018a,Hamana2018}. The PSF modeling and conditions are individual to each survey, so we maintain this choice and do not unify this modeling.
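A minimal sketch of this per-bin bias model and its inversion (the shear and bias values in the check are illustrative):

```python
def apply_shear_bias(gamma_true, m, c):
    # Forward model: gamma_observed = gamma_true * (1 + m) + c  (per redshift bin)
    return gamma_true * (1.0 + m) + c

def correct_shear_bias(gamma_obs, m, c):
    # Inverse: recover the calibrated shear given estimates of m and c
    return (gamma_obs - c) / (1.0 + m)
```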
\subsubsection{Small-Scale Modeling}
\label{small-scale}
At small scales, baryonic physics modifies the nonlinear matter power spectrum.
Previous studies have generally adopted two approaches in order to mitigate the effects of potential biases that come from inaccuracies in modeling of the nonlinear power spectrum at small scales. The first approach (adopted by DES-Y1 and HSC-Y1) is to use a fixed nonlinear power spectrum (\textsc{HALOFIT}) and remove angular scales that can be contaminated at a certain level by baryon effects from the fit. The particular implementation from DES-Y1 was as follows: First, they calculated theoretical data vectors $\xi_{\pm}$ with baryon contamination by scaling the nonlinear power spectrum as
\begin{equation}
P_{\rm NL}^{\rm contaminated}(k,z) = \frac{P_{\rm DM+Baryon}}{P_{\rm DM}}P_{\rm NL}(k,z),
\end{equation}
where the ``Dark Matter (DM) + Baryon'' power spectrum is from the OverWhelmingly Large Simulations project \citep[OWLS,][]{Schaye2010,vanDaalen2011} AGN simulation and the DM power spectrum is from the OWLS dark-matter-only simulations. The OWLS-AGN case is one of the more extreme among similar simulations in terms of baryonic effects, and thus helps to characterize a conservative cut.
Next, they compared these contaminated data vectors with the uncontaminated ones and determined the scale cut by requiring that the two not differ by more than $2\%$.
The approach by HSC-Y1 was similar, albeit slightly more lenient in terms of contamination,
adopting a cut at a roughly $5\%$ level based on the feedback model from \citet{Harnois2015}.
For the first approach, we implement a procedure that is modified from the DES-Y1 approach and used in the more recent DES cosmic shear analysis using the first 3 years of data \citep[DES-Y3,][]{Krause2021,Amon2021,Secco2021}. The new method takes into account the relative systematic effect of the baryon modeling to the overall survey's uncertainty. We describe the method below.
Similar to what was done in DES-Y1, we take a simulated fiducial data vector $\xi_{\pm}$ for each survey and use the $P_{\rm DM+Baryon}$ and $P_{\rm DM}$ ratios from OWLS to contaminate a data vector with baryonic effects. We then look at the $\Delta \chi^{2}$ between the two for each tomographic bin pair $(i,j)$,
\begin{equation}
(\xi_{\pm,\mathrm{Baryon}}^{i,j}-\xi_{\pm,\mathrm{Fiducial}}^{i,j})^{t} \, \textbf{C}_{i,j}^{-1} \,
(\xi_{\pm,\mathrm{Baryon}}^{i,j}-\xi_{\pm,\mathrm{Fiducial}}^{i,j})
< \frac{\Delta \chi^{2}}{N},
\end{equation}
as a function of angular scales that are included, where $\textbf{C}_{i,j}^{-1}$ is the inverse covariance matrix for the tomographic bin. Following the DES-Y3 approach, we exclude scales for each survey where the total $\Delta \chi^{2}>0.5$, split between $N$ tomographic bins. We then adopt this cut for an original and contaminated data vector, run an MCMC chain, and confirm
the difference in the $S_8$ constraint
is $<0.2 \sigma$. We do this for both the individual surveys, and the combined survey constraint. The second step is mainly a sanity check -- in all cases it does not actually change the scale cuts. We note that this procedure is not exactly the same as DES-Y3 due to the difference in modelling choices, but provides a good framework to uniformly determine scale cuts across the three surveys. The scale cuts should lead to approximately the same level of bias assuming a certain baryonic contamination.
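The cut-selection loop can be sketched as follows. For brevity the trimmed inverse covariance is obtained by slicing, which is exact only for a diagonal covariance; a full analysis would re-invert the trimmed covariance matrix. The demo values are illustrative.

```python
def delta_chi2(d_baryon, d_fid, inv_cov):
    # (d_b - d_f)^T C^{-1} (d_b - d_f) for one tomographic bin pair
    r = [a - b for a, b in zip(d_baryon, d_fid)]
    n = len(r)
    return sum(r[i] * inv_cov[i][j] * r[j] for i in range(n) for j in range(n))

def smallest_scale_cut(theta, d_baryon, d_fid, inv_cov, budget):
    # Drop the smallest angular scales (theta sorted ascending, leading the
    # data vector) until the baryon-contamination chi^2 is within budget.
    # NOTE: slicing inv_cov is exact only for a diagonal covariance.
    for start in range(len(theta)):
        sub_inv = [row[start:] for row in inv_cov[start:]]
        if delta_chi2(d_baryon[start:], d_fid[start:], sub_inv) < budget:
            return theta[start]
    return None

# Demo: identity covariance, contamination strongest at small scales.
theta = [1.0, 2.0, 4.0, 8.0]   # arcmin, ascending
ident = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
d_fid = [0.0, 0.0, 0.0, 0.0]
d_bar = [1.0, 0.5, 0.1, 0.0]
```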
The second approach \citep[used by KiDS,][]{Hildebrandt2017,Asgari2021}
adopts the model implemented in
\textsc{HMCode}, which is an augmented variant of the Halo Model with physically-motivated parameters fit to N-body simulations. \textsc{HMCode} parameterizes the effect of baryonic feedback with a halo bloating parameter $\eta_{0}$ and the amplitude of the halo mass-concentration relation $A_{\rm baryon}$ \citep{Joachimi2020}. In their configuration the bloating parameter can be related to the amplitude parameter via
\begin{equation}
\eta_{0} = 0.98 - 0.12A_{\rm baryon}.
\end{equation}
We adopt this convention when implementing the \textsc{HMCode} model. The uncertainty of this feedback effect on small scales is captured by marginalizing over this parameter with a top-hat prior. A scale cut is still imposed, in particular on $\xi_{-}$, which is more sensitive to nonlinear effects at small scales. However, more data are retained than in the first approach, particularly for $\xi_{+}$.
\subsection{Covariance Matrix}
\label{sec:cov}
For this analysis we adopt the public covariance matrices used by the three surveys. DES-Y1 adopt an analytical joint-probe covariance described in \citet{Krause2017}. KiDS-1000 similarly adopt a joint-probe analytical covariance, as described in \citet{Joachimi2020}.
HSC-Y1 used 2268 realizations of mock HSC catalogs to directly compute a numerical covariance that accounts for their particularly complicated survey geometry \citep[see][for details]{Shirasaki2019}. Following the HSC-Y1 survey's decision,
a calibration factor of $(N_{r}-N_{d}-2)/(N_{r}-1)$ is applied to the inverse covariance, where $N_{r}=2268$ is the number of mock realizations and $N_{d}=170$ is the length of the data vector. This calibration accounts for biases that arise when covariances are estimated from numerical realizations \citep{Hartlap2007}. Additionally, the simulations used to compute the covariance can have shears that are underestimated due to the finite thickness effect described in \citet{Shirasaki2019} and Appendix B of \citet{Tanaka2018}. We account for this in the inverse covariance in the same manner as the original analysis, applying a factor of $(1/0.92)^{2}$ (this factor was discussed with the authors in private communication).
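Both corrections enter as scalar factors on the inverse covariance; the sketch below uses the HSC-Y1 numbers quoted above.

```python
def corrected_inv_cov(inv_cov, n_real=2268, n_data=170, thickness_factor=1.0 / 0.92):
    # Hartlap debiasing of a simulation-estimated inverse covariance,
    # (N_r - N_d - 2) / (N_r - 1), combined with the (1/0.92)^2 finite
    # source-thickness factor used for the HSC-Y1 mocks.
    hartlap = (n_real - n_data - 2) / (n_real - 1)
    scale = hartlap * thickness_factor ** 2
    return [[scale * x for x in row] for row in inv_cov]
```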
\subsection{Modeling Pipeline: \textsc{CosmoSIS}}
\label{sec:cosmosis}
For our likelihood pipeline we use the cosmology likelihood code \textsc{CosmoSIS}\footnote{\url{https://bitbucket.org/joezuntz/cosmosis/wiki/Home}} package \citep{Zuntz2014}. The code utilizes the Boltzmann and background integrator \textsc{CAMB}\footnote{\url{http://camb.info}} to model the linear matter power spectrum \citep{Lewis2000,Howlett2012}. The nonlinear matter power spectrum is modeled by either \textsc{HMCode} \citep{Mead2016}, implemented in \textsc{PyCamb} within \textsc{CosmoSIS}, or \textsc{HALOFIT} \citep{Smith2003,Bird2012,Takahashi2012}, implemented in \textsc{CosmoSIS}. The projection to $C_{l}$ and $\xi_{\pm}$ space uses the Limber approximation.
\subsection{Measurement Pipeline: \textsc{TXPipe}}
\label{sec:txpipe}
Motivated by the enormous stream of data that will be available from the LSST, \textsc{TXPipe} was developed in DESC to produce the necessary data vectors for a cosmology analysis. This pipeline was validated with the mock galaxy catalogs described in Prat et al. ({\em in prep}) where they show that the input cosmology for the simulation can be reproduced. The pipeline is designed to run efficiently on a large set of data and is structured in different stages.
Each of these stages is wrapped as a Python class, specifying input and output files required for each, and launching them using the \textsc{Ceci} library and executable which automatically interfaces them to workflow management frameworks.\footnote{\url{https://github.com/LSSTDESC/Ceci}} This work is the first to use \textsc{TXPipe} on real data and is an important milestone on DESC's readiness for LSST data.
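The stage pattern can be illustrated schematically; the miniature below is a hypothetical sketch of the idea (declared inputs and outputs let a workflow engine order stages and pass products between them), not the actual \textsc{Ceci}/\textsc{TXPipe} API.

```python
class PipelineStage:
    # Schematic stage base class: each stage declares named inputs/outputs so
    # a workflow engine can chain stages by matching product tags.
    # (Hypothetical sketch, not the Ceci/TXPipe interface.)
    inputs = []
    outputs = []

    def run(self, store):
        raise NotImplementedError

class TwoPointStage(PipelineStage):
    inputs = ["shear_catalog"]
    outputs = ["twopoint_data"]

    def run(self, store):
        # A real stage would compute xi_plus/minus here; we only record provenance.
        store["twopoint_data"] = {"from": self.inputs[0]}
        return store

def run_pipeline(stages, store):
    # Run stages in declaration order, checking that inputs are available.
    for stage in stages:
        assert all(k in store for k in stage.inputs), "missing input"
        store = stage.run(store)
    return store
```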
We input the public galaxy shape catalogs from each of the surveys. We do not attempt to reproduce their photometric redshift estimates or redshift distributions, but future studies along these lines would be a useful exercise of the pipeline and would give insight into the performance of different algorithms on characteristically different surveys. Additionally, we note that the previous surveys' cosmological analyses were conducted blindly, typically achieved by varying the shear values in the original catalogs by a random, unknown number. For this work, we use the original unblinded catalogs to directly compare our results to the public data vectors.
\subsection{Likelihood and Inference}
\label{sec:likelihood}
We use a Monte Carlo Bayesian likelihood analysis to sample the cosmological and nuisance parameter space. We assume a Gaussian likelihood $L$, related to the parameters $\textbf{p}$, data $\textbf{D}$, inverse covariance matrix $\textbf{C}^{-1}$ and model $\textbf{M}$ by
\begin{equation}
-2 \ln L(\textbf{D}|\textbf{p}) = (\textbf{D}-\textbf{M}(\textbf{p}))^{t}\mathcal{\textbf{C}}^{-1}(\textbf{D}-\textbf{M}(\textbf{p})).
\label{eq:likelihood}
\end{equation}
To calculate the cosmology constraints we use \textsc{Cosmo}SIS. Cosmological constraint plots are shown using the software package C\textsc{hain}C\textsc{onsumer} \citep{Hinton2016}.\footnote{\url{https://samreay.github.io/ChainConsumer/}} We set the \texttt{kde} value in C\textsc{hain}C\textsc{onsumer} to 1.5. The inner and outer contours represent the $68\%$ and $95\%$ confidence levels, respectively. For our fiducial chains we use the \textsc{Multinest} sampler implemented in \textsc{Cosmo}SIS, though we refer the reader to \cite{Lemos2022} for a discussion of how this sampler can underestimate errors in the posterior by up to $\sim10\%$ and lead to inaccuracies in the computed evidence. We do not expect this level of uncertainty to affect the results of this paper.
\subsection{Consistency Metrics}
\label{sec:metric}
In order to assess both whether the derived model is a good fit to the data, and whether the posteriors for different datasets are consistent, we consider several metrics throughout this work.
First, to determine whether a model is a good description of the data, we use the goodness-of-fit (GoF) definition following notation in Equation~\eqref{eq:likelihood},
\begin{equation}
{\rm GoF} \equiv \chi^2/\nu = \frac{1}{\nu} (\textbf{D}-\textbf{M}(\textbf{p}))^{t}\mathcal{\textbf{C}}^{-1}(\textbf{D}-\textbf{M}(\textbf{p})),
\end{equation}
where $\nu$ is the number of degrees of freedom, defined as the length of the data vector minus the number of parameters effectively constrained relative to their priors (calculated using the \textsc{tensiometer} package,\footnote{\url{https://github.com/mraveri/tensiometer}} which implements the definition in \citealt{Raveri2018}). From the $\chi^2$ one can also calculate the corresponding probability-to-exceed (p.t.e.) as
\begin{equation}
{\rm p.t.e} = 1 - {\rm CDF} (\chi^2, \nu).
\end{equation}
A low p.t.e.\ value indicates that a disagreement of this magnitude between the data and the model is unlikely to arise from purely statistical fluctuations.
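Both quantities are straightforward to compute. The dependency-free sketch below evaluates the $\chi^2$ directly and uses the closed-form $\chi^2$ survival function, which holds for an even number of degrees of freedom (a restriction adopted only to avoid an incomplete-gamma implementation).

```python
import math

def goodness_of_fit(data, model, inv_cov, nu):
    # chi^2 / nu with chi^2 = (D - M)^T C^{-1} (D - M)
    r = [d - m for d, m in zip(data, model)]
    n = len(r)
    chi2 = sum(r[i] * inv_cov[i][j] * r[j] for i in range(n) for j in range(n))
    return chi2 / nu

def pte(chi2, nu):
    # p.t.e = 1 - CDF(chi2, nu). For even nu the chi^2 survival function is
    #   exp(-x/2) * sum_{k=0}^{nu/2 - 1} (x/2)^k / k!
    assert nu % 2 == 0 and nu > 0
    x = chi2 / 2.0
    term, total = 1.0, 1.0
    for k in range(1, nu // 2):
        term *= x / k
        total += term
    return math.exp(-x) * total
```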
Second, to assess whether two posteriors are consistent under the same model, we consider several metrics. A very simple approach is to look at the 1D distance in a single parameter. This is in general not a robust method to determine definitive tension between datasets, given the high-dimensional nature of most cosmological problems. However, in the case of cosmic shear, the information is carried primarily by $S_{8}$ and partly by $\Omega_{\rm m}$ \citep[see e.g. Fig 17 of][]{Secco2021}, and thus comparing the 1D posteriors of $S_8$ and $\Omega_{\rm m}$ gives us intuition into the effects of these changes on the results. We primarily use this as a measure of e.g. how the posterior moves with different analysis choices in Sections~\ref{sec:fiducial}, \ref{sec:priors} and \ref{sec:scale_cuts}. For a parameter $\textbf{p}$ constrained by datasets 1 and 2, the distance (in number of $\sigma$'s) between the mean constraints is calculated via
\begin{equation}
\Delta \textbf{p}_{12} \equiv \frac{\bar{\textbf{p}}_{1}-\bar{\textbf{p}}_{2}}{\sqrt{\sigma^2(\textbf{p}_{1})+\sigma^2(\textbf{p}_{2})}},
\end{equation}
where $\bar{\textbf{p}}_{1,2}$ and $\sigma(\textbf{p}_{1,2})$ are the mean and standard deviation of the marginalized 1D posterior of parameter $\textbf{p}$ constrained by dataset 1 and 2.
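The distance metric above is a one-liner; the $S_8$-like numbers in the demo are made up for illustration.

```python
import math

def param_shift(mean1, sigma1, mean2, sigma2):
    # 1D parameter distance in units of the combined standard deviation
    return (mean1 - mean2) / math.sqrt(sigma1 ** 2 + sigma2 ** 2)

# Illustrative (made-up) constraints from two datasets:
shift = param_shift(0.78, 0.02, 0.76, 0.03)
```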
In \cite{Lemos2021}, several metrics are explored in the context of assessing tension between DES-Y3 and Planck. We adopt two of those metrics in this work.
\begin{itemize}
\item \textbf{Bayesian Suspiciousness} ($S$): We select this method because of its robustness to wide uninformative priors, such as those we adopt for this analysis. The metric accounts for the mutual prior dependence of the combined surveys by subtracting the dependence on the prior volume. Specifically,
\begin{equation}
\log S = \log R - \log I,
\end{equation}
where $I$ is the information ratio, which quantifies the gain in information from the prior to the posterior, and $R$ is the commonly used Bayes ratio. We use the Python package \textsc{anesthetic}\footnote{\url{https://github.com/williamjameshandley/anesthetic}} to implement this metric \citep{anesthetic}.
\item \textbf{MCMC parameter difference}: This method is described in \cite{Raveri2021}, and is one of a number \citep[e.g.][]{Lin2017,Raveri2018,Lin2019} of metrics based on an estimate of the probability of a parameter difference between two experiments. The metric is computed from a kernel density estimate (KDE) of the parameter-difference posterior and corresponds to the probability of a parameter difference. We compute this statistic over all shared parameters between the two posteriors. It is particularly advantageous to adopt this metric in addition to the suspiciousness metric because, unlike suspiciousness, it is robust to posteriors that are highly non-Gaussian. For this metric we use the \textsc{tensiometer} code.
Both metrics can be expressed in terms of a p-value or a number of $\sigma$. We adopt the threshold $p>0.01$ as indicating that there is insufficient evidence for disagreement between the surveys.
\end{itemize}
We note that all the metrics adopted assume statistical independence between the datasets.
\section{Comparison with Published Analyses}
\label{sec:fiducial}
In this section we compare our reanalysis with those published in \citet{Troxel2017,Hamana2018,Asgari2021,HamanaRevision}. The intent here is to match as closely as possible the analysis process in the publications, but using independently developed software whenever possible. This provides a very strong test of the robustness of the published results. We first examine the measurement of the data vector in Section~\ref{data_vector} as well as the covariance from \textsc{TXPipe} compared to the published results. Next in Section~\ref{sec:txpipe_comparison_cosmo} we compare the posterior of our cosmological inference using \textsc{CosmoSIS} with the published chains.
\subsection{Data Vector}
\label{data_vector}
In Figure~\ref{fig:datavec} we show the comparison between the two-point shear correlation function measurements from \textsc{TXPipe} compared to the published results. Overall we are able to reproduce the mean results for the two-point measurements. We find small differences which can be attributed to several identified features in the different pipelines.
First, we note that the two-point measurement codes used in the previous surveys have settings that can lead to small approximations. Namely this is the \texttt{binslop} parameter in the \textsc{TreeCorr} package (used by KiDS-1000 and DES-Y1) and the \texttt{OATH} parameter in the \textsc{Athena} package \citep[used in HSC,][]{athena}. This is an approximation to speed up computation that allows for a small threshold of angular bin precision when placing the galaxy pairs into bins of angular separation. For our analysis we set this to zero which results in galaxy pairs being placed in the exact angular bins specified. We note that the effect of this is most apparent in the KiDS-1000 comparison. This is because the two-point correlation functions were computed in a large number of angular bins with a high \texttt{binslop}$=1.5$ \citep{Joachimi2020}\footnote{\url{https://github.com/KiDS-WL/Cat_to_Obs_K1000_P1}} setting and then binned into the nine coarser bins. This was done so that the same data could be used to compute the alternative COSEBIS measurements. These differences are small and random (they do not introduce a systematic offset) so we do not expect this to affect the resulting cosmology. To confirm that we are able to reproduce the results of previous surveys with \textsc{TXPipe}, we compare the results of the published data vector and \textsc{TXPipe} data vector with our likelihood pipeline. The results of this are discussed in Section~\ref{sec:txpipe_comparison_cosmo}.
In our two-point measurement comparison one small systematic difference was found between the DES-Y1 published results and our pipeline. This was determined to come from an error in the original analysis in \citet{Troxel2017} that caused the mean shear subtraction value to be less than intended. This is fixed in our pipeline as well as the current DES pipeline. The effect of this change is small (and most apparent on the larger scales where the uncertainties are higher) so we do not expect this to affect the constraints.
\subsection{Cosmological Constraints}
\label{sec:txpipe_comparison_cosmo}
In Table~\ref{table:survey_priors} we show the priors adopted for each of the cosmological and nuisance parameters in this analysis. For each survey we match the modeling choices made by the published results (nonlinear modeling choice, priors, IA modeling, and scale cuts). One exception is our sampling in $\ln A_{s}$ rather than $S_{8}$ for KiDS-1000 (discussed in more detail in this section). The goal of this comparison is to assess the consistency of both \textsc{TXPipe} and our likelihood pipeline with the published analyses.
\begin{table*}
\caption{Priors for cosmological parameters and nuisance parameters used for the cosmological inference. Brackets indicate top-hat priors with the given bounds, while parentheses indicate Gaussian priors with given $(\mu,\sigma)$. The three columns show the survey choices for DES-Y1, HSC-Y1, and KiDS-1000, respectively. We note that for KiDS-1000, instead of sampling in $S_8$, we sample in $A_s$ and use the priors in \citet{Hildebrandt2017}, with $\ln(A_{s} \times 10^{10})$: [1.7, 5.0].}
\label{table:survey_priors}
\centering
\begin{tabular}{||l l l||}
\hline
DES-Y1 & HSC-Y1 & KiDS-1000 \\
\hline
\multicolumn{3}{c}{\textsc{Cosmological Parameters}} \\
\hline
$A_{s} \times 10^{9}$: [0.5,5.0] & $\log_{10}\left(A_{s} \times 10^{9}\right)$: [-1.5,2.0]
& $S_{8}=\sigma_{8}\left(\Omega_{\rm m}/0.3\right)^{0.5}:[0.1,1.3]$\\
$\Omega_{\rm m}: [0.1,0.9]$
& $\Omega_{\rm c}: [0.01,0.9]$
& $\Omega_{\rm c}h^{2}: [0.051,0.255]$ \\
$\Omega_{\rm b}: [0.03,0.07]$
& $\Omega_{\rm b}: [0.038,0.053]$
& $\Omega_{\rm b}h^{2}: [0.019,0.026]$ \\
$h: [0.55,0.9]$
& $h: [0.64, 0.82]$
& $h: [0.64, 0.82]$\\
$n_{s}: [0.87,1.07]$
& $n_{s}: [0.87,1.07]$
& $n_{s}: [0.84,1.1]$\\
$\Omega_{k}: 0.0$
& $\Omega_{k}: 0.0$
& $\Omega_{k}: 0.0$ \\
$\Omega_{\nu}h^{2}: [0.0006,0.01]$
& $\sum m_{\nu}: 0.06\,\mathrm{eV}$ & $\sum m_{\nu}: 0.06\,\mathrm{eV}$\\
\hline
\multicolumn{3}{c}{\textsc{Astrophysical Nuisance Parameters}} \\
\hline
$A_{\rm IA}: [-5.0, 5.0]$
& $A_{\rm IA}: [-5.0, 5.0]$
& $A_{\rm IA}: [-6.0, 6.0]$ \\
$\eta_{\rm IA}: [-5.0, 5.0]$
& $\eta_{\rm IA}: [-5.0, 5.0]$
& $A_{\rm baryon}: [2.0, 3.13]$\\
$z_{0}: 0.62$
& $z_{0}: 0.62$ \\
& &$z_{0}: 0.62$ \\
\hline
\multicolumn{3}{c}{\textsc{Observational Nuisance Parameters}} \\
\hline
$\Delta z_{1}: (0.1,1.6) \times 10^{-2}$
& $\Delta z_{1}: (0.0,0.0374)$ & $N(\mu,C)$ \\
$\Delta z_{2}: (-1.9,1.3) \times 10^{-2}$
& $\Delta z_{2}: (0.0,0.0124)$ \\
$\Delta z_{3}: (0.9,1.1) \times 10^{-2}$
& $\Delta z_{3}: (0.0,0.0326)$ \\
$\Delta z_{4}: (-1.8,2.2) \times 10^{-2}$
& $\Delta z_{4}: (0.0,0.0343)$ \\
$m_{1..4}: (0.012,0.023)$
& $m_{0}: (0.0,0.01)$
& $c_{0}: (0,2.3\times 10^{-4})$\\
& $\alpha: (0.029,0.01)$\\
& $\beta: (-1.42,1.11)$ \\
\hline
\end{tabular}
\end{table*}
We now compare the cosmology results of our likelihood pipeline to the published results. In Figure~\ref{published_comparison} we show, for the three datasets, the constraints in the $S_{8}$-$\Omega_{\rm m}$ plane from 1) our data vector with our likelihood pipeline (\textsc{TXPipe} DV); 2) the public data vector with our likelihood pipeline (Public DV); and 3) the publication (Public Chain). The comparison of 1) and 2) tests that the differences described in Section~\ref{data_vector} do not significantly affect the cosmological constraints, and the comparison of 2) and 3) tests for any difference between our likelihood pipeline and that used in the published results.
Overall, we find that the data vectors produced by \textsc{TXPipe} give values for the well-constrained $S_8$ parameter that are consistent with those from the published data vectors for all three datasets. However, we also find small shifts in less well-constrained parameters, including $\Omega_{\rm m}$. In the following we mainly focus on the comparison in $S_8$, but we also point out changes in $\Omega_{\rm m}$, keeping in mind that it is intrinsically less well constrained and more sensitive to noise.
For the comparison of 2) and 3), i.e. the comparison of the likelihood pipelines, we give details for each of the datasets separately below.
\begin{itemize}
\item For DES-Y1, we find that our likelihood code gives a constraint on $S_8$ very consistent with the published results: both the public data vector and the \textsc{TXPipe} data vector constraints differ by $<0.1\sigma$ from that of the public chain.
There is a shift in $\Omega_{\rm m}$ of $\sim$0.4$\sigma$. This parameter is less well constrained and more sensitive to noise between runs of the sampler, in particular for the \textsc{Multinest} settings used in DES-Y1. By running multiple chains with the DES-Y1 \textsc{Multinest} settings, we confirm that there can be up to $\sim$0.5$\sigma$ shifts in $\Omega_{\rm m}$ solely from sampler noise. In our analysis we run with a more stringent tolerance, which reduces the chain-to-chain variation relative to the less stringent settings (see \citealt{Lemos2022} for more details, and the end of this section for further discussion).
This test confirms that the difference caused by the mean-shear subtraction issue does not change the cosmology results, as demonstrated by the agreement between the \textsc{TXPipe} and published data vectors.
\item For HSC-Y1, we find very good agreement for $S_8$, with our results differing from the public chain by $<0.1\sigma$. The $\Omega_{\rm m}$ constraints are also very consistent, differing by $<0.1\sigma$. This indicates that both \textsc{TXPipe} and our likelihood pipeline are consistent with the HSC pipelines. One subtlety is the approximation used when comparing theory to data: in our \textsc{CosmoSIS} implementation, the theory is evaluated at angular positions corresponding to a fixed angular bin definition for the data vector. \textsc{TXPipe} implements the pair-weighted mean of the angular bins, whereas the HSC analysis used the area-weighted mean of the angular bin. Our comparison finds consistent results between the public data vector and the \textsc{TXPipe} data vector, and we have further checked that the results do not change when the \textsc{TXPipe} data vector is evaluated with area-weighted means. The lack of bias between these comparisons is likely explained by the relatively narrow angular binning for HSC; the effect is larger for coarser bin sizes.
\item For KiDS-1000, we reproduce the published $S_{8}$ (within $<0.05\sigma$) and $\Omega_{\rm m}$ (within $<0.1\sigma$). We note that for the public data vector in this analysis we adopt the method of \citet{Asgari2021}, which integrates the theoretical prediction over the width of the angular bin. As mentioned previously, \textsc{TXPipe} instead implements the pair-weighted mean of the angular bins rather than integrating the theory over the bin \citep{Krause2017}. We find this choice to be unbiased compared to the published results, and we therefore continue to adopt the pair-weighted mean for all subsequent analysis. An additional subtlety comes from the different parameterization of the cosmological parameters used for KiDS-1000: they sampled in $S_8$ instead of the $\Omega_{\rm m}$ and $A_{s}$ parameterization of our pipeline. To accommodate this we have used the $A_{s}$ priors from an earlier KiDS analysis \citep{Hildebrandt2017}, setting the prior on $\ln(A_{s} \times 10^{10})$ to be a flat top-hat over [1.7, 5.0]. As described in \citet{Joachimi2020}, we do not expect this to bias the mean $S_{8}$ posterior because $S_8$ is well constrained compared to the prior.
\end{itemize}
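To make the pair-weighted versus area-weighted bin-mean distinction above concrete, the following minimal sketch (our own illustration, not the \textsc{TXPipe} or HSC implementation) compares the two conventions for a single angular bin. For a uniform random field the pair density grows linearly with separation, so the two effective bin centers coincide; differences arise only for clustered or noisy pair counts.

```python
import numpy as np

def pair_weighted_mean(theta_pairs):
    """Mean separation of the actual pairs falling in the bin."""
    return theta_pairs.mean()

def area_weighted_mean(lo, hi):
    """Mean separation weighted by the annulus area element (w ~ theta):
    <theta> = (2/3) (hi^3 - lo^3) / (hi^2 - lo^2)."""
    return (2.0 / 3.0) * (hi**3 - lo**3) / (hi**2 - lo**2)

rng = np.random.default_rng(1)
lo, hi = 1.0, 2.0
# draw pair separations with density proportional to theta on [lo, hi],
# mimicking pair counts in a uniform 2D field
theta = np.sqrt(rng.uniform(lo**2, hi**2, 200_000))
pw = pair_weighted_mean(theta)   # close to 14/9 ~ 1.556
aw = area_weighted_mean(lo, hi)  # exactly 14/9, above the midpoint 1.5
```

Both conventions sit above the geometric bin midpoint because more pairs live at larger separations; the narrower the bin, the smaller the difference between any of these choices, consistent with the discussion above.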
Performing this comparison, we found a few issues with the \textsc{Multinest} sampler. Primarily, we find that the posteriors of less well-constrained parameters, e.g. $\Omega_{\rm m}$, can vary at up to the $\sim0.5\sigma$ level when adopting certain values for the tolerance settings. For this analysis we adopt the settings used by \citet{Asgari2021}, which give good stability between chains. However, we note the results of \citet{Lemos2021}, which show that even with tighter tolerance settings this sampler can underestimate the uncertainty by $\sim10\%$.
Having validated the results from each of the publications using our independent pipeline, in the following sections we investigate the sensitivity of these constraints to different analysis choices. The two most significant classes of analysis choices at this stage are: 1) the choice of priors on cosmological parameters and the model for astrophysical nuisance parameters (intrinsic alignment in the case of cosmic shear), and 2) the treatment of the uncertainty in the small-scale modeling. We discuss these in Sections~\ref{sec:priors} and \ref{sec:scale_cuts}, respectively.
\begin{table}
\centering
\caption{Priors for cosmological parameters and nuisance parameters used for the unified cosmological inference. Brackets indicate top-hat priors with the given bounds. Note that the $A_{\textrm{baryon}}$ parameter is only varied in the \textsc{HMCode} scenario.}
\label{table:unified_priors}
\begin{tabular}{|| c||}
\hline
Unified Analysis\\
\hline
\multicolumn{1}{c}{\textsc{Cosmological Parameters}} \\
\hline
$\log_{10}\left(A_{s} \times 10^{9}\right)$: [-1.5,2.0]\\
$\Omega_{\rm m}: [0.05,0.95]$ \\
$\Omega_{\rm b}: [0.03,0.07]$ \\
$h: [0.55,0.9]$\\
$n_{s}: [0.84,1.1]$\\
$\Omega_{k}: 0.0$ \\
$\Omega_{\nu}h^{2}: [0.0006,0.01]$\\
\hline
\multicolumn{1}{c}{\textsc{Astrophysical Nuisance Parameters}} \\
\hline
$A_{\rm IA}: [-6.0, 6.0]$ \\
$\eta_{\rm IA}: [-5.0, 5.0]$ \\
$A_{\rm baryon}: [2.0, 3.13]$ \\
$z_{0}: 0.62$
\end{tabular}
\end{table}
\section{Priors on Model Parameters and Intrinsic Alignment}
\label{sec:priors}
As described in Section~\ref{sec:likelihood}, priors for all the model parameters are incorporated when running the inference pipeline. This includes priors for the cosmological parameters as well as astrophysical and observational nuisance parameters. The priors for the model parameters adopted by each survey are shown in Table~\ref{table:survey_priors}.
For the cosmological parameters we wish to constrain (for cosmic shear, this is primarily $S_8$, but there is also some sensitivity to $\Omega_{\rm m}$), it is important that the priors are wide enough that they do not inform the constraints. However, in the parameter space in which we work, there can be implicit priors on derived parameters that impact the constraints indirectly. One example, shown in \citet{Chang2019}, is that the prior on $h$ can indirectly affect $\Omega_{\rm m}$, which then propagates into $S_8$. As this is unavoidable, it is important to at least unify the priors when comparing two datasets, which is what we ultimately do in Section~\ref{sec:unify}.
In addition, different analyses choose different models for astrophysical and observational systematic effects. For observational systematic effects (e.g. photometric redshift uncertainties, shear calibration uncertainties), it does not make sense to unify the modeling between the different experiments, since the datasets are different and the teams have individually characterized them. On the other hand, for astrophysical systematic effects (e.g. IA, baryonic effects), it is reasonable to unify the modeling approach if the basic properties of the galaxy samples are not drastically different between the surveys. In the unified analysis in Section~\ref{sec:unify} we use a consistent IA model for all three datasets. For the modeling of baryonic effects, we separately discuss the treatment of small scales in Section~\ref{sec:scale_cuts}.
As can be seen in Table~\ref{table:survey_priors}, the model parameter priors adopted by DES-Y1 and HSC-Y1 primarily differ in $A_s$, $h$, and the neutrino treatment. The HSC-Y1 analysis samples $A_{s}$ in log space, over a much larger range than that of DES-Y1 (it translates to $0.03\times 10^{-9}$--$100\times 10^{-9}$)\footnote{In the DES-Y1 multiprobe analysis \citep{Abbott2019} a range of $[0.5, 10.0]$ is used for $10^{9}A_{s}$.}. The $h$ prior is slightly narrower for HSC-Y1. In addition, both KiDS-1000 and HSC-Y1 fix the neutrino mass in their fiducial analyses, whereas DES-Y1 allows it to vary. Aside from these differences, the approaches are similar, adopting wide priors for the cosmological parameters with the same IA treatment ($z$-dependent NLA) and identical priors on these model parameters. The KiDS-1000 approach differs in its choice to sample in $S_{8}$ instead of $A_s$, and it adopts the $\Omega_{\rm c}h^{2}$ and $\Omega_{\rm b}h^{2}$ parameterization. Additionally, KiDS-1000 does not adopt the redshift-dependent power law in its fiducial IA modeling\footnote{In \citet{Asgari2021} they test adding this to the modeling and do not find evidence for redshift evolution in the KiDS sample.}. To test how changing the cosmological parameter priors and IA treatment affects each survey, we now use the set of common choices listed in Table~\ref{table:unified_priors}.
For our unified cosmological analysis we choose to adopt top-hat priors in the $\Omega_{\rm m}$, $\log_{10}(A_{s})$, and $h$ parameterization. Our primary goal is to unify the choices between the surveys, and therefore the choice of prior and parameterization is somewhat arbitrary. However, we generally aim to err on the side of caution in terms of informing the posterior for the parameters of interest. For this reason we choose the bounds for our unified choice to correspond to the widest bounds adopted by any of the previous surveys.
In Appendix~\ref{sec:sample_as} we show the effect of sampling $A_s$ in logarithmic and linear space, and show that the prior range we adopt for $\log_{10}(A_s)$ is flat in the parameter range of interest for $S_{8}$. For this reason we choose to sample in $\log_{10}(A_s)$ instead of $A_s$. As mentioned previously, because this prior is wide compared to the constraint, we do not expect the choice of $\log_{10}(A_s)$ vs. $S_{8}$ to affect the results. The constraint on $\Omega_{\rm m}$, on the other hand, is more comparable to the width of the prior, and we therefore sample directly in this parameter of interest so as to be least informative on the posterior.
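The distinction between sampling in logarithmic and linear $A_s$ can be illustrated with a small toy sketch (our own illustration, independent of any survey code): a top-hat prior in $\log_{10}(A_s)$ induces a density proportional to $1/A_s$ in linear $A_s$, so equal-width windows in $A_s$ receive very different prior mass.

```python
import numpy as np

rng = np.random.default_rng(0)
# flat (top-hat) prior in log10(As x 1e9) over the HSC-Y1-like range
log10_As9 = rng.uniform(-1.5, 2.0, 100_000)
As9 = 10.0**log10_As9  # induced prior in linear As is ~ 1/As

# equal-width windows in linear As receive very different prior mass:
n_low = np.sum((As9 > 1.0) & (As9 < 2.0))    # mass ~ log10(2) / 3.5
n_high = np.sum((As9 > 50.0) & (As9 < 51.0)) # mass ~ log10(51/50) / 3.5
```

Because the cosmic-shear posterior on $S_8$ is much narrower than either prior range, this reweighting has little effect on $S_8$, consistent with the statement above; it matters more for poorly constrained parameters.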
As explained above, we keep any additional observational nuisance parameters (i.e. shifts in mean redshift, multiplicative bias) at each survey's original settings. For intrinsic alignment modeling, we adopt the $z$-dependent NLA IA model for all three surveys.
The $S_8$-$\Omega_{\rm m}$ constraints produced with the unified prior+IA treatment scheme are shown in Figure~\ref{fig:unified_priors}. For reference, the ``Survey Choice'' constraints are shown in grey on the same plots; these correspond to the ``\textsc{TXPipe} DV + \textsc{CosmoSIS}'' contours in Figure~\ref{published_comparison}. That is, the only difference between the two contours in the same panel of Figure~\ref{fig:unified_priors} comes from the change in cosmological parameter priors and IA treatment listed in Table~\ref{table:unified_priors}.
We find that for DES-Y1 and HSC-Y1 there is a small shift in the $S_8$ constraint when we unify the cosmological priors and IA treatment, of $0.15\sigma$ and $0.21\sigma$ respectively.
The change in $\Omega_{\rm m}$ for DES-Y1 is noticeable: there is a $0.5\sigma$ shift and the constraint is $\sim$40$\%$ wider. This change is primarily driven by the change in the prior on $A_s$. Though both the survey and unified $\Omega_{\rm m}$ priors are flat, the degeneracy between $A_{s}$ and $\Omega_{\rm m}$ means the prior space within the original DES-Y1 bounds is not completely flat in $\Omega_{\rm m}$. The new $A_{s}$ bounds are flat in $\Omega_{\rm m}$ (see Appendix~\ref{sec:sample_as}) and widen the constraint on $\Omega_{\rm m}$.
For KiDS-1000 we find unifying the IA and prior choices gives an $S_{8}$ constraint that is lower than the survey choices by $\sim$0.13$\sigma$. In Appendix~\ref{appendix:IA} we isolate the effect of IA and prior choice individually.
We see a shift in $\Omega_{\rm m}$ from the combined effect of the redshift-dependent IA model and the unified priors, which we attribute to the degeneracy between the IA amplitude, $\Omega_{\rm m}$, and the unified $A_s$ prior. The constraint on $\Omega_{\rm m}$ is $\sim$0.5$\sigma$ higher and $10\%$ wider. Overall, the relatively small shifts in the $S_{8}$ constraints are encouraging for the robustness of the results to the analysis choices. $\Omega_{\rm m}$ is less well constrained relative to the width of its prior, making it more sensitive to these choices. We therefore encourage caution when quoting this parameter, but note that this effect is likely less relevant for more tightly constraining datasets.
\section{Effect of Unifying Small-Scale Treatment}
\label{sec:scale_cuts}
Figure~\ref{fig:small_scale_treatment} shows the resulting constraints for the three surveys when unifying the small-scale treatment under the two approaches. For comparison, we again overlay the corresponding ``\textsc{TXPipe} DV + \textsc{CosmoSIS}'' contours. For both the $\Delta\chi^{2}$ cut approach and the \textsc{HMCode} approach, we include smaller scales in the data vectors for DES-Y1 and HSC-Y1 than were used in their fiducial analyses. The chosen $\Delta\chi^{2}$ cuts are compared to the published choices in Appendix~\ref{appendix:scale_cuts}. For the \textsc{HMCode} cut we adopt the KiDS-1000 fiducial choice of $0.5\arcmin$ for $\xi_{+}$ and $4.0\arcmin$ for $\xi_{-}$. For KiDS-1000, the $\Delta\chi^{2}$ cut approach excludes more scales than their fiducial choice, which matches the \textsc{HMCode} cut. In particular, we note that the systematic tests performed in the previous analyses were not used to validate the data vectors at these sets of scales.
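The $\Delta\chi^{2}$ statistic underlying this type of scale cut can be sketched generically as follows (a minimal illustration with toy numbers; the actual contaminated data vectors, covariances, and thresholds follow the published analyses): scales are removed until the $\chi^{2}$ difference between a contaminated (e.g. baryon-affected) and a fiducial model data vector falls below a chosen threshold.

```python
import numpy as np

def delta_chi2(d_contaminated, d_fiducial, cov):
    """Delta chi^2 between a contaminated and a fiducial model data vector,
    given the data covariance. Angular scales are cut until this statistic
    falls below a chosen threshold."""
    r = np.asarray(d_contaminated) - np.asarray(d_fiducial)
    return float(r @ np.linalg.solve(cov, r))

# toy example: 3 data points with independent errors sigma^2 = 1e-2
cov = np.diag([1e-2, 1e-2, 1e-2])
d_fid = np.array([1.00, 0.80, 0.60])
d_con = np.array([1.05, 0.82, 0.60])   # contamination concentrated at one scale
dchi2 = delta_chi2(d_con, d_fid, cov)  # = (0.05**2 + 0.02**2) / 1e-2 = 0.29
```

Dropping the most contaminated data point (here the first) would remove most of the $\Delta\chi^{2}$, which is the mechanism by which the cut preferentially excludes small scales.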
Overall, as expected, we find that for the $\Delta\chi^{2}$ cut approach the KiDS-1000 results show the largest change, while for the \textsc{HMCode} approach DES-Y1 and HSC-Y1 change more significantly. In general, the \textsc{HMCode} approach allows us to use data at smaller scales, making the overall constraints on $S_8$ about $15\%$ tighter for DES-Y1 and $30\%$ tighter for HSC-Y1 compared to the public analyses. Adopting the $\Delta\chi^{2}$ cut approach for KiDS-1000 widens the $S_{8}$ constraint by $\sim$40$\%$ compared to the \textsc{HMCode} cut. Finally, for KiDS-1000 the unified treatment with \textsc{HMCode} is by definition the same as the survey choice, as shown by the identical contours in the bottom right panel of Figure~\ref{fig:small_scale_treatment}. In Appendix~\ref{appendix:chi2_hmcode} we further explore the effects of adopting the $\Delta\chi^{2}$ cut and \textsc{HMCode}.
We next examine each set of contours more carefully. We find that, in general, the shifts in the $S_8$ constraints when changing the small-scale treatment are not very significant. For DES-Y1, HSC-Y1, and KiDS-1000 there is a $+0.35\sigma$, $+0.30\sigma$, and $-0.10\sigma$ shift in $S_8$ when we change from the survey choice to the unified $\Delta\chi^2$ cut. For the \textsc{HMCode} small-scale treatment, we find shifts of $-0.20\sigma$, $-0.03\sigma$, and $0\sigma$. This is interesting, as it implies that including the small scales with the \textsc{HMCode} approach yields $S_{8}$ values consistent with the more conservative approach. We note that this statement is difficult to make using simulations, as \textsc{HMCode} is based on fits to a particular set of simulations. With three independent sets of data, however, it gives empirical support to the robustness of the small-scale treatment with \textsc{HMCode}, at least at the statistical power of these three datasets. There is a noticeable change in the $\Omega_{\rm m}$ constraints for DES-Y1 and HSC-Y1 when switching to the \textsc{HMCode} approach: the constraints become significantly tighter (26\%, 40\%) and lower in their absolute value by 0.53$\sigma$ and 0.95$\sigma$ respectively. This change is much less pronounced in the KiDS-1000 contours. One possibility is that the covariance matrices for DES-Y1 and HSC-Y1 are not well validated on the small scales, since these were not used in the published results, and $\Omega_{\rm m}$ is more sensitive to such subtleties as it is not well constrained. We do not have sufficient information to come to a definite conclusion.
\section{Unified Analysis}
\label{sec:unify}
With the understanding of the impact of the individual analysis choices, we now look at the combined effect when we unify all analysis choices considered here across the three surveys -- priors on cosmological parameters and IA model (unified treatment as listed in Table~\ref{table:unified_priors}), and small-scale treatment (either the $\Delta\chi^{2}$ cut or \textsc{HMCode}).
We emphasize again that the shear calibration, photo-$z$ bias, and PSF systematic parameters are expected to be survey-specific, and as such we adopt each survey's choices for these components. Additionally, we caution the reader that the systematic tests performed individually by the surveys were designed with a target precision in mind, and are not necessarily valid for the additional data we include in this analysis. We do not attempt to reevaluate these tests for the data.
\subsection{Individual Unified Constraints}
Figure~\ref{fig:unified_choices} shows the $\Omega_{\rm m}$-$S_8$ constraints for the unified choices (middle and right panels) compared to the published results (left panel) for the three datasets. We present two unified choices, using the two different approaches to the small-scale treatment. We note that for DES-Y1 and HSC-Y1, we include smaller scales than were originally tested in the previous analyses. Overall, we find the relative relation between the three contours does not change significantly compared to the published results. Below we discuss the small differences we do observe, in terms of shifts relative to the published contours.
First we compare the constraints from the $\Delta\chi^2$ scale cut approach, shown in the middle panel of Figure~\ref{fig:unified_choices}, to the constraints from the fiducial published analyses, shown in the left panel. From our previous results, we find that $\Omega_{\rm m}$ is sensitive to the choice of priors. This is most noticeable in the change for DES-Y1, where the posterior extends to large $\Omega_{\rm m}$ values. Overall, this parameter is not well constrained compared to the prior at the level of the individual surveys. The KiDS-1000 contours became less constraining while the HSC-Y1 contours became more constraining; this primarily comes from the fact that the unified $\Delta\chi^{2}$ cut effectively uses fewer small scales for KiDS-1000 and more for HSC-Y1. The mean of the $S_8$ constraint shifts by $-0.62\sigma$, $-0.64\sigma$, and $0.46\sigma$ going from the public chain to the unified analysis with the $\Delta\chi^2$ cut for DES-Y1, HSC-Y1, and KiDS-1000, respectively. The constraining power increased by $5\%$ and $20\%$ for DES-Y1 and HSC-Y1, and decreased by $50\%$ for KiDS-1000.
It is also interesting to look at the IA constraints in the unified analysis. Unlike the cosmological constraints, we expect these could differ between the surveys, as differences in the selection of the galaxies could result in samples with different properties. In Figure~\ref{fig:IA_constraints} we show the IA parameter constraints for the $\Delta\chi^2$ cut and \textsc{HMCode} cases. We find that the DES-Y1 and KiDS-1000 IA constraints are fairly consistent, with a weak positive detection of the IA amplitude $A_{\rm IA}$ ($\sim$60$\%$, $\sim$80$\%$, and $\sim$70$\%$ of the marginalized posterior greater than zero for DES-Y1, HSC-Y1, and KiDS-1000 respectively). Of the three datasets, HSC-Y1 has a slightly stronger detection of $A_{\rm IA}$. The parameters are consistent between the two unified modeling cases. None of the three datasets constrains $\eta_{\rm IA}$ very well. We note that the HSC-Y1 source sample probes fainter and higher-redshift galaxies, so the qualitative difference in the IA constraints could be a result of this. In Appendix~\ref{appendix:IA_combined} we examine the effect on the combined constraint when we adopt separate IA parameters for each survey.
Next we compare the constraints from the \textsc{HMCode} approach, shown in the right panel of Figure~\ref{fig:unified_choices}, with the published constraints in the left panel. We find a tighter and lower constraint on $\Omega_{\rm m}$ for HSC-Y1, coming from the change in the small-scale treatment as shown in Figure~\ref{fig:small_scale_treatment}. For DES-Y1 the change in priors results in a higher and less constraining $\Omega_{\rm m}$. For $S_8$, the relative relation between the means of the constraints remains largely unchanged from the published results, with DES-Y1 and HSC-Y1 gaining some constraining power from the use of smaller scales. The mean of the $S_8$ constraint shifts by $0.59\sigma$, $0.33\sigma$, and $0.17\sigma$ for DES-Y1, HSC-Y1, and KiDS-1000, respectively, going from the public chain to the unified analysis with \textsc{HMCode}. The constraining power increased for DES-Y1 and HSC-Y1 by $12\%$ and $29\%$ respectively, primarily driven by the inclusion of more scales, and decreased for KiDS-1000 by $15\%$ with the unified priors. In general, the relatively small shifts in $S_{8}$ from the choice of small-scale modeling are encouraging, and suggest that using more data can lead to tighter constraints without a large bias in the results, assuming no additional systematics are present at the smaller scales.
\begin{table*}
\caption{The $S_8$ constraints and goodness-of-fit for the unified analyses across the three surveys. We quote the goodness-of-fit as $\chi^{2}/$(number of data points $-$ effective number of constrained parameters) and the resulting reduced $\chi^2$ (p-value). The top and bottom halves of the table show two different treatments of the small scales in the data vectors: the top half removes small scales based on a $\Delta\chi^2$ criterion, while the bottom half uses \textsc{HMCode} to marginalize over uncertainties in baryonic physics that affect small scales.}
\label{tab:constraints_unified}
\centering
\begin{tabular}{l c c c}
\hline
Dataset & DES-Y1 & HSC-Y1 & KiDS-1000 \\
\hline
$\Delta\chi^2$ cut & & & \\
$S_{8}$& $0.754 ^{+0.031}_{-0.024}$ & $0.798^{+0.026}_{-0.024}$ & $0.751^{+0.029}_{-0.024}$ \\
$\chi^{2}$/D.O.F. & 257.08/(258-6.79) & 253.16/(213-7.59) & 197.26/(164-6.60)\\
Reduced $\chi^{2}$ (p-value) & 1.02 (0.386) & 1.23 (0.012) & 1.25 (0.0171)\\
\hline
\textsc{HMCode} & & & \\
$S_{8}$& $0.757^{+0.027}_{-0.021}$ & $0.812^{+0.021}_{-0.021}$ & $0.761^{+0.021}_{-0.019}$ \\
$\chi^{2}$/D.O.F. & 381.12/(380-7.28) & 431.60/(380-7.53) & 257.17/(225-7.28) \\
Reduced $\chi^{2}$ (p-value) & 1.02 (0.371) & 1.16 (0.019) & 1.17 (0.034) \\
\hline
\end{tabular}
\end{table*}
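The reduced $\chi^{2}$ values and p-values in the table above can be reproduced with a short sketch (our own illustration, assuming the p-value is the survival probability of a $\chi^{2}$ distribution evaluated at the quoted effective, non-integer degrees of freedom):

```python
from scipy import stats

def goodness_of_fit(chi2_value, n_data, n_eff_params):
    """Reduced chi^2 and p-value, with the effective (possibly non-integer)
    number of constrained parameters reducing the degrees of freedom."""
    dof = n_data - n_eff_params
    return chi2_value / dof, stats.chi2.sf(chi2_value, dof)

# DES-Y1, Delta chi^2 cut row of the table above: 257.08/(258-6.79)
red, p = goodness_of_fit(257.08, 258, 6.79)  # red ~ 1.02, p ~ 0.39
```

The same function applied to the other rows (e.g. HSC-Y1 with $253.16/(213-7.59)$) reproduces the quoted values to the precision shown, under the stated assumption about the p-value definition.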
\begin{table}
\caption{Metrics for consistency between ``Dataset 1'' and ``Dataset 2'' -- we show our metrics ($\Delta S_{8}$, MCMC parameter shift, or Par Diff, and Suspiciousness), as described in Section~\ref{sec:metric}. The top and bottom halves of the table show two different treatments of the small scales in the data vectors: the top half removes small scales based on a $\Delta\chi^2$ criterion, while the bottom half uses \textsc{HMCode} to marginalize over uncertainties in baryonic physics that affect small scales. We quote the MCMC parameter shift and Suspiciousness in terms of $n\sigma$ (p-value). Our threshold for agreement between surveys is a p-value $>0.01$. Note that the tension metrics here assume independence between the datasets, and do not account for the cross-correlation due to the survey footprint overlap.}
\label{tab:tension_pairwise}
\centering
\begin{tabular}{l c c c}
\hline
Dataset 1 & DES-Y1 & DES-Y1 & KiDS-1000 \\
Dataset 2 & HSC-Y1 & KiDS-1000 & HSC-Y1 \\
\hline
$\Delta\chi^2$ cut & & & \\
$\Delta S_{8}$ & 1.15$\sigma$ & 0.09$\sigma$ & 1.26$\sigma$ \\
Par Diff & 1.01$\sigma$ (0.31) & 0.04$\sigma$ (0.97) & 1.36$\sigma$ (0.17) \\
Suspiciousness & 0.31$\sigma$ (0.77) & 0.55$\sigma$ (0.58) & 1.25$\sigma$ (0.22) \\
\hline
\textsc{HMCode} & & & \\
$\Delta S_{8}$ & 1.67$\sigma$ & 0.15$\sigma$ & 1.71$\sigma$ \\
Par Diff & 1.24$\sigma$ (0.21) & 0.43$\sigma$ (0.67) & 0.88$\sigma$ (0.38) \\
Suspiciousness & 1.52$\sigma$ (0.15) & 0.56$\sigma$ (0.57) & 1.22$\sigma$ (0.22) \\
\hline
\end{tabular}
\end{table}
A summary of the final marginalized posterior means and 68$\%$ confidence intervals on $S_{8}$ for each of the unified chains is listed in Table~\ref{tab:constraints_unified}, together with the goodness-of-fit for each constraint; all results show acceptable goodness-of-fit values. In Table~\ref{tab:tension_pairwise} we list the tension metrics between each pair of surveys. In general, all metrics pass our criteria, in that we do not deem any of them to indicate inconsistency between any pair of datasets.
Across all combinations, the tension metrics are highest between KiDS-1000 and HSC-Y1. Using the $\Delta\chi^{2}$ cut, we find $1.36\sigma$ (p-value 0.17) and $1.25\sigma$ (p-value 0.22) based on the parameter difference and Suspiciousness metrics, respectively. For the \textsc{HMCode} approach, we find $0.88\sigma$ (p-value 0.38) and $1.22\sigma$ (p-value 0.22) for the same two metrics. This means that if the two datasets come from the same underlying cosmology, there is still at least a 17\% ($\Delta\chi^2$ cut) or 38\% (\textsc{HMCode}) chance that a discrepancy at this level could appear due to statistical fluctuation, and under our pre-set threshold we view them as not inconsistent (see Section~\ref{sec:metric}). While the experiments are formally consistent under our criteria, we note that the $S_{8}$ value from HSC-Y1 remains relatively high compared to the other surveys. With this in mind, we proceed to combine the datasets. Additionally, we note that because these metrics were computed from chains that use the original published covariances, they assume independence between the datasets and do not account for the cross-correlation from the area overlap.
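The simplest of the tension metrics above, $\Delta S_{8}$, can be sketched as follows (our own illustration, assuming the difference of posterior means is divided by the quadrature sum of symmetrized 68\% errors; the published values may use a slightly different convention, so this only roughly reproduces the quoted numbers):

```python
import numpy as np

def delta_s8_sigma(mean1, err1, mean2, err2):
    """|S8 difference| in units of the quadrature sum of the (symmetrized)
    68% errors; err1 and err2 are (lower, upper) tuples."""
    s1, s2 = np.mean(err1), np.mean(err2)
    return abs(mean1 - mean2) / np.hypot(s1, s2)

# DES-Y1 vs HSC-Y1, Delta chi^2 cut, using the unified S8 constraints quoted
# in the text; yields roughly the ~1.2 sigma level reported
t = delta_s8_sigma(0.754, (0.024, 0.031), 0.798, (0.024, 0.026))
```

The parameter-difference and Suspiciousness metrics instead use the full posteriors and are computed from the chains, which is why they can differ from this one-dimensional estimate.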
\subsection{Combined Constraints and Comparison with {\it Planck}}
To combine the three datasets, we first need to consider the fact that they are not fully independent. In particular, as shown in Figure~\ref{fig:footprints}, half of the HSC-Y1 footprint overlaps with the northern footprint of KiDS-1000. The actual covariance is further complicated by the fact that the surveys do not fully overlap in redshift -- HSC-Y1 is somewhat deeper than KiDS-1000. A full quantification of the covariance between HSC-Y1 and KiDS-1000 is beyond the scope of this paper. Rather, we take an approximate treatment to bracket the constraining power when combining the three datasets. We assume that the northern footprint of KiDS-1000 is fully covariant with $\sim$half of the HSC-Y1 footprint. We can treat this as effectively removing the statistical constraining power of either the northern part of the KiDS-1000 dataset or half of the HSC-Y1 dataset. In practice, we model this by enlarging either the KiDS-1000 or the HSC-Y1 covariance by the corresponding ratio of sky area between the full and partial footprints, and then combining the three datasets assuming they are independent. This simple approach is intended to bracket the largest possible effect of the cross-covariance. To approximate the sky fractions we use \textsc{HEALPix}\footnote{\url{http://healpix.sourceforge.net}} \citep{Gorski2005, Zonca2019} with $N_{\text{side}} = 4096$, finding $A_{\text{full}}/A_{\text{partial}} = 1.82$ for KiDS-1000 and $A_{\text{full}}/A_{\text{partial}} = 2.36$ for HSC-Y1.
We note that this does not account for the actual covariance in the data vector and therefore is an approximation. This approximation could be mitigated by, for example, cutting out the overlapping sources between the two survey samples. However, the resulting shear correlation function could be incorrectly calibrated by shear bias parameters that were computed for the entire survey sample. The $n(z)$'s are also computed on the full sample, and would therefore need to be evaluated on the subsample. In addition, the priors on the shear uncertainty, redshift uncertainty and any additional survey-specific systematics have been computed for the entire sample. Therefore they would need to be reevaluated if the cut sample was not fully representative of the original catalogs (a condition that might be met, but is not a priori expected due to potential variations in depth, seeing, etc. across the fields). Nevertheless, it should give a reasonable estimate of the constraining power of the three datasets combined.
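The covariance-enlarging step described above can be sketched in a few lines (our own illustration with a toy covariance; it relies on the approximation that a survey-dominated Gaussian covariance scales inversely with sky area):

```python
import numpy as np

def inflate_covariance(cov, area_ratio):
    """Approximately remove the statistical power of an overlapping
    footprint by scaling the covariance by A_full / A_partial, since the
    Gaussian covariance scales roughly as 1/area."""
    return np.asarray(cov) * area_ratio

# quoted area ratios from the text: 1.82 (KiDS-1000), 2.36 (HSC-Y1)
toy_cov = np.eye(4) * 1.0e-4  # stand-in for a survey covariance block
cov_hsc_enlarged = inflate_covariance(toy_cov, 2.36)
```

With the enlarged covariance in hand, the three likelihoods are then multiplied as if independent; the two choices of which covariance to enlarge bracket the combined constraint, as described above.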
\begin{table*}
\caption{The $S_{8}$ constraints and metrics for internal consistency when combining all three surveys under a unified analysis framework. We quote $S_{8}$ in terms of the marginalized one-dimensional mean and 68$\%$ confidence interval. As in Table~\ref{tab:constraints_unified}, the top and bottom halves of the table show two different treatments of the small scales in the data vectors. The two columns assume two scenarios for the footprint overlap: the first (second) column enlarges the HSC-Y1 (KiDS-1000) covariance when combining, approximating the scenario where we remove the overlapping HSC-Y1 (KiDS-1000) footprint.}
\label{tab:constraints2}
\centering
\begin{tabular}{l cc}
\hline
Overlapped footprint & & \\
removed & HSC-Y1 & KiDS-1000 \\
\hline
$\Delta\chi^2$ cut & &\\
$S_{8}$& $0.777^{+0.016}_{-0.017}$& $0.785^{+0.015}_{-0.015}$ \\
$\chi^{2}$ / D.O.F. & 667.87/(635-10.89) & 620.02/(635-11.46)\\
Reduced $\chi^{2}$ (p-value) & 0.91 (0.95) & 1.00 (0.53)\\
\hline
\textsc{HMCode} & & \\
$S_{8}$& $0.783^{+0.012}_{-0.012}$ & $0.791^{+0.013}_{-0.013}$ \\
$\chi^{2}$ / D.O.F. & 830.78/(985-11.73) & 962.34/(985-12.14)\\
Reduced $\chi^{2}$ (p-value) & 0.85 (0.99) & 0.99 (0.59)\\
\hline
\end{tabular}
\end{table*}
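The reduced $\chi^2$ and p-values quoted in the table follow directly from the $\chi^2$ and the effective number of degrees of freedom. A stdlib-only sketch, using the Wilson--Hilferty normal approximation to the $\chi^2$ survival function in place of the exact computation (in practice one would call \texttt{scipy.stats.chi2.sf}):

```python
import math

def chi2_sf(x, dof):
    """Chi-square survival function via the Wilson-Hilferty normal
    approximation (accurate for the large dof used here); a stand-in
    for scipy.stats.chi2.sf."""
    mu = 1.0 - 2.0 / (9.0 * dof)
    sd = math.sqrt(2.0 / (9.0 * dof))
    z = ((x / dof) ** (1.0 / 3.0) - mu) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# "remove KiDS-1000 overlap", Delta chi^2 cut case from the table above:
chi2_val, eff_dof = 620.02, 635 - 11.46
print(round(chi2_val / eff_dof, 2))          # reduced chi^2, ~0.99
print(round(chi2_sf(chi2_val, eff_dof), 2))  # p-value, ~0.53
```

Note that the degrees of freedom are non-integer because the number of effectively constrained parameters is estimated rather than counted.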
Figure~\ref{fig:planck_combined} shows, for both the $\Delta\chi^2$ cut (left) and the \textsc{HMCode} (right) approach, the combined $\Omega_{\rm m}$-$S_8$ constraints for the two cases of the covariance treatment. We show the results for each combined case in the full parameter space in Appendix~\ref{appendix:combined_full}. We find fairly intuitive behavior of the contours: the combined constraints are overall tighter and located where the three individual constraints overlap. When enlarging the KiDS-1000 covariance, the relative contribution from KiDS-1000 becomes lower and the combined $S_8$ value moves higher due to the relatively higher contribution of HSC-Y1. On the contrary, when enlarging the HSC-Y1 covariance we get a lower $S_8$ value due to the higher relative contribution of DES-Y1 and KiDS-1000.
Table~\ref{tab:constraints2} lists the constraint on $S_8$ for these different scenarios where we combine all three datasets, together with the goodness-of-fit for each case -- we find good goodness-of-fit values. Noticeably, the precision of the combined constraints on $S_8$ is between 1.6 and 1.9\%. This is at a similar constraining level as the most constraining cosmic shear results today \citep{Amon2021,Secco2021}. This is expected as it combines the constraining power of three datasets that are already very tight individually. We caution the reader on a number of factors to consider when comparing these numbers with other cosmic shear analyses. First, as discussed above, there is an approximation in our treatment of the cross-covariance between two of the three datasets. We have taken a rather conservative approach in assuming the overlapping footprint to be fully covariant. However, a full treatment of this cross-covariance could in principle yield different results. We refer the reader to Appendix~\ref{appendix:pairwise}, which shows the pairwise constraints from each survey combination. Note that the pairwise constraints KiDS-1000+DES-Y1 and HSC-Y1+DES-Y1 do not include the area overlap estimation, as these datasets are independent. Second, the analyses from \citet{Amon2021,Secco2021} have both adopted more complex IA models \citep{blazek2019} and included additional priors on the IA parameters through a technique called lensing ratio \citep{y3-shearratio}, making it hard to directly compare. Finally, our scale cuts were evaluated with the constraining power from individual surveys instead of the combined, suggesting that they could be insufficient in the combined case.
Nonetheless, this analysis has given some intriguing insights. In particular, we find that although $\Omega_{\rm m}$ was generally not well constrained relative to the prior at the individual survey level, the combined surveys result in a fairly well-constrained $\Omega_{\rm m}$. The constraint generally trends to a lower value than the previous survey results, ranging from $\Omega_{\rm m}=0.248^{+0.023}_{-0.031}$ for the combined (remove KiDS-1000 overlap) \textsc{HMCode} case to $\Omega_{\rm m}=0.304^{+0.030}_{-0.042}$ for the combined (remove KiDS-1000 overlap) $\Delta\chi^{2}$ cut case.
Taking the relatively conservative approach of using the approximated constraints as brackets on the $S_{8}$ constraint from all three surveys, the $S_{8}$ means range from $\sim$0.777 to 0.791. This is slightly higher than, albeit consistent with, the DES-Y3 cosmic shear results, which found for their fiducial analysis $S_{8} = 0.759^{+0.025}_{-0.023}$ and for optimized scales $S_{8} = 0.772^{+0.018}_{-0.017}$ \citep{Amon2021,Secco2021}. Interestingly, our unified analysis still results in slightly lower $S_{8}$ values than Planck.
Given the potential concerns with the combined constraints, we refrain from making quantitative statements of the consistency with {\it Planck}. We do show, however, the 2D contours of the different scenarios of our combined results compared with the primary CMB constraints from the \textit{Planck} 2018 TT, TE, EE+lowE likelihood \citep{Planck2018}. We note that our choice of priors (Table~\ref{table:survey_priors}), and in particular the choice to allow the neutrino mass parameter to vary, results in wider posteriors in the $\Omega_{\rm m}$-$S_{8}$ plane than the published \textit{Planck} 2018 results. Visually, we can see the expected behaviour of the four cosmic shear contours, with the ``$\Delta\chi^2$ cut/overlap KiDS'' case closest to {\it Planck} and the ``\textsc{HMCode}/overlap HSC'' case furthest from {\it Planck}. Taken at face value, the apparent difference compared to {\it Planck} is roughly similar (in terms of a by-eye comparison in the $\Omega_{\rm m}$ vs. $S_{8}$ plane) to the levels of tension seen in current datasets. This relatively similar picture remains even though we have combined three independent datasets and the statistical uncertainties have decreased (approximately $2\%$ constraint on $S_{8}$). That is, the results are slightly lower than the Planck values, but there is no evidence of an obvious tension any greater than the $\sim$2$\sigma$ difference previously observed for cosmic shear. Therefore, our results do not dramatically change our view of the potential tension between cosmic shear and the primary CMB.
\section{Discussion and Conclusion}
\label{sec:conclusion}
We perform a systematic reanalysis of three published cosmic shear analyses \citep{Troxel2017,Hamana2018,Asgari2021} -- we attempt to reproduce the published results, compare them assuming different analysis choices, and eventually combine them. The reanalysis and examination of consistency between the datasets are important as the community considers potential tensions in the $\Lambda$CDM model where the constraints on $S_8\equiv \sigma_8 \sqrt{\Omega_{\rm m}/0.3}$ in recent lensing surveys appear systematically low compared to that inferred from measurements of the primary CMB anisotropies. Additionally, testing a unified framework to perform the cosmic shear measurements from the catalog level is a highly useful exercise in preparation for DESC's analysis with LSST.
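The derived parameter at the center of this comparison is defined above; a minimal sketch of the definition, useful for converting between the $(\sigma_8, \Omega_{\rm m})$ and $S_8$ parametrizations (the input values below are illustrative, not survey results):

```python
import math

def s_8(sigma8, omega_m):
    """S_8 = sigma_8 * sqrt(Omega_m / 0.3), the combination best
    constrained by cosmic shear."""
    return sigma8 * math.sqrt(omega_m / 0.3)

# At Omega_m = 0.3 the normalisation makes S_8 equal sigma_8:
print(s_8(0.81, 0.3))  # 0.81
```

The normalisation by 0.3 is conventional; it makes $S_8$ nearly uncorrelated with $\Omega_{\rm m}$ along the cosmic-shear degeneracy direction.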
In this work we start with the weak lensing shear catalogs provided by the three surveys (DES, HSC, KiDS) and perform the measurement of the shear two-point function using tools developed by the LSST Dark Energy Science Collaboration (DESC). We then perform cosmological inference while systematically unifying priors on the cosmological parameters, the intrinsic alignment model, and the treatment of the small scales. Overall we are able to explain the changes in the cosmological constraints coming from these analysis choices, and demonstrate the importance of being transparent in these analysis choices and cautious when comparing across surveys. Our final unified analysis finds no evidence for tension between the three datasets. We highlight some interesting findings in different parts of this unified analysis:
\begin{itemize}
\item Differences in priors at the level of those between the various surveys' choices can result in shifts in less well constrained parameters such as $\Omega_{\rm m}$, while $S_8$ remains very robust. This is practical to consider when interpreting, for example, future constraints from LSST. The change in $\Omega_{\rm m}$ we see primarily comes from the prior on $A_s$, which is degenerate with $\Omega_{\rm m}$. We also find that our results are sensitive to whether we sample $A_s$ in linear or logarithmic space.
\item When changing from a more conservative treatment of the small scales (removing small scales based on a $\Delta\chi^2$ cut) to a more aggressive one (modeling the small scales using \textsc{HMCode} and marginalizing over the model parameters), the constraint on $S_8$ becomes up to $30\%$ tighter. The statistical gain is largest for KiDS-1000 compared to DES-Y1 and HSC-Y1. This could be related to the small-scale covariance matrices of DES-Y1 and HSC-Y1, which were not validated on the small scales.
\item When unifying all analysis choices, we find for the $\Delta\chi^2$ cut approach $S_8 = 0.754^{+0.031}_{-0.024}$ for DES-Y1, $S_8 = 0.798^{+0.026}_{-0.024}$ for HSC-Y1, and $S_8 = 0.751^{+0.029}_{-0.024}$ for KiDS-1000. HSC-Y1 has both the largest best-fit $S_8$ value and the tightest constraint.
\item When unifying all analysis choices for the \textsc{HMCode} case, we find $S_8 = 0.757^{+0.027}_{-0.021}$ for DES-Y1, $S_8 = 0.812^{+0.021}_{-0.021}$ for HSC-Y1, and $S_8 = 0.761^{+0.021}_{-0.019}$ for KiDS-1000. HSC-Y1 has the largest best-fit $S_8$ value, and KiDS-1000 has the tightest constraint.
\item We examine the consistency between all pairs out of the three datasets with three consistency metrics and find no evidence for disagreement. The largest inconsistency is between KiDS-1000 and HSC-Y1, when using the \textsc{HMCode}, at $\sim$1.7$\sigma$.
\end{itemize}
Due to the complication of overlapping footprints, we take an approximate approach to combine the three datasets. We examine two scenarios for the small-scale treatment and two scenarios for accounting for the cross-survey covariance. With the ``$\Delta\chi^{2}$ cut'' scenario, we find $S_8=0.777^{+0.016}_{-0.017}$ ($S_8=0.785^{+0.015}_{-0.015}$) if we effectively remove part of the HSC-Y1 (KiDS-1000) footprint that overlaps with KiDS-1000 (HSC-Y1). With the \textsc{HMCode} scenario, we find $S_8=0.783^{+0.012}_{-0.012}$ ($S_8=0.791^{+0.013}_{-0.013}$) if we effectively remove part of the HSC-Y1 (KiDS-1000) footprint that overlaps with KiDS-1000 (HSC-Y1). The combined constraints shift by $\sim$0.3$\sigma$ each depending on the covariance method. Interestingly, we also do not find a large shift in the constraints between the two small-scale treatments, suggesting the small-scale model used in \textsc{HMCode} is not significantly different from what is in the data.
We caution the reader that these results contain several simplifications. Given the uncertainty in the combined results due to the overlapping footprint, we only perform a qualitative comparison of the combined results with the constraints from the primary CMB as measured by {\it Planck}. Roughly, the combined result is fairly consistent with the current picture of the ``$S_8$ tension'' seen with individual datasets, though now with the combined power of three datasets.
In addition to the comparison between the three datasets, this work also demonstrated that the DESC software package \textsc{TXPipe} can now be applied to Stage-III data. This is a crucial milestone in preparation for the LSST with Rubin Observatory. The statistical power from the LSST dataset will be unprecedented, and will allow us to test, to an extremely high precision, the validity of the $\Lambda$CDM model. As we prepare for the arrival of LSST data, a thorough understanding of Stage-III results will pave the way for a successful Stage-IV dark energy program. This work represents a continued effort in re-examining, digesting, and understanding the results from Stage-III cosmic shear surveys.
\section{Data Availability}
The \textsc{TXPipe} data products derived and used in this project are available at \url{https://zenodo.org/record/6983861#.YvV2KS1h1pQ}.
\section{Acknowledgements}
This paper has undergone internal review by the LSST Dark Energy Science Collaboration, and we kindly thank the reviewers Rachel Mandelbaum, Tilman Tr{\"o}ster and Michael Troxel. We thank Catherine Heymans and Ben Giblin for assistance with the KiDS-1000 weak-lensing catalogs and reanalysis. We thank Takashi Hamana for assistance with the HSC-Y1 analysis and providing the full-scale HSC-Y1 covariance.
EPL and CW were supported by Department of Energy, grant DE-SC0010007. CC was supported by DOE grant DE-SC0021949.
RM acknowledges the support of the Department of Energy grant DE-SC0010118. HM was supported in part by MEXT/JSPS KAKENHI Grant Number JP20H01932. MESP is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2121 ``Quantum Universe'' – 390833306. TT acknowledges support from the Leverhulme Trust. AHW is supported by a European Research Council Consolidator Grant (No. 770935).
The DESC acknowledges ongoing support from the Institut National de
Physique Nucl\'eaire et de Physique des Particules in France; the
Science \& Technology Facilities Council in the United Kingdom; and the
Department of Energy, the National Science Foundation, and the LSST
Corporation in the United States. DESC uses resources of the IN2P3
Computing Center (CC-IN2P3--Lyon/Villeurbanne - France) funded by the
Centre National de la Recherche Scientifique; the National Energy
Research Scientific Computing Center, a DOE Office of Science User
Facility supported by the Office of Science of the U.S.\ Department of
Energy under Contract No.\ DE-AC02-05CH11231; STFC DiRAC HPC Facilities,
funded by UK BEIS National E-infrastructure capital grants; and the UK
particle physics grid, supported by the GridPP Collaboration. This
work was performed in part under DOE Contract DE-AC02-76SF00515.
We finally wish to acknowledge the data sources for each of the surveys used in this paper:
\\
\textbf{DES-Y1:}
This project used public archival data from the Dark Energy Survey (DES). Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos, Funda{\c c}{\~a}o Carlos Chagas Filho de Amparo {\`a} Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cient{\'i}fico e Tecnol{\'o}gico and the Minist{\'e}rio da Ci{\^e}ncia, Tecnologia e Inova{\c c}{\~a}o, the Deutsche Forschungsgemeinschaft, and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energ{\'e}ticas, Medioambientales y Tecnol{\'o}gicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgen{\"o}ssische Technische Hochschule (ETH) Z{\"u}rich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ci{\`e}ncies de l'Espai (IEEC/CSIC), the Institut de F{\'i}sica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universit{\"a}t M{\"u}nchen and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, The Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A\&M University.
Based in part on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.
\\
\textbf{HSC-Y1:}
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.
This paper makes use of software developed for Vera C. Rubin Observatory. We thank the Rubin Observatory for making their code available as free software at http://pipelines.lsst.io/.
This paper is based on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by the Subaru Telescope and Astronomy Data Center (ADC) at NAOJ. Data analysis was in part carried out with the cooperation of the Center for Computational Astrophysics (CfCA), NAOJ. We are honored and grateful for the opportunity of observing the Universe from Maunakea, which has cultural, historical and natural significance in Hawaii.
\\
\textbf{KiDS-1000:}
Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 177.A-3016, 177.A-3017, 177.A-3018 and 179.A-2004, and on data products produced by the KiDS consortium. The KiDS production team acknowledges support from: Deutsche Forschungsgemeinschaft, ERC, NOVA and NWO-M grants; Target; the University of Padova, and the University Federico II (Naples).
The contributions from the primary authors are as follows. E.P.L. worked out the reproduction of the published results and implemented the analysis choice testing and combined results. C.C. conceived of the project, guided the reanalysis and combined analysis, and contributed to the technical implementation. E.P.L. and C.C. wrote the paper with input from all authors. C.W. guided the reanalysis efforts and provided ideas for the unified and combined analysis. J.Z. is the developer of \textsc{TXPipe} and \textsc{CosmoSIS} which is the primary software used in this analysis.
\bibliographystyle{mnras}
\bibliography{sample.bib}
\appendix
\section{Angular Scale Cuts}
In Tables~\ref{table:published_scale_cuts} and \ref{table:chi2_scale_cuts} we compare the angular scale cuts for the survey choices and the $\Delta\chi^{2}$ scale cuts. The $\Delta\chi^{2}$ scale cuts result in the use of more small scales than the survey choices for DES-Y1 and HSC-Y1, and fewer for KiDS-1000.
\label{appendix:scale_cuts}
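The $\Delta\chi^{2}$ cut can be sketched as follows. For simplicity this toy version assumes a diagonal covariance and that the smallest angular bins come first in each data vector; the real analysis uses the published covariance matrices, and the function names here are illustrative:

```python
def delta_chi2(d_baryon, d_fid, var):
    """Delta chi^2 between baryon-contaminated and fiducial theory
    vectors, assuming a diagonal covariance (variances var)."""
    return sum((b - f) ** 2 / v for b, f, v in zip(d_baryon, d_fid, var))

def n_bins_to_cut(d_baryon, d_fid, var, threshold=0.5):
    """Drop the smallest angular bins (assumed to come first) until the
    residual baryon-contamination Delta chi^2 falls below threshold."""
    for i in range(len(d_fid) + 1):
        if delta_chi2(d_baryon[i:], d_fid[i:], var[i:]) < threshold:
            return i
    return len(d_fid)

# Toy data vector where baryons suppress the smallest scales the most:
d_fid = [1.0, 1.0, 1.0, 1.0]
d_baryon = [2.0, 1.5, 1.1, 1.0]
var = [1.0, 1.0, 1.0, 1.0]
print(n_bins_to_cut(d_baryon, d_fid, var))  # 1
```

The surviving bins define the angular ranges listed in Table~\ref{table:chi2_scale_cuts}.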
\begin{table*}
\caption{Published Angular Scale Cuts: The angular scale cuts defined by the previous surveys, in arcmin. DES-Y1 used a threshold cut of $2\%$ baryon contamination based on the OWLS simulation; HSC-Y1 used a similar approach but adopted a $5\%$ cut and fixed thresholds for $\xi_{+}$ and $\xi_{-}$. They additionally cut out larger scales that are impacted by PSF systematics. KiDS-1000 modeled the nonlinear power spectrum with \textsc{HMCode} and went to smaller scales in their analysis. The resulting data vector lengths are 227, 225 and 170 for DES-Y1, HSC-Y1 and KiDS-1000, respectively.}
\label{table:published_scale_cuts}
\centering
\begin{tabular}{||l l l l||}
\hline
zbin & DES-Y1 & HSC-Y1 & KiDS-1000 \\
\hline
$\xi_{+}$ / $\xi_{-}$ & & & \\
(1, 1) & [7.2, 250.0] / [90.6, 250.0] & [7.0, 56.0] / [28.0, 178.0] & [0.5, 300.0] / [4.0, 300.0]\\
(1, 2) & [7.2, 250.0] / [71.9, 250.0] & [7.0, 56.0] / [28.0, 178.0] & [0.5, 300.0] / [4.0, 300.0]\\
(1, 3) & [5.7, 250.0] / [71.9, 250.0] & [7.0, 56.0] / [28.0, 178.0] & [0.5, 300.0] / [4.0, 300.0]\\
(1, 4) & [5.7, 250.0] / [71.9, 250.0] & [7.0, 56.0] / [28.0, 178.0] & [0.5, 300.0] / [4.0, 300.0]\\
(1, 5) & -- / -- & -- / -- & [0.5, 300.0] / [4.0, 300.0]\\
(2, 2) & [4.5, 250.0] / [57.2, 250.0] & [7.0, 56.0] / [28.0, 178.0] & [0.5, 300.0] / [4.0, 300.0]\\
(2, 3) & [4.5, 250.0] / [57.2, 250.0] & [7.0, 56.0] / [28.0, 178.0] & [0.5, 300.0] / [4.0, 300.0]\\
(2, 4) & [4.5, 250.0] / [45.4, 250.0] & [7.0, 56.0] / [28.0, 178.0] & [0.5, 300.0] / [4.0, 300.0]\\
(2, 5) & -- / -- & -- / -- & [0.5, 300.0] / [4.0, 300.0]\\
(3, 3) & [3.6, 250.0] / [45.4, 250.0] & [7.0, 56.0] / [28.0, 178.0] & [0.5, 300.0] / [4.0, 300.0]\\
(3, 4) & [3.6, 250.0] / [45.4, 250.0] & [7.0, 56.0] / [28.0, 178.0] & [0.5, 300.0] / [4.0, 300.0]\\
(3, 5) & -- / -- & -- / -- & [0.5, 300.0] / [4.0, 300.0]\\
(4, 4) & [3.6, 250.0] / [36.1, 250.0] & [7.0, 56.0] / [28.0, 178.0] & [0.5, 300.0] / [4.0, 300.0]\\
(4, 5) & -- / -- & -- / -- & [0.5, 300.0] / [4.0, 300.0]\\
(5, 5) & -- / -- & -- / -- & [0.5, 300.0] / [4.0, 300.0]\\
\hline
\end{tabular}
\end{table*}
\begin{table*}
\caption{$\Delta\chi^{2}$ Unified Angular Scale Cuts: The angular scale cuts for the $\Delta\chi^{2}$ case, in which we remove small scales such that the $\Delta\chi^{2}$ between theoretical data vectors with and without baryon contamination, using the OWLS simulation, is below $0.5$. The published covariance matrices were used for calculating the $\Delta\chi^{2}$. The $\Delta\chi^{2}$ scale cuts result in the use of more small scales than the survey choices for DES-Y1 and HSC-Y1, and fewer for KiDS-1000. The resulting data vector lengths are 258, 205 and 164 for DES-Y1, HSC-Y1 and KiDS-1000, respectively. Units are in arcmin. DES-Y1 and HSC-Y1 used four tomographic bins, and KiDS-1000 used five. Both KiDS-1000 and HSC-Y1 adopt cuts that are uniform for each $\xi_{+}$ and $\xi_{-}$, whereas DES-Y1 varies per bin based on a $2\%$ cut from baryon contamination. For the \textsc{HMCode} cut we adopt the KiDS-1000 fiducial choice of 0.5 arcmin for $\xi_{+}$ and 4.0 arcmin for $\xi_{-}$.}
\label{table:chi2_scale_cuts}
\centering
\begin{tabular}{||l l l l||}
\hline
zbin & DES-Y1 & HSC-Y1 & KiDS-1000 \\
\hline
$\xi_{+}$ / $\xi_{-}$ & & & \\
(1, 1) & [3.5, 250.0] / [35.6, 250.0] & [3.1, 56.0] / [20.11, 178.0] & [0.7, 300.0] / [0.7, 300.0] \\
(1, 2) & [5.6, 250.0] / [44.8, 250.0] & [4.0, 56.0] / [31.8, 178.0] & [0.7, 300.0] / [0.7, 300.0] \\
(1, 3) & [5.6, 250.0] / [56.4, 250.0] & [4.0, 56.0] / [31.8, 178.0] & [3.1, 300.0] / [13.2, 300.0] \\
(1, 4) & [4.4, 250.0] / [44.8, 250.0] & [3.1, 56.0] / [25.3, 178.0] & [3.1, 300.0] / [13.2, 300.0] \\
(1, 5) & -- / -- & -- / -- & [3.1, 300.0] / [13.2, 300.0]\\
(2, 2) & [4.4, 250.0] / [35.6, 250.0] & [4.0, 56.0] / [31.8, 178.0] & [3.1, 300.0] / [26.9, 300.0]\\
(2, 3) & [5.6, 250.0] / [56.4, 250.0] & [4.0, 56.0] / [31.8, 178.0] & [6.5, 300.0] / [54.7, 300.0] \\
(2, 4) & [4.4, 250.0] / [44.8, 250.0] & [3.1, 56.0] / [31.8, 178.0] & [6.5, 300.0] / [54.7, 300.0] \\
(2, 5) & -- / -- & -- / -- & [6.5, 300.0] / [54.7, 300.0] \\
(3, 3) & [4.4, 250.0] / [56.4, 250.0] & [3.1, 56.0] / [31.8, 178.0] & [6.5, 300.0] / [54.7, 300.0]\\
(3, 4) & [4.4, 250.0] / [56.4, 250.0] & [3.1, 56.0] / [25.3, 178.0] & [6.5, 300.0] / [54.7, 300.0] \\
(3, 5) & -- / -- & -- / -- & [6.5, 300.0] / [54.7, 300.0] \\
(4, 4) & [3.5, 250.0] / [35.6, 250.0] & [2.5, 56.0] / [20.11, 178.0] & [6.5, 300.0] / [54.7, 300.0] \\
(4, 5) & -- / -- & -- / -- & [6.5, 300.0] / [54.7, 300.0] \\
(5, 5) & -- / -- & -- / -- & [6.5, 300.0] / [54.7, 300.0] \\
\hline
\end{tabular}
\end{table*}
\section{Full parameter space for combined constraints}
\label{appendix:combined_full}
In Figure~\ref{fig:combinedsmallallpars} we show the full parameter space for the \textsc{HMCode} case of the unified analyses, assuming a fully overlapping footprint between the northern half of KiDS-1000 and part of HSC-Y1, corresponding to the contours in Figure~\ref{fig:planck_compare}. In general, we do not see any surprising discrepancies in the other parameters between the two approaches of the small-scale treatment. We have similarly examined the full parameter space of the $\Delta\chi^{2}$ cut scenario and find similar results.
\section{Unified Intrinsic Alignment KiDS-1000}
\label{appendix:IA}
In this analysis we assess the change in the constraints for KiDS-1000 when unifying the cosmological priors and the IA modeling. In Figure~\ref{fig:kids_IA_appendix} we show the effect of just changing one of these choices on the KiDS-1000 constraints. We find a small shift upwards in $S_{8}$ of $\sim$0.15$\sigma$ when changing to a redshift dependent IA model. There is a $\sim$0.28$\sigma$ downwards shift with the unified priors adopted. We find unifying the IA and prior choice gives an $S_{8}$ constraint that is lower than the survey choices by $\sim$0.13$\sigma$.
\section{Intrinsic Alignment Combined Constraint}
\label{appendix:IA_combined}
In Figure~\ref{fig:IA_combined} we show a comparison between the combined constraint when using combined IA parameters for each of the surveys, and when using different IA parameters for each survey. We do not find a significant shift in the constraint ($<0.2\sigma$ for both $S_{8}$ and $\Omega_{\rm m}$) but find a slight increase in the uncertainty in both $S_{8}$ and $\Omega_{\rm m}$, (about $15\%$ and $5\%$ respectively) due to marginalizing over additional systematic parameters.
\section{Linear and logarithmic $A_s$ priors}
\label{sec:sample_as}
Previous analyses have differed in the sampling choice regarding $A_{s}$. The prior chosen for the unified analysis corresponds to the widest range between the surveys, which is the fiducial choice for HSC-Y1. Like their analysis, we sample in $\log_{10}A_{s}$. DES-Y1, in comparison, sampled linearly in $A_{s}$. To test the effect of this choice we examined the HSC-Y1 posteriors with an identical range of $A_{s}$, sampling in logarithmic and linear space. The results are shown in Figure~\ref{fig:log_lin_As}.
We find that $S_{8}$ is not sensitive to the choice of logarithmic versus linear sampling of $A_{s}$; however, other parameters are sensitive to the choice, in particular $\sigma_{8}$, $A_{s}$ and $\Omega_{\rm m}$. We chose to keep $\mathrm{log}A_{s}$ for the unified analysis as it corresponds to the flattest prior in $S_{8}$, as shown in Figure~\ref{fig:log_lin_As_priors}, which plots the prior for each parameter in each sampling space. In \citet{Sugiyama2020}, the authors explored an approach of reweighting the chains to achieve flatter priors in the parameters of interest ($\sigma_8$ in that work). KiDS-1000 avoids this issue in their analysis by sampling $S_{8}$ directly \citep{Asgari2021}, but \citet{Troester2021} found that the choice of $A_{s}$ sampling similarly does not affect the $S_{8}$ constraint. Interestingly, the combined results, which yield a much tighter constraint on $\Omega_{\rm m}$ compared to the prior, do not seem to be largely affected by the $A_{s}$ versus $\mathrm{log}A_{s}$ prior choice.
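The difference between the two sampling choices is purely one of prior measure: a prior flat in $\log_{10}A_{s}$ places far more weight at small $A_{s}$ than a prior flat in $A_{s}$. A Monte Carlo sketch over an illustrative range (the actual survey prior ranges differ):

```python
import math
import random

random.seed(0)
n = 100_000
lo, hi = 5e-10, 5e-9  # illustrative A_s range, not an exact survey prior

# flat in log10(A_s) versus flat in A_s
log_draws = [10 ** random.uniform(math.log10(lo), math.log10(hi))
             for _ in range(n)]
lin_draws = [random.uniform(lo, hi) for _ in range(n)]

# fraction of prior mass below A_s = 1e-9:
frac_log = sum(a < 1e-9 for a in log_draws) / n  # analytically ~0.301
frac_lin = sum(a < 1e-9 for a in lin_draws) / n  # analytically ~0.111
```

Because $\sigma_8$ (and hence $S_8$) is a monotonic function of $A_s$ at fixed cosmology, this difference in prior measure propagates into the induced priors on the derived parameters, which is what Figure~\ref{fig:log_lin_As_priors} quantifies.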
\section{Pairwise Constraints}
\label{appendix:pairwise}
In Figure~\ref{fig:pairwise_constraints} we show the $S_8$-$\Omega_{\rm m}$ constraints for each pairwise combination of the surveys, for each of our two small-scale treatments, compared with the primary CMB probes of {\it Planck}. We have assumed independence between the surveys, which, as mentioned in Section~\ref{sec:conclusion}, is an approximation for the HSC-Y1 and KiDS-1000 combination. We therefore emphasize the same caution as for the results in Section~\ref{sec:conclusion} when interpreting these results, in particular refraining from quantitatively assessing tension with CMB results. Similarly to the full combined constraints, we do not see a large shift between the two small-scale treatments. We find a slightly higher $S_{8}$ value for the DES-Y1/HSC-Y1 and HSC-Y1/KiDS-1000 combinations and a slightly lower value when combining DES-Y1/KiDS-1000.
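A naive cross-check on such pairwise combinations is the inverse-variance average of two independent Gaussian measurements. The joint-likelihood constraints in the table need not match this toy estimate, since the posteriors are non-Gaussian and share parameter degeneracies; the input values below are hypothetical:

```python
import math

def combine_gaussian(measurements):
    """Inverse-variance combination of independent Gaussian
    measurements, given as (mean, sigma) pairs."""
    weights = [1.0 / s ** 2 for _, s in measurements]
    mean = sum(x * w for (x, _), w in zip(measurements, weights)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mean, sigma

# two hypothetical S_8 measurements, for illustration only:
mean, sigma = combine_gaussian([(0.76, 0.025), (0.80, 0.021)])
```

The combined uncertainty is always smaller than either input, which is why the pairwise constraints tighten relative to the individual surveys.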
\begin{table}
\caption{The pairwise parameter constraints for the $\Delta\chi^{2}$ cut case and \textsc{HMCode} case.}
\label{fig:pairwise_constraints}
\centering
\begin{tabular}{l c c c}
\hline
Dataset 1 & DES-Y1 & DES-Y1 & KiDS1000 \\
Dataset 2 & HSC-Y1 & KiDS-1000 & HSC-Y1 \\
\hline
$\Delta\chi^2$ cut & & \\
$S_{8}$ & $0.793^{+0.017}_{-0.017}$ & $0.764^{+0.018}_{-0.018}$ & $0.784^{+0.018}_{-0.018}$ \\
\hline
\textsc{HMCode} & & \\
$S_{8}$ & $0.801^{+0.015}_{-0.015}$ & $0.770^{+0.015}_{-0.014}$ & $0.789^{+0.015}_{-0.015}$ \\
\hline
\end{tabular}
\end{table}
\section{$\Delta \chi^{2}$ Cut with HMCode}
\label{appendix:chi2_hmcode}
In Section~\ref{sec:unify} we examine two different choices for small-scale modeling: a $\Delta \chi^{2}$ cut with \textsc{Halofit} and a small-scale cut with \textsc{HMCode}. The 2D contour results of these choices for each survey are shown in Figure~\ref{fig:unified_chi2_hmcode}. The $S_{8}$ constraints and goodness-of-fit results are summarized in Table~\ref{tab:constraints_chi2_hmcode}. In addition to these tests, we looked at the results of adopting \textsc{HMCode} with a $\Delta \chi^{2}$ cut. In general, we did not find a significant change in the relative $S_{8}$ compared to the public analysis choices. For DES-Y1, HSC-Y1 and KiDS-1000 there is a $-0.43\sigma$, $0.06\sigma$ and $0.09\sigma$ shift in the $S_{8}$ constraint, respectively. The overall constraining power increases by roughly $10\%$ for both DES-Y1 and HSC-Y1 compared to the published analysis, while the KiDS-1000 constraining power decreases by roughly $30\%$.
\begin{table*}%
\caption{The $S_8$ constraints and goodness-of-fit for the unified analyses adopting \textsc{HMCode} with a $\Delta \chi^{2}$ cut across the three surveys. We quote the goodness-of-fit in terms of $\chi^{2}/$(d.o.f.$-$constrained parameters) and the resulting reduced $\chi^2$ (p-value).}
\label{tab:constraints_chi2_hmcode}
\centering
\begin{tabular}{l c c c}
\hline
Dataset & DES-Y1 & HSC-Y1 & KiDS-1000 \\
\hline
$\Delta\chi^2$ cut & & & \\
$S_{8}$& $0.760^{+0.034}_{-0.026}$ & $0.827^{+0.025}_{-0.025}$ & $0.767^{+0.028}_{-0.024}$ \\
$\chi^{2}$/D.O.F. & 256.30/(258-11.07) & 253.10/(213-6.98) & 197.49/(164-6.23) \\
Reduced $\chi^{2}$ (p-value) & 1.04 (0.33) & 1.23 (0.014) & 1.25 (0.017)\\
\hline
\end{tabular}
\end{table*}
|
Title:
Observation Scheduling and Automatic Data Reduction for the Antarctic telescope, ASTEP+ |
Abstract: The possibility to observe transiting exoplanets from Dome C in Antarctica
provides immense benefits: stable weather conditions, limited atmospheric
turbulence, and a night that lasts almost three months due to the austral
winter. However, this site also presents significant limitations, such as
limited access for maintenance and internet speeds of only a few KB/s. This
latter factor means that the approximately 6 TB of data collected annually must
be processed on site automatically, with only final data products being sent
once a day to Europe. In this context, we present the current state of
operations of ASTEP+, a 40 cm optical telescope located at Concordia Station in
Antarctica. Following a successful summer campaign, ASTEP+ has begun the 2022
observing season with a brand-new two-colour photometer with increased
sensitivity. A new Python data analysis pipeline installed on a dedicated
server in Concordia will significantly improve the precision of the extracted
photometry, enabling us to get higher signal-to-noise transit detections. The
new pipeline additionally incorporates automatic transit modelling to reduce
the amount of manual post-processing required. It also handles the automatic
daily transfer of the photometric lightcurves and control data to Europe.
Additionally, we present the Python and web-based systems used for selection
and scheduling of transit observations; these systems have wide applicability
for the scheduling of other astronomical observations with strong time
constraints. We also review the type of science that ASTEP+ will be conducting
and analyse how unique ASTEP+ is to exoplanet transit research.
| https://export.arxiv.org/pdf/2208.04501 |
\keywords{Exoplanets, Transit, TTV, Antarctica, TESS, ExoFOP, Photometry}
\section{Introduction}
\label{sec:intro} %
ASTEP (Antarctic Search for Transiting ExoPlanets) was initially conceived in 2006 as a photometric telescope to search for transiting exoplanets in dense fields of stars \cite{Fressin+2007}. It was installed at the Concordia Station, located on Dome C in Antarctica, at the end of 2009, and successfully began its operations in 2010 \cite{Daban+2010, Guillot+2015}. The telescope remained on site until 2013 exploiting the near-continuous winter polar nights and excellent weather conditions of the site \cite{Crouzet+2010, Crouzet+2018}. It led to the first ground-based observation of a secondary eclipse of an exoplanet in the visible \cite{Abe+2013} and the discovery of tens of transiting planet candidates \cite{Mekarnia+2016}. At that time, internet connection with the Concordia station was extremely limited, requiring the full-time presence of an astronomer on the site. Observations were saved on hard drives and fully analyzed in Europe the year after.
The return of ASTEP at the end of 2016 for a continuous observation of $\beta$~Pictoris \cite{Mekarnia+2017, Lagrange+2019, Kenworthy+2021A&A} required a different strategy, i.e., an automatic processing of the lightcurves and their transmission to Europe. Improvements in the internet connection with the Concordia station implied that an automatic transmission of limited amounts of data (of order $10~\rm MB/day$) was possible, using the HERMES system implemented by PNRA ({\it Programma Nazionale di Ricerche in Antartide}). A new server, adapted to the automatic processing of the $\sim 6$\,TB of data per season, was sent to Concordia with a dedicated \texttt{IDL} pipeline, based on aperture photometry\cite{Mekarnia+2017}. With the successful launch of \tess (Transiting Exoplanet Survey Satellite)\cite{Ricker+2015}, the ASTEP observation program naturally transitioned to a follow-up of transiting exoplanet candidates, focusing on those with long orbital periods (10-100+ days) and contributing to many discoveries\cite{Bouma+2020AJ, Dawson+2021, Dong+2021, Grieves+2021, Burt+2021, Kaye+2022, Wilson+2022, Mann+2022, Christian+2022, Dransfield+2022}.
The geographic location of ASTEP is highly complementary to several key exoplanetary space missions. Of course, the Antarctic polar night is highly favorable for efficient observations of transiting planets, in particular those with long orbital periods and those with long transit durations (the two being correlated). The combination of observations from mid-latitude sites (e.g., in Chile) and Antarctica is particularly efficient\cite{Fruth+2014}. The main asset, however, is probably the fact that targets in the southern continuous viewing zones of the \tess, \textit{JWST} and \textit{Ariel} space missions are circumpolar and can be observed by ASTEP all the time. On the other hand, access to Antarctica is limited to about 3 months, between November and February, only a low-bandwidth internet connection is available, and the instrument must cope with temperatures ranging from about $-10^\circ$C to $-80^\circ$C. This makes operations with ASTEP intermediate between ground-based telescopes and space-based missions.
Starting in 2022, thanks to support from the University of Birmingham, ESA, INSU, and the Laboratoire Lagrange, a new camera box\cite{Crouzet+2020} could be installed, starting ASTEP+ observations in two colors. This required a new pipeline, but also an adaptation of the scheduling system to cope with the large number of potentially interesting targets to be observed.
\textcolor{black}{In this paper we describe the current status of ASTEP+. In Section~\ref{sec:long} we show how ASTEP+'s sensitivity to transiting planets compares to that of more traditional observatories. In Section \ref{sec:ObsSched} we outline our current scheduling needs and how they are shaped by our team's science goals; we also describe the systems we have designed to meet these needs and our plans for further development. In Section \ref{sec:pipeline} we provide a detailed description of our new automatic \texttt{Python} data analysis pipeline, as well as a quantitative comparison with the system it replaces. In Section \ref{sec:future} we briefly map out possible future directions of the ASTEP+ project, and we conclude in Section \ref{sec:conclusions}.}
\section{Sensitivity to long period planets and long duration transits}\label{sec:long}
\textcolor{black}{Prior to the start of the 2020 observing season, ASTEP had been dedicating its exquisite resources first to a blind exoplanet survey of its own \cite{Mekarnia+2016}, and then to observations of the $\delta$-Scuti pulsator $\beta$~Pictoris \cite{Mekarnia+2017}. In early 2020, ASTEP joined TFOP (\tess Follow-Up Observing Programs) Sub-Group 1 (SG1 hereafter) to support photometric follow-up and confirmation of \tess planet candidates.}
\textcolor{black}{In the context of \tess follow-up, \astep has a lot to offer; in particular, \astep can truly excel when it comes to observing long transits in full, as well as the transits of long-period planets. The current list of TOIs (\tess Objects of Interest) has 5767 targets, of which 3053 are observable from the South\footnote{Taken from \url{https://exofop.ipac.caltech.edu/tess/view_toi.php}, correct as of the 22nd of June 2022.}. Of these, 285 have transits lasting at least five hours, and 186 have periods of at least 20 days. These 471 candidate planets fall perfectly in \astep's niche.}
\textcolor{black}{In order to quantitatively compare \astep's sensitivity to long transit/long period candidates, we created a synthetic sample of 2500 planet candidates. Each was given a transit epoch ($\rm t_0$) drawn randomly from \tess's first year, a period of between $\rm 20-200\,days$, and a transit duration between $\rm 5-15\,hours$. To keep the on-sky distribution of the sample as realistic as possible, we assigned to each a set of coordinates (right ascension, declination) of real TOIs as taken from the current TOI list described above. Using this sample, we simulated observability of full transits by \astep and three other SG1 observatories: SPECULOOS-South (SSO) \cite{Delrez2018} in Chile, South African Astronomical Observatory (SAAO)\cite{saao2013} in South Africa, and Hazelwood Observatory\cite{hazelwood2019} in Australia. The simulation was carried out using the \texttt{Python} package \texttt{astroplan}\cite{2018AJ....155..128M}, and observability was tested for a five-year period starting from the 01 January 2022.}
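The intuition behind this comparison can be captured by a toy geometric model (our illustration, not part of the \texttt{astroplan} simulation itself): a transit of duration $D$ hours placed at a random time of day is observable in full only if it fits within a night of length $L$ hours, giving a per-day probability of roughly $\max(0, L-D)/24$. The near-continuous polar night makes this fraction far larger for \astep than for mid-latitude sites:

```python
def full_transit_probability(night_hours, duration_hours):
    """Probability that a randomly timed transit fits entirely within
    a single night, under a simplified geometric model (ignores
    weather, airmass limits and target visibility)."""
    usable = night_hours - duration_hours
    return max(0.0, usable) / 24.0

# A 12-hour transit is unobservable in full from a site with 8-hour
# nights, but is caught half the time during a 24-hour polar night.
```

For real scheduling, the simulation described above uses \texttt{astroplan}, which additionally accounts for site coordinates, twilight and target altitude.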
\textcolor{black}{The results of our simulation are presented in Fig. \ref{fig:obs_sim}, with the top row of panels showing the distribution of planets according to period and duration and the bottom panel showing the planets' distribution on the sky. There are two very clear conclusions to be drawn from the top panels; firstly, we can see that \astep is the only observatory that can \textit{consistently} observe full events for both infrequent and long duration transits. The other observatories have almost no observability above eight hours, while \astep maintains in excess of 20\% for some transits lasting over 12 hours. Secondly, we can see that while all observatories can observe \textit{some} long-period transits in full, their percentage observability is lower than \astep's, and where long periods intersect with long transits, there are no transits observable. We find that on average \astep is more than 8 times more likely to observe a full transit for long period planets, and for the duration bins where other telescopes have non-zero observability, \astep is almost 40 times more likely to catch a full event.}
\textcolor{black}{In the lower panels we can see how the sample of synthetic planets is spread on the sky, and we note that the bulk of \astep's zero-observability objects are concentrated in an area of the sky that is completely invisible to us. Circumpolar and very southern targets are high in the sky all year long, while more northern targets (above declinations of $-40^\circ$) rise during the austral summer and set before \astep's observing season begins. }
\textcolor{black}{In Fig. \ref{fig:comparison} we filter out all objects north of $-40^{\circ}$ and compare observability of the objects still invisible to \astep. In all panels, objects that \astep can observe at least one full transit for are plotted in green, while planets with zero observability are plotted as red circles. Crucially, we note that for all systems where \astep cannot observe a single transit in the next five years, only 2.3\% can be observed by a different observatory.}
\section{Observation Scheduling}
\label{sec:ObsSched}
The scheduling of our observations is heavily influenced by the type of observations we perform. At the moment, the main task of ASTEP+ is to confirm planetary candidates identified by ExoFOP (Exoplanet Follow-up Observing Programme), an international follow-up collaboration. Our lightcurves of transiting planets are necessary to validate \tess's planetary candidates for the following reasons:\vspace{-0.5em}
\begin{itemize}
\item \tess has a PSF (Point Spread Function) of about 30 arcseconds \cite{Ricker+2015}, meaning that many transit events are in fact deep eclipses produced by faint background objects and diluted by a foreground bright star (the latter is then usually identified as a transiting planet candidate by the \tess automated pipelines);\vspace{-0.5em}
\item Sometimes, transit events identified by \tess are false alarms, caused by instrumental systematics or an unlucky coincidence of stellar activity;\vspace{-0.5em}
\item We can check odd and even transit events and verify that their depths are consistent with one another\cite{Dransfield+2022}, as a way to rule out blended eclipsing binaries (in binaries that are not visually resolved);\vspace{-0.5em}
\item We measure the transit depth and compare it to \tess's and to other photometric bands in order to perform a {\it chromaticity check}\cite{Dransfield+2022}, another way to rule out blended eclipsing binaries, which works if the secondary component has a significantly different $T_{\rm eff}$ from the primary. For most of its operations ASTEP operated with a single filter, but recently we installed two cameras separated by a dichroic, allowing us to perform chromaticity check directly (see companion paper by Schmider et al.);\vspace{-0.5em}
\item We verify the transit ephemerides, notably the orbital period and time of transit;
\end{itemize}
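As an illustration of the odd/even depth check listed above (a hypothetical helper, not the actual TFOP implementation), a blended eclipsing binary can be flagged when the two depth estimates disagree beyond their combined uncertainty:

```python
def depths_consistent(odd_depth, even_depth, odd_err, even_err, n_sigma=3.0):
    """Return True if the odd- and even-numbered transit depths agree
    within n_sigma combined uncertainties; a significant mismatch
    suggests a blended eclipsing binary at twice the detected period."""
    diff = abs(odd_depth - even_depth)
    combined_err = (odd_err**2 + even_err**2) ** 0.5
    return diff <= n_sigma * combined_err
```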
Exoplanet transits are short compared to their orbital periods and happen at specific times, so we need a scheduler able to handle such time-critical events. In addition, new candidate systems are identified by \tess regularly, and some are removed (e.g., when we demonstrate that a system is likely not a planet). This too forces our scheduler to adapt to new situations on a weekly basis.
While every night can be filled with observations of \tess Objects of Interest (TOIs), we remain scientists with interests in specific types of planets, which places further restrictions on the scheduler. First we prioritise systems aligned with our science goals (be they confirmed TOIs, non-confirmed TOIs, or unrelated systems altogether). Because this rarely fills the schedule, we also select TOIs opportunistically, but again with a prioritisation that aligns with our preferences (see Section \ref{sec:long}).
In addition, we also perform filler observations for a few hours a night to create long timeseries. Typically this involves observing $\beta$~Pictoris to search for a transit of the Hill sphere of its directly imaged planets\cite{Kenworthy+2021}, monitoring J0600, a system with a candidate circumplanetary ring transiting a star (Kenworthy et al. in prep), and now some flaring stars.
Finally, our team is international, with partners in France, the United Kingdom and the Netherlands. We are not able to meet physically to decide targets, therefore requiring a web-based tool.
\subsection{Early solutions}
\textcolor{black}{Upon joining ExoFOP at the start of 2020, ASTEP gained access to the \texttt{\tess Transit Finder}\cite{TTF} (\texttt{TTF} hereafter), a web-based tool that allows observers to check for upcoming transits of targets of interest. It can also be used to search for all transits occurring in a given date range, which allowed the team to get a feel for the different categories of transiting candidates \tess was producing.}
\textcolor{black}{Throughout ASTEP's first observing season with \tess, the \texttt{TTF} output was saved to \texttt{.csv} files and then scored using an \texttt{IDL} routine according to how well they aligned with our niche (Crouzet et al. in prep). Airmass plots were then generated to highlight when in the night the transits would happen, and the information was saved to \texttt{.pdf} files which were then circulated to the team in advance of our weekly target selection meetings. Targets selected for observation each night were recorded in a Google Doc\cite{gsuite} which was shared with the team.}
\textcolor{black}{Both elements of this initial system were limited. The generation of the \texttt{.pdf} files required that each team member check the Google Drive for new files on a weekly basis. Our use of a Google Doc to record the upcoming schedule also rapidly became time consuming; additionally, the important information contained therein was vulnerable as the whole team had access. Finally, manual input of information meant that small inaccuracies could cause us to miss important observations.}
\textcolor{black}{The earliest incarnations of the system we use now harnessed the G-Suite\cite{gsuite} \texttt{Python} APIs (Application Programming Interfaces) to read and write information to a Google Sheet instead of a Google Doc. G-Suite provides excellent collaborative working tools for international teams such as ours, but limiting vulnerability of underlying data while ensuring everyone could access crucial information became a top priority. We also found that time could be saved generating the airmass plots by making use of the \texttt{TTF} API rather than manually downloading the output.}
\textcolor{black}{One final limitation we encountered was that most of our transit observations were opportunistic. Moving forward, the ambition was to define cohesive observing programs of scientific interest to the team.}
\subsection{Current Systems}
\textcolor{black}{The system we have implemented since the start of the 2021 observing season consists of three components: a Google Sheet; a \texttt{Python} toolkit; and a website. Each of these elements and their key functions are described in the sections that follow. The interconnections between all three elements of the toolkit are summarised in Fig.~\ref{fig:toolkit}.}
\subsubsection{Generating the data: \texttt{Python} toolkit}
\label{sec:tools}
\textcolor{black}{As described above, one of ASTEP's key goals that emerged from the 2020 observing season was to define a list of targets that were of particular scientific interest to us. With an ever-growing list of candidates emerging on a monthly basis from \textit{TESS}, there was a need for a simple way to filter through the list of TOIs (\textit{TESS} Objects of Interest) to find those best suited to our unique context. This motivated the development of our \texttt{TESS\_target\_list} tool, written in \texttt{Python} and used via a \texttt{Jupyter Notebook} with an \texttt{ipywidgets} GUI. This simple tool makes use of intuitive sliders and dropdown lists to make target lists based on user-selected parameters. We present a screenshot of the \texttt{TESS\_target\_list} tool in Fig.~\ref{fig:target_lists}.}
\textcolor{black}{With pre-selected target lists now in place, a vast amount of scheduling can be done ahead of time. Computing the timings of upcoming transits using linear ephemerides is trivial, however the \texttt{Python} package \texttt{astroplan} \cite{2018AJ....155..128M} does this with elegance and simplicity. Not only can it compute the timings of upcoming events for eclipsing systems, both stellar and planetary in nature, but it will also output observability of said events for different observatories.}
\textcolor{black}{Leveraging both \texttt{astroplan} and the Google Sheets \texttt{Python} API, we developed the scheduler: \texttt{schedule\_ahead}. This software reads the target list directly from a `Master Target List' located on a Google Sheet (see Fig.~\ref{fig:toolkit}) and imports the stellar coordinates and orbital ephemerides for each of the targets, provided they have an `Observe' status of \textit{`Active'}. User inputs are an observatory (\astep, in this case), a start date, and a number of days to schedule for; these are combined with the system parameters from the target list and fed into \texttt{astroplan}. Only events where at least one of ingress and egress are observable are then output. The user then has the option of pushing the outputs to the `Schedule' Google Sheet and the `Web Schedule'; the latter will be described in Section \ref{sec:website}. }
\textcolor{black}{Before running the scheduler each week, there is the option to update the transit ephemerides of candidates in our target list by checking the \texttt{TTF}. This ensures that refinements resulting from other teams' observations are included, but crucially it tells us if a candidate has been retired as a false positive, i.e. it has been conclusively shown not to be a planet.}
\textcolor{black}{Our nights are very seldom filled perfectly with pre-scheduled events; in fact there are often multiple events overlapping in one night and a decision has to be made about what should be observed (see Section \ref{sec:website}). Additionally, the schedule often has large gaps (see Fig.~\ref{fig:web_sched}) that can be filled with our pre-selected filler targets or opportunistic observations of transits. In order to facilitate the latter, we use a custom \texttt{Python} script to search the \texttt{\tess Transit Finder}\cite{TTF} for transits of \tess candidates within a given date range. The results of the search are then scored according to how well each target suits our scientific interests, and airmass plot data are computed for the top 30 events using \texttt{astropy}\cite{astropy:2013,astropy:2018}. These data, along with system parameters of interest, are stored in the Google Sheet. This script therefore completely replaces the \texttt{IDL} script which generated the \texttt{.pdf} files.}
\subsubsection{Holding the data: Google Sheet}
\label{sec:gsheet}
\textcolor{black}{The team's Google Sheet holds all the underlying data generated by the \texttt{Python} toolkit and updated and read by the website, as shown in Fig.~\ref{fig:toolkit}: the target list, the schedule, the airmass plot data, and the team's thoughts.}
\textcolor{black}{The Master Target List contains host star parameters, transit ephemerides and \texttt{TTF} meta-data (such as observing notes) for each target we have chosen to observe. The data for this sheet are generated by our \texttt{TESS\_target\_list} tool, and updated by the ephemerides checking script before running the scheduler each week. Targets are grouped into observing programs and are each given an `Observe' status: \textit{`Active'}, \textit{`Standby'} or \textit{`Retired'}. This flag is read by the scheduler and can be updated manually or programmatically.}
\textcolor{black}{The observation schedule is saved in two worksheets; one is human readable with observation and event timings in ISO format, while the other is intended to be read by the graphical web schedule and has timings in UNIX format. The former is also used to display the schedule in tabular form on the website.}
\textcolor{black}{As described above, the data for airmass plots of potentially interesting targets are generated via a \texttt{Python} script; these data are then stored in their own worksheet to be read by the website. The information contained therein is not intended to be human readable, therefore the worksheet is not formatted and has no headers. Data from previous weeks are cleared once observations have taken place to make the Google Sheet load quickly.}
\textcolor{black}{Once the data for the airmass plots have been generated in advance of a weekly meeting, the upcoming events can be viewed on the website on the target selection page. In order to facilitate fruitful discussions during the meeting, there is a box where team members can add their preferences and thoughts for upcoming scheduling. These thoughts are stored on the Google Sheet to be displayed in the appropriate place on the website.}
\subsubsection{Displaying the data: The website}
\label{sec:website}
\textcolor{black}{For effective collaborative working in an international team, it is essential that everyone can access all the information in a convenient way. In addition, Google Sheets are handy but can easily be altered by mistake. With this in mind, we developed a website for the \astep team using \texttt{HTML, CSS} and \texttt{JavaScript}.}
\textcolor{black}{The essential feature the website provides is access to data (see Fig.~\ref{fig:toolkit}), and this is done via DataTables\footnote{\url{https://datatables.net}}, a \texttt{JavaScript} package for interactive \texttt{HTML} tables. Information from each of the Google Sheets is fed in as \texttt{JSON}. As a result team members can access fully paginated, searchable and sortable \texttt{HTML} tables. These tables can also be exported in four formats; a screenshot of the schedule as displayed in a DataTable is presented in Fig.~\ref{fig:dt_sched}}.
\textcolor{black}{In the hope of making the website experience as user-friendly as possible, we developed an interactive graphical view of the schedule using the \texttt{JavaScript} plotting package Highcharts\footnote{\url{https://www.highcharts.com}}. In Fig.~\ref{fig:web_sched} we present a screenshot of this schedule view. The green bars represent the available observing time each night, defined as the time between the start of the evening civil twilight and the end of the morning civil twilight, when the Sun is $6^{\circ}$ below the horizon. Scheduled events are shown as purple bars, which change to red for any portion of the event that is not during the night. Hovering over an event with the mouse displays the target name, the event timings, the observing program the target belongs to, the airmass at the time of mid-transit, and the percentage of moon illumination. All of these features allow for simple choices to be made when events clash.}
\textcolor{black}{As described in Sections \ref{sec:tools} and \ref{sec:gsheet}, we sought to incorporate all the most useful elements of the airmass plot \texttt{.pdf} files into our new system. To this end, we included the Target Selection page in the website, which displays the airmass plots for the top 30 scored transiting candidates from the \texttt{TTF}. We also see all the pre-scheduled events, allowing us to choose which observations allow us to make best use of the available time each night. Opportunistic observations can then be scheduled directly from this page and sent to the Google Sheet, removing any need for manual input. }
\subsection{Future Direction}
\textcolor{black}{During the 2021 observing season the version of the Google Sheets API we were using was unexpectedly retired, causing our entire toolkit to stop working. Upgrading to the new version of the API took approximately two weeks, during which time we had to revert to manual target selection. With this in mind, we aspire to phase out our use of G-Suite in future. This will be achieved by storing our data in databases as they can be easily written to and read by \texttt{Python} and web-based applications. The use of databases will therefore eliminate the need for a Google Sheet to store our data and the Sheets API to read/write information.}
\textcolor{black}{A further limitation which we will seek to address is that the full \texttt{Python} toolkit is locally stored and run on one team member's computer. This will be improved in future by using a \texttt{Python}-based web framework such as Django\footnote{\url{https://www.djangoproject.com}} or Flask\footnote{\url{https://flask.palletsprojects.com/en/2.1.x/}} to fully embed the toolkit in the website. This will allow all of the scripts to be run by anyone, from anywhere, at any time.}
\section{Data Analysis Pipeline}
\label{sec:pipeline}
\textcolor{black}{With our science goals shifting towards the competitive and fast-moving field of \textit{TESS} follow-up, the need for a fast automatic pipeline has grown. It is especially important for the pipeline data products to be as close as possible to the format needed for submission to ExoFOP in order to minimise the time spent in post-processing. Additionally, we want our data products to be versatile enough that we are able to extract additional information from our images without the need for full reprocessing, such as lightcurves of other stars detected in the field. Finally, as our team has grown to include collaborators from other astronomical sub-fields, there has been a growing need to produce one flexible data product that can be handled by others without the need for context-specific coding experience.}
\textcolor{black}{In this section, we will first describe in brief the \texttt{IDL} pipeline that has served \astep for the past six years, followed by a detailed description of the new \texttt{Python} pipeline in Section \ref{sec:prose}.}
\subsection{\texttt{IDL} Pipeline}
\textcolor{black}{The \texttt{IDL} data processing pipeline, only briefly described here (see Abe et al. 2013\cite{Abe+2013} for a complete description), is a custom \texttt{IDL} code using classical aperture photometry routines from the well-known \texttt{IDL} astronomical library. Each science frame is bias-subtracted and dark-corrected, and the astrometric solution is computed using reference stars from the UCAC4 catalogue. Photometric lightcurves of about 1,000 stars are then extracted through $10$ fixed circular aperture radii. The optimal calibrated lightcurve is then obtained using a set of comparison stars.}
\textcolor{black}{This pipeline has served its purpose well and has allowed \astep to contribute to many publications since we joined TFOP in 2020\cite{Bouma+2020AJ,Dawson+2021,Burt+2021,Kaye+2022}, as well as to lead our own discovery paper\cite{Dransfield+2022}, with several others in preparation (Abe et al. in prep., Schmider et al. in prep., Triaud et al. in prep.). However, the cost of \texttt{IDL} and its declining usage mean that maintenance of the pipeline is becoming highly specialised knowledge; hence the need for a pipeline that is fast, modular, and written in a popular programming language: \texttt{Python}.}
\subsection{\texttt{Python} Pipeline}
\label{sec:prose}
\textcolor{black}{Rather than developing the pipeline from scratch, we chose to build it around an astronomical data processing package written with \tess follow-up in mind: \prose{}.}
\textcolor{black}{\prose{}\footnote{\url{https://github.com/lgrcia/prose}} is an open-source \texttt{Python} package dedicated to astronomical image processing \cite{Garcia2022}. By featuring a wide range of pre-implemented processing blocks (from source detection to photometric extraction), it provides a framework to quickly assemble instrument-agnostic pipelines that are modular and easy to maintain. \prose{} is supplemented by convenient tools to manage and share the products of astronomical observations, such as the automatic generation of \tess follow-up reports, making it an ideal base for the new ASTEP+ pipeline. }
\textcolor{black}{In the following sections, we describe the blocks and functionalities that form the core of the ASTEP+ pipeline, including custom modifications from the base \prose{} package. For a more comprehensive description of \prose{}, we direct the reader to the work where the package was first presented\cite{Garcia2022} and to its online documentation\footnote{\url{https://lgrcia.github.io/prose-docs/build/html/index.html}}.}
\subsubsection{Image Calibration}
\textcolor{black}{
We based the ASTEP+ pipeline on the \prose{} default photometric pipeline, which can be decomposed into two steps: the calibration and alignment of all images to produce a high-SNR stack image of the complete observation, and the extraction of the fluxes of the brightest stars in the field using a wide range of size-varying apertures. The calibration sequence starts with the selection of a reference image, on which \textit{n} reference stars are detected; these are later used to align the rest of the science images. Each raw image then goes sequentially through bias, dark and flat calibrations, before being trimmed for overscan pixels. For each image, the \textit{n} brightest stars are detected and compared to the reference ones in order to compute the affine transformation to the reference image. This is done using \texttt{twirl}\footnote{\url{https://github.com/lgrcia/twirl}} (a simplified \texttt{Python} implementation of \texttt{Astrometry.net}\cite{lang2010}). Finally, a stack image is created from the aligned images (transformed using bi-linear interpolation), while the unaligned calibrated images are saved for aperture photometry.}
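The per-image calibration applied before alignment is the classical CCD reduction. Schematically (a simplified sketch, omitting overscan trimming and bad-pixel handling):

```python
import numpy as np

def calibrate(raw, bias, dark_rate, flat, exptime):
    """Classical CCD calibration: subtract the bias and the dark
    current scaled by exposure time, then divide by the normalised
    flat field. All inputs are 2-D arrays except exptime (seconds)."""
    flat_norm = flat / np.median(flat)   # normalise the flat field
    return (raw - bias - dark_rate * exptime) / flat_norm
```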
\subsubsection{Aperture Photometry}
\textcolor{black}{Two sequences are used in \prose{} to perform aperture photometry on the calibrated images. The first sequence sets the number of stars to be detected on the stack image to 1000; this ensures the detection of the target star even in crowded fields, using the DAOFindStars block based on a \texttt{photutils}\footnote{\url{https://photutils.readthedocs.io/en/stable/index.html}} implementation of DAOPHOT. An elliptical two-dimensional Moffat model of the stack's effective PSF is fitted in order to scale the photometric aperture radii. A total of forty circular apertures is used to extract the flux of all the detected stars in the following sequence. The position of the apertures on each calibrated image is calculated using the inverse transformation matrix computed in the calibration step. To correct for possible errors in these positions, they are recomputed using the centroiding algorithm \texttt{Ballet}\footnote{\url{https://github.com/lgrcia/ballet}}, a convolutional neural network trained to predict centroid positions accurately. The flux is then extracted for each aperture and stored in a \texttt{.phot} file along with the values of systematic effects affecting the observation (position shifts on the detector, sky background, airmass and FWHM).}
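The elementary operation underlying this step is summing the flux inside a circular aperture. A minimal pure-\texttt{NumPy} stand-in for the \texttt{photutils}-based extraction (ignoring sub-pixel aperture edges and local background subtraction) is:

```python
import numpy as np

def aperture_flux(image, x0, y0, radius):
    """Sum the pixel values whose centres fall inside a circular
    aperture centred on (x0, y0); photutils additionally handles
    partial-pixel overlap, which is omitted here."""
    yy, xx = np.indices(image.shape)
    mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
    return float(image[mask].sum())
```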
\subsubsection{Plate Solving and Target Identification}
\textcolor{black}{\prose{} requires human intervention for the purpose of target selection on the stacked image; in order to automate this step we implement a local installation of \texttt{Astrometry.net}\cite{lang2010}. \texttt{Astrometry.net} is a robust and widely used system to provide blind plate solving of astronomical images. It works by extracting several patterns of four stars (asterisms) from query images, computing a hash code for each, and searching indexes for matching hash codes. The result is the WCS (World Coordinate System) header information providing the pointing, scale and orientation of the query image.}
\textcolor{black}{Index files have been produced for many astronomical surveys in various colours. To ensure the best coverage, providing the best chance of an astrometric solution, we use the full set of \textit{Gaia} and \textit{2MASS} index files.\footnote{Available for download from \url{http://data.astrometry.net}}}
\textcolor{black}{Unlike the \texttt{IDL} pipeline, the new pipeline only requires that we plate solve the stack image as the field rotation between each image is calculated by the \texttt{twirl} block by \prose{} during the calibration stage of the pipeline.}
\textcolor{black}{ASTEP+'s images are centered on the guide star, and its coordinates, together with the image pixel scale determined by the astrometric solution, are used to define a search cone. Under normal operation a call would be made to MAST (Barbara A. Mikulski Archive for Space Telescopes) via \texttt{astroquery}\cite{astroquery} to find all stars in this cone for a given catalog. In order to remove this need, we instead make use of a local version of the \textit{TESS} Input Catalog (TIC)\cite{TICv8}, saved in an \texttt{SQLite} database, to perform the search. The WCS header information is then used to transform celestial coordinates into pixel coordinates, and the closest aperture to the target is selected.}%
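A cone search against a local \texttt{SQLite} copy of the TIC can be sketched as below. The table schema (\texttt{tic(id, ra, dec, tmag)}) is hypothetical, and the exact angular cut uses a small-angle flat-sky approximation; a bounding-box pre-filter keeps the SQL query index-friendly:

```python
import math
import sqlite3

def cone_search(db_path, ra0, dec0, radius_deg):
    """Return (id, ra, dec, tmag) rows within radius_deg of (ra0, dec0)
    from a local TIC-like SQLite table with assumed schema
    tic(id, ra, dec, tmag); coordinates in degrees."""
    pad = radius_deg / math.cos(math.radians(dec0))  # RA compression
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT id, ra, dec, tmag FROM tic "
        "WHERE dec BETWEEN ? AND ? AND ra BETWEEN ? AND ?",
        (dec0 - radius_deg, dec0 + radius_deg, ra0 - pad, ra0 + pad),
    ).fetchall()
    con.close()
    out = []
    for sid, ra, dec, tmag in rows:
        dra = (ra - ra0) * math.cos(math.radians(dec0))
        if math.hypot(dra, dec - dec0) <= radius_deg:  # exact cut
            out.append((sid, ra, dec, tmag))
    return out
```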
\textcolor{black}{The decision to move toward using the latest version of the TIC as our main reference catalog instead of UCAC4\cite{UCAC4} was motivated by its completeness. Version 8 of the TIC includes all UCAC4 sources, as well as {\it Gaia} DR2 sources and several other large catalogs, making it the most complete catalog currently available.}%
\subsubsection{Differential Photometry}
\textcolor{black}{Differential light curves are built from raw light curves using the algorithm presented in Broeg et al. 2005\cite{Broeg2005} (implemented in \texttt{Python} within \prose{}). This method consists of building an artificial comparison light curve as the weighted sum of all available stars in the field. The weight attributed to each star is computed through an iterative process that favours stars displaying lower variability and higher signal-to-noise ratio, which are less likely to feature systematic signals. The optimal aperture is then chosen as the one minimising the white noise, estimated using the median standard deviation of points per five-minute bin. All the light curves for each aperture are also stored in the \texttt{.phot} file, allowing for a manual check if needed. }
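In outline, the weighting loop can be sketched as follows. This is a simplified illustration of the Broeg et al. scheme only: the initial SNR-based weights and the convergence test of the published algorithm are omitted.

```python
import numpy as np

def artificial_comparison(fluxes, n_iter=10):
    """Weighted artificial comparison light curve from a (n_stars,
    n_points) flux array. Weights are refined iteratively so that
    stars whose differential light curve shows lower variance count
    more; a sketch of the Broeg et al. (2005) idea, not the exact
    published implementation.
    """
    f = fluxes / np.median(fluxes, axis=1, keepdims=True)  # normalise each star
    w = np.ones(f.shape[0])
    for _ in range(n_iter):
        comp = np.average(f, axis=0, weights=w)   # current artificial star
        var = np.var(f / comp, axis=1)            # variability of each diff. curve
        w = 1.0 / np.maximum(var, 1e-12)          # quiet stars -> large weight
    return np.average(f, axis=0, weights=w)
```

Variable stars are quickly down-weighted: after a few iterations the artificial comparison star is dominated by the quietest stars in the field.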
\subsubsection{Lightcurve Modelling}
\textcolor{black}{With the \texttt{IDL} pipeline, the biggest interaction required in post-processing has always been the modelling of lightcurves after they arrived in Europe. However, there are many simple cases where the detected transit matches the ephemerides well; to save time for these objects, we have written a new module for \prose: \texttt{auto\_modelling}.}
\textcolor{black}{All our lightcurves are correlated with airmass and background sky level. Additionally, variations in the FWHM (the full width at half maximum) can introduce spurious signals into the data. For this reason, all lightcurves are detrended by simultaneously fitting second-order polynomials in airmass, FWHM and the background sky level. This is done at the same time as the transit modelling. There are sometimes additional signals in the data caused by motion of the telescope during the night ($\rm dx$ and $\rm dy$). Rather than detrend on these parameters, we reject any images where either of them is greater than 5. }
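A pre-fit-only sketch of this detrending is shown below; in the actual pipeline these polynomial terms are fitted simultaneously with the transit model, and the design matrix here is an assumed simplification.

```python
import numpy as np

def detrend(flux, airmass, fwhm, sky, dx, dy, motion_cut=5.0):
    """Least-squares fit of second-order polynomials in airmass, FWHM
    and sky background, after rejecting frames whose telescope motion
    |dx| or |dy| exceeds `motion_cut`. Returns the detrended flux and
    the frame mask.
    """
    keep = (np.abs(dx) <= motion_cut) & (np.abs(dy) <= motion_cut)
    X = np.column_stack([
        np.ones(keep.sum()),
        airmass[keep], airmass[keep] ** 2,
        fwhm[keep], fwhm[keep] ** 2,
        sky[keep], sky[keep] ** 2,
    ])
    coeffs, *_ = np.linalg.lstsq(X, flux[keep], rcond=None)
    return flux[keep] / (X @ coeffs), keep
```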
\textcolor{black}{Transit fitting is implemented using \texttt{exoplanet},\cite{exoplanet} a flexible toolkit for modelling exoplanets using \texttt{PyMC3}\cite{pymc3} for MCMC (Markov Chain Monte Carlo) modelling. Priors on the transit parameters are taken from the image headers, while host star priors are drawn from the local TICv8 database. In cases where the stellar $\rm T_{eff}$, $\rm [Fe/H]$ and $\rm log g$ are available, quadratic limb darkening coefficients are also calculated using \texttt{PyLDTK}\cite{pyldtk}. These are then set as normal priors for the fit; where host data is not available, solar values of stellar mass and radius are used instead. In these cases uniform priors between $0-1$ are used for the limb darkening coefficients.}%
\textcolor{black}{The automatic lightcurve modelling is not always successful. The most common reasons for failure are missing host star parameters, ambiguous detections and non-detections of transits, and transit timing variations (TTVs). In all these cases, manual post-processing is carried out after files have arrived on the server in Europe.}
\subsubsection{Delivery of Data Products}
\textcolor{black}{As described in Section \ref{sec:intro}, one of the biggest limitations that comes with an Antarctic telescope is the very limited internet connection. This is of course the motivation for our automatic data processing on-site, but it also limits how we access the data products output by the pipeline.}
\textcolor{black}{The data products produced by the pipeline fall into two categories: lightweight and heavyweight. Lightweight data products include a \texttt{.png} image of the target lightcurve and a \texttt{.csv} file containing the target flux and systematics. These products are emailed to all team members immediately after the pipeline finishes running to facilitate rapid inspection. In cases where the automatic lightcurve modelling and detrending has been successful, these two products are all that is needed.}
\textcolor{black}{The heavyweight data products are the \texttt{.phot} files containing the aperture photometry for up to 1000 stars in the field as well as all the metadata for the observation, and the stack image produced during the reduction. These files are sent to a local server in Concordia using the \texttt{Python} package \texttt{paramiko}\footnote{https://www.paramiko.org}; this server has a folder which synchronises with a corresponding folder on a server in Rome. Depending on the size of the files, they take between $\rm 12-24\,hours$ to arrive in Europe; they are then available for download for any necessary post-processing.}
\subsection{Results}
In this section we compare the outcomes of the two pipelines, making quantitative comparisons where appropriate.
\subsubsection{Noise and SNR comparison}
\textcolor{black}{Following the method described in Garcia et al. (2022)\cite{Garcia2022}, we compare the automatic lightcurves produced by the \texttt{Python} and \texttt{IDL} pipelines using four metrics: the binned white noise, the white and red noise following Pont et al. (2006)\cite{Pont2006}, and the transit signal-to-noise ratio (SNR). All lightcurves used for the comparison were detrended by fitting a polynomial in time to the airmass and background sky level only; we then modelled each transit by fitting a simple Keplerian orbit using \texttt{exoplanet}. In all comparison cases, we placed uniform priors of $\mathcal{U}(0,1)$ on the quadratic limb darkening coefficients. Using the resulting transit models, we also compare the binned root mean square (RMS) scatter of the lightcurves, calculated as the standard deviation of the binned residuals. The results of our comparison are presented in Fig.~\ref{fig:lc_comps}.}
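The binned white-noise metric can be sketched as follows. This is a simplified illustration; the Pont et al. (2006) red-noise estimator additionally compares this quantity against the $1/\sqrt{N}$ scaling expected for uncorrelated noise.

```python
import numpy as np

def binned_rms(time, residuals, bin_minutes=5.0):
    """RMS of light-curve residuals averaged in fixed time bins
    (`time` in days). For purely white noise this follows the
    1/sqrt(N) scaling with the number of points per bin; an excess
    over that scaling signals red (correlated) noise.
    """
    width = bin_minutes / (24.0 * 60.0)               # bin width in days
    idx = ((time - time.min()) / width).astype(int)   # bin index of each point
    means = np.array([residuals[idx == i].mean() for i in np.unique(idx)])
    return means.std(ddof=0)
```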
The left-hand panels of Fig.~\ref{fig:lc_comps} show a deep ($\rm 9\,ppt$) transit observed during the 2020 season. Upon visual inspection, we see that there is less scatter in the \prose{} lightcurve than in the \texttt{IDL} one, resulting in an almost $1.5\times$ higher SNR detection. We also see the effects of the stellar limb darkening more clearly in the transit shape of the \prose{} lightcurve.
In the middle two panels of Fig.~\ref{fig:lc_comps} we present a detection of the shallow ($\rm 0.85\,ppt$) transit of TOI-282.01 (now HD\,28109 b\cite{Dransfield+2022}) observed during the 2021 observing season. This system has large TTVs, and the detected transit ingress was $\rm \sim72\, mins$ later than expected. We can see that while the \texttt{IDL} pipeline does not clearly detect the transit, the \prose{} lightcurve has a binned RMS scatter smaller than the transit depth, allowing for a conclusive detection of the event despite the large TTV.
\textcolor{black}{In the right-hand panels of Fig.~\ref{fig:lc_comps} we present the lightcurves of TOI-270.02, as observed in May of the current (2022) season. This planet is part of a three planet system where all planets find themselves in commensurate orbits leading once again to TTVs. As TTVs can be used to estimate planetary masses, ASTEP+ has a crucial role to play by measuring precise transit timings when systems are no longer visible from other southern observatories. Fig.~\ref{fig:lc_comps} shows that the new pipeline can yield more precise timings through higher SNR detections of transits.}
\subsubsection{Processing time}
\textcolor{black}{One of the most important considerations for science carried out in Antarctica is energy usage. Concordia generates electricity using two diesel generators with a third available in case of emergencies or malfunctions. Fuel for the generators reaches the station by traverse: a convoy of tractors pulling containers on skis, which crosses $\rm \sim1300\,km$ of ice over the course of $\rm 10-12$ days.
Additionally, we must consider the greenhouse gas emissions resulting from computation. Each litre of diesel used by a generator emits $\rm 2.4-3.5$ kg of $\rm CO_2$\cite{diesel2012}. }
\textcolor{black}{On average, the \texttt{IDL} pipeline takes approximately $\rm 6$ seconds per image to run in full, while the new \texttt{Python} pipeline takes $\rm 1.6$ seconds. The addition of a second camera means that the pipeline will run for approximately twice as long, depending on the exposure times of the images in the respective colours. Even running the pipeline in full twice, our server will now spend half the time running intensive processes per day, therefore increasing CPU idle time. On average, power usage is increased threefold during computationally intensive processes when compared with idle time\footnote{\url{https://wccftech.com/review/intel-core-i9-9900k-8-core-cpu-z390-aorus-master-review/9/?beta=1}}, and with it the carbon footprint of the pipeline. }
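To make the energy argument concrete, a back-of-the-envelope comparison of nightly CPU time can be written down directly. The per-image timings are those quoted above; the nightly image count of 4000 is an assumed, purely illustrative value.

```python
def nightly_cpu_hours(n_images, sec_per_image, n_cameras=1):
    """CPU time spent by the pipeline per night, in hours."""
    return n_images * sec_per_image * n_cameras / 3600.0

# Per-image timings quoted above; 4000 images/night is an assumed example.
idl_hours = nightly_cpu_hours(4000, 6.0)               # old single-camera IDL run
new_hours = nightly_cpu_hours(4000, 1.6, n_cameras=2)  # two-camera Python run
```

Even with the doubled data volume from two cameras, the new pipeline spends roughly half the CPU time of the old single-camera run, consistent with the estimate above.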
\subsection{Limitations \& Future Work}
\textcolor{black}{The biggest limitation we currently face with the \prose{} pipeline is that we cannot yet process very defocused observations where the PSF (point spread function) is donut-shaped. This does not present a huge problem in the current season as we have paused our observations of $\beta$~Pictoris, but should we return to observing this very bright target it will become necessary to defocus the telescope once again. When the pipeline is updated during the next summer service mission, we intend to test new defocused PSF modelling blocks in order to ensure these observations can be processed in future.}
\section{ASTEP+ in the Future}
\label{sec:future}
At the moment ASTEP+ is mainly pursuing the validation of \tess transiting exoplanet candidates. Most of the easy pickings have been detected, and \tess will increasingly produce long-period planets, candidates on fainter stars, or small planets producing weaker events. Whilst ASTEP+ is well geared towards the validation of longer orbital periods, targets will be scarce and events few and far between. This means that the ASTEP+ project will need to evolve. We are currently investigating photometric monitoring in coordination with X-ray satellites such as XMM, Chandra and AstroSat for the study of flaring stars\cite{Lalitha2020}. Of course, the monitoring of known transiting exoplanets will have to continue in order to refine the ephemerides and plan observations by programs such as JWST and Ariel as efficiently as possible \cite{Kokori+2022}.
Another activity we are keen on pursuing is to repeatedly detect the transits of planetary systems where at least two planets have orbital periods near an integer ratio. This configuration means their gravitational interactions are detectable (producing Transit Timing Variations, TTVs), which would allow us to measure the planets' masses, as already done in papers where ASTEP observations were crucial \cite{Dransfield+2022, Kaye+2022}. In addition, ASTEP+ has produced the first ground-based detection of a circumbinary planet transit (Triaud et al. in prep). To support this science, measuring the Eclipse Timing Variations (ETVs) produced by the eclipsing binary stars at the centre of such systems is also a source of information about the planets. In both science cases, all transits/eclipses are useful. Our unique position near the pole, opposite from Chile, means we can collect a high yield of events using similar arguments to those described in Section~\ref{sec:long}.
In addition, we are currently developing a new dedicated direct-drive mount adapted to Antarctic conditions. This should maximize observation efficiency and pointing stability, thereby producing higher quality lightcurves. With this mount, we intend to check whether some transients are detectable, and to test a rapid-response mode to detect the electromagnetic counterparts of gravitational wave events, or to follow up a transient detected by the Vera Rubin telescope (in Chile) once it has set as seen from there but is still visible from our location thanks to our proximity to the South Pole. We are also investigating the possibility of searching for interstellar asteroids within the frames obtained during exoplanet transit observations. The idea is to leverage the fact that our field of view is mainly out of the ecliptic: anything that moves there is less likely to be from the Solar system.
The highest potential of Astronomy at Concordia, however, lies with observations in the infrared, particularly in the K band. As for the high plateaus of Antarctica in general, Concordia is characterized by a high sky transparency, low water content and low thermal background that make it one of the best sites on Earth for infrared Astronomy \cite{Burton+2016}. Combining observations in the visible with ASTEP+ and observations in the infrared with a new telescope of a similar aperture could provide a real breakthrough for the monitoring of exoplanetary atmospheres and exoplanets around low-mass stars, and for the study of counterparts of gravitational wave events.
\section{Conclusions}
\label{sec:conclusions}
\textcolor{black}{In this work we have presented a suite of tools developed for scheduling time-domain astronomical observations. While these systems have been written with ASTEP+ in mind, they are easily adaptable for other observatories focused on observations with strong time constraints.}
\textcolor{black}{We have also presented a new automatic data analysis pipeline written in \texttt{Python} and built around \prose{}. The new pipeline produces lightcurves that on average have lower red and white noise, lower scatter, and therefore allow for transits to be detected with higher SNR compared with the system it replaces. Additionally, as the pipeline runs significantly faster we will also decrease our energy usage and carbon footprint resulting from computationally intensive reductions. This remains true even taking into account the increase in data generated by having two cameras instead of one.}
\textcolor{black}{Finally, we have outlined the future directions for the ASTEP+ project, including new collaborations and synergies at the forefront of time-domain astronomy. }
\acknowledgments %
This research is in part funded by the European Union's Horizon 2020 research and innovation programme (grants agreements n$^{\circ}$ 803193/BEBOP), and from the Science and Technology Facilities Council (STFC; grant n$^\circ$ ST/S00193X/1).
We acknowledge support from the European Space Agency (ESA) through the Science Faculty of the European Space Research and Technology Centre (ESTEC).
ASTEP and ASTEP+ have benefited from the support of the French and Italian polar agencies IPEV and PNRA, and from INSU, ESA through the Science Faculty of the European Space Research and Technology Centre (ESTEC), the University of Birmingham, the laboratoire Lagrange (CNRS UMR 7293) and the Universit\'e C\^ote d'Azur through Idex UCAJEDI (ANR-15-IDEX-01).
This publication benefits from the support of the French Community of Belgium in the context of the FRIA Doctoral Grant awarded to Mathilde Timmermans and Lionel J. Garcia.
MNG acknowledges support from the European Space Agency (ESA) as an ESA Research Fellow.
\bibliography{report} %
\bibliographystyle{spiebib} %
|
Title:
Environmental sub-MeV neutron measurement at the Gran Sasso surface laboratory with a super-fine-grained nuclear emulsion detector |
Abstract: The measurement of environmental neutrons is particularly important in the
search for new physics, such as dark matter particles, because neutrons
constitute an often-irreducible background source. The measurement of the
neutron energy spectra in the sub-MeV scale is technically difficult because it
requires a very good energy resolution and a very high $\gamma$-ray rejection
power. In this study, we used a super-fine-grained nuclear emulsion, called
Nano Imaging Tracker (NIT), as a neutron detector. The main target of neutrons
is the hydrogen (proton) content of emulsion films. Through a topological
analysis, proton recoils induced by neutron scattering can be detected as
tracks with sub-micrometric accuracy. This method shows an extremely high
$\gamma$-ray rejection power, at the level of $5 \times 10^7 ~
\gamma/\rm{cm}^2$, which is equivalent to 5 years accumulation of environmental
$\gamma$-rays, and a very good energy and direction resolution even in the
sub-MeV energy region. In order to carry out this measurement with sufficient
statistics, we upgraded the automated scanning system to achieve a speed of 250
g/year/machine. We calibrated the detector performance of this system with 880
keV monochromatic neutrons: a very good agreement with the expectation was
found for all the relevant kinematic variables. The application of the
developed method to a sample exposed at the INFN Gran Sasso surface laboratory
provided the first measurement of sub-MeV environmental neutrons with a flux of
$(7.6 \pm 1.7) \times 10^{-3} \rm{cm}^{-2} \rm{s}^{-1}$ in the proton energy
range between 0.25 and 1 MeV (corresponds to neutron energy range between 0.25
and 10 MeV), consistent with the prediction. The neutron energy and direction
distributions also show a good agreement.
| https://export.arxiv.org/pdf/2208.13366 |
\preprint{APS/123-QED}
\title{Environmental sub-MeV neutron measurement at the Gran Sasso surface laboratory with a super-fine-grained nuclear emulsion detector}
\author{T. Shiraishi}
\email{[email protected].}
\author{S. Akamatsu}
\affiliation{Department of Physics, Toho University, Chiba, Japan}
\author{T. Naka}
\affiliation{Department of Physics, Toho University, Chiba, Japan}
\affiliation{Kobayashi-Maskawa Institute, Nagoya University, Aichi, Japan}
\author{T. Asada}
\affiliation{Università degli studi di Napoli ``Federico II'', Napoli, Italy}
\affiliation{Istituto Nazionale di Fisica Nucleare, Napoli, Italy}
\author{G. De Lellis}
\affiliation{Università degli studi di Napoli ``Federico II'', Napoli, Italy}
\affiliation{Istituto Nazionale di Fisica Nucleare, Napoli, Italy}
\author{V. Tioukov}
\affiliation{Istituto Nazionale di Fisica Nucleare, Napoli, Italy}
\author{G. Rosa}
\affiliation{Sezione INFN di Roma, Roma, Italy}
\author{R. Kobayashi}
\affiliation{Graduate School of Science, Nagoya University, Aichi, Japan}
\author{N. D'Ambrosio}
\affiliation{Laboratori Nazionali dell'INFN di Gran Sasso, L'Aquila, Italy}
\author{A. Alexandrov}
\affiliation{Università degli studi di Napoli ``Federico II'', Napoli, Italy}
\affiliation{Istituto Nazionale di Fisica Nucleare, Napoli, Italy}
\author{O. Sato}
\affiliation{Institute of Materials and Systems for Sustainability, Nagoya University, Aichi, Japan}
\date{\today}
\section{Introduction}
Environmental neutrons are normally a background source for experiments searching for dark matter and neutrinoless double $\beta$-decay in underground laboratories. Therefore, the measurement of their properties including the relative abundance is particularly important for these searches. For a Weakly Interacting Massive Particle (WIMP)~\cite{WIMP,WIMP2} in the $1 - 10^4$~GeV/c$^2$ mass range, the Maxwell-Boltzmann distribution of its velocity in the Milky Way galaxy corresponds to nuclear recoil energies in the $1 - 10$~keV range. Sub-MeV neutrons would produce nuclear recoils with similar energies and therefore their investigation is particularly important for the WIMP search.
Environmental neutrons in the sub-MeV region have not been directly measured owing to technical difficulties. In 1988, the measurement of environmental neutrons was carried out by A. Rindi {\it et al.} at the INFN Laboratori Nazionali del Gran Sasso (LNGS)~\cite{GS_neutron}. They used an $^3$He proportional counter, particularly suited for the measurement of thermal neutrons. However, this device detects protons produced by the neutron absorption reaction $^3$He($n$, $p$)T, once the neutrons are decelerated by a moderator. Therefore, a large systematic uncertainty is introduced in the energy resolution by the moderator, which prevents the energy reconstruction for sub-MeV neutrons. Moreover, the detector is not sensitive to the direction of the neutrons since the spatial resolution is not adequate.
We have developed a new direct detection method for neutrons with energies down to the sub-MeV domain~\cite{Neutron}, by using a super-fine-grained nuclear emulsion, called Nano Imaging Tracker (NIT)~\cite{NIT1,NIT2}. Owing to its unprecedented spatial resolution at the nanometric scale, this device provides the three-dimensional reconstruction of proton tracks induced by the neutron scattering, thus being sensitive to the sub-MeV neutron energy region and providing measurements of both the neutron energy and direction. Moreover, it provides a very high $\gamma$-ray rejection power and it is capable of detecting and measuring neutrons even in an environment with a high $\gamma$-ray rate.
This study is meant to demonstrate the capability of measuring the neutron energy and direction in the sub-MeV domain, by detecting those environmental neutrons at the LNGS surface laboratory.
In the first part of this work we report about the upgrade of the automated scanning system, to make it faster and collect a larger statistical sample. The performance of the system in the neutron detection was carefully measured by using monochromatic neutrons in the sub-MeV region. We then report the results of the environmental neutron measurements at the LNGS surface laboratory: the neutron flux and its directional distributions in the sub-MeV region are provided. Finally, we discuss the potential of this detection technique for future underground environmental neutron measurements and to search for proton recoils induced by light dark matter scattering.
\section{Detection Technique}
\label{sec:technique}
\subsection{Nano Imaging Tracker}
\label{subsec:nit}
NIT is a super-high resolution nuclear emulsion~\cite{NIT1,NIT2} developed for the NEWSdm experiment~\cite{NEWSdm}, designed to search for dark matter through the direct detection of the induced nuclear recoils, for the first time with a directional sensitive approach. NIT consists of AgBr:I crystals of several tens of nanometers dispersed in a medium made of gelatin and polyvinyl alcohol: each crystal acts as the sensor of charged particles. In this study, we used the NIT type with (70~$\pm$~10)~nm AgBr:I crystals, dispersed with a density of about 2000~crystals/$\umu$m$^{3}$, producing an overall mass density of (3.2~$\pm$~0.2)~g/cm$^{3}$.
NIT contains various nuclear targets such as Ag, Br, C, N, O, and H. For the neutron detection, hydrogen acts as the leading target given the larger recoil energy transfer. The hydrogen mass fraction is (1.75~$\pm$~0.30)\%.
The small size of the AgBr:I crystals translates into the large energy deposition per unit length (a few tens of keV/$\umu$m) required to sensitize a crystal. This makes NIT insensitive to electrons, except at their stopping point, so that $\gamma$-rays do not produce signal tracks, making this neutron detection approach effectively $\gamma$-ray background free.
Nuclear emulsion is usually handled in the form of films, obtained by pouring an emulsion sensitive layer of up to several hundred micrometers on a mechanical support, known as a base, made of plastic or glass. In this study, we used Cyclo Olefin Polymer (COP) as the base material, due to its low radioactivity from $^{238}$U and $^{232}$Th, and to its high light transmittance, particularly important for observation with an epi-optical microscope. For the COP base, ZEONOR\superR by ZEON Corporation was selected. The maximum size of the COP base is 120~mm~$\times$~100~mm with a thickness of 2~mm. The NIT emulsion was purified with a 0.22~$\umu$m PES filter (Millex\superR -GP from the Merck company) to remove dust, and it was poured as a 65~$\umu$m-thick sensitive layer on a COP base of 100~mm $\times$ 80~mm size. A thin gelatin layer with dispersed 40~nm silver nanoparticles was applied to the top and bottom as a marker to identify the emulsion layer.
The sensitization and development process of NIT is similar to that already described in a previous neutron study~\cite{Neutron}. However, owing to the larger thickness used in this work, during the fixing treatment at room temperature the NIT samples were soaked for approximately 1.5 hours, until the dissolution was confirmed by eye inspection. After this treatment, the NIT layer shrinks by a factor (0.61 $\pm$ 0.04) w.r.t.~the original thickness. This factor is accounted for during the analysis at the microscope.
\subsection{Three Dimensional Sub-Micrometric Tracking System}
\label{subsec:tracking}
For the NIT analysis, we have developed a three-dimensional sub-micrometric tracking method called Chain Tracking~\cite{Neutron}, by using the scanning system denoted as Post Track Selector (PTS)~\cite{PTS-2,PTS_DFT}, as shown in Fig.~\ref{fig:image_process}. The Chain Tracking is a proprietary 3D track reconstruction algorithm for the tomographic image acquired by the PTS. It first creates pairs of neighboring silver grains produced by the passage of charged particles, then recursively connects, with a chain-like structure, all patterns produced by other silver grains falling within the angular and position allowance. It finally selects the longest chain as a track. This enables automated analysis of tracks longer than 2~$\umu$m with a well-assessed detection efficiency. With this cut on the track length, $\gamma$-rays do not produce detectable tracks, because NIT is sensitive to electrons induced by $\gamma$-rays only at their stopping point. However, the chance coincidence of two $\gamma$-rays has to be considered during a long run when the $\gamma$-ray density increases. We made a dedicated $\gamma$-ray exposure by using an $^{241}$Am source with a density of $5 \times 10^7 ~ \gamma/\rm{cm}^2$, equivalent to the amount of environmental $\gamma$-rays integrated along 5 years. No evidence was found for track candidates induced by $\gamma$-rays which excluded the background from this source.
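The connection logic can be illustrated with a toy version of the algorithm. This is a simplified sketch with hypothetical tolerances; the production Chain Tracking code is proprietary and considerably more elaborate.

```python
import math

def _unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def _sub(p, q):
    return tuple(pi - qi for pi, qi in zip(p, q))

def longest_chain(grains, max_gap=0.5, max_angle_deg=30.0):
    """Toy version of the Chain Tracking idea: seed a chain from every
    pair of neighbouring silver grains (3D positions in micrometres),
    recursively append grains that fall within a position gap and an
    angular tolerance of the current direction, and keep the longest
    chain found. Tolerances here are hypothetical.
    """
    def extend(chain):
        direction = _unit(_sub(chain[-1], chain[-2]))
        best = chain
        for g in grains:
            if g in chain:
                continue
            gap = math.dist(g, chain[-1])
            if not 0 < gap <= max_gap:
                continue
            cosang = sum(a * b for a, b in zip(direction, _unit(_sub(g, chain[-1]))))
            if cosang < math.cos(math.radians(max_angle_deg)):
                continue
            cand = extend(chain + [g])
            if len(cand) > len(best):
                best = cand
        return best

    best = []
    for i, a in enumerate(grains):
        for b in grains[i + 1:]:
            if 0 < math.dist(a, b) <= max_gap:
                for seed in ([a, b], [b, a]):
                    cand = extend(seed)
                    if len(cand) > len(best):
                        best = cand
    return best
```

Isolated grains, such as the stopping points of $\gamma$-ray-induced electrons, never satisfy the pairing condition and therefore cannot seed a chain, which is the origin of the $\gamma$-ray rejection described above.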
For all candidate tracks detected by the Chain Tracking, the coordinates (X, Y, Z) of the center of brightness for the two most distant developed silver grains are defined as start and end points, such that the 3D track range and direction are calculated thereafter.
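Given the start and end points, the range and direction follow directly. A minimal sketch (in the microscope frame, applying the shrinkage correction of Sec.~\ref{subsec:nit} to the Z coordinate; the angle conventions are illustrative):

```python
import math

SHRINKAGE = 0.61  # measured emulsion-thickness shrinkage factor

def track_kinematics(start, end, shrinkage=SHRINKAGE):
    """3D range (micrometres) and direction from the start/end grain
    positions in the microscope frame; the Z coordinate is divided by
    the shrinkage factor to recover the un-shrunk emulsion geometry.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    dz = (end[2] - start[2]) / shrinkage
    r = math.sqrt(dx*dx + dy*dy + dz*dz)
    theta_z = math.degrees(math.asin(dz / r))   # inclination w.r.t. film plane
    phi = math.degrees(math.atan2(dy, dx))      # azimuth in the film plane
    return r, theta_z, phi
```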
We have upgraded the objective lens of the microscope. Indeed, in the previous setup, the 100$\times$ objective lens had a pixel size of 0.055~$\umu$m, an oversampling compared to the point spread of about 0.25~$\umu$m due to the diffraction limit. Table~\ref{tab:specification} shows the objective lens and camera used in the current microscope setup: this corresponds to a wider field-of-view (FOV) with a coarser sampling pitch and a faster scanning speed. Furthermore, the image analysis in the current setup is performed by GPU (GeForce RTX 2080 Ti) stream processing to accelerate image filtering, rather than by CPU parallel processing. Consequently, the analysis speed of the PTS with the Chain Tracking system has reached 250~g/year/machine, up from 30~g/year/machine~\cite{Neutron}. In addition, the algorithm was upgraded and optimized to reduce the uncertainty on the 3D range measurement due to mis-connections in the automated analysis.
The depth of field determining the accuracy of this optical system in the direction perpendicular to the film surface (Z-direction) is approximately 0.3~$\umu$m. When acquiring a tomographic image in the Z-direction, the optical system moves at a speed of 0.3~$\umu$m/frame to perform continuous imaging with the camera. During scanning, the emulsion is shrunk to a thickness of approximately 40~$\umu$m, and 170 frames (equivalent to 51~$\umu$m) are acquired in the Z-direction for each FOV.
\begin{table}[htb]
\centering
\caption{Upgraded specification of PTS for the Chain Tracking system.}
\begin{tabular}{|c|c|c|} \hline
& Previous Work~\cite{Neutron} & Current System \\ \hline
Objective Lens & N.A. 1.45, 100$\times$ & N.A. 1.42, 66.8$\times$ \\
Camera Pixel Pitch & 5.5~$\umu$m & 7.0~$\umu$m \\
Pixel Resolution & 0.055~$\umu$m & 0.105~$\umu$m \\
Number of Pixels & 2048~$\times$~1088 & 2304~$\times$~1720 \\
Camera Frame Rate & 300~fps & 500~fps \\
FOV & 112~$\umu$m $\times$ 60~$\umu$m & 241~$\umu$m $\times$ 180~$\umu$m \\
Image Processor & CPU & GPU \\ \hline
Scanning Speed & \multirow{2}{*}{30} & \multirow{2}{*}{250} \\
(g/year/machine) & & \\ \hline
\end{tabular}
\label{tab:specification}
\end{table}
\section{Detector Calibration by Monochromatic Neutron}
\label{sec:calibration}
In this section, we describe the evaluation of detection performance using monochromatic sub-MeV neutrons generated from a fusion reaction at the National Institute of Advanced Industrial Science and Technology (AIST)~\cite{AIST}.
For the recoil protons detected by the Chain Tracking, the three-dimensional range $R$ [$\umu$m] and the scattering angle ${\theta}_{\rm Scat}$ are measured, and the correlation between the proton range and energy ${E}_{p}$ [MeV] in the NIT is approximated as it follows:
\begin{equation}
\label{eq:proton_energy}
{E}_{p} \approx 0.045 + 0.539 \times \sqrt{R} - 0.446 \times \sqrt[3]{R} \quad ({\rm MeV}).
\end{equation}
The neutron energy ${E}_{n}$ in elastic scattering with the proton can be derived from the following equation:
\begin{equation}
\label{eq:neutron_energy}
{E}_{n} = \frac{{(m_n + m_p)}^2}{4 m_n m_p} \frac{{E}_{p}}{{\rm cos}^{2}{\theta}_{\rm Scat}} \simeq \frac{{E}_{p}} {{\rm cos}^{2}{\theta}_{\rm Scat}},
\end{equation}
where we used the approximation $m_n \simeq m_p$, with $m_n$ and $m_p$ the neutron and proton masses, respectively.
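Equations~\ref{eq:proton_energy} and \ref{eq:neutron_energy} translate directly into code; for instance (a direct transcription, assuming ranges in micrometres and energies in MeV):

```python
import math

def proton_energy(range_um):
    """Proton energy in MeV from its 3D range in NIT, Eq. (1)."""
    return 0.045 + 0.539 * math.sqrt(range_um) - 0.446 * range_um ** (1.0 / 3.0)

def neutron_energy(E_p, cos_theta_scat):
    """Neutron energy from n-p elastic scattering, Eq. (2) with m_n ~ m_p."""
    return E_p / cos_theta_scat ** 2
```

For the minimum 2~$\umu$m track length accepted by the Chain Tracking, Eq.~\ref{eq:proton_energy} gives $E_p \approx 0.25$~MeV, the lower edge of the proton energy range probed in this work.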
In a previous work~\cite{Neutron}, we reported that the energy measurement through Eq.~\ref{eq:neutron_energy} showed an accuracy of $\Delta E_{n, {\rm FWHM}} / E_n = 0.42$ for 540 keV neutrons.
In this study, we have repeated the calibration with monochromatic sub-MeV neutrons to check the effect on the measurement accuracy induced by the upgrade of the optical microscope, and to verify the accuracy obtained through the automated measurement by the Chain Tracking algorithm. In addition, the NIT detector was kept at low temperature to suppress thermal noise and prevent the fading of the latent image during long-term measurements. Therefore, we have also prepared a new neutron exposure to check the sensitivity of NIT films to protons at $-$26~\degreeC.
We used the monochromatic neutrons produced from the T($p$, $n$)$^3$He reaction by bombarding a tritium-titanium layer, evaporated on a 0.5~mm~thick copper backing, with a 1.7 MeV proton beam from the 4 MV Pelletron accelerator at AIST~\cite{AIST}. Neutrons emitted in this reaction at an angle of 0\degree~have an energy $E_n = 880 \pm 20$ keV, and the total fluence after a 7.88 hour exposure was $(4.75 \pm 0.26) \times 10^7 ~n/{\rm cm}^2$ at the sample location (32 cm away from the neutron source), as measured by a BF$_3$ proportional counter. NIT films were placed with their surface parallel to the incoming neutrons. A portable cooling system using a Stirling cooler and a PID control system (see Appendix~\ref{sec:app_Cooling}) was used to keep the temperature stable at $-$26~\degreeC~during the exposure. Fig.~\ref{fig:AIST2019_setup} shows the setup used for the neutron exposure.
In order to evaluate the accuracy in the range measurement by the automatic Chain Tracking algorithm, a comparison track by track with manual measurements was performed, as shown in Fig.~\ref{fig:proton_range}. The automated measurement has an error of approximately 0.2~$\umu$m compared to the manual measurement, which turns into an uncertainty of approximately 20 keV for the proton energy, sufficient to explore the sub-MeV energy spectrum.
In order to evaluate the detection efficiencies, we have made a full simulation of the setup used for the 880 keV monochromatic neutron exposure. The simulation of the neutron propagation relies on Geant4 libraries: G4HadronElasticPhysicsHP and G4HadronPhysicsShielding for the neutron scattering model, and G4EmLivermore for the electromagnetic model. We have included in the simulation the description of all the surrounding materials close to the NIT sample, such as the sample mounting and the Stirling cooler. The neutron flux and its energy spectrum were simulated for each neutron emission angle, and the tracking pitch for recoil protons in the NIT was set at 0.1~$\umu$m.
In order to avoid the uncertainty associated with neutron attenuation by scattering, the comparison was restricted to the proximity of the neutron incident position on the NIT sample. The simulation was normalized to the data, accounting for the actual number of incoming neutrons during the exposure and for the analysed volume.
The number of detected recoil protons in the data was (6330~$\pm$~1280) events, in fair agreement with the predicted value of (5990~$\pm$~70) events. For the data we estimated a statistical error of 1.3\% and an overall systematic uncertainty of 20.3\%, with the following contributions: 17.1\% from the hydrogen content of the NIT, 6.5\% from the NIT density, 5.5\% from the neutron fluence, and 6.6\% from the shrinkage factor affecting the actual analysed volume.
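The overall systematic uncertainty quoted above corresponds to summing the individual contributions in quadrature, as appropriate for independent error sources; a minimal numerical check:

```python
import math

# Individual systematic contributions (percent), assumed independent
contributions = {
    "hydrogen content of the NIT": 17.1,
    "NIT density": 6.5,
    "neutron fluence": 5.5,
    "shrinkage factor": 6.6,
}

# Quadrature sum for independent uncertainties
total = math.sqrt(sum(v**2 for v in contributions.values()))
print(f"total systematic: {total:.1f}%")  # ~20.2%, matching the quoted 20.3% within rounding
```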
Fig.~\ref{fig:AIST_kinematics} shows a data/MC comparison of the measured kinematic variables: proton range ($R$), scattering angle ($\cos {\theta}_{\rm Scat}$), reconstructed neutron energy ($E_n$), and recoil-proton energy ($E_p$) in head-on collisions ($\cos {\theta}_{\rm Scat} > 0.98$). The distributions show very good agreement both in normalization and in shape. The small excess around $\cos {\theta}_{\rm Scat} = 1$ is likely due to scattering from materials close to the beamline that were not described in the simulation. Fig.~\ref{fig:AIST_kinematics}(c) reports the neutron energy reconstructed from the recoil-proton energy and the scattering angle, with a peak value of (864~$\pm$~46)~keV, consistent with the exposure energy. The energy resolution is $\Delta E_{n, {\rm FWHM}} / E_n = 0.31$ for 880 keV neutrons, comparable to the value of 0.42 measured in a previous calibration run with 540 keV neutrons~\cite{Neutron}.
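The reconstruction uses the elastic $n$--$p$ kinematics $E_p = E_n \cos^2{\theta}_{\rm Scat}$, which can be inverted track by track; a minimal sketch (the numerical values are illustrative, not measured tracks):

```python
import math

def reconstruct_neutron_energy(E_p_keV, cos_theta_scat):
    """Invert the n-p elastic kinematics E_p = E_n * cos^2(theta_scat)."""
    return E_p_keV / cos_theta_scat**2

# Near head-on collision (cos theta_scat > 0.98): E_n ~ E_p
print(reconstruct_neutron_energy(860.0, 1.0))  # 860.0 keV

# A proton scattered at 30 degrees carries cos^2(30 deg) = 0.75 of E_n
E_n = reconstruct_neutron_energy(660.0, math.cos(math.radians(30)))
print(round(E_n))  # 880
```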
Since most protons are scattered at small angles, the orientation of the NIT films adopted in the exposure resulted in a higher detection efficiency. However, as described in Section~\ref{subsec:tracking}, since the accuracy of the Z coordinate is worse than that of the other coordinates, the detection efficiency is expected to depend on the Z inclination (${\theta}_{\rm Z}$). This is particularly true for short-range tracks. The estimated angular dependence of the detection efficiency is reported in Fig.~\ref{fig:AIST_eff}, separately for tracks with ranges within (red) and above (blue) 4~$\umu$m.
In order to bring the mis-identification of dust events to a negligible level, the displacement between the start and end points of the track in the horizontal direction was required to be larger than 1~$\umu$m; this is referred to hereafter as the 2D range cut. Once this cut is applied, the detection efficiency decreases for short vertical tracks. The angular dependence of the detection efficiency in Fig.~\ref{fig:AIST_eff} was fitted with a sigmoid function (dash-dotted line).
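Such a sigmoid parameterization can be reproduced schematically with a standard least-squares fit; the functional form and the data points below are illustrative, not the measured efficiencies:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(cos_theta_z, eff_max, slope, midpoint):
    """Efficiency rising with cos(theta_Z) toward a plateau eff_max."""
    return eff_max / (1.0 + np.exp(-slope * (cos_theta_z - midpoint)))

# Synthetic efficiency points vs cos(theta_Z) -- illustrative only
x = np.linspace(0.0, 1.0, 11)
y = sigmoid(x, 0.8, 10.0, 0.3) + np.random.default_rng(0).normal(0, 0.01, x.size)

# Least-squares fit of the three sigmoid parameters
popt, pcov = curve_fit(sigmoid, x, y, p0=[1.0, 5.0, 0.5])
eff_max, slope, midpoint = popt
```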
\section{Neutron Measurement at the LNGS Surface laboratory}
We have conducted a run at the LNGS surface laboratory to measure environmental neutrons, given that $\gamma$-rays do not constitute a background in our analysis.
\subsection{Experimental Setup}
\label{subsec:setup}
NIT films were produced at the NEWSdm facility in Hall-F of the Gran Sasso underground laboratory. The NIT emulsion was produced in the facility, poured on a COP base, and dried for one day. The films were then dipped in a 0.0397~mol/L sodium sulfite solution for Halogen-Acceptor sensitization~\cite{HA} and dried for another day.
The samples prepared underground were transported to the surface laboratory and installed in a portable freezer box located outdoors, as shown in Fig.~\ref{fig:SurfaceRun_setup}. The thickness of the plastic containers was 4~mm for the outer container and approximately 2~cm for the portable freezer box.
Samples were installed for up to 29 days at a stable temperature of $-$20\degreeCs to suppress the fading effect described in Section~\ref{sec:calibration}.
Table~\ref{tab:setup} summarizes the two samples used in the measurement. Sample 1 was exposed for two days, while Sample 2 was kept for 29 days at the Gran Sasso surface laboratory. The preparation of both samples took two days and was carried out in the underground laboratory. Sample 1 serves as the reference for the initial level of radioactivity integrated in the sample. To extract the neutron rate, in the analysis we subtract the rate measured in Sample 1 from the one measured in Sample 2 and take 27 days as the exposure time.
\begin{table}
\caption{Details of the experimental setup.}
\begin{tabular}{|c||c|c|} \hline
& Sample 1 & Sample 2 \\ \hline \hline
Surrounding environment & \multicolumn{2}{c|}{Portable freezer box (outdoor)} \\ \hline
Altitude & \multicolumn{2}{c|}{1400 m} \\ \hline
Expected angle-integrated & \multicolumn{2}{c|}{} \\
flux of atmospheric & \multicolumn{2}{c|}{} \\
neutrons in $0.25 - 10$~MeV & \multicolumn{2}{c|}{$9.0 \times 10^{-3}$ cm$^{-2}$ s$^{-1}$} \\
(assuming a ground water & \multicolumn{2}{c|}{} \\
fraction of 20\%)~\cite{EXPACS,EXPACS_ver4.0} & \multicolumn{2}{c|}{} \\ \hline
Operation temperature & \multicolumn{2}{c|}{$-20$ \degreeC} \\ \hline
Run start date & \multicolumn{2}{c|}{24 Nov. 2021} \\ \hline
Preparation time in & \multirow{2}{*}{2} & \multirow{2}{*}{2} \\
underground (days) & & \\ \hline
Exposure time (days) & 2 & 29 \\ \hline
Installation direction & \multicolumn{2}{c|}{Horizontal} \\ \hline
Analyzed area (cm$^2$) & 46.7 & 99.4 \\ \hline
Analyzed mass (g) & 0.65 & 1.35 \\ \hline
\end{tabular}
\label{tab:setup}
\end{table}
\subsection{Event Selection}
\label{subsec:selection}
In order to select neutron-induced recoil-proton tracks, we require both the start and end points of the tracks to lie within an inner fiducial volume that excludes 10~$\umu$m from the top and 5~$\umu$m from the bottom of the emulsion. This cut rejects external $\alpha$-rays from $^{222}$Rn in the air and from $^{238}$U or $^{232}$Th radioactivity in the base materials. Events passing the fiducial volume cut are shown in Fig.~\ref{fig:classification} and are classified as Single-prong (a) or Multi-prong (b) events, according to the track multiplicity at the vertex.
The intrinsic radioactivity from the $^{238}$U and $^{232}$Th decay chains in the NIT was measured via $\gamma$-ray spectroscopy with a germanium detector~\cite{Activity} to be 6~mBq/kg for $^{228}$Th and 0.8~mBq/kg for $^{226}$Ra; most of the $\alpha$-rays produced show a multi-prong vertex. A typical example is the ``Th star''~\cite{Th_star}, with five $\alpha$-rays emitted in the decay chain from $^{228}$Th to $^{208}$Pb. Inelastic scattering of high-energy neutrons is also observed as Multi-prong events, with short-range recoil nuclei and spallation fragments.
In this study we focus on neutron elastic scattering, and only Single-prong events are retained for the analysis. However, $\alpha$-rays can produce a Single-prong event when there is a contamination from $^{214}$Po (7.687~MeV) or $^{210}$Po (5.304~MeV), whose track ranges in NIT are approximately 43~$\umu$m and 24~$\umu$m, respectively (see Appendix~\ref{sec:app_alpha}, \ref{sec:app_MeV}). Therefore, we set an upper limit on the track range of 14~$\umu$m, corresponding to a proton energy of 1~MeV, and analyze only recoil protons with ranges of $2 - 14$~$\umu$m ($0.25 - 1$~MeV in proton energy). The background in this region is therefore negligible. Fig.~\ref{fig:neutron_spectrum} shows the detectable neutron energy spectrum, mostly in the range between 0.25 and 10 MeV, which reflects the cuts applied in the proton range measurement.
In addition, the nitrogen contained in the NIT, with a mass fraction of (3.7~$\pm$~0.3)\%, also produces a small fraction of the signal, because the $^{14}$N($n$, $p$)$^{14}$C reaction emits protons with an energy of 0.58~MeV (6.5~$\umu$m in track range) when thermal or epithermal neutrons are captured by nitrogen~\cite{N_np_C}.
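The quoted range--energy pairs (2~$\umu$m $\leftrightarrow$ 0.25~MeV and 14~$\umu$m $\leftrightarrow$ 1~MeV) are well described by a simple Bragg--Kleeman-like power law $R \propto E^b$; the sketch below is our illustrative approximation, not the stopping-power tables actually used in the analysis:

```python
import math

# Anchor points quoted in the text: (proton energy in MeV, range in um)
E1, R1 = 0.25, 2.0
E2, R2 = 1.0, 14.0

# Power law R = a * E**b through the two anchors (Bragg-Kleeman-like)
b = math.log(R2 / R1) / math.log(E2 / E1)
a = R2 / E2**b

def proton_range_um(E_MeV):
    return a * E_MeV**b

# Cross-check: the 0.58 MeV proton from 14N(n,p)14C should fall near 6.5 um
print(round(proton_range_um(0.58), 1))  # 6.5
```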
\subsection{Result}
\label{subsec:result}
Fig.~\ref{fig:SurfaceRun_range_subMeV} shows the range distributions measured in Sample 1 and Sample 2. The number of detected events was (36~$\pm$~7)~events/g in Sample 1 and (336~$\pm$~16)~events/g in Sample 2, with the significant increase due to the exposure time, as expected. Given the negligible background, these events are essentially all protons produced by neutron scattering.
A MC simulation based on Geant4 was carried out to compare the measured neutron flux and energy spectrum with the expectation for cosmic-ray-induced atmospheric neutrons.
We considered the neutron spectrum at the LNGS surface laboratory predicted by the PARMA~\cite{PARMA} model, using the cosmic-ray spectrum calculation software EXPACS~\cite{EXPACS,EXPACS_ver4.0} published by a group of the Japan Atomic Energy Agency (JAEA).
The simulation accounts for the 4~mm thickness of the container and the 2~cm thickness of the portable freezer box. Neutrons were generated from outside the container, following the zenith-angle dependence predicted by the PARMA model.
Fig.~\ref{fig:SurfaceRun_result} shows the measured distributions of the recoil-proton energy ($E_p$), plane angle ($\phi$), and zenith angle ($\cos \theta_{\rm Zenith}$) for Sample 2, compared with the MC simulation.
The data of Sample 1 were subtracted from those of Sample 2 to obtain an equivalent exposure of 27 days. The number of events in the proton energy range between 0.25 and 1 MeV was (11.1~$\pm$~0.6(stat.)~$\pm$~2.4(sys.))~events/g/day in the data and (13.2~$\pm$~0.4)~events/g/day in the simulation. The number of detected events is consistent with the neutron flux predicted by the PARMA model, and the energy spectrum and directional distribution also show good agreement. Consequently, we measured a neutron flux of $(7.6 \pm 1.7) \times 10^{-3}~\rm{cm}^{-2} \rm{s}^{-1}$ in the proton energy range between 0.25 and 1 MeV (corresponding to a neutron energy range between 0.25 and 10 MeV).
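The quoted rate follows from subtracting the Sample 1 yield from the Sample 2 yield and dividing by the 27-day net exposure, with statistical errors added in quadrature; a minimal check:

```python
import math

# Events per gram measured in the two samples (value, statistical error)
sample1 = (36.0, 7.0)    # 2-day reference sample
sample2 = (336.0, 16.0)  # 29-day exposure
net_days = 27.0          # net exposure used in the analysis

# Background-subtracted rate and its statistical error (quadrature sum)
rate = (sample2[0] - sample1[0]) / net_days
stat = math.hypot(sample2[1], sample1[1]) / net_days
print(f"{rate:.1f} +/- {stat:.1f} events/g/day")  # 11.1 +/- 0.6 events/g/day
```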
\section{Prospects}
We plan to extend this measurement to higher energies. This requires improving the microscope scanning speed, given the lower flux, and reducing the background from $\alpha$-rays (see Appendix~\ref{sec:app_MeV}). The increased scanning speed will also make it possible to measure neutrons in the LNGS underground laboratory, where the flux is expected to be three orders of magnitude lower than on the surface. The high accuracy of the emulsion in topological analyses allows the analysed sample to be extended to events with multiple fragments, thus becoming sensitive also to inelastic neutron scattering, relevant at energies of a few hundred MeV.
Since this neutron detection technique exploits the hydrogen content of the emulsion, it also paves the way to searches for low-mass DM. Although DM masses below 1~GeV/c$^2$ are plausible within galaxy formation scenarios, they have remained unexplored due to technical difficulties. Recently, Cosmic-Ray boosted Dark Matter (CR-DM), i.e.~DM accelerated by collisions with protons and helium nuclei in the galaxy, was suggested as one way to investigate DM~\cite{BDM1}. CR-DM is a natural consequence of the DM interactions with nucleons foreseen by the standard WIMP model. It is predicted to have low mass and a speed higher than the escape velocity of the galaxy, and its arrival direction should point preferentially toward the galactic center because of the acceleration mechanism~\cite{BDM2}. By applying the neutron measurement technique with NIT, it is possible to search for low-mass dark matter such as CR-DM with directional sensitivity.
\section{Conclusion}
For the environmental neutron measurement, we first upgraded the sub-micrometric 3-dimensional tracking system and achieved an analysis speed of 250~g/year/machine. We then calibrated the performance of this system with a sample exposed to monochromatic 880~keV neutrons at a temperature of $-$26~\degreeCs and found very good agreement for all the kinematic variables relevant to neutron elastic scattering. The neutron energy, reconstructed from the recoil-proton energy and scattering angle, peaked at (864~$\pm$~46)~keV, with a resolution of $\Delta E_{n, {\rm FWHM}} / E_n = 0.31$ for the automated measurement.
We then performed the environmental neutron measurement at the LNGS surface laboratory. The neutron flux in the proton energy range between 0.25 and 1 MeV was measured to be $(7.6 \pm 1.7) \times 10^{-3} \rm{cm}^{-2} \rm{s}^{-1}$, in good agreement with the prediction of the PARMA model. The uncertainty of this measurement is dominated by the systematic error associated with the hydrogen content of the films, which should be measured with higher accuracy for future, more precise neutron measurements.
We intend to extend this measurement at the LNGS surface laboratory to the MeV region, by reducing the background contamination from $\alpha$-ray tracks introduced in the production process and by increasing the statistics. We also plan to perform neutron measurements in the LNGS underground laboratory.
\section*{Acknowledgment}
This work was supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI Grant Numbers JP18H03699, JP19H05806, and JP22J01541. This work was also carried out by the joint usage/research program of the Institute of Materials and Systems for Sustainability (IMaSS), Nagoya University.
This research was carried out in the frame of the STAR Plus Programme, financially supported by UniNA and Compagnia di San Paolo.
The monochromatic neutron source was supported by Dr. Tetsuro Matsumoto and Dr. Akihiko Masuda of the National Metrology Institute of Japan (NMIJ), the National Institute of Advanced Industrial Science and Technology (AIST).
\section*{Appendix}
\appendix
\section{Low Temperature Control System}
\label{sec:app_Cooling}
We developed a portable cooling system to perform neutron exposure experiments at low temperature. We used the SC-UD08 Stirling cooler manufactured by TWINBIRD CORPORATION, which has a cooling capacity at maximum output of approximately 15 W at $-$100 \degreeCs and 60 W at $-$20 \degreeCs, and whose output can be controlled by a $1 - 5$ V input voltage. To control the Stirling cooler, we also developed a temperature control system based on a SoC-FPGA (DE10-nano), as shown in Fig.~\ref{fig:cooling_system}(a).
First, a platinum resistance thermometer (P0K1.232.6W.B.007), which has good characteristics at low temperatures, converts the sample temperature to a resistance value, and a Wheatstone bridge circuit converts the resistance to a voltage. This voltage is digitized by the AD converter (LTC2308) upon a command from the CPU, and the data are written to a shared memory. The CPU then reads the shared memory to monitor the current temperature, determines the control voltage by the PID control described below, and sends it through the DA converter (MCP4921) to set the control voltage of the Stirling cooler.
In the PID control, the voltage applied to the Stirling cooler ($V_{\rm Control}$) is determined by the following equation from the target temperature $T_{\rm Target}$ and the temperature $T(t)$ at time $t$:
\begin{equation}
\label{eq:pid_control}
V_{\rm Control} = K_{p} \Delta T + K_{i} \int_{0}^{t} \Delta T dt + K_{d} \frac{dT(t)}{dt},
\end{equation}
where $\Delta T = T(t) - T_{\rm Target}$, and the coefficients $K_{p}$, $K_{i}$, and $K_{d}$ were obtained with the ultimate sensitivity method~\cite{PID} as 0.18, $7.8 \times 10^{-4}$, and 10, respectively. The PID feedback loop is executed at 5-second intervals.
Fig.~\ref{fig:cooling_system}(b) shows the actual temperature profile of the NIT sample during the 880 keV monochromatic neutron exposure at AIST.
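A discrete-time sketch of Eq.~(\ref{eq:pid_control}) with the quoted gains is given below; the class structure and the clamping to the $1 - 5$ V control range are our illustration, while the actual controller runs on the SoC-FPGA as described above:

```python
class StirlingPID:
    """Discrete PID controller for the Stirling cooler control voltage.

    Gains from the ultimate sensitivity tuning quoted in the text;
    dt is the 5 s feedback interval.
    """

    def __init__(self, target_degC, kp=0.18, ki=7.8e-4, kd=10.0, dt=5.0):
        self.target = target_degC
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_T = None

    def step(self, T_degC):
        """Return the control voltage for the current temperature reading."""
        dT = T_degC - self.target
        self.integral += dT * self.dt
        deriv = 0.0 if self.prev_T is None else (T_degC - self.prev_T) / self.dt
        self.prev_T = T_degC
        v = self.kp * dT + self.ki * self.integral + self.kd * deriv
        # The cooler accepts a 1-5 V control input; clamp accordingly
        return min(5.0, max(1.0, v))

# Example: hold the sample at -26 degC; a reading of -20 degC yields ~1.1 V
pid = StirlingPID(-26.0)
v = pid.step(-20.0)
```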
\section{Accuracy of $\alpha$-ray energy measurement}
\label{sec:app_alpha}
To identify the $\alpha$-ray sources, an energy calibration was performed using the ``Th star'' events found in Sample 2 of the LNGS run. Five-prong events can be easily identified as Th stars: owing to the long lifetime of $^{228}$Th, its decay chain down to $^{208}$Pb proceeds during the run. These tracks can also be used to calibrate the correlation between range and decay energy, because it is straightforward to determine which track corresponds to which decay. The correlation between measured range and decay energy is shown in Fig.~\ref{fig:alpha_calib} for decays fully contained within the emulsion. The following formula can be used to estimate the $\alpha$-ray energy ($E_{\alpha}$) in the range between 4 and 10 MeV.
\begin{equation}
\label{eq:alpha_energy}
{E}_{\alpha} \approx -2.111 + 1.511 \times \sqrt{R} \quad ({\rm MeV}).
\end{equation}
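Here $R$ is the track range in $\umu$m (our reading of the fit, consistent with the ranges quoted elsewhere in the text). A quick numerical cross-check against the $^{214}$Po and $^{210}$Po track ranges quoted in Section~\ref{subsec:selection} (illustrative script, not part of the analysis):

```python
import math

def alpha_energy_MeV(range_um):
    """Range-to-energy calibration from the Th-star fit (valid 4-10 MeV)."""
    return -2.111 + 1.511 * math.sqrt(range_um)

# Track ranges quoted in the text for the two polonium isotopes
print(round(alpha_energy_MeV(43.0), 2))  # 7.8  (214Po: 7.687 MeV)
print(round(alpha_energy_MeV(24.0), 2))  # 5.29 (210Po: 5.304 MeV)
```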
\section{MeV Region}
\label{sec:app_MeV}
As described in Section~\ref{subsec:selection}, this study focuses on the sub-MeV proton energy region to avoid the $\alpha$-ray background in the MeV region. The range distribution in the MeV region is shown in Fig.~\ref{fig:SurfaceRun_range_MeV}.
As a demonstration run to estimate the amount of background, we performed similar measurements inside a building at Nagoya University in Japan, where the neutron flux is lower than at LNGS, with a 31-day exposure. In the demo run, a peak at (24.3~$\pm$~0.4)~$\umu$m, corresponding to an $\alpha$-ray energy of (5.30~$\pm$~0.08)~MeV, was observed and identified as $\alpha$-rays from $^{210}$Po, which has a relatively long half-life. In principle, the MeV region can be analyzed by excluding this range window.
However, in the LNGS run reported in Fig.~\ref{fig:SurfaceRun_range_MeV}, we observe additional time-independent contributions in the range between 30 and 65~$\umu$m. Some of these tracks show a low brightness of their reconstructed grains. Fig.~\ref{fig:SurfaceRun_alpha} shows the $\alpha$-ray energy distribution separated into two categories: normal (a) and low (b) brightness.
In the normal-brightness track sample, $\alpha$-ray tracks from $^{214}$Po (a daughter nucleus of $^{222}$Rn) were observed around (7.81~$\pm$~0.12)~MeV.
In the low-brightness track sample, the distribution can be explained by assuming that the tracks originate from $^{214}$Po contamination during the drying process.
Indeed, during drying the AgBr:I crystals are more dispersed because of the higher water content, which results in a lower sensitivity to $\alpha$-ray tracks and hence in a lower brightness. The tracks are also longer because of the lower mass density of the film at this stage. After drying, NIT films shrink in the Z direction as the water evaporates, which broadens the range distribution.
We have already established a NIT production method with less $^{214}$Po contamination at the LNGS underground laboratory, which will allow neutron measurements with a reduced MeV-region background in the future.
|
Title:
CORINOS I: JWST/MIRI Spectroscopy and Imaging of a Class 0 protostar IRAS 15398-3359 |
Abstract: The origin of complex organic molecules (COMs) in young Class 0 protostars
has been one of the major questions in astrochemistry and star formation. While
COMs are thought to form on icy dust grains via gas-grain chemistry,
observational constraints on their formation pathways have been limited to
gas-phase detection. Sensitive mid-infrared spectroscopy with JWST enables
unprecedented investigation of COM formation by measuring their ice absorption
features. We present an overview of JWST/MIRI MRS spectroscopy and imaging of a
young Class 0 protostar, IRAS 15398-3359, and identify several major
solid-state absorption features in the 4.9-28 $\mu$m wavelength range. These
can be attributed to common ice species, such as H$_2$O, CH$_3$OH, NH$_3$, and
CH$_4$, and may have contributions from more complex organic species, such as
C$_2$H$_5$OH and CH$_3$CHO. The MRS spectra show many weaker emission lines at
6-8 $\mu$m, which are due to warm CO gas and water vapor, possibly from a young
embedded disk previously unseen. Finally, we detect emission lines from [Fe
II], [Ne II], [S I], and H$_2$, tracing a bipolar jet and outflow cavities.
MIRI imaging serendipitously covers the south-western (blue-shifted) outflow
lobe of IRAS 15398-3359, showing four shell-like structures similar to the
outflows traced by molecular emission at sub-mm wavelengths. This overview
analysis highlights the vast potential of JWST/MIRI observations and previews
scientific discoveries in the coming years.
| https://export.arxiv.org/pdf/2208.10673 |
\title{CORINOS I: JWST/MIRI Spectroscopy and Imaging of a Class 0 protostar \source}
\author[0000-0001-8227-2816]{Yao-Lun Yang}
\affiliation{RIKEN Cluster for Pioneering Research, Wako-shi, Saitama, 351-0198, Japan}
\affiliation{Department of Astronomy, University of Virginia, Charlottesville, VA 22904, USA}
\author[0000-0003-1665-5709]{Joel D. Green}
\affiliation{Space Telescope Science Institute, Baltimore, 3700 San Martin Dr., MD 21218, USA}
\author[0000-0001-7552-1562]{Klaus M. Pontoppidan}
\affiliation{Space Telescope Science Institute, Baltimore, 3700 San Martin Dr., MD 21218, USA}
\author{Jennifer B. Bergner}
\affiliation{University of Chicago Department of the Geophysical Sciences, Chicago, IL 60637, USA}
\altaffiliation{NASA Sagan Fellow}
\author[0000-0003-2076-8001]{L. Ilsedore Cleeves}
\affiliation{Department of Astronomy, University of Virginia, Charlottesville, VA 22904, USA}
\author[0000-0001-5175-1777]{Neal J. Evans II}
\affiliation{Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA}
\author[0000-0001-7723-8955]{Robin T. Garrod}
\affiliation{Departments of Chemistry and Astronomy, University of Virginia, Charlottesville, VA, 22904, USA}
\author[0000-0002-4801-436X]{Mihwa Jin}
\affiliation{Astrochemistry Laboratory, Code 691, NASA Goddard Space Flight Center, Greenbelt, MD 20771}
\affiliation{Department of Physics, Catholic University of America, Washington, DC 20064, USA}
\author{Chul Hwan Kim}
\affiliation{Department of Physics and Astronomy, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea}
\author{Jaeyeong Kim}
\affiliation{Korea Astronomy and Space Science Institute, 776 Daedeok-daero, Yuseong-gu Daejeon 34055, Republic of Korea}
\author{Jeong-Eun Lee}
\affiliation{Department of Physics and Astronomy, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea}
\author[0000-0002-3297-4497]{Nami Sakai}
\affiliation{RIKEN Cluster for Pioneering Research, Wako-shi, Saitama, 351-0198, Japan}
\author[0000-0002-5171-7568]{Christopher N. Shingledecker}
\affiliation{Department of Physics and Astronomy, Benedictine College, Atchison, KS, 66002, USA}
\author{Brielle Shope}
\affiliation{Department of Chemistry, University of Virginia, 409 McCormick Rd, Charlottesville, VA, 22904, USA}
\author[0000-0002-6195-0152]{John J. Tobin}
\affiliation{National Radio Astronomy Observatory, 520 Edgemont Rd., Charlottesville, VA 22903, USA}
\author[0000-0001-7591-1907]{Ewine F. van Dishoeck}
\affiliation{Leiden Observatory, Leiden University, Netherlands}
\affiliation{Max Planck Institute for Extraterrestrial Physics, Garching, Germany}
\correspondingauthor{Yao-Lun Yang}
\email{[email protected]}
\section{Introduction}
In recent years, complex organic molecules (COMs), first detected in high-mass cores \citep{1985ApJS...58..341S,1986ApJS...60..357B,1987ApJ...315..621B}, have been routinely detected in the gas phase in low-mass protostellar cores, suggesting extensive chemical evolution at the early stage of low-mass star formation \citep[e.g.,][]{1995ApJ...447..760V,ceccarelli2007extreme,2020ARAA..58..727J,2022arXiv220613270C}.
These low-mass cores are often called ``hot corinos'' \citep{2003ApJ...593L..51C,2004ASPC..323..195C,2004ApJ...615..354B}.
The COMs, commonly defined as organic molecules with six or more atoms \citep{2009ARAA..47..427H}, could be the precursors of pre-biotic molecules \citep[e.g.,][]{2020AsBio..20.1048J}. Solar system objects, such as comets, also show abundant COMs \citep{2019ARAA..57..113A}; and in some cases, the COM abundances match those measured in protostellar cores, hinting at a chemical connection from protostars to planetary systems \citep{2000AA...353.1101B,2019MNRAS.490...50D,2019ESC.....3.2659B}. Thus, the origin of the rich organic chemistry in the protostellar stage is of great interest in characterizing the chemical environment of planet-forming disks.
Current models predict that a combination of gas-phase and ice-phase processes (i.e., `gas-grain chemistry') is responsible for COM formation in protostellar environments \citep[e.g.,][]{2008ApJ...682..283G,2014ApJ...791....1T,2018ApJ...869..165L,2018MNRAS.474.2796Q,2018ApJ...854..116S,2020ApJ...897..110A}. These models generally require a warm-up phase during which the elevated temperature enables efficient reactions via diffusion. In addition to the formation of COMs in the ice phase, gas-phase reactions following sublimation of simpler ice molecules may contribute to the production of several COMs \citep{2015MNRAS.449L..16B,2018ApJ...854..135S,2020MNRAS.499.5547V}. Laboratory experiments show that COMs can also be formed on icy surfaces even at low temperature \citep{2017ApJ...842...52F,2016MNRAS.455.1702C,2019ApJ...874..115B,2019ESC.....3..986Q}. Extended distributions of COMs in cold prestellar cores further suggest ongoing formation of COMs in the ice-phase \citep{2016ApJ...830L...6J,2017ApJ...842...33V,2020ApJ...891...73S,2022ApJ...927..213P}. To reconcile the presence of COMs at low temperature, a modified gas-grain chemical model that includes non-diffusive reactions at low temperature has been proposed \citep{2020ApJS..249...26J,2022ApJS..259....1G}.
Recent surveys show that gas-phase COM emission is common, but not ubiquitous, around Class 0/I protostars, with detection fractions around half \citep{2019MNRAS.483.1850B,2019ESC.....3.1564B,2020AA...635A.198B,2020AA...639A..87V,2021ApJ...910...20Y,2021AA...650A.150N,2022ApJ...929...10B,2022ApJ...927..218H}. It remains unknown why some sources show rich emission of gas-phase organics and others do not. It may be a true chemical effect, with some sources having low ice-phase COM reservoirs due to their environmental/evolutionary conditions. Another possibility is that COMs are only efficiently sublimated into the gas phase in a subset of sources.
Disk shadowing can effectively lower the temperature in the envelope, leading to inefficient desorption and thus low abundance of gaseous COMs, hence non-detection \citep{2022AA...663A..58N}.
Moreover, high dust optical depth could suppress the COM emission at sub-mm wavelengths \citep{2020ApJ...896L...3D,2022AA...663A..58N}.
Disentangling these scenarios requires an understanding of COM abundances in the ice phase. Therefore, mid-infrared spectroscopy of organic ice features offers an avenue to understand the origin and nature of complex molecule formation in protostars.
Outflows are ubiquitously associated with protostellar cores. The clearance of an outflow cavity and the accretion activity tightly related to outflows regulate the thermal structure of the envelope as well as the photochemistry along the cavity wall, thus affecting the abundance of COMs in both the gas and ice phases \citep[e.g.,][]{2012AA...537A..55V,2014MNRAS.445..913D,2015MNRAS.451.3836D}. At mid-infrared wavelengths, rotationally excited H$_2$ lines and ionic forbidden lines trace the shocked gas and jets in outflow cavities \citep[e.g.,][]{2010AA...519A...3L}. Furthermore, ro-vibrational CO lines and water vapor emission at $\sim$4--6 \micron\ highlight the shocked gas at the base of outflows and/or at the disk surface, constraining the physical conditions of outflows and disks \citep[e.g.,][]{2011AA...533A.112H,2022AJ....164..136S}.
The CORINOS (COMs ORigin Investigated by the Next-generation Observatory in Space) program measures the ice composition of four isolated Class 0 protostars with JWST (program 2151, PI: Y.-L. Yang). The program aims to determine the abundances of ice species with radiative transfer and chemical modeling to constrain the formation and evolution of COMs. The full sample consists of two protostars whose gas-phase spectra are known to exhibit rich COM features, B335 and L483, and two protostars with little emission of gas-phase COMs, \source\ and Ser-emb 7 \citep{2009ApJ...697..769S,2016ApJ...830L..37I,2017ApJ...837..174O,2019ESC.....3.1564B,2019AA...629A..29J}. Each pair represents low- ($\sim1$ \lsun) and high-luminosity ($\sim10$ \lsun) protostars. This work presents initial results from the first observation of \source.
In this paper, we present JWST/MIRI observations of \source, highlighting several new mid-infrared ice features, likely associated with COMs, as well as emission lines and outflows detected in both spectroscopy and imaging. In Section\,\ref{sec:observations}, we describe our JWST/MIRI observing program and data reduction. In Section\,\ref{sec:ice}, we show the extracted 1D MRS spectra and identify absorption features in the spectra along with possible contributing ice species. Section\,\ref{sec:water_vapor} presents the detection of warm water vapor and CO emission, which may originate in a young protoplanetary disk. Section\,\ref{sec:outflows} shows the south-western outflow of \source\ in MIRI imaging and presents detected emission lines, most of which trace the outflows and jets. Lastly, in Section\,\ref{sec:conclusions}, we highlight the findings with this first analysis of JWST/MIRI spectra of a Class 0 protostar.
\subsection{\source}
\source\ (also known as B228) is a Class 0 protostar located in the Lupus I Molecular Cloud \citep{1989PASP..101..816H,2007ApJ...667..288C} at a distance of 154.9$^{+3.2}_{-3.4}$ pc \citep{2020AA...643A.148G}. It has a bolometric luminosity (\lbol) of 1.5 \lsun\ and a bolometric temperature (\tbol) of 68$\pm$27 K \citep{2018ApJ...860..174Y,2021AA...648A..41V}.
\source\ has drawn astrochemical interest because of its abundant warm carbon-chain molecules (CCMs), which suggest an active Warm Carbon-Chain Chemistry \citep[WCCC;][]{2009ApJ...697..769S} and chemical signatures of episodic accretion \citep[e.g.,][]{2013ApJ...779L..22J}. In the WCCC scenario, abundant CH$_4$ ice, which may form in the prestellar stage, is sublimated as the temperature increases due to accretion heating, leading to an elevated abundance of carbon carriers available for the formation of CCMs \citep{2008ApJ...672..371S,2008ApJ...674..984A}. High UV illumination at the prestellar stage may also explain abundant carbon-chain molecules in protostars \citep{2016AA...592L..11S}. On the other hand, only a few emission lines of complex organic molecules (COMs) have been detected despite its rich CCMs (Okoda et al. in prep.). The location of the envelope water snowline, inferred from \hcop\ as well as from the detection of HDO, is larger than expected from the current luminosity of \source\ \citep{2013ApJ...779L..22J,2016AA...595A..39B}, suggesting a higher luminosity in the last 100--1000 years, perhaps due to an accretion burst. Moreover, the ice features of \source\ were studied in the Spitzer ``c2d'' (Cores to Disks) survey, where common species, such as \water, CO$_2$, CH$_4$, and \methanol, were identified \citep{2008ApJ...678..985B,2008ApJ...678.1005P,2008ApJ...678.1032O,2010ApJ...718.1100B}.
\source\ is associated with a compact disk, although its properties are poorly constrained by observations. \citet{2017ApJ...834..178Y} estimated a centrifugal radius ($R_\text{c} = \frac{j^2}{GM_\star}$, where $j$ is the specific angular momentum) of 20$^{+50}_{-20}$ au by fitting the C$^{18}$O emission. With a similar method, \citet{2018ApJ...864L..25O} found that a centrifugal barrier ($R_\text{cb} = \frac{j^2}{2GM_\star}$) at 40 au can explain the kinematics of the SO emission, which corresponds to a centrifugal radius of 80 au. The disk radii estimated in the two studies are consistent within the considerable uncertainty due to the unresolved Keplerian rotation. The two studies also estimated very low protostellar masses of only $\leq$0.01$^{+0.02}_{-0}$ \msun\ \citep{2017ApJ...834..178Y} and 0.007$^{+0.004}_{-0.003}$ \msun\ \citep{2018ApJ...864L..25O}.
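For reference, the two quantities defined above differ only by a factor of two ($R_\text{c} = 2R_\text{cb}$); a small sketch relating the quoted numbers (physical constants and unit handling are ours, values from the text):

```python
# Relation between centrifugal radius and centrifugal barrier:
#   R_c = j^2 / (G M)  and  R_cb = j^2 / (2 G M), hence R_c = 2 R_cb.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

M_star = 0.007 * M_SUN  # protostellar mass quoted from Okoda et al. (2018)
R_cb = 40 * AU          # centrifugal barrier from the SO kinematics

# Specific angular momentum implied by the barrier
j = (2 * G * M_star * R_cb) ** 0.5

# Recover the centrifugal radius: should be 2 * R_cb = 80 au
R_c = j**2 / (G * M_star)
print(round(R_c / AU, 6))  # 80.0
```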
The bipolar outflow of \source\ has a young dynamical age of $\sim 1000$\,yr, as measured from the CO outflow \citep{2015AA...576A.109Y,2016AA...587A.145B}. The outflow consists of a wide-angle wind-driven outflow and jet-driven bow-shocks \citep{2016AA...587A.145B,2017ApJ...834..178Y}. \citet{2020ApJ...900...40O} showed compact emission of H$_2$CO in the outflow, identified with a Principal Component Analysis, suggesting a shock-induced origin. \citet{2021AA...648A..41V} further showed evidence of a precessing episodic jet-driven outflow with four ejections separated by 50--80 years. Recently, \citet{2021ApJ...910...11O} found an arc-like structure perpendicular to the known outflow, which they interpreted as shocked gas due to a previously launched secondary outflow.
\section{Observations}
\label{sec:observations}
The protostar \source\ was observed with the Mid-InfraRed Instrument \citep[MIRI;][]{2015PASP..127..584R,2015PASP..127..595W} onboard JWST on 2022 July 20, as part of program 2151 (PI: Y.-L. Yang). The observations used the Medium Resolution Spectroscopy (MRS) mode, which is equipped with four Integral Field Units (IFUs) that observe the target simultaneously using dichroics. These IFUs are often referred to as ``channels'', where channels 1, 2, 3, and 4 cover 4.9--7.65, 7.51--11.71, 11.55--18.02, and 17.71--28.1 \micron, respectively. Each channel is covered by the same three grating settings, which are also called ``sub-bands''. Thus, an exposure with only one grating setting results in four discontinuous spectra, and a full 4.9--28 \micron\ coverage requires observations with all three grating settings, resulting in twelve spectral segments. The spectroscopic data were taken in the SLOWR1 readout mode with a standard 4-point dither pattern.
\source\ was observed with a pointing center on ($15^{\mathrm{h}}43^{\mathrm{m}}02.24^{\mathrm{s}}$, $-34^\circ{09}^\prime{06.7}^{\prime\prime}$) based on the sub-mm continuum peak from \citet{2014ApJ...795..152O} along with a dedicated background pointing centered on ($15^{\mathrm{h}}43^{\mathrm{m}}07.9^{\mathrm{s}}$, $-34^\circ{09}^\prime{01}^{\prime\prime}$). Recent Atacama Large Millimeter/submillimeter Array (ALMA) observations suggest a sub-mm continuum peak at ($15^{\mathrm{h}}43^{\mathrm{m}}02.2307^{\mathrm{s}}$, $-34^\circ{09}^\prime{06.99}^{\prime\prime}$) using the ALMA Band 6 observations taken on 2022 May 16 (2021.1.00357.S; PI: S. Notsu). The integration time is 1433.4 seconds for the SHORT(A) and LONG(C) sub-bands and 3631.3 seconds for the MEDIUM(B) sub-band. The MEDIUM(B) sub-band covers the 8.67--10.15 \micron\ range where the intensity is the lowest due to strong absorption of silicates. Thus, we intentionally integrated longer with the MEDIUM(B) setting to achieve a sufficient signal-to-noise ratio (S/N) to characterize the ice features around the silicate feature.
The data were processed from the Stage 1 data files (\texttt{uncal}) using v1.7.2 of the JWST pipeline and CRDS context (\texttt{jwst\_0977.pmap}) from \texttt{https://jwst-crds-pub.stsci.edu/}. The dedicated background exposures were subtracted on the exposure level during Stage 2 of the pipeline. The Stage 3 process includes \texttt{OutlierDetectionStep}, \texttt{ResidualFringeStep}, and \texttt{CubeBuildStep}. The \texttt{ResidualFringeStep} task is included to correct for residual fringes that are not fully corrected by the application of a fringe flat, particularly in extracted point source spectra. The fringe is suppressed in most sub-bands except for noticeable residuals in \texttt{ch3-long} around 10--12 \micron. The wavelength calibration is generally accurate to within $\sim$1 spectral resolution element \citep[$\sim$100 \kms;][]{2022arXiv220705632R}.
The protostar appears point-like in the MRS spectral cube. Thus, we extracted a 1D spectrum with an aperture ($R_{\rm ap}$) defined by the diffraction-limited beam size ($1.22\lambda/D$), so that the aperture increases with wavelength. The aperture is centered on the ALMA continuum peak ($15^{\mathrm{h}}43^{\mathrm{m}}02.2307^{\mathrm{s}}$, $-34^\circ{09}^\prime{06.99}^{\prime\prime}$). We tested the spectral extraction with additional local background subtraction derived from an annulus outside the aperture; however, the resulting spectra appear noisier, possibly because the extended outflow cavity complicates the determination of the true background. Thus, we performed no additional background subtraction on the reduced spectral cubes. Despite its point-like appearance, the source emission extends beyond the size of the diffraction-limited beam, so a 1D spectrum extracted with a small aperture has inconsistent flux between several sub-bands due to the flux extending beyond the aperture. Appendix\,\ref{sec:extraction} presents a detailed analysis of the spectra extracted with different apertures. We find that a four-beam aperture provides a good balance between inter-band flux consistency and noise. All spectra shown in this study are extracted with a four-beam aperture, $4\times1.22\lambda/D$, unless otherwise specified. We further matched the flux between channels by the ratio of median fluxes in the overlapping wavelengths, applying scale factors of order $\lesssim$16\%, starting from the shortest wavelength; the scaled spectrum differs from the original by at most 16\%.
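The wavelength-dependent aperture described above can be sketched in a few lines (JWST's 6.5 m primary aperture is assumed for $D$; the printed radii are illustrative, not the exact pipeline values):

```python
# Sketch of the extraction aperture: one beam = 1.22 * lambda / D,
# and the adopted aperture is four beams, growing linearly with wavelength.
import numpy as np

D_JWST = 6.5             # JWST primary mirror diameter [m] (assumed)
RAD_TO_ARCSEC = 206265.0

def aperture_radius_arcsec(wavelength_um, n_beams=4):
    """Radius of an n-beam diffraction-limited aperture at lambda [micron]."""
    lam_m = np.asarray(wavelength_um) * 1e-6
    return n_beams * 1.22 * lam_m / D_JWST * RAD_TO_ARCSEC

for lam in (5.0, 10.0, 28.0):
    print(lam, aperture_radius_arcsec(lam))  # aperture radius in arcsec
```

At 10 \micron\ a single beam is roughly 0.4\arcsec, so the four-beam aperture is about 1.5\arcsec\ there, consistent with a compact source that is nonetheless slightly extended.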
To estimate the RMS in the extracted 1D spectrum, we subtracted a Gaussian-smoothed baseline and calculated the RMS in the residual with respect to the smoothed baseline, which has a median of 0.8\%\ with a 1$\sigma$ range from 0.4--1.3\%. The Gaussian width is chosen as 20 wavelength channels to approximate the baseline without noise and avoid smoothing out broad absorption features. The RMS may be underestimated between 10 and 12 \micron, where the fringe residuals are not fully suppressed.
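The RMS estimate above amounts to dividing out a Gaussian-smoothed baseline and taking the standard deviation of the fractional residual. A minimal numpy-only sketch, applied to a synthetic spectrum with a known 1\%\ noise level (the spectrum and noise level here are illustrative, not MIRI data):

```python
# Estimate the fractional RMS of a spectrum by subtracting a baseline
# smoothed with a Gaussian of width 20 channels, as described in the text.
import numpy as np

def fractional_rms(flux, sigma_ch=20):
    half = 4 * sigma_ch
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_ch) ** 2)
    kernel /= kernel.sum()
    baseline = np.convolve(flux, kernel, mode="same")
    resid = (flux - baseline) / baseline
    return np.std(resid[half:-half])  # drop edges where the kernel is truncated

rng = np.random.default_rng(42)
n = 4000
continuum = 1.0 + 0.5 * np.linspace(0, 1, n) ** 2        # smooth rising baseline
flux = continuum * (1.0 + 0.01 * rng.standard_normal(n))  # 1% injected noise
print(fractional_rms(flux))  # close to the injected 1%
```

The smoothing width must be wide enough to leave the noise in the residual but narrow enough to follow the continuum; broad absorption bands would need to be masked in a real application.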
Simultaneous MIRI imaging was enabled along with the primary spectroscopic observations for astrometric registration. The simultaneous field is pointed off the MRS target, but the background observation happened to be arranged such that it covered the south-western outflow lobe of \source. The imaging fields were observed with FASTR1 readout pattern, in the F560W, F770W, and F1000W filters, with filter widths of 1.2, 2.2, and 2.0 \micron, respectively. The point spread function (PSF) full width at half maximum (FWHM) in these bands was measured to 0\farcs22, 0\farcs25, and 0\farcs32, respectively. The total exposure time was 1433.4, 1433.4, and 3631.3 seconds, the same as their spectroscopic counterparts. The Stage 3 products were generated by the standard pipeline obtained from the Barbara A. Mikulski Archive for Space Telescopes (MAST); the data were calibrated with \texttt{jwst\_0932.pmap} from \texttt{https://jwst-crds.stsci.edu/} without further re-processing. The RMS noise estimated from the standard deviation in an empty sky region is 2.3, 7.6, and 18.4 MJy sr$^{-1}$, respectively.
\section{Ice bands in the point source spectrum}
\label{sec:ice}
The extracted MIRI MRS spectrum shows strongly increasing flux density with wavelength along with several absorption features, which is typical for embedded protostars (Figure\,\ref{fig:1d_spec}, top). All of the identified absorption features are due to ices and silicates. We estimate the large-scale continuum by fitting a fourth-order polynomial using the 5.05--5.15, 5.3--5.4, and 5.52--5.62 \micron\ range of the MRS spectrum and the 35--38 \micron\ range of the scaled Spitzer/IRS spectrum (see Figure\,\ref{fig:irs_comp}, right). Ideally the spectrum at the longest wavelengths, which is less affected by silicate and \water\ absorption, would be included for the continuum fitting. However, the long wavelength end ($> 27.5$ \micron) of the MRS spectrum has higher noise and a steeper slope compared to the spectrum at 16--27 \micron; thus, we consider the $>27.5$ \micron\ spectrum as less reliably calibrated compared to the rest of the spectrum due to the rapid drop in MRS sensitivity at its longest wavelengths. Including the Spitzer/IRS spectrum allows us to perform the continuum fitting at longer wavelengths ($> 30$ \micron). The fitted continuum is consistent with the long wavelength end of the MIRI MRS spectrum. Nonetheless, this fit has substantial systematic uncertainty depending on various factors, such as the choice of assumed absorption-free ranges and the functional form of the continuum.
The qualitative analysis presented here serves to identify potential carriers of the ice bands, rather than to derive precise ice abundances.
Figure \ref{fig:1d_spec} (bottom) shows the optical depth spectrum, derived as $\tau= -{\rm ln}(F/C)$, where $F$ is the flux density and $C$ is the fitted continuum. We clearly detect the silicate band centered at 10\,$\mu$m, as well as the bending and libration modes of \water\ ice at 6 and 11--13\,$\mu$m, respectively. We also securely detect \methanol\ via the strong band at 9.7\,$\mu$m, supported by substructure at 6.8\,$\mu$m, CH$_4$ at 7.7\,$\mu$m, and CO$_2$ via its bending mode at 15.2\,$\mu$m. In addition, we highlight notable absorption features due to minor species that still have ambiguous identifications. The features and qualitative descriptions of their shapes are listed in Table\,\ref{tbl:abs_features}, where tentative identifications are marked with asterisks. In the following paragraphs, we discuss the individual features.
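The continuum fit and the optical-depth conversion $\tau = -\ln(F/C)$ can be illustrated end to end on a synthetic spectrum. The polynomial continuum, the Gaussian absorption band, and the exact fitting windows below are illustrative stand-ins, not the values fitted to the data:

```python
# Toy continuum fit + optical depth: fit a 4th-order polynomial to nearly
# absorption-free windows, then compute tau = -ln(F / C_fit).
import numpy as np

wav = np.arange(5.0, 28.0, 0.01)                            # wavelength [micron]
continuum_true = 0.5 + 0.02 * (wav - 5.0) ** 2              # synthetic continuum
tau_true = 1.5 * np.exp(-0.5 * ((wav - 10.0) / 0.3) ** 2)   # injected band
flux = continuum_true * np.exp(-tau_true)

# Windows chosen in the spirit of the ranges quoted in the text (illustrative).
windows = [(5.05, 5.15), (5.3, 5.4), (5.52, 5.62), (25.0, 28.0)]
mask = np.zeros_like(wav, dtype=bool)
for lo, hi in windows:
    mask |= (wav >= lo) & (wav <= hi)

coeffs = np.polyfit(wav[mask], flux[mask], deg=4)
continuum_fit = np.polyval(coeffs, wav)

tau = -np.log(flux / continuum_fit)   # optical depth spectrum
print(tau.max())  # recovers the injected peak depth of ~1.5
```

In this noiseless toy the fit recovers the continuum essentially exactly; with real data, the choice of windows and functional form dominates the systematic uncertainty, as noted above.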
\subsection{Individual Features}
\label{sec:features}
\begin{deluxetable}{ccc}
\tablecaption{Notable ice features}
\label{tbl:abs_features}
\tablehead{
\colhead{Wavelength} & \colhead{Type} & \colhead{Identification} \\
($\mu$m) & &
}
\startdata
5.83 & single & HCOOH*, H$_2$CO* \\
6 & multiple & H$_2$O, NH$_3$* \\
6.7 & single & H$_2$CO \\
6.8 & multiple & \methanol, NH$_4^+$* \\
7.24 & single & HCOOH, \ethanol* \\
7.41 & single & HCOO$^-$*, CH$_3$CHO* \\
7.7 & single & CH$_4$, SO$_2$*, \ethanol* \\
9 & single & NH$_3$, \methanol*, \ethanol* \\
9.7 & single & \methanol\ \\
11 & single/broad & \water, \ethanol*, \\
& &\acetaldehyde*, \methylformate* \\
15.2 & multiple & CO$_2$ \\
\enddata
\tablenotetext{*}{Potential/ambiguous identification}
\end{deluxetable}
\subsubsection{5.83 \micron\ feature: HCOOH*\footnote[0]{*Potential/ambiguous identification} and H$_2$CO*}
\label{sec:5.83}
This feature is likely due to the C=O stretching mode of HCOOH \citep{marechal1987ir,2007AA...470..749B} and/or H$_2$CO \citep{1993Icar..104..118S}. The feature is seen in the MIRI spectrum as a blue shoulder on the broad ($\sim$0.5 \micron) feature of the \water\ bending mode in the 5.8--6.3 \micron\ region \citep{1996AA...315L.333S}.
\citet{2008ApJ...678..985B} measured the abundance of HCOOH as 1.9\%\ relative to \water\ using the 7.25 \micron\ feature of HCOOH, which we also detect (Section\,\ref{sec:7.24}).
Even if the identification of HCOOH is independently confirmed, both species could contribute to this C=O stretching mode at 5.8 \micron. In fact, \citet{2008ApJ...678..985B} showed that H$_2$CO can contribute no more than 10\%--35\%\ of this feature based on the non-detection of its absorption features at 3.34, 3.47, and 3.54 \micron\ in L-band spectra of other sources.
\subsubsection{6 \micron\ feature: \water\ and NH$_3$*}
\label{sec:6}
The \water\ bending mode peaks at 6 \micron, dominating this feature \citep[e.g.,][]{2001AA...376..254K}. The N--H deformation mode of NH$_3$ at 6.16 \micron, whose umbrella mode at 9 \micron\ is detected (Section\,\ref{sec:9}), also contributes to this broad feature \citep{2008ApJ...678..985B}.
While the 6 \micron\ feature is detected in all low-mass protostars, the absorption from \water\ and NH$_3$ often underestimates the depth of this feature, suggesting additional contributions from unidentified species.
\subsubsection{6.7 \micron\ feature: H$_2$CO}
\label{sec:6.7}
We detect a shallow inflection on the blue side of the 6.8\,$\mu$m band (Section\,\ref{sec:6.8}). \citet{1993Icar..104..118S} reported that the C--H bending mode of H$_2$CO occurs at 6.68 \micron. In the c2d survey, \citet{2008ApJ...678..985B} put an upper limit of 15\%\ contribution from this bending mode to the absorption feature centered on 6.85 \micron. In our MIRI spectrum, the optical depth of this feature is $\sim0.05$ with a local baseline fitting (Figure\,\ref{fig:6-7}) and the overall optical depth of the entire 6.8\,$\mu$m band is $\sim1.5$, consistent with the suggested upper limit.
\subsubsection{6.8 \micron\ feature: \methanol\ and NH$_4^+$*}
\label{sec:6.8}
This feature is ubiquitous in icy sightlines toward protostars and in the dense interstellar medium, and \source\ is no exception. Its position and shape are broadly consistent with the C--H bending mode of \methanol\ \citep{2008ApJ...678..985B}. \citet{2003AA...398.1049S} proposed that NH$_4^+$ could be a significant contributor; however, the identification of NH$_4^+$ based on the 6.8\,$\mu$m band alone remains debated, while \methanol\ can be confirmed given the observation of the corresponding C--O stretching mode at 9.75 \micron\ in \source\ (Section\,\ref{sec:9.7}).
\subsubsection{7.24 \micron\ feature: HCOOH and \ethanol*}
\label{sec:7.24}
This feature was tentatively detected in \source\ among a few other low-mass protostars, as well as in high-mass protostars \citep{2008ApJ...678..985B,1999AA...343..966S}, but the low S/N of the optical depth spectra prohibited a robust carrier identification. We clearly detect the band at a high level of significance (Figure\,\ref{fig:ice_7-8}). This feature could be associated with the CH$_3$ symmetric deformation mode of \ethanol\ \citep{2011ApJ...740..109O,2018AA...611A..35T} and/or the C--H/O--H deformation mode of HCOOH \citep{1999AA...343..966S,2007AA...470..749B}. The band strength of the HCOOH 7.24 \micron\ feature is $\sim$25 times weaker than that of its 5.83 \micron\ feature \citep{2007AA...470..749B}. In contrast, we estimate $\tau_{5.8\,\mu m}/\tau_{7.24\,\mu m}\sim 1.4$. While there is considerable uncertainty in the fitted baseline and in the \water\ absorption at 5.8 \micron, this discrepancy suggests that other species, such as \ethanol, also contribute to the observed feature (Table\,\ref{tbl:com_ice_lab}).
\subsubsection{7.41 \micron\ feature: HCOO$^-$* and CH$_3$CHO*}
\label{sec:7.41}
This feature was tentatively seen in Spitzer/IRS spectra, but is clearly detected in the MIRI spectrum at high confidence. This feature may be due to the C=O stretching mode of HCOO$^-$ \citep{1999AA...343..966S} and/or the CH$_3$ symmetric deformation with the C--H wagging mode of CH$_3$CHO \citep{2011ApJ...740..109O,2018AA...611A..35T}. HCOO$^-$ has another C=O stretching mode at 6.33 \micron, where the observed spectrum has a slight bending feature at $\sim$6.31 \micron. CH$_3$CHO, on the other hand, has a feature at 7.427 \micron, located at a slightly longer wavelength than the observed feature. However, the peak position could move to 7.408 \micron\ depending on the ice mixture of CH$_3$CHO \citep{2018AA...611A..35T}. Thus, both species are potential contributors to this feature.
\subsubsection{7.7 \micron\ feature: CH$_4$}
\label{sec:7.7}
This is a common feature attributed to the CH$_4$ deformation mode \citep{2008ApJ...678..985B}. The peak optical depth of the CH$_4$ feature is $\sim$0.6, while \citet{2008ApJ...678.1032O} measured a peak optical depth of 0.22$\pm$0.03 using Spitzer data. The lower optical depth in the Spitzer measurement may be due to its much lower spectral resolving power ($R\sim 100$; $\Delta\lambda\sim 0.08\,$\micron), which under-resolves the narrow absorption feature (FWHM $\sim 0.07\,$\micron). The higher spatial resolution of the MRS data may also yield a higher CH$_4$ optical depth if the absorption varies spatially.
SO$_2$ ice has a feature at 7.63 \micron\ with a width of $\sim0.15$ \micron\ \citep{1997AA...317..929B}. We cannot distinctively identify the contribution of SO$_2$ because of potential contribution from organic species, such as \ethanol\ (Table\,\ref{tbl:com_ice_lab}).
\subsubsection{9 \micron\ feature: NH$_3$}
\label{sec:9}
Both the CH$_3$ rocking mode of \methanol\ at 8.87 \micron\ and the umbrella mode of NH$_3$ at 9.01 \micron\ are likely to contribute to this feature (Figure\,\ref{fig:ice_8-11}). The former feature is narrower (FWHM=0.24 \micron) than the latter (FWHM=0.58 \micron).
\citet{2010ApJ...718.1100B} showed that the peak position of the NH$_3$ umbrella mode could shift toward shorter wavelengths when mixed with H$_2$O and/or \methanol. \ethanol\ has its CH$_3$ rocking mode at 9.17 \micron\ and C--O stretching mode at 9.51 \micron. However, both features are very narrow (FWHM$\sim$0.1--0.2 \micron), and are not clearly visible in the MIRI spectra.
\subsubsection{9.7 \micron\ feature: \methanol}
\label{sec:9.7}
This feature is commonly attributed to the C--O stretching mode of \methanol\ at 9.74 \micron. While the peak and width of the observed feature matches the expected \methanol\ absorption feature, there is slightly more absorption at the shorter wavelength side of the feature, hinting at contribution from other species, such as NH$_3$ and \ethanol\ (Section\,\ref{sec:composite}). A model of the silicate band, taking into account grain composition and size distribution, is required to accurately extract the profiles of the ice bands in this region, which is beyond the scope of this overview paper.
\subsubsection{11 \micron\ feature: \water\ libration}
\label{sec:11}
This feature is very broad, spanning 10--13 \micron, consistent with the well-known \water\ libration mode, which can extend to 30 \micron. \citet{2000ApJ...544L..75B} reported a narrower, weak absorption feature at 11.2 \micron, interpreted as polycyclic aromatic hydrocarbon (PAH) mixtures. Crystalline silicates, especially forsterite, also have absorption features around $\sim$11\,\micron\ \citep{2005ApJ...622..404K,2016MNRAS.457.1593W,2020MNRAS.493.4463D}. Finally, \citet{2021AA...651A..95T} showed that \ethanol, \acetaldehyde, and \methylformate\ could produce absorption at similar wavelengths. Figure\,\ref{fig:ice_11-12} shows the presence of an unambiguous 11.2\,\micron\ feature in the MIRI spectrum. Determining the carrier of this feature would require additional modeling.
\subsubsection{15.2 \micron\ CO$_2$}
\label{sec:15.2}
This ubiquitous feature is due to the bending mode of CO$_2$ (Figure\,\ref{fig:co2}). The double peaks are a distinctive signature of crystalline, usually relatively pure, CO$_2$ ice \citep{1997AA...328..649E}.
There are two broader features at 15.1 and 15.3 \micron, corresponding to the apolar CO$_2$:CO mixture and the polar CO$_2$:\water\ mixture, respectively. The shoulder extending toward longer wavelengths is due to CO$_2$ mixed with \methanol. \citet{2008ApJ...678.1005P} detected the double-peaked CO$_2$ with Spitzer in the same source; however, the strength of those peaks was weaker than the MRS spectra indicate. The significantly improved spectral resolution may lead to stronger peaks, but constraining the origin of such change, such as a temporal variation, requires further modeling.
Pure CO$_2$ ice forms only in regions with elevated temperatures, either at $\sim50-80$ K via the thermal annealing process \citep{1999ApJ...522..357G,2013PNAS..11012899E,2018ApJ...869...41H} or at $\sim20-30$ K via the distillation of a CO$_2$:CO mixture \citep{2008ApJ...678.1005P}. \citet{2011ApJ...729...84K} suggest that the detection of pure CO$_2$ in low-luminosity protostars could be indicative of previous episodic accretion. In fact, \citet{2013ApJ...779L..22J} found a ring-like (inner radius of 150--200 au) structure of H$^{13}$CO$^+$ emission with ALMA, suggesting that water vapor is present on small scales, destroying H$^{13}$CO$^+$ \citep{1992ApJ...399..533P}. The origin of this water vapor could be an accretion burst that occurred 100--1000 years ago, increasing the luminosity by a factor of 100, making such a burst a viable explanation for the CO$_2$ double peak. In the distillation scenario, both a warm disk and the inner envelope could provide suitable environments; however, a well-defined Keplerian disk has not yet been detected in \source.
\subsection{Composite ice spectra}
\label{sec:composite}
The unprecedented S/N combined with the sub-arcsec spatial resolution allows a multi-component ice spectral comparison with laboratory data across the entire range of MIRI coverage (4.9--28 \micron). As discussed in Section\,\ref{sec:features}, many absorption features are likely to have several contributing ice species, and only the strongest features could be robustly identified by previous studies. The highly sensitive MIRI MRS spectrum enables a comprehensive approach of comparing composite optical depth spectra including multiple ice species. Figure\,\ref{fig:tau_lab_comp} shows a simple composite synthetic spectrum of several ice species discussed in Section\,\ref{sec:features}. We also include the spectrum of GCS 3, representing the silicate dust \citep{2004ApJ...609..826K}. The optical depth spectrum of each ice species and mixture is scaled to match the observations. While we do not aim to fit the observed optical depth spectra, we can already see wavelength regions where the laboratory ice spectra reproduce the observations in this toy model, such as $\sim$10 \micron\ and $\sim$15 \micron. This simple model underestimates the absorption in the 5--9 \micron\ and 11--12 \micron\ regions, calling for detailed ice modeling in future studies. This experiment demonstrates the vast potential of JWST/MIRI spectroscopy for studies of interstellar ices.
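Scaling laboratory optical-depth templates to an observed spectrum, as done for the composite model above, is in essence a linear least-squares problem. A minimal sketch, in which synthetic Gaussian bands stand in for real laboratory spectra and the scale factors are known by construction:

```python
# Recover per-species scale factors by least squares: tau_obs ~ templates @ s.
# The two "laboratory" templates here are hypothetical Gaussian stand-ins.
import numpy as np

wav = np.linspace(5.0, 28.0, 2000)

def band(center, width):
    return np.exp(-0.5 * ((wav - center) / width) ** 2)

templates = np.column_stack([band(6.0, 0.4), band(15.2, 0.2)])

# Mock observed optical-depth spectrum built from known scale factors.
true_scales = np.array([0.8, 1.5])
tau_obs = templates @ true_scales

# Least-squares scale factors that best reproduce the observation.
scales, *_ = np.linalg.lstsq(templates, tau_obs, rcond=None)
print(scales)  # recovers [0.8, 1.5]
```

With real data the templates overlap heavily (e.g., in the 5--8 \micron\ region), so the fit becomes degenerate and physically motivated constraints or non-negativity are needed, which is part of why the text defers quantitative fitting to future work.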
\section{Warm Water Vapor and CO Gas as a Signpost of the Embedded Disk}
\label{sec:water_vapor}
JWST provides spatial resolution similar to that achieved by ALMA, allowing us to search for signatures of the embedded disk suggested by ALMA observations \citep{2017ApJ...834..178Y,2018ApJ...864L..25O}. Warm water and CO gas in the $M$ band (4.7--5 \micron) are common tracers of the inner disk in Class I and II sources \citep{2003AA...408..981P,2022AJ....163..174B}, but they have rarely been detected in Class 0 sources like \source. In Figure\,\ref{fig:h2o_co}, we compare the baseline-subtracted 4.9--7.3 $\mu$m region of the \source\ spectrum with a simple slab model of warm water vapor ($\sim200-300$ K) and CO fundamental ($\nu=1-0$ and $2-1$) ro-vibrational lines at a higher temperature \citep{2011ApJ...743..112S,2020zndo...4037306S}. The synthetic spectra are multiplied by the continuum to account for variable extinction on these emission lines, an approach that fits the data better. The molecular data are taken from HITRAN \citep{GORDON2022107949}. The water lines appear prominently from 5.8--7.3 $\mu$m, while the CO P branch ($\Delta J = -1$) appears at the shortest MIRI wavelengths (4.9--5.3 $\mu$m). Although these models are not adapted to this source, it is clear from inspection that the region contains a large number of compact emission lines.
The agreement between model and observation is considerable. We can state with confidence that the majority of this emission comes from a compact region of the source, and is attributable to warm water vapor, which is likely excited in the previously undetected embedded disk region, within the inner 0.2$\arcsec$, and/or the shocked gas in the inner envelope. The specific model fits and constraints on the spatial extent of the emission are left to a future work.
\section{Outflows and Jets}
\label{sec:outflows}
\subsection{MIRI Imaging}
\label{sec:imaging}
The parallel imaging of our background pointing serendipitously covered the blue-shifted outflow of \source. Figure\,\ref{fig:miri_image} shows the MIRI images of the blue-shifted outflow in three filters. The F560W image contains both the continuum and the H$_2$ S(7) line; the F770W image includes the continuum and the H$_2$ S(4) line; and the F1000W image consists of the continuum and the H$_2$ S(3) line. These images unveil exquisite details in the outflow, showing at least four shell-like structures. The outermost shell appears similar to a terminal bow-shock. The opening angle of each shell decreases with the distance from the protostar. ALMA observations of outflow tracers, such as CO, H$_2$CO, and CS, show similar shell-like variations \citep{2016AA...587A.145B,2020ApJ...900...40O,2021ApJ...910...11O}, which \citet{2021AA...648A..41V} interpreted as precessing episodic outflows driven by a jet. Compared to archival IRAC images taken on 2004 September 3, the terminal shock knot moved by $1.8\arcsec$ along the outflow, as measured from the centroids of 2D Gaussian profiles fitted to the blob in the IRAC 3 image and in the MIRI F560W image convolved to the IRAC 3 resolution of 1.88\arcsec\ (Figure\,\ref{fig:miri_image}). Considering the outflow length of $\sim17\arcsec$ measured in our MIRI images, the dynamical time of the blue-shifted outflow is, thus, $\sim$170 years, suggesting an extremely recent ejection. \citet{2021AA...648A..41V} also identified four ejections separated by 50--80 years. Interestingly, the mid-IR outflow has almost the same morphology as the molecular outflow observed at sub-mm wavelengths.
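The dynamical-time estimate above follows from simple proper-motion arithmetic, sketched below with the numbers quoted in the text (the $\sim$17.9 yr baseline assumes the 2004 September IRAC epoch and the 2022 July MIRI epoch expressed as decimal years):

```python
# Outflow dynamical time from the knot's proper motion, using the quoted
# displacement, epoch baseline, and outflow length (values from the text).
knot_shift_arcsec = 1.8            # IRAC (2004) -> MIRI (2022) displacement
baseline_yr = 2022.55 - 2004.67    # ~17.9 yr between the two epochs (assumed)
outflow_length_arcsec = 17.0

proper_motion = knot_shift_arcsec / baseline_yr     # arcsec per year
t_dyn = outflow_length_arcsec / proper_motion       # years
print(t_dyn)  # ~170 yr
```

This assumes the knot's proper motion has been constant since ejection; any deceleration at the terminal shock would make the true ejection somewhat older.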
\subsection{Spectral Line Emission}
\label{sec:emission}
\begin{deluxetable}{ccc}
\tablecaption{Detected Emission Lines}
\label{tbl:emission_lines}
\tablehead{
\colhead{Wavelength} & \colhead{Species} & \colhead{Transition} \\
($\mu$m) & &
}
\startdata
5.053 & H$_2$ & 0--0 S(8) \\
5.340 & [Fe\,\textsc{ii}] & $^4F_{9/2}$--$^6D_{9/2}$ \\
5.511 & H$_2$ & 0--0 S(7) \\
5.811 & H$_2$ & 1--1 S(7) \\
6.109 & H$_2$ & 0--0 S(6) \\
6.636 & [Ni\,\textsc{ii}] & $^2D_{3/2}$--$^2D_{5/2}$ \\
6.910 & H$_2$ & 0--0 S(5) \\
6.985 & [Ar\,\textsc{ii}] & $^2P_{1/2}$--$^2P_{3/2}$ \\
8.025 & H$_2$ & 0--0 S(4) \\
9.665 & H$_2$ & 0--0 S(3) \\
12.279 & H$_2$ & 0--0 S(2) \\
12.814 & [Ne\,\textsc{ii}] & $^2P^0_{1/2}$--$^2P^0_{3/2}$ \\
17.035 & H$_2$ & 0--0 S(1) \\
17.936 & [Fe\,\textsc{ii}] & $^4F_{7/2}$--$^4F_{9/2}$ \\
24.519 & [Fe\,\textsc{ii}] & $^4F_{5/2}$--$^4F_{7/2}$ \\
25.249 & [S\,\textsc{i}] & $^3P_1$--$^3P_2$ \\
25.988 & [Fe\,\textsc{ii}] & $^6D_{7/2}$--$^6D_{9/2}$ \\
\enddata
\end{deluxetable}
We also identified several emission lines in the MRS spectra besides the CO and \water\ lines. We extracted a 1D spectrum at ($15^{\mathrm{h}}43^{\mathrm{m}}02.16^{\mathrm{s}}$ $-34^\circ09{}^\prime07.99{}^{\prime\prime}$), which is ($-$1\arcsec, $-$1\arcsec) from the sub-mm continuum peak, with an aperture of 1\arcsec\ to better probe the emission due to outflow activity (Figure\,\ref{fig:emission_lines}). Most lines appear stronger at the outflow position than in the spectrum toward the protostar, except for the [Ni\,\textsc{ii}] line at 6.636 \micron. Veiling due to scattered light and extinction from the envelope are not considered in this simple extraction, which aims to present a qualitative view of the detected emission lines. As noted in Table \ref{tbl:emission_lines}, most of the strong line emission is identified with either H$_2$ pure rotational lines or ionized/neutral fine-structure lines of Fe, Ne, and S.
Previously, using Spitzer IRS spectra, \citet{2010AA...519A...3L} detected the H$_2$ S(1) and S(4) lines, the [Fe\,\textsc{ii}] lines at 17.9 and 26.0 \micron, and the [Si\,\textsc{ii}] 35 \micron\ line in \source, the last of which is not covered by MIRI. All of these lines are spatially extended in a bipolar pattern along the NW-SE axis. There is tentative evidence of other weaker emission from the species listed in Table\,\ref{tbl:emission_lines}. We defer a comprehensive analysis of the emission lines to a future paper.
Figure\,\ref{fig:emission_maps} shows the continuum-subtracted intensity maps of several representative ionic and molecular lines. The molecular lines, such as H$_2$, show a broad opening angle morphology and appear to highlight the walls of the shocked cavity. They also show sub-structures mostly within the south-western (blue-shifted) outflow cavity. The ionic lines, such as [Fe\,\textsc{ii}] and [Ne\,\textsc{ii}], likely represent hotter regions and are tightly collimated into a jet within the cavity region. In most cases, the ionic lines are spectrally resolved across a few channels, corresponding to a velocity range of $\pm$200 \kms. The ionic lines are generally associated with outflows and connected to accretion processes in the central protostar \citep{2016ApJ...828...52W}.
\section{Conclusions}
\label{sec:conclusions}
It is clear from these first observations of \source\ that JWST MIRI will transform our understanding of protostellar ice chemistry, as well as ice chemistry in all environments. We present detections of previously identified ice species and provide evidence for the possible presence of organic ice species. We also show gaseous emission of warm water and CO, which is often found in warm disks. Other detected emission lines, including H$_2$, [Fe\,\textsc{ii}], [Ne\,\textsc{ii}], and [S\,\textsc{i}], appear extended along the outflow direction, tracing a wide-angle outflow cavity and a collimated jet. The MIRI imaging serendipitously captured the south-western outflow of \source, providing an exquisite view of the outflow structure in the infrared.
The main conclusions of this first analysis of the JWST/MIRI observations of \source\ are summarized below.
\begin{itemize}
\item A MIRI MRS spectrum of a Class 0 protostar, \source, is reported for the first time. The protostar appears as a point source over the full wavelength range at 5--28 \micron.
\item The MRS data show rich ice absorption features. Particularly, the ice features between 5 and 8 \micron\ are detected with high S/N, allowing us to search for organic ice species. We robustly identify ice species including \water, CO$_2$, CH$_4$, NH$_3$, \methanol, H$_2$CO, and HCOOH. Furthermore, we detect ice absorption features that could imply the presence of NH$_4^+$, HCOO$^-$, \ethanol, \acetaldehyde, and \methylformate. The CH$_4$ and pure CO$_2$ ice features appear stronger in the MIRI MRS spectra compared to previous Spitzer studies. Significantly improved spectral resolution could result in deeper absorption, providing accurate constraints on the ice compositions. Stronger absorption could also imply variability in ice column densities.
\item The spectra between 5 and 8 \micron\ have many weaker emission lines. The continuum-subtracted spectra present similar features to those from the synthetic spectra of warm water vapor and CO gas. These emission lines only appear toward the protostar, hinting at warm water vapor and CO gas on small scales possibly on the disk surface.
\item The MIRI imaging captures the blue-shifted outflow of \source, showing multiple shell-like structures consistent with the molecular outflows seen at sub-mm wavelengths. The infrared outflow has a similar length to the sub-mm outflow. The proper motion of the compact shock knot indicates a dynamical time of $\sim$170 years for that ejection.
\item Multiple emission lines are detected in the MRS spectra, including [Fe\,\textsc{ii}], [Ne\,\textsc{ii}], [S\,\textsc{i}], and H$_2$. The H$_2$ S(8) line is detected for the first time in a young protostar.
\item The [Fe\,\textsc{ii}] and [Ne\,\textsc{ii}] emission show a collimated bipolar jet-like structure along the known outflow direction. The emission also highlights a bright knot $\sim$2.5\arcsec\ away from the protostar toward southwest. The emission of H$_2$ appears more extended, tracing a wide-angle outflow cavity.
\end{itemize}
These JWST/MIRI observations of \source\ reveal striking details of solid-state features, providing the observational constraints for extensive searches for new ice species and detailed modeling of their abundances. The characterization of gas-phase COMs has progressed significantly in the last decade, in large part due to the maturation of sub-mm interferometry (e.g., ALMA and the NOrthern Extended Millimeter Array). Conversely, observational constraints on ice-phase COMs have so far come mostly from observations with ISO/SWS and Spitzer/IRS, with limited spectral and spatial resolving power and sensitivity. Absorption features of rare organic ice species in low-mass protostars have low contrast and therefore require very high S/N and accurate spectro-photometric calibration to detect. The absorption features between 7 and 8 $\mu$m were previously detected only in high-mass YSOs (e.g., W33A) with ISO-SWS, and similar features were only marginally detected with Spitzer in low-mass protostars. Consequently, the composition of organic ices around low-mass protostars has only been weakly constrained until now. With the advent of JWST and the MIRI spectrograph, the present observations definitively demonstrate that we can now detect and constrain mid-IR COM ice feature strengths at high precision and provide much stronger guidance to models of gas-grain chemistry.
\clearpage
\acknowledgements
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with JWST GO Cycle 1 program ID 2151. Y.-L. Yang acknowledges support from the Virginia Initiative of Cosmic Origins Postdoctoral Fellowship. Y.-L. Yang and N. Sakai acknowledge support from a Grant-in-Aid from the Ministry of Education, Culture, Sports, Science, and Technology of Japan (20H05845, 20H05844), and a pioneering project in RIKEN (Evolution of Matter in the Universe). L.I.C. acknowledges support from the David and Lucille Packard Foundation, Johnson \& Johnson WISTEM2D, and NASA ATP 80NSSC20K0529. Y.-L. Yang thanks J. Terwisscha van Scheltinga for laboratory ice spectra, Y. Okoda for useful discussion on the ALMA observations of the presented source, and S. Zeng and R. Nakatani for the motivation to explore the MIRI imaging products. L. I. Cleeves, R. T. Garrod, B. Shope, J. B. Bergner, C. N. Shingledecker, K. M. Pontoppidan, and J. D. Green acknowledge support from NASA/STScI GO grant JWST-GO-02151. J.-E. Lee and C.-H. Kim were supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (grant number 2021R1A2C1011718). EvD is supported by EU A-ERC grant 101019751 MOLDISK and by the Danish National Research Foundation (grant agreement no. DNRF150, ``InterCat''). This research benefited from discussions held with the international team \#461 ``Provenances of our Solar System's Relics'' (team leaders Maria N. Drozdovskaya and Cyrielle Opitom) at the International Space Science Institute, Bern, Switzerland.
This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources \citep{larry_bradley_2022_6825092}. This research made use of APLpy, an open-source plotting package for Python \citep{aplpy2012,aplpy2019}.
\facilities{JWST, Spitzer, IRSA}
The JWST data used in this paper can be found in MAST: \dataset[10.17909/wv1n-rf97]{\doi{10.17909/wv1n-rf97}}.
\software{astropy v5.1 \citep{2013AA...558A..33A,2018AJ....156..123A}, jwst \citep{2019ASPC..523..543B}, photutils v1.5.0 \citep{larry_bradley_2022_6825092}, aplpy v2.1.0 \citep{aplpy2019}}
\appendix
\section{Characteristics of the Extraction Apertures}
\label{sec:extraction}
The protostar appears point-like in the MRS spectral cube, showing the Airy pattern most noticeably at the longer wavelengths. Therefore, to extract 1D spectra, we define the aperture in units of the diffraction-limited beam, resulting in an aperture size that increases with wavelength. Because the source is not a perfect point source, we expect that a 1D spectrum extracted with a small aperture will miss flux if the emission is more extended due to scattering; the actual beam size may also be larger than the diffraction-limited beam size because of detector scattering at shorter wavelengths. On the other hand, a larger aperture may start to add noise to the 1D spectrum.
The extracted 1D spectra with different aperture sizes demonstrate the aforementioned effects (Figure\,\ref{fig:extraction}, top). The 4-beam aperture extraction results in a good balance between missing flux and noise, which is adopted in this study for extracting the 1D spectrum. The spectrum extracted with a 4-beam aperture with the median scaling between sub-bands (see Section\,\ref{sec:observations}) differs from the un-scaled spectrum by up to 16\% (Figure\,\ref{fig:extraction}, bottom).
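As a schematic sketch of this procedure (not the actual pipeline code; the grid size, pixel scale, beam scaling, and telescope diameter below are placeholder assumptions), a wavelength-dependent circular-aperture extraction can be written as:

```python
import numpy as np

def diffraction_fwhm_arcsec(wave_um, diameter_m=6.5):
    # Approximate diffraction-limited FWHM ~ 1.22 lambda / D, in arcsec.
    return np.degrees(1.22 * wave_um * 1e-6 / diameter_m) * 3600.0

def extract_1d(cube, waves_um, pix_scale_arcsec=0.13, n_beam=4.0):
    # Sum the flux in a circular aperture whose radius is n_beam times the
    # diffraction-limited beam FWHM at each wavelength channel.
    nz, ny, nx = cube.shape
    yy, xx = np.mgrid[:ny, :nx]
    r_pix = np.hypot(yy - (ny - 1) / 2, xx - (nx - 1) / 2)
    spec = np.empty(nz)
    for k in range(nz):
        r_ap = n_beam * diffraction_fwhm_arcsec(waves_um[k]) / pix_scale_arcsec
        spec[k] = cube[k][r_pix <= r_ap].sum()
    return spec

waves = np.linspace(5.0, 28.0, 10)   # MIRI/MRS-like wavelength grid (microns)
cube = np.ones((10, 101, 101))       # flat synthetic cube: flux tracks aperture area
spec = extract_1d(cube, waves)
assert np.all(np.diff(spec) >= 0)    # larger apertures at longer wavelengths
```

On a real MRS cube one would replace the synthetic array with the pipeline data cube and, as described above, trade off missing flux (small apertures) against added noise (large apertures).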
\section{Comparison between JWST/MIRI MRS Spectra and Spitzer/IRS Spectra}
\label{sec:irs_comp}
To check the accuracy of our overall calibration, we compared the MIRI spectra with Spitzer/IRAC aperture photometry, both extracted with a 3\arcsec\ aperture (Figure\,\ref{fig:irs_comp}). Appropriate aperture corrections were applied to the IRAC aperture photometry \citep[Table 4.8 in][]{IRAC_handbook}. After convolving the MRS spectra with the IRAC 4 filter, the spectro-photometric flux at 8 \micron\ agrees with the IRAC 4 flux. The MRS spectra have limited wavelength coverage that prevents a similar comparison at 5.8 \micron. Figure\,\ref{fig:irs_comp} (right) shows the MRS 1D spectra extracted from the protostar compared with the scaled Spitzer/IRS Long-Low (LL1) spectra. The IRS LL1 spectrum matches the long wavelength part of the MRS spectra, making the $\lambda > 30$ \micron\ portion of the IRS spectra suitable for baseline fitting.
Figure\,\ref{fig:irs_comp_ice} shows the absorption features in both the MIRI MRS spectrum and the Spitzer/IRS spectra. All features are deeper and much better resolved in the MRS spectra. The apparent shifts in the CO$_2$ feature (15.2 \micron) may be due to the uncertainty in the wavelength solution (Section\,\ref{sec:observations}).
\section{Laboratory Data}
\label{sec:lab_data}
Several laboratory absorbance spectra are taken from the Leiden Ice Database for Astrochemistry (LIDA; \citealt{2022arXiv220812211R}) along with others that are collected from individual studies. Table\,\ref{tbl:lab_ref} shows the references of ice species included in the composite synthetic ice spectra (Section\,\ref{sec:composite}). Table\,\ref{tbl:com_ice_lab} lists the absorption features of organic ice species used for the discussion in Section\,\ref{sec:ice}.
\begin{deluxetable}{ccc}
\centering
\tablecaption{References of laboratory spectra}
\label{tbl:lab_ref}
\tablehead{\colhead{Species} & \colhead{Temperature (K)} & \colhead{References}}
\startdata
GCS 3\tablenotemark{a} & \nodata\ & \citet{2004ApJ...609..826K} \\
H$_2$O & 15 & \citet{2007AA...462.1187O} \\
H$_2$O+CH$_3$OH+CO$_2$+CH$_4$ (0.6:0.7:1.0:0.1) & 10 & \citet{1999AA...350..240E} \\
H$_2$O+HCOOH (1:1) & 15 & \citet{2007AA...470..749B} \\
CH$_3$OH & 15 & \citet{2018AA...611A..35T} \\
CO$_2$ & 15 & \citet{2006AA...451..723V} \\
CH$_3$CHO & 15 & \citet{2018AA...611A..35T} \\
CH$_3$CH$_2$OH & 15 & \citet{2018AA...611A..35T} \\
NH$_3$ & 10 & \citet{2003AA...399..169T} \\
H$_2$CO & 10 & \citet{1996AA...312..289G} \\
\enddata
\tablenotetext{a}{The GCS 3 spectra are taken from the ice library of ENIIGMA \citep{2021AA...654A.158R}.}
\end{deluxetable}
\begin{deluxetable}{llll}
\centering
\tablecaption{Complex organic ice features measured in laboratory}
\label{tbl:com_ice_lab}
\tablehead{\colhead{Species} & \colhead{Mode} & \colhead{Peak position} & \colhead{Reference} \\
\colhead{} & \colhead{} & \colhead{(\micron)} & \colhead{} }
\startdata
\multirow{4}{1in}{Acetaldehyde (CH$_3$CHO)} & CH$_3$ rock. + CC stretch. + CCO bend. & 8.909 & \multirow{4}{*}{\citet{2018AA...611A..35T}} \\
& CH$_3$ sym-deform. + CH wag. & 7.427 & \\
& CH$_3$ deform. & 6.995 & \\
& C=O stretch. & 5.803 & \\
\hline
\multirow{6}{1in}{Ethanol (CH$_3$CH$_2$OH)} & CC stretch. & 11.36 & \multirow{6}{*}{\citet{2018AA...611A..35T}} \\
& CO stretch. & 9.514 & \\
& CH$_3$ rock. & 9.170 & \\
& CH$_2$ torsion. & 7.842 & \\
& OH deform. & 7.518 & \\
& CH$_3$ sym-deform. & 7.240 & \\
\hline
\multirow{5}{1in}{Methyl formate (HCOOCH$_3$)} & C=O stretch. & 5.804 & \multirow{5}{*}{\citet{2021AA...651A..95T}} \\
& C--O stretch. & 8.256 & \\
& CH$_3$ rock. & 8.582 & \\
& O--CH$_3$ stretch. & 10.98 & \\
& OCO deform. & 13.02 & \\
\enddata
\tablecomments{The listed features are measured from amorphous ice at 15 K.}
\end{deluxetable}
\input{bib}
|
Title:
On the nature of cosmic strings in the brane world |
Abstract: We investigate a static, cylindrically symmetric cosmic string on the brane
without a perturbative approximation. We find there could be a (large)
enhancement of the (effective) string tension when the energy density at the
center of the string is (much) larger than twice the brane tension. We also
point out a new way to evade the cosmic string problem when the energy density
at the center of the string approaches twice the brane tension. These findings
could have experimental and theoretical implications for searching for cosmic
strings on the brane, in particular for cosmic strings generated after
inflation (such as D-term inflation) on the brane.
| https://export.arxiv.org/pdf/2208.09589 |
\title{On the nature of cosmic strings in the brane world}
\author{Chia-Min Lin}
\affiliation{Fundamental General Education Center, National Chin-Yi University of Technology, Taichung 41170, Taiwan}
\large
\baselineskip 18pt
\section{Introduction}
The production of cosmic strings is quite generic in the framework of Grand Unified Theories (GUT) \cite{Jeannerot:2003qv}.
A seminal work in studying the gravitational effects of cosmic strings by a linear approximation to general relativity is given in \cite{Vilenkin:1981zs}. It is found that the spacetime is conical outside a static, cylindrically symmetric cosmic string. The results are extended to the exact spacetime metric in \cite{Hiscock:1985uc}.
A study of cosmic strings on the brane by linear approximation is given in \cite{Davis:2000uf}.
The method of finding the spacetime metric without a linear approximation is applied to cosmic strings on the brane in \cite{Abdalla:2015vna}. As we will see in the following sections, particular assumptions have to be made in order to obtain simple results that can be compared with previous results in general relativity. In this work, we extend the results of \cite{Abdalla:2015vna} and derive new formulas. In particular, we consider the role played by the four-dimensional cosmological constant. These results are general and could be applied to a wide range of cosmic strings on the brane. In order to have a concrete example, we use a notation with an eye toward its potential application to the cosmic strings generated after D-term inflation on the brane \cite{Lin:2022gbl}.
Cosmic strings produce stochastic gravitational waves \cite{Hindmarsh:1994re} that can be constrained via experiments such as European Pulsar Timing Array (EPTA) \cite{vanHaasteren:2011ni}, NANOGrav Collaboration \cite{NANOGrav:2020bcs, Ellis:2020ena, Blasi:2020mfx}, LIGO-Virgo \cite{LIGOScientific:2017ikf}, and Laser Interferometer Space Antenna (LISA) \cite{Auclair:2019wcv}. These observations constrain the cosmic string tension $\mu$\footnote{Usually expressed as $G\mu$. Here $G$ is Newton's constant and $\mu$ is mass per unit length.}. We would like to investigate how the string tension is modified in the framework of a braneworld.
As an example, cosmic strings on the brane can be generated if we consider a D-term inflation \cite{Binetruy:1996xj, Halyo:1996pp} on the brane.
The potential energy density of D-term inflation is
\begin{equation}
V \simeq V_0=\frac{g^2\xi^2}{2},
\label{eq1}
\end{equation}
where $\xi$ is the Fayet-Iliopoulos term and $g$ is the $U(1)_{FI}$ gauge coupling. After inflation, the symmetry-breaking field develops a vacuum expectation value $\sqrt{2\xi}$ and forms a network of cosmic strings. Conventional D-term inflation predicts $\sqrt{2\xi}$ to be around the GUT scale, which generates cosmic strings with a tension too large to be compatible with current experimental observations. This is referred to as the cosmic string problem. We will discuss the implication of our findings for this problem.
\section{brane world}
The brane world models of Randall and Sundrum \cite{Randall:1999ee, Randall:1999vf} have stimulated vast interest in higher-dimensional theories.
If our four-dimensional world is a 3-brane embedded in a higher-dimensional bulk, the four-dimensional Einstein equations induced on the brane are given by the covariant approach as \cite{Shiromizu:1999wj} (see \cite{Maartens:2010ar} for a review and further references)
\begin{equation}
G_{\mu \nu}=-\Lambda_4 g_{\mu\nu}+\left( \frac{8\pi}{M_4^2} \right) T_{\mu\nu}+\left( \frac{8\pi}{ M_5^3} \right)^2 \Pi_{\mu\nu}-E_{\mu\nu}.
\label{branee}
\end{equation}
In the above equation, $T_{\mu\nu}$ is the energy-momentum tensor of matter on the brane, and
\begin{equation}
\Pi_{\mu\nu}=-\frac{1}{4} T^\sigma_\mu T_{\nu \sigma}+\frac{1}{12}TT_{\mu\nu}+\frac{1}{8}g_{\mu\nu}T_{\alpha\beta}T^{\alpha\beta}-\frac{1}{24}g_{\mu\nu}T^2,
\label{qqq}
\end{equation}
which is quadratic in $T_{\mu\nu}$. The last term $E_{\mu\nu}$ is from the projection of the bulk Weyl curvature on the brane, which can be expressed as a Weyl fluid \cite{Maartens:2010ar},
\begin{equation}
-E_{\mu\nu}=8 \pi G \left[ U \left( u_\mu u_\nu -\frac{1}{3}h_{\mu\nu} \right)+P_{\mu\nu}+Q_\mu u_\nu+Q_\nu u_\mu \right],
\label{eqe}
\end{equation}
where $U$ is the energy density of dark radiation\footnote{It incorporates the spin-0 mode of the 5D graviton.}, $P_{\mu\nu}$ is the anisotropic pressure, and $Q_\mu$ is the momentum density.
The metric tensor is decomposed by a 4-velocity $u_\mu$ as $g_{\mu\nu}=h_{\mu\nu}+u_\mu u_\nu$.
The four-dimensional cosmological constant $\Lambda_4$ is determined by the five-dimensional bulk cosmological constant $\Lambda_5$ and the brane tension $\Lambda$ as
\begin{equation}
\Lambda_4=\frac{4\pi}{M_5^3}\left(\Lambda_5+\frac{4\pi}{3M_5^3} \Lambda^2 \right),
\end{equation}
which can be set to $\Lambda_4=0$ by assuming a suitable $\Lambda_5$.
The brane tension $\Lambda$ provides a relation between the four-dimensional Planck scale $M_4=\sqrt{8 \pi}M_P$ and five-dimensional Planck scale $M_5$ via
\begin{equation}
M_4=\sqrt{\frac{3}{4\pi}}\left( \frac{M_5^2}{\sqrt{\Lambda}} \right)M_5.
\label{pl}
\end{equation}
By using Eq.~(\ref{pl}) and $1/M_4^2=G$, Eq.~(\ref{branee}) can be expressed as
\begin{equation}
G_{\mu \nu}=-\Lambda_4 g_{\mu\nu}+8\pi G T_{\mu\nu}+\frac{48\pi G}{\Lambda} \Pi_{\mu\nu}-E_{\mu\nu} \equiv 8\pi G \overline{T}_{\mu\nu}.
\label{branee2}
\end{equation}
Here we defined an effective energy momentum tensor $\overline{T}_{\mu\nu}$.
If we assume that there is no energy-momentum exchange between the bulk and the brane, the energy conservation is satisfied both for the matter on the brane $T_{\mu\nu}$ and for $\overline{T}_{\mu\nu}$, therefore
\begin{eqnarray}
\nabla^\mu T_{\mu\nu}&=&0, \label{tcon} \\
\nabla^\mu \left( \frac{48\pi G}{\Lambda}\Pi_{\mu\nu}-E_{\mu\nu} \right)&=&0.
\end{eqnarray}
\section{cosmic string on the brane}
For simplicity, we consider a straight cosmic string produced after D-term inflation on the brane in cylindrical coordinates $\{t, \rho, \phi, z\}$.
The energy-momentum tensor of a cosmic string is represented by \cite{Vilenkin:1981zs}
\begin{equation}
T^\nu_\mu=-V_0\mbox{ diag}(1,0,0,1).
\label{emt}
\end{equation}
We assume that for $\rho < \rho_0$ the energy density is constant, $V_0=g^2 \xi^2/2$, and that for $\rho > \rho_0$ it vanishes.
By using the vacuum expectation value $\sqrt{2\xi}$ of the symmetry-breaking field, the value of $\rho_0$ can be obtained by balancing the ``spatial variation'' term against the potential term from Eq.~(\ref{eq1}) as
\begin{equation}
\left( \frac{\sqrt{2\xi}}{\rho_0} \right)^2 \sim \frac{g^2\xi^2}{2}.
\end{equation}
This implies\footnote{In general, one may assume the vacuum expectation value of the symmetry-breaking field to be $\eta$, with energy density $\eta^4$ at the center of the cosmic string, and estimate $\rho_0=1/\eta$. Our calculation applies to such general cases.}
\begin{equation}
\rho_0=\frac{2}{g\sqrt{\xi}}.
\end{equation}
By using the cylindrical symmetry, the line element is
\begin{equation}
ds^2=-e^{2\alpha(\rho)}dt^2+e^{2\beta(\rho)}d\rho^2+e^{2\gamma(\rho)}d\phi^2+e^{2\delta(\rho)}dz^2.
\label{metric}
\end{equation}
From Eqs.~(\ref{qqq}) and (\ref{emt}), the only non-vanishing components of $\Pi^\nu_\mu$ are
\begin{equation}
\Pi^1_1=\Pi^2_2=\frac{1}{12}V_0^2.
\end{equation}
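For completeness, these components can be verified directly from Eq.~(\ref{qqq}) (the intermediate steps below are our addition): with $T^\nu_\mu=-V_0\,\mbox{diag}(1,0,0,1)$ one has $T=-2V_0$ and $T_{\alpha\beta}T^{\alpha\beta}=2V_0^2$, while the first two terms of $\Pi^1_1$ and $\Pi^2_2$ vanish because $T^\sigma_1=T^\sigma_2=0$, so

```latex
\begin{eqnarray}
\Pi^0_0 &=& -\frac{1}{4}(T^0_0)^2+\frac{1}{12}T\,T^0_0
            +\frac{1}{8}T_{\alpha\beta}T^{\alpha\beta}-\frac{1}{24}T^2 \nonumber \\
        &=& \left(-\frac{1}{4}+\frac{1}{6}+\frac{1}{4}-\frac{1}{6}\right)V_0^2=0=\Pi^3_3, \nonumber \\
\Pi^1_1 &=& \frac{1}{8}T_{\alpha\beta}T^{\alpha\beta}-\frac{1}{24}T^2
         =  \left(\frac{1}{4}-\frac{1}{6}\right)V_0^2=\frac{1}{12}V_0^2=\Pi^2_2. \nonumber
\end{eqnarray}
```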
In order to have a static and cylindrically symmetric metric, the momentum density is assumed to be $Q_\mu=0$, and Eq.~(\ref{eqe}) can be arranged as
\begin{equation}
-E^\nu_\mu=8 \pi G \mbox{ diag}\left[ -U,P_1+\frac{1}{3}U, P_2+\frac{1}{3}U,P_3+\frac{1}{3}U\right],
\end{equation}
where $U$, $P_1$, $P_2$, and $P_3$ are functions of $\rho$. Because $E^\nu_\mu$ is the projection of the Weyl curvature tensor, it is traceless. This implies
\begin{equation}
P_1+P_2+P_3=0.
\label{ppp}
\end{equation}
From Eq.~(\ref{metric}), we can calculate $G_{\mu\nu}$. Some details are given in the Appendix.
By using Eq.~(\ref{branee}), when $\rho < \rho_0$,
\begin{eqnarray}
e^{-2\beta}(\gamma^{\prime 2}+\alpha^{\prime 2}-\beta^\prime \gamma^\prime+\beta^\prime \alpha^\prime-\gamma^\prime\alpha^\prime +\gamma^{\prime\prime}-\alpha^{\prime\prime})&=&-8\pi G V_0 -8\pi G U-\Lambda_4,\\
-e^{-2\beta}\alpha^{\prime 2}&=&8\pi G\frac{V_0^2}{2\Lambda}+\frac{8\pi G}{3}U+8\pi G P_1-\Lambda_4, \label{e16}\\
e^{-2\beta}\alpha^{\prime 2}&=&8\pi G\frac{V_0^2}{2\Lambda}+\frac{8\pi G}{3}U+8\pi G P_2-\Lambda_4, \label{e17} \\
e^{-2\beta}(\gamma^{\prime 2}+\alpha^{\prime 2}-\beta^\prime \gamma^\prime-\beta^\prime \alpha^\prime+\gamma^\prime\alpha^\prime +\gamma^{\prime\prime}+\alpha^{\prime\prime})&=&-8\pi G V_0 +\frac{8\pi G}{3}U+8\pi G P_3-\Lambda_4.
\end{eqnarray}
From Eq.~(\ref{tcon}), we have
\begin{equation}
\nabla^\mu T_{\mu 1}=V_0(\alpha^\prime +\delta^\prime)=0,
\end{equation}
which gives $\alpha+\delta=\mbox{constant}$. The constant can be chosen to be zero by a coordinate transformation. Therefore
\begin{equation}
\delta=-\alpha.
\end{equation}
We wish the right-hand sides of Eqs.~(\ref{e16}) and (\ref{e17}) to be zero in order to obtain a form of $\overline{T}^\nu_\mu$ similar to Eq.~(\ref{emt}). Therefore $\alpha=\mbox{constant}$, which again can be chosen to be zero. Actually, having $\alpha=\delta=0$ here is equivalent to imposing Lorentz invariance in the $z$ direction. Finally, the remaining gauge freedom allows us to set $\beta=0$ through a coordinate (gauge) transformation \cite{Chandrasekhar:1972zz}. Therefore
\begin{eqnarray}
\gamma^{\prime 2}+\gamma^{\prime\prime}&=&-8\pi G V_0 -8\pi G U-\Lambda_4, \label{e22} \\
0&=&8\pi G\frac{V_0^2}{2\Lambda}+\frac{8\pi G}{3}U+8\pi G P_1-\Lambda_4, \\
0&=&8\pi G\frac{V_0^2}{2\Lambda}+\frac{8\pi G}{3}U+8\pi G P_2-\Lambda_4, \\
\gamma^{\prime 2}+\gamma^{\prime\prime}&=&-8\pi G V_0 +\frac{8\pi G}{3}U+8\pi G P_3-\Lambda_4. \label{e24}
\end{eqnarray}
\subsection{Assuming $\Lambda_4=0$}
First, let us set $\Lambda_4=0$ as was done in \cite{Abdalla:2015vna}. By using Eq.~(\ref{ppp}) and the above equations, one obtains
\begin{eqnarray}
U&=&-\frac{V_0^2}{2\Lambda} \\
P_3&=&-2P_1=-2P_2=-\frac{4}{3}U.
\end{eqnarray}
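These solutions follow directly from Eqs.~(\ref{e22})--(\ref{e24}) together with the traceless condition Eq.~(\ref{ppp}); for completeness, the intermediate algebra (our addition) is

```latex
\begin{eqnarray}
\mbox{Eq.~(\ref{e22})}=\mbox{Eq.~(\ref{e24})} &\Rightarrow& -U=\frac{1}{3}U+P_3
  \;\Rightarrow\; P_3=-\frac{4}{3}U, \nonumber \\
\mbox{the two middle equations} &\Rightarrow& P_1=P_2=-\frac{V_0^2}{2\Lambda}-\frac{1}{3}U, \nonumber \\
P_1+P_2+P_3=0 &\Rightarrow& -\frac{V_0^2}{\Lambda}-2U=0
  \;\Rightarrow\; U=-\frac{V_0^2}{2\Lambda}. \nonumber
\end{eqnarray}
```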
Note that the required energy density of dark radiation is negative, and that it is confined within the cosmic string at $\rho < \rho_0$.
Substituting these into Eq.~(\ref{e22}) (or (\ref{e24})), we have
\begin{equation}
\gamma^{\prime 2}+\gamma^{\prime\prime}=-8\pi G V_0\left( 1-\frac{V_0}{2\Lambda} \right).
\label{eq28}
\end{equation}
The solution for $\gamma$ depends on whether $V_0<2\Lambda$ or $V_0>2\Lambda$. In the case $V_0<2\Lambda$, the solution is
\begin{equation}
\gamma=\ln \left[ \rho_\ast \sin \left( \frac{\rho}{\rho_\ast } \right) \right],
\label{gamma1}
\end{equation}
where
\begin{equation}
\rho_\ast=\frac{1}{\sqrt{8\pi G V_0\left( 1-\frac{V_0}{2\Lambda} \right)}}.
\end{equation}
The integration constants are fixed by requiring the metric on the axis to be flat, without a conical singularity.
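One can verify (a check we add here) that Eq.~(\ref{gamma1}) indeed solves Eq.~(\ref{eq28}) and satisfies this boundary condition: writing $e^{\gamma}=\rho_\ast\sin(\rho/\rho_\ast)$,

```latex
\begin{equation}
\gamma^{\prime 2}+\gamma^{\prime\prime}
=\frac{\left(e^{\gamma}\right)^{\prime\prime}}{e^{\gamma}}
=-\frac{1}{\rho_\ast^2}
=-8\pi G V_0\left(1-\frac{V_0}{2\Lambda}\right),
\end{equation}
```

while $e^{\gamma}\to\rho$ as $\rho\to 0$, so the metric is flat on the axis with no conical deficit.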
Comparing Eq.~(\ref{emt}) with Eq.~(\ref{eq28}), the effective energy density of the cosmic string is
\begin{equation}
-\overline T^0_0=\frac{G^0_0}{-8\pi G}=V_0\left( 1-\frac{V_0}{2\Lambda} \right).
\end{equation}
The effective cosmic string tension is given by\footnote{Our result is different from that of \cite{Abdalla:2015vna}, where there is a discontinuity at $V_0=2\Lambda$.}
\begin{eqnarray}
\mu_{\mathrm{eff}}&=&\int^{2\pi}_0\int^{\rho_0}_0 V_0\left( 1- \frac{V_0}{2\Lambda} \right)\rho_\ast \sin \left( \frac{\rho}{\rho_\ast } \right) d\phi d\rho \\
&=&\frac{1}{4G} \left[ 1- \cos \left( \sqrt{2\xi}\sqrt{8\pi G \left( 1-\frac{V_0}{2\Lambda} \right)} \right) \right]. \label{mu1}
\end{eqnarray}
By expanding $\cos x \sim 1-x^2/2$ we have
\begin{equation}
\mu_{\mathrm{eff}}=2\pi \xi \left( 1-\frac{V_0}{2\Lambda} \right),
\label{f1}
\end{equation}
which reproduces the standard result from general relativity when $V_0 \ll 2\Lambda$.
The result of $\gamma$ in Eq.~(\ref{gamma1}) can be connected with a solution $\gamma=\ln(ar)$ outside the cosmic string with a radial distance $r$ by matching the junction conditions $\gamma(\rho_0)=\gamma(r_0)$ and $\gamma^\prime(\rho_0)=\gamma^\prime(r_0)$ at $\rho_0=r_0$ to obtain\footnote{The junction conditions are that the (induced) metric and the extrinsic curvature be the same on both sides of the hypersurface \cite{Poisson:2009pwt}. On the hypersurface of $r_0=\rho_0$, the unit normal vector is $n^\mu=(0,0,e^{-\gamma},0)$, and the non-vanishing extrinsic curvature is $K_{12}=-2\gamma^\prime e^{-\gamma}$.}
\begin{equation}
a= \cos \left( \sqrt{2\xi} \sqrt{8\pi G \left( 1-\frac{V_0}{2\Lambda} \right)} \right)=1-4G\mu_{\mathrm{eff}},
\end{equation}
where the second equality is obtained from Eq.~(\ref{mu1}). With $\gamma=\ln (ar)$ (and $\alpha=\beta=\delta=0$), the metric outside the cosmic string is\footnote{In \cite{Heydari-Fard:2013eoa}, this form of metric is assumed without a calculation of string tension.}
\begin{equation}
ds^2=-dt^2+dr^2+(1-4G \mu_{\mathrm{eff}})^2 r^2 d\phi^2.
\label{ex}
\end{equation}
When $\phi$ changes by $2\pi$, the effective angular coordinate $\bar{\phi}=(1-4G \mu_{\mathrm{eff}})\phi$ changes by $2\pi-\Delta$.
The deficit angle $\Delta$ is
\begin{equation}
\Delta=8\pi G \mu_{\mathrm{eff}} = 2\pi(1-a).
\end{equation}
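For completeness, the matching can be written out explicitly (our intermediate step): the junction conditions $\gamma(\rho_0)=\gamma(r_0)$ and $\gamma^\prime(\rho_0)=\gamma^\prime(r_0)$ at $r_0=\rho_0$ read

```latex
\begin{equation}
\rho_\ast\sin\left(\frac{\rho_0}{\rho_\ast}\right)=a\rho_0,
\qquad
\frac{1}{\rho_\ast}\cot\left(\frac{\rho_0}{\rho_\ast}\right)=\frac{1}{\rho_0}
\;\Rightarrow\;
a=\cos\left(\frac{\rho_0}{\rho_\ast}\right),
\end{equation}
```

with $\rho_0/\rho_\ast=\sqrt{2\xi}\sqrt{8\pi G\left(1-V_0/2\Lambda\right)}$, reproducing the expression for $a$ quoted above.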
On the other hand, we are mainly interested in the case $V_0>2\Lambda$. In this case, the solution of Eq.~(\ref{eq28}) is given by
\begin{equation}
\gamma(\rho)=\ln \left[ \rho_\ast \sinh \left( \frac{\rho}{\rho_\ast } \right) \right],
\end{equation}
where
\begin{equation}
\rho_\ast=\frac{1}{\sqrt{8\pi G V_0\left( \frac{V_0}{2\Lambda} -1 \right)}}.
\end{equation}
The effective cosmic string tension is
\begin{eqnarray}
\mu_{\mathrm{eff}}&=&\int^{2\pi}_0\int^{\rho_0}_0 V_0\left( \frac{V_0}{2\Lambda}-1 \right)\rho_\ast \sinh \left( \frac{\rho}{\rho_\ast } \right) d\phi d\rho \\
&=&\frac{1}{4G} \left[ \cosh \left( \sqrt{2\xi}\sqrt{8\pi G \left( \frac{V_0}{2\Lambda}-1 \right)} \right)-1 \right].
\end{eqnarray}
By expanding $\cosh x \sim 1+x^2/2$ we have
\begin{equation}
\mu_{\mathrm{eff}}=2\pi \xi \left( \frac{V_0}{2\Lambda}-1 \right).
\label{mu2}
\end{equation}
If $V_0 \gg 2\Lambda$, there is a big enhancement of the cosmic string tension compared with the case of $V_0 \ll 2\Lambda$.
In this case, we have
\begin{equation}
a= \cosh \left( \sqrt{2\xi} \sqrt{8\pi G \left( \frac{V_0}{2\Lambda} -1\right)} \right)>1,
\end{equation}
which implies a negative deficit angle $\Delta=2\pi(1-a)$. In this case, there are no duplicated images from gravitational lensing.
One interesting case happens when $V_0=2\Lambda$. From Eq.~(\ref{mu2}) (or Eq.~(\ref{f1})), we have $\mu_{\mathrm{eff}}=0$ and $\Delta=0$. This is due to the assumption that dark radiation with negative energy density exists inside the cosmic string. If this is true, it could be a method to solve the cosmic string problem in D-term inflation. In order to evade the cosmic string problem, we only need $V_0$ to be close enough to $2\Lambda$ to obtain a string tension within experimental bounds.
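To quantify the tuning required (an illustrative estimate we add, using only Eqs.~(\ref{f1}) and (\ref{mu2})), write $V_0=2\Lambda(1+\epsilon)$ with $|\epsilon|\ll 1$; both branches then give

```latex
\begin{equation}
G\mu_{\mathrm{eff}}=2\pi G\xi\left|1-\frac{V_0}{2\Lambda}\right|=2\pi G\xi\,|\epsilon|,
\end{equation}
```

so the observable tension is suppressed linearly in the detuning: for example, reducing $G\mu_{\mathrm{eff}}$ two orders of magnitude below the standard value $2\pi G\xi$ requires $|\epsilon|\lesssim 10^{-2}$.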
\subsection{Assuming $E_{\mu\nu}=0$}
In the above discussion, we have assumed $\Lambda_4=0$; now we consider the case $E_{\mu\nu}=0$, namely $U=P_1=P_2=P_3=0$. We have
\begin{eqnarray}
\gamma^{\prime 2}+\gamma^{\prime\prime}&=&-8\pi G V_0 -\Lambda_4, \label{e40} \\
0&=&8\pi G\frac{V_0^2}{2\Lambda}-\Lambda_4. \label{e41}
\end{eqnarray}
From Eq.~(\ref{e41}), we obtain
\begin{equation}
\Lambda_4=8\pi G \frac{V_0^2}{2\Lambda}.
\end{equation}
Substituting this into Eq.~(\ref{e40}) gives
\begin{equation}
\gamma^{\prime 2}+\gamma^{\prime\prime}=-8\pi G V_0 -8\pi G \frac{V_0^2}{2\Lambda}.
\label{40}
\end{equation}
The solution is
\begin{equation}
\gamma=\ln \left[ \rho_\ast \sin \left( \frac{\rho}{\rho_\ast } \right) \right],
\end{equation}
where
\begin{equation}
\rho_\ast=\frac{1}{\sqrt{8\pi G V_0\left( 1+\frac{V_0}{2\Lambda} \right)}}.
\end{equation}
Comparing Eq.~(\ref{emt}) with Eq.~(\ref{40}), the effective energy density of the cosmic string is
\begin{equation}
\frac{G^0_0}{-8\pi G}=V_0\left( 1+\frac{V_0}{2\Lambda} \right).
\end{equation}
The effective cosmic string tension is given by
\begin{eqnarray}
\mu_{\mathrm{eff}}&=&\int^{2\pi}_0\int^{\rho_0}_0 V_0\left( 1+ \frac{V_0}{2\Lambda} \right)\rho_\ast \sin \left( \frac{\rho}{\rho_\ast } \right) d\phi d\rho \\
&=&\frac{1}{4G} \left[ 1- \cos \left( \sqrt{2\xi}\sqrt{8\pi G \left( 1+\frac{V_0}{2\Lambda} \right)} \right) \right].
\end{eqnarray}
By expanding $\cos x \sim 1-x^2/2$ we have
\begin{equation}
\mu_{\mathrm{eff}}=2\pi \xi \left( 1+\frac{V_0}{2\Lambda} \right).
\label{f3}
\end{equation}
In this case, we have
\begin{equation}
a= \cos \left( \sqrt{2\xi} \sqrt{8\pi G \left( 1+\frac{V_0}{2\Lambda} \right)} \right).
\end{equation}
This result can be applied for both $V_0>2\Lambda$ and $V_0< 2\Lambda$. The deficit angle $\Delta=2\pi(1-a)$ would always be positive in this case.
Although we have considered the two cases $\Lambda_4=0$ and $E_{\mu\nu}=0$, in general they can both be non-zero, which gives
\begin{equation}
U=\frac{\Lambda_4}{8\pi G}-\frac{V_0^2}{2\Lambda}.
\end{equation}
This provides a chance to have a positive energy density for dark radiation.
\section{Conclusion}
\label{con}
We have calculated three different forms of $\mu_{\mathrm{eff}}$. The main findings of this work are Eqs.~(\ref{f1}), (\ref{mu2}), and (\ref{f3}). Experiments searching for cosmic strings usually provide constraints on the dimensionless quantity $G\mu$. In the context of cosmic strings on the brane, the constraints have to be imposed on $G\mu_{\mathrm{eff}}$ instead of $G\mu$. This would have significant implications for the search for cosmic strings if we live in a brane world. One interesting observation is that when $V_0=2\Lambda$, we have $\mu_{\mathrm{eff}}=0$ from Eqs.~(\ref{f1}) and (\ref{mu2}). This provides a novel method to deal with the cosmic string problem. Another effect is that when $V_0 \gg 2\Lambda$, there is a large enhancement of $\mu_{\mathrm{eff}}$, from Eqs.~(\ref{mu2}) and (\ref{f3}). This potentially could make the cosmic string problem more severe.
\appendix
\section{Useful formulas}
The non-zero Christoffel symbols are
\begin{eqnarray}
\Gamma^1_{00}&=&\alpha^\prime e^{2(\alpha-\beta)}\\
\Gamma^0_{10}&=&\alpha^\prime \\
\Gamma^1_{11}&=&\beta^\prime \\
\Gamma^2_{21}&=&\gamma^\prime \\
\Gamma^1_{22}&=&-\gamma^\prime e^{2(\gamma-\beta)} \\
\Gamma^3_{31}&=&\delta^\prime \\
\Gamma^1_{33}&=&-\delta^\prime e^{2(\delta-\beta)}
\end{eqnarray}
The Ricci tensors are
\begin{eqnarray}
R_{00}&=&(\alpha^{\prime\prime}+\alpha^{\prime 2}-\alpha^\prime \beta^\prime +\alpha^\prime \gamma^\prime + \alpha^\prime \delta^\prime)e^{2(\alpha-\beta)} \\
R_{11}&=&\alpha^\prime \beta^\prime + \gamma^\prime \beta^\prime + \delta^\prime \beta^\prime - \gamma^{\prime\prime} -\delta^{\prime\prime}-\alpha^{\prime 2}-\gamma^{\prime 2}-\delta^{\prime 2}-\alpha^{\prime\prime} \\
R_{22}&=&(-\gamma^{\prime\prime} + \gamma^\prime \beta^\prime-\alpha^\prime \gamma^\prime - \delta^\prime \gamma^\prime -\gamma^{\prime 2})e^{2(\gamma-\beta)} \\
R_{33}&=&(-\delta^{\prime\prime}+\delta^\prime \beta^\prime - \alpha^\prime \delta^\prime - \gamma^\prime \delta^\prime -\delta^{\prime 2})e^{2(\delta-\beta)}
\end{eqnarray}
The Ricci scalar is
\begin{equation}
R=(-2 \alpha^{\prime\prime}-2\alpha^{\prime 2}+2\alpha^\prime \beta^\prime -2 \alpha^\prime \gamma^\prime -2\alpha^\prime \delta^\prime +2\gamma^\prime \beta^\prime +2 \delta^\prime \beta^\prime -2\gamma^{\prime\prime}-2\delta^{\prime\prime}-2\gamma^{\prime 2}-2\delta^{\prime 2}-2\delta^\prime \gamma^\prime)e^{-2\beta}
\end{equation}
The Einstein tensors are
\begin{eqnarray}
G^0_0&=&(\gamma^{\prime 2}+\alpha^{\prime 2}-\beta^\prime \gamma^\prime + \beta^\prime \alpha^\prime -\gamma^\prime \alpha^\prime + \gamma^{\prime\prime}-\alpha^{\prime\prime})e^{-2\beta}\\
G^1_1&=&-\alpha^{\prime 2}e^{-2\beta} \\
G^2_2&=&\alpha^{\prime 2}e^{-2\beta} \\
G^3_3&=&(\alpha^{\prime\prime}+\alpha^{\prime 2}-\alpha^\prime \beta^\prime + \alpha^\prime \gamma^\prime -\gamma^\prime \beta^\prime +\gamma^{\prime\prime}+\gamma^{\prime 2})e^{-2\beta}
\end{eqnarray}
\acknowledgments
This work is supported by the National Science and Technology Council (NSTC) of Taiwan under Grant No. NSTC 111-2112-M-167-002.
|
Title:
Detailed accretion history of the supermassive black hole in NGC 5972 over the past $\gtrsim$10$^4$ years through the extended emission line region |
Abstract: We present integral field spectroscopic observations of NGC 5972 obtained
with the Multi Unit Spectroscopic Explorer (MUSE) at VLT. NGC 5972 is a nearby
galaxy containing both an active galactic nucleus (AGN), and an extended
emission line region (EELR) reaching out to $\sim 17$ kpc from the nucleus. We
analyze the physical conditions of the EELR using spatially-resolved spectra,
focusing on the radial dependence of ionization state together with the light
travel time distance to probe the variability of the AGN on $\gtrsim 10^{4}$ yr
timescales. The kinematic analysis suggests multiple components: (a) a faint
component following the rotation of the large scale disk; (b) a component
associated with the EELR suggestive of extraplanar gas connected to tidal
tails; (c) a kinematically decoupled nuclear disk. Both the kinematics and the
observed tidal tails suggest a major past interaction event. Emission line
diagnostics along the EELR arms typically evidence Seyfert-like emission,
implying that the EELR was primarily ionized by the AGN. We generate a set of
photoionization models and fit these to different regions along the EELR. This
allows us to estimate the bolometric luminosity required at different radii to
excite the gas to the observed state. Our results suggests that NGC 5972 is a
fading quasar, showing a steady gradual decrease in intrinsic AGN luminosity,
and hence the accretion rate onto the SMBH, by a factor $\sim 100$ over the
past $5 \times 10^{4}$ yr.
| https://export.arxiv.org/pdf/2208.04911 |
\title{Detailed accretion history of the supermassive black hole in NGC 5972 over the past $\gtrsim$10$^4$ years through the extended emission line region}
\correspondingauthor{Finlez, C.}
\email{[email protected]}
\author[ 0000-0003-1778-1061]{Finlez, C.}
\affiliation{Instituto de Astrof\'isica, Facultad de F\'isica, Pontificia Universidad Cat\'olica de Chile, Casilla 306, Santiago 22, Chile}
\author[0000-0001-7568-6412]{Treister, E.}
\affiliation{Instituto de Astrof\'isica, Facultad de F\'isica, Pontificia Universidad Cat\'olica de Chile, Casilla 306, Santiago 22, Chile}
\author{Bauer, F.}
\affiliation{Instituto de Astrof\'isica, Facultad de F\'isica, Pontificia Universidad Cat\'olica de Chile, Casilla 306, Santiago 22, Chile}
\affiliation{Millennium Institute of Astrophysics, Nuncio Monse\~nor S\'otero Sanz 100, Of 104, Providencia, Santiago, Chile}
\affiliation{Space Science Institute, 4750 Walnut Street, Suite 205, Boulder, Colorado 80301}
\author[0000-0002-6131-9539]{Keel, W.}
\affiliation{Department of Physics and Astronomy, University of Alabama, Box 870324, Tuscaloosa, AL 35487, USA}
\author[0000-0002-7998-9581]{Koss, M.}
\affiliation{Eureka Scientific, 2452 Delmer Street Suite 100, Oakland, CA 94602-3017, USA}
\affiliation{Space Science Institute, 4750 Walnut Street, Suite 205, Boulder, Colorado 80301}
\author[0000-0001-6920-662X]{Nagar, N.}
\affiliation{Astronomy Department, Universidad de Concepci\'on, Casilla 160-C, Concepci\'on, Chile}
\author{Sartori, L.}
\affiliation{ETH Z\"urich, Institute for Particle Physics and Astrophysics, Wolfgang-Pauli-Strasse 27, CH-8093 Z\"urich, Switzerland}
\author[0000-0002-2203-7889]{Maksym, W.P.}
\affiliation{Center for Astrophysics, Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA}
\author[ 0000-0001-8349-3055]{Venturi, G.}
\affiliation{Instituto de Astrof\'isica, Facultad de F\'isica, Pontificia Universidad Cat\'olica de Chile, Casilla 306, Santiago 22, Chile}
\affiliation{INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125 Firenze, Italy}
\author[ 0000-0002-2688-7960]{Tub\'in, D.}
\affiliation{Leibniz-Institut f\"ur Astrophysik Potsdam, An der Sternwarte 16, D-14482 Potsdam, Germany}
\author[ 0000-0002-4130-636X]{Harvey, T.}
\affiliation{Center for Astrophysics, Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA}
\keywords{Active Galactic Nuclei -- galaxies:Seyfert -- techniques: Spectroscopy}
\section{Introduction}
\label{sec:introduction}
Active galactic nuclei (AGN) can have a significant impact on the interstellar medium (ISM) of their host galaxies, through the mechanical input of radio jets, AGN wind-driven, and photoionization of the gas \cite[e.g.,][]{morganti+2017}. The energy injected into the host galaxy is thought to play a critical regulatory role in both galaxy and supermassive black hole (SMBH) evolution \cite[e.g.,][]{gebhardt+2000,kormendy+2013}.
The AGN can photoionize gas out to $\gtrsim$ 1 kpc, which is considered the so-called narrow line region (NLR), as the kinematics of this gas are typically $< 1000$ km/s. However, observations have shown that AGN-ionized gas can, and often does, extend well beyond this limit. These extended emission line regions (EELRs) can extend through the entire host galaxy and reach tens of kpcs \cite[e.g.,][]{liu+2013,harrison+2014}. EELRs can exhibit complex morphologies, and be spatially and kinematically distinct from the NLR. Some EELRs show no direct morphological relation to the host galaxy, and have been connected to tidal tails from galaxy interactions \cite[e.g.,][]{keel+2012a} or large-scale outflows \cite[e.g.,][]{harrison+2015}. While other EELRs show conical or biconical shapes, with their apex near the active nucleus, and can be considered as large-scale extensions of the NLR. The ionized gas within an EELR can show large velocities, ranging between several hundreds to $>$ 1000 km s$^{-1}$ with respect to the systemic velocity of the host, while also showing low velocity dispersion \cite[e.g.,][]{fu+2009,husemann+2013}.
While early studies connected EELRs with the presence of radio-jets \cite[][]{stockton+2006}, they have also been observed in active galaxies with no appreciable radio emission \cite[][]{husemann+2013}. Some EELRs show narrow line widths and modest electron temperatures, indicating that they must be ionized by radiation from the nucleus, rather than direct interaction with either a radio jet or an outflow, for which the linewidth and electron temperature would be much higher due the presence of shocks \citep[e.g.][]{knese+2020}.
EELRs can potentially offer a unique way to probe the effect of the AGN on the galaxy due to their physical extension, which connects the large scales of the host with the active nucleus \cite[e.g.,][]{harrison+2015,sun+2017}. Furthermore, by considering the light-time travel to the clouds and then to us, we can obtain a view into the past of the AGN, on longer timescales than typical AGN variability \cite[e.g.,][]{dadina+2010,lintott+2009,keel+2012a}.
The suggestion that EELRs can be light echoes from a former high-luminosity AGN was originally explored in a nearby, highly ionized, extended cloud near the active galaxy IC 2497 \cite[][]{lintott+2009,sartori+2016}, this object and others alike have been since called Voorwerpjes. The current AGN luminosity failed to account for the ionization observed in the cloud, thus suggesting that the AGN decreased by factor $\sim100$ in luminosity over a $\sim10^{5}$ yrs timescale \citep{lintott+2009,keel+2012a,schawinski+2010}. Further studies of nearby Voorwerpjes have found comparable variability amplitudes in similar timescales \citep{keel+2012a,keel+2017,sartori+2018}.\\
AGN variability has been observed and inferred to occur on virtually all timescales, ranging from seconds to days, to decades, to $> 10^4$ yr. These different timescales indicate that there are likely different physical processes at play at different spatial scales of the system,from the SMBH vicinity to galaxy-wide scale. Days to years variability, observed in the optical and UV and probed by ensemble analysis, may arise from accretion disk instabilities \citep[e.g.,][]{sesar+2006,dexter+2019,caplar+2017}. While possible explanations for years-to-decades timescales, observed in the so-called 'changing look AGN', include changes in accretion rate or accretion disk structure \citep[e.g.,][]{lamassa+2015,macleod+2019,graham+2020}. Variability on longer timescales ($> 10^{4}$ yr), probed through AGN-photoionized EELRs \citep[e.g.,][]{schawinski+2010,lintott+2009, keel+2012a}, are possibly linked to dramatic changes in accretion rate. Simulations indicate mergers, bar-induced instabilities and clumpy accretion as possible mechanisms behind these changes \citep[e.g.,][]{hopkins+2010,bournaud+2011}.
A typical AGN duty cycle is estimated to last $10^7 - 10^9$ yr \citep[][]{marconi+2004}. This however does not constrain whether the total mass growth is achieved during a single active accretion phase or is broken up in shorter phases.
Simulations and theoretical models have addressed accretion variability \citep[e.g.,][]{king+2015,gabor+2013,schawinski+2015,sartori+2018,sartori+2019}, suggesting a typical timescale for AGN phases of $10^{5}$ yr, implying that nearby AGN switch "on" rapidly during $\sim 10^4$ yr and stay on for $\sim 10^5$ yr before switching "off". The AGN continues "flickering" on these $10^5$yr cycles resulting in a total $10^7 - 10^9$ yr "on" lifetime over the course of the host.
Quantifying the effect the AGN has on the host galaxy evolution requires a better understanding of how the nuclear activity evolves and varies during the galaxy lifetime. Following the analysis of IC 2497, efforts to find similar objects in the nearby Universe were made, based on SDSS DR X optical imaging. In order to accomplish this, both targeted and serendipitous searches were carried out at z $< 0.1$ by Galaxy Zoo volunteers during a six-week period \citep{keel+2012a}. This search retrieved 19 objects, including NGC 5972, a nearby (z = 0.02974, where 1\arcsec\ corresponds to 0.593 kpc) Seyfert 2 galaxy \citep{veron+2006}.
NGC 5972 was previously known for the presence of powerful ($10^{23.9}$ WHz$^{-1}$ at 4850 MHz) extended (9\farcm4, corresponding to 0.3 Mpc) double radio lobes \citep{condon+1988,veron+1995} that extend along a position angle (PA) of 100\degree, almost perpendicular to the major axis of the optical emission.\\
The galaxy has been morphologically classified as a S0/a. Further modeling of the I-band luminosity profile shows complex residuals that indicate a possible merger or galaxy interaction event \citep{veron+1995}.
The EELRs extends to a radius of $20\arcsec$ from the center, which corresponds to $\sim$12 kpc,forming a double-helix shape which shows a highly ionized complex filamentary structure, \cite[narrow and medium band HST imaging][]{keel+2015}.
In this paper we present results from VLT/MUSE observations of NGC 5972. We study the long-term variability of the source by analysing the changes in luminosity required to ionize the EELR to its current state as a function of radius.
This paper is organized as follows. In Sect. \ref{sec:observations} we describe the observations and the data reduction process. In Sect. \ref{sec:results} we present the results of our analysis. The results are discussed and summarized in Sect. \ref{sec:discussion} and Sect. \ref{sec:summary}, respectively.
\section{Observations and Data Reduction}
\label{sec:observations}
The Multi-Unit Spectroscopic Explorer \citep[MUSE,][]{bacon+2010} is an integral field spectrograph installed on the Very Large Telescope (VLT). Its field of view (FOV) in wide field mode (WFM) covers 1\arcmin $\times$ 1\arcmin, with a pixel scale of 0\farcs2. The wavelength range spans $\sim$4600-9300 \AA\ and the resolving power is R$=$1770-3590.
NGC 5972 was observed with MUSE in WFM on the night of March 10th, 2019 (program ID 0102.B-0107, PI L. Sartori), in two observing blocks (OBs). Each OB consisted of 3 on-target observations with an exposure of 950 seconds each, with 1\arcsec dithers, and a seeing constraint of 1\farcs3. The last exposure of the second OB was finished 8 minutes into twilight, causing a small increase in the blue background. The exposure was included in the analysis.
The data were reduced with the ESO VLT/MUSE pipeline (v2.8) under the ESO Reflex environment \citep{freudling2013}.
Briefly, this pipeline generates a master bias and flat. This is followed by the wavelength calibration, employing the arc-lamp exposures, and after the flux calibration. Both of these are applied to the raw science exposures.
A sky model is created from selected pixels free of source emission and is subtracted from the science exposures. The coordinate offsets are calculated for each FOV image to align the 6 exposures. Finally, we combined all the exposures by resampling the overlapping pixels to obtain the final data cube.
The final data cube has a mean seeing-limited spatial resolution of $\sim$ 0\farcs78, as estimated from a point-source located in the southern portion of the FOV. The datacube is rotated 35\degree\ so North is up and East to the left. This results into our final data cube containing $432 \times 431$ spaxels, corresponding to 86\farcs4 and 86\farcs2. At the redshift of NGC 5972, the physical area observed is 50$\times$50 kpc$^{2}$, sufficient to cover the entire galaxy and its EELR, as can be seen in Fig. \ref{fig:compositecolor}.
\section{Results}
\label{sec:results}
In this section we present the results from our analysis of the MUSE observations of NGC 5972. The results are organized as follows. We first present the fitting to the stellar component (Sect. \ref{sec:stellarcomp}). In Sect. \ref{sec:emlines} we present an analysis of the ionized gas component by studying the moment maps from the emission lines, the dust extinction and the dominant ionization mechanism. In Sect. \ref{sec:kinematics} we present a kinematic analysis of the stellar and ionized gas components. Finally, in Sect. \ref{sec:lum_hist} we present our analysis to the extended emission line region, comparing the observed emission line ratios to ionization models in order to constrain the required luminosity to ionize the gas to its observed state.
We create a color composite image (Fig. \ref{fig:compositecolor}) by collapsing three separate sections of the data cube, one that encompasses the [OIII] emission line, the second one for the H$\alpha$ emission line, and a third one collapsing the range 7000-9000 \AA\ to represent the stellar continuum. These three images are represented by green, red and white pseudo-colors, respectively. Each color image was stretched with an logarithmic scale before being combined.
This composite image shows the large scale morphology of the ionized gas, as well as the filamentary complex structure that extends from the center to the N and S. The emission line gas forms a double helix shape to the S and an arm that seems to twist on itself towards the W is observed in the N. Low-luminosity tidal tails are observed beyond these bright arms both to the NE and SE.
Narrow band imaging obtained with the 2.1 m telescope at Kitt Peak in Arizona, where V-band continuum has been subtracted from the narrow filter centered at redshifted [OIII], show ionized structure reaching 70\arcsec\ as a lower limit in the S, and up to 81\arcsec\ in the N, this corresponds to 41 and 48 kpc, respectively (Keel, W; priv. comm.). When considering fainter features that are revealed with smoothing this limits can extend up to 76\arcsec\ and 91\arcsec\ for the S and N regions.
\subsection{Stellar Component fitting}
\label{sec:stellarcomp}
To analyze the stellar component of NGC 5972 we use the python implementation of the penalised PiXel-Fitting software
\citep[pPXF;][]{cappellari+2004,cappellari+2017}, together with the MILES single stellar population (SSP) models \citep{vazdekis+2015} as spectral templates for the stellar continuum. The templates have an intrinsic spectral resolution of 2.51 \AA, and were broadened to the wavelength-dependent MUSE resolution before any fits with pPXF were performed. We further adopt the line-spread function (LSF) model of the MUSE spectra as described in \cite{bacon+2017}.
The first step in the analysis is to apply the adaptive Voronoi tessellation routine of \cite{cappellari+2003}, to guarantee a minimum signal-to-noise ratio (SNR) across the entire field-of-view (FOV), in order to ensure reliability of the measurement for the stellar component.
The SNR is measured using the der\_SNR algorithm \citep{stoehr+2008} in the wavelength range 6400 $\leq \lambda $ (\AA) $\leq $ 6500, as there are no strong emission lines present in this spectral window. With this we achieve a minimum SNR of 50, on average per spectral bin, in 1407 spatial bins, in contrast to the 180,000 original pixels. Furthermore, we masked the FOV to exclude spaxels with SNR $< 4$.
The first run of pPXF was unregularised, fitting the wavelength range 4800-7000 \AA. We masked the following emission lines present in this range, using a width of 1800 km/s: H$\beta \lambda 4861$ \AA, [OIII]$\lambda 4958$ \AA, [OIII]$\lambda 5007$ \AA, [NI]$\lambda 5197$ \AA, [NI]$\lambda 5200$ \AA, HeI$\lambda 5875$ \AA, [OI]$\lambda 6300$ \AA, [OI]$\lambda 6363$ \AA, [NII]$\lambda 6547$ \AA, H$\alpha \lambda 6562$ \AA, [NII]$\lambda 6583$ \AA, [SII]$\lambda 6716$ \AA, [SII]$\lambda 6730$ \AA.
We use a fourth-order multiplicative Legendre polynomial to match the overall spectral shape of the data. The purpose of this first fit is to derive the stellar component kinematics. In Fig. \ref{fig:ppxf_example_fit} we show an example of the pPXF fitting to one of the central Voronoi bins.
A fit of elliptical isophotes (Fig. \ref{fig:ellipse_fit}) to the stellar moment 0 map, as obtained from the pPXF fit, shows that the inner region ($\lesssim$ 2\arcsec, Fig. \ref{fig:ellipse_fit_rad}) is best fitted with a different PA ($\sim 15$\degree) and ellipticity ($\epsilon \sim 0.1 $) than the rest of the disk (PA $\sim 3$\degree, $\epsilon \sim 0.25 $).
In Fig. \ref{fig:stellar_flux} we show the integrated flux distribution (moment 0) obtained from collapsing the cube near 9000 \AA\ with a 200 \AA\ range. On Fig. \ref{fig:stellar_moments} we show the stellar component velocity field (moment 1) and velocity dispersion (moment 2) maps from the pPXF fit.
The velocity distribution shows a fairly regular rotating system that reaches velocities of $\pm 150$ km/s along a PA $\sim 10$\degree. However a slightly bent zero-velocity contour and some blueshifted (redshifted) features in the southern (northern) regions indicate the presence of a perturbation to the rotation. While the velocity dispersion map shows a distribution peaking in the center, as expected for a rotating disk, the values of up to $\sim 220$ km/s (slightly higher than the rotation velocities) suggests some turbulence present in the system.
The second pPXF run is regularised to impose a smoothness constraint on the solution, applying the 'REGUL' option on pPXF. For this fit we mask the emission lines and apply an eight-order multiplicative Legendre polynomial, fixing the stellar kinematics to those of the first run to avoid degeneracies between stellar velocity dispersion and metallicity. Weights are applied to every template, which are regularly sampled on an age and metallicity grid that covers 0.03 $<$ Age (Gyr) $<$ 14 and -2.27 $<$ Metallicity (dex) $<$ 0.40. The regularisation allows templates with similar age and metallicity to have smoothly varying weights.
The stellar ages map is shown in Fig. \ref{fig:metalage} where we can observe a stellar population distribution with older populations at the center and younger populations at larger radii.
\subsection{Emission Lines}
\label{sec:emlines}
\subsubsection{Moment maps}
We run one further pPXF fit to the complete unbinned FOV with the purpose of subtracting the stellar continuum from the data cube. We use the unbinned data cube in order to recover the fine spatial structure of the gas component. This is possible due to the high SNR of the emission lines in contrast to the absorption lines. Following the same procedure as above, we masked the emission lines.
A scaled version of the best-fitted stellar continuum is subtracted to every spaxel of the unbinned cube associated to a given bin. We therefore obtain a continuum-free data cube, from which we can recover the emission lines. The total flux of the emission lines are model-dependent estimates based on this pPXF fitting.
The emission lines, in every bin, were then fitted with Gaussian profiles, tying the [OIII]$\lambda4958,5007$ \AA\ and [NII]$\lambda$6548,6562 \AA\ flux ratios, H{$\beta$} and H{$\alpha$} line widths, the relative positions of the emission lines, and the forbidden emission lines width to the [OIII]$\lambda$5007 \AA\ line width, given its considerably higher SNR. This approach does not leave considerable residuals in the other emission lines. This fit was made for all spaxels with SNR $> 4$, where the SNR was calculated in a wavelength range that includes the [OIII]$\lambda$5007 \AA\ forbidden emission line. From this one Gaussian component fit we extract the flux and kinematics for the emission lines as shown in Fig. \ref{fig:moments_onecomp}.
The flux distribution of the gas component shows a complex spatial distribution. The ionized gas extends further out than the stellar component, forming a "helix"-shaped pattern that displays very fine filamentary structure. High SNR ionized gas extends $\sim$25\arcsec\ from the center, while fainter structure is observed beyond this radius, up to 32\arcsec.
The northern trail of gas appears to extend from the center and then curve back towards the nucleus, forming at the same time fine gas filaments towards the NE. In the Southern region the ionized gas appears to extend from the nucleus in two arms. One curves slightly towards the E, and shows higher flux. The other extends from the E of the nucleus, and curves towards the W.
The gas velocity field shows complex kinematics that, while mainly redshifted in the N region and blueshifted in the S, do not appear to be a primarily rotating disk, in contrast to the stellar component. This is supported by the distribution of the velocity dispersion map, which shows a complex distribution that does not resemble the expectation for a purely rotating disk.
\subsubsection{Dust Extinction}
To obtain a map of the extinction by dust in the line of sight we calculate the Balmer decrement (using the H$\alpha$/H$\beta$ emission line ratio). From this, we estimate the total extinction (A$_{V}$), in the V-band, following \cite{dominguez+2013}. We use the average Galactic extinction curve from \cite{osterbrock+2006} assuming an intrinsic value of H$\alpha$/H$\beta = 2.86$ which corresponds to a temperature T$= 10^4$ K and electron density n$_{e} = 10^{2}$ cm$^{-3}$ for case B recombination. The extinction map (Fig. \ref{fig:extinction}) shows the presence of dust in an upside-down V shape with its apex close to the center, and some dust extending along the SE and SW arms. It has been suggested \citep{keel+2015} that these dust lanes were formed by a differentially precessing warped disk, as modeled by \cite{steiman+1988}. We use the derived extinction values to correct the emission line fluxes used in the analysis of Section \ref{sec:lum_hist}.\\
A starlight attenuation map was already created for this galaxy based on HST WFC3 imaging, as presented in Fig. 7 of \citet{keel+2015}. Assuming it represents pure continuum, which is then divided by a smooth model. This map shows almost no starlight attenuation in the SE arm, which suggests that it is on the far side of the system, and only absorbs a small fraction of the starlight behind it.
The three dots observed in Fig. \ref{fig:extinction} in the NW arm match well with the features observed in the starlight attenuation map, implying the filament in that region is in front of most of the starlight. Finaly, a dust feature is observed aligned N to S from 3.5\arcsec\ to 5.6\arcsec, this feature is S of the nucleus and it does not show a counterpart in the Balmer-decrement map, indicating that it may be decoupled from the ionized structure.
\subsubsection{Origin of the ionized gas}
\label{sec:origin_ionization}
We compute emission line ratios from the fluxes obtained from the one-component Gaussian fit, in order to analyse the distribution and origin of the ionized gas. We use the [OIII]/H$\beta$ and [NII]/H$\alpha$ emission line ratios to create a Baldwin, Phillips \& Telervich \citep[BPT;][]{baldwin+1981} diagram, where every spaxel is classified between Star-Forming, Composite, LINER or Seyfert. This color-coded classification is presented in Figures \ref{fig:bpt_1d}-\ref{fig:bpt_2d}, and shows the presence of Seyfert-like ionization along the arms that form the helix pattern, which is surrounded by a mixture of LINER-like and Composite ionization. The [OIII]/H$\beta$ map (Fig. \ref{fig:bpts_gradient}) shows that [OIII] dominates over H$\beta$ in the arms, with higher [OIII] to H$\beta$ ratio inside the arms, falling to a lower ratio quickly outside the filamentary structure. The [NII]/H$\alpha$ map shows the structure inside the arms where H$\alpha$ emission is higher than the [NII]. Some nodes with higher H$\alpha$ emission are observed near the nucleus and in the filamentary structure of the arms.\\
The resolved BPT reveals that AGN ionized gas extends uninterrupted from the central region up to radius $\sim$22\arcsec, and that AGN photoionization is the main ionization mechanism observed in the galactic disk, with Seyfert-like ionization in the arms and mostly LINER-like ionization in the disk outside the arms.
To evaluate the possibility that the ionized region extends over a bicone centered in the nucleus, tracing a large-scale extended NLR, we assume the observed ionization is bounded by the availability of radiation rather than gas. We estimate a required full opening angle for the ionization cones of $\sim 75$\degree\ to encompass the entire EELR. This angle is estimated from the projected angular width of each half of a notional bicone that encompasses the observed ionized regions. Given the projection effects this angle is an upper limit. Making the ionization cones narrower than the observed $\sim 75$\degree\ on each side would require the axis to be closer to our line-of-sight, putting all the EELR features farther from the AGN. In the context of the analysis conducted on Sect. \ref{sec:lum_hist} this would imply at even larger mismatch between the present-day AGN luminosity and that of the ionized clouds.
\subsection{Kinematics}
\label{sec:kinematics}
\subsubsection{Stellar component}
We modelled the kinematics of the stellar component using the \texttt{2DFIT} task included in the package \texttt{$ ^{3D}$Barolo} \citep{diteodoro+2015}, which fits tilted-rings of a chosen thickness, we use a width of 0\farcs4, fixing the center (as the peak of the continuum) and the systemic velocity and varying the rotation velocity, the PA and the inclination for every ring. The best-fit model and the residuals are shown in Fig. \ref{fig:stellar_model}. The residuals, obtained by subtracting the model of our stellar velocity map from the measured one, show a small blueshifted excess S and SE of the nucleus and redshifted E and SW of the nucleus. Some of these features coincide with the region where the arms observed in the gas component are present. The model reaches $\pm 150$ km/s and it has a curved zero-velocity contour, indicative of some perturbation in the disk. Along the different radii, the PA remains close to 6\degree\ varying to $\sim 353$\degree\ at radius $\sim 15$\arcsec and then returning to 6\degree. The inclination remains closer to 41\degree in the inner $\sim 15$\arcsec and increases to $\sim 50$\degree at larger radii. The rotation curve reaches $\sim 150$ km/s at radius 7\arcsec and remains more or less constant until radius 15\arcsec from where it increases steadily up to 220 km/s.
\subsubsection{Gas component}
The moment 1 for the [OIII] emission line reveals complex kinematics that seem to be, at least partially, rotating on a disk. To model this kinematics we utilize the \texttt{3DFIT} task in the \texttt{$ ^{3D}$Barolo} routine, to perform a 3D-fitting to a data cube trimmed on the wavelength-axis to contain only the [OIII] emission line. The fit is done in two stages. For the first stage we leave as free parameters the inclination, major axis PA, circular velocity and velocity dispersion. From this fit we obtain values for the inclination and the PA on each radius. The mean of these parameters over all the rings are 14\degree and 35\degree, respectively. The values from ring to ring do not deviate considerable from the mean values. For the second stage we fix the center to the peak of the continuum, and the inclination (35\degree) is fixed to the value obtained in stage one. We leave the PA as a free parameter but we use the mean value from stage one as the initial guess for the second fit. The radial velocity is fixed at zero at this point, assuming only a rotational component.
The code fits a pure circular rotation model to the data cube, while the non-circular motions can be observed in a residual map (Fig. \ref{fig:3dfit_moments}, and a zoomed-in version in Fig. \ref{fig:oiii_resd_zoom}), in which very complex structures can be observed. The fitted-model shows a large-scale rotation disk that reaches $\sim 120$ km/s. In the inner radius (6\arcsec), a higher velocity gradient is observed reaching $\sim 200$ km/s. This feature can correspond to an inner disk that has been disrupted, given the complex distribution observed in the velocity field. In the nuclear 1\arcsec, a feature can be observed along PA 100\degree, with velocities of $\sim 50$ km/s.
To confirm that these features are not model-dependent we create a position-velocity diagram (PVD), which consists on extracting the flux along a slit from the datacube. The positions are taken relative from the center, and the wavelength is transformed into a velocity offset centered at systemic velocity. In the PVD extracted along the minor axis, two loci can also be observed (Fig. \ref{fig:3dfit_pvds}). Considering the possibility of the near side corresponding to the W side of the disk, as the extinction map indicates, this feature could be explained by a nuclear outflow, we further explore this nuclear outflow by fitting the spectra with two Gaussian components on Appendix \ref{sec:outflow}.
A final feature is observed along the NW and SE arms, which show excess redshift in the approaching side of the galaxy and blueshift in the receeding side, with velocities reaching $\sim 300$ km/s (Fig. \ref{fig:pvd_150}). If we consider that the near side corresponds to the W side of the disk, this feature would correspond to an inflow in the plane of the galaxy disk. However, given the high velocities reached for an inflow, it is possible that, given the uncertainty about the three dimensional distribution of the gas, that it corresponds to a secondary component in the line-of-sight, caused by extraplanar gas related to tidal debris.\\
To further understand the kinematics of the gas in NGC 5972, we extract the spectra from 6\arcsec apertures in different regions of the disk, along the arms (Figures \ref{fig:apertures_1} and \ref{fig:apertures_2}). The spectral windows, centered on the [OIII] emission line show that the ionized gas along the arms is more redshifted (blueshifted) in the receeding (approaching) side than the stellar rotation, while in apertures outside the arms, the gas seems to coincide with the stellar rotation (apertures 23-24 and 4-6). This can also be observed by comparing with the position-velocity diagrams (PVDs) extracted from slits along the major and minor axis (Fig. \ref{fig:3dfit_pvds}), and along PA $\sim 150$\degree, which crosses the SE and NW arms (Fig. \ref{fig:pvd_150}). The arms seem to be dominated by a component that has velocities deviating from the model by about 150 km/s in both the redshifted and blueshifted sides. The spectra of these NW and SE arms (apertures 8-10 and 14-17) show that a secondary component, appearing as an asymmetric wing (or tail) in the line profiles, follows the rotation, indicating that the emission in these apertures is dominated by a secondary kinematic component, which may be emission from extraplanar gas in the line-of-sight.
\subsection{Extended emission line region}
\label{sec:lum_hist}
Having established that the gas from the arms has been ionized primarily by the AGN (See Sect. \ref{sec:origin_ionization}) and given the extension of the ionized clouds, which is large enough to be substantially influenced by light-travel time effects, we can trace the AGN luminosity from the nucleus to $\sim 17$ kpc and assess possible luminosity changes over time.
For this analysis we compare our observed emission line ratios to the spectra obtained from photoionization models to derive a best-fit ionization parameter (defined as the dimensionless ratio of hydrogen-ionizing photon to total-hydrogen densities), and use it to estimate the AGN luminosity required to produce the EELR ionization.
In Fig. \ref{fig:arms_apers} we present the apertures (radius 0\farcs6) where the analysis was carried out. We separated the regions in the arms 'A' (N arm), 'B' (SE arm) and 'C' (SW arm), each covering distances of 11, 12 and 8 kpc to the inner kpc respectively. Furthermore, we add apertures over the northern tidal tail of the 'A' arm, labeled as 'AT', which covers between 11 to 17 kpc. Given the low flux in this region we use apertures of 1.2\arcsec.\ The furthest distance corresponds to $5.5 \times 10^{4}$ ly, located in the A arm. For every aperture we obtain the integrated spectrum, which is fitted with Gaussian profiles to obtain the fluxes for the emission lines H$\beta \lambda 4861$ \AA, [OIII]$\lambda 5007$ \AA, HeI$\lambda 5875$\AA, [OI]$\lambda 6300$ \AA, [NII]$\lambda 6547,6583$ \AA, $H\alpha \lambda 6563$ \AA, [SII]$\lambda 6716, 6731$ \AA.
After creating a continuous sequence of apertures from the center covering the extension of every arm, we fill the vicinity of each aperture with 30 new apertures of the same size. From these new apertures we choose the one with largest SNR, to the corresponding spectra we fitted a two-component Gaussian profile to separate the multiple kinematic components. An example of this can be observed in Fig.~\ref{fig:fit_example}.
To model the emission line ratios observed in our data, we use the photoionization code \texttt{Cloudy} \cite[version 17.02 last described in][]{ferland+2017}, following the method outlined in \cite{treister+2018}. \texttt{Cloudy} resolves the equations of thermal and statistical equilibrium on a plane-parallel slab of gas being illuminated and ionized by a central source. For the central source we consider an spectrum consistent with the observed spectra of local AGN \citep[e.g.,][]{elvis+1994}. This spectrum is described by a broken power law ($L_{\nu} \propto \nu^{\alpha}$), where $\alpha = -0.5$ for E $<$ 13.6 eV, $\alpha = -1.5$ for 13.6 eV $<$ E $<$ 0.5 keV and $\alpha = -0.8$ for E $>$ 0.5 keV.
The ionization parameter is defined as the dimensionless ratio of the density of ionizing photons to the hydrogen density at the illuminated face of the gas cloud:
$$U = \frac{Q(H)}{4\pi r_{0}^{2}n_{H}c}$$
where $r_{0}$ is the distance between the source and the ionized cloud, $n_{H}$ is the hydrogen number density (cm$^{-3}$), and $c$ is the speed of light. $Q(H)$ is the number of ionizing photons per unit time (s$^{-1}$), defined by
$$Q(H) = \int_{\nu_{0}}^{\infty} \frac{L_{\nu}}{h\nu}d\nu$$
where $L_{\nu}$ is the AGN luminosity as a function of frequency, $h$ is the Planck constant, and $\nu_{0} = 13.6$ eV$/h$ is the frequency corresponding to the ionization potential of hydrogen \citep{osterbrock+2006}.
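The relation between $Q(H)$, distance, density, and $U$ above can be evaluated directly. In the short sketch below the input values ($Q(H) = 10^{55}$ photons s$^{-1}$, a 10 kpc distance, $n_H = 100$ cm$^{-3}$) are illustrative placeholders, not fitted values from this work:

```python
import math

# CGS constants
C_LIGHT = 2.9979e10          # speed of light [cm/s]
KPC_CM = 3.0857e21           # 1 kpc in cm

def ionization_parameter(q_h, r0_kpc, n_h):
    """U = Q(H) / (4 pi r0^2 n_H c); q_h in photons/s, r0 in kpc, n_h in cm^-3."""
    r0 = r0_kpc * KPC_CM
    return q_h / (4.0 * math.pi * r0**2 * n_h * C_LIGHT)

# Illustrative values: Q(H) = 1e55 photons/s, cloud at 10 kpc, n_H = 100 cm^-3
u = ionization_parameter(1e55, 10.0, 100.0)
print(f"log U = {math.log10(u):.2f}")   # ~ -3.55, inside the simulated grid
```

For these placeholder inputs the result, $\log U \approx -3.55$, happens to fall inside the grid of simulated ionization parameters described below.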
The first run of simulations assumes solar metallicity, with a grid covering the ionization parameter ($-3.5 < \log U < -2.0$) and the hydrogen density ($1.0 < \log n_{H} < 5.0$). However, these simulations do not cover the entire range of measurements on a BPT diagram. Thus, following \cite{bennert+2006}, we run a series of models changing the metallicity from 1.0 to 4.0 times the solar metallicity, and a third calculation changing only the nitrogen and sulphur abundances from 0.1 to 4.0 $Z_{\sun}$. From these runs, we find that changing the N and S abundances to $1.5 \times Z_{\sun}$ and $1.5 \times Z_{\sun}$, respectively, while maintaining the other elements at their solar values, best covers the parameter space of our data for arms 'A', 'B', and 'C' (see Fig. \ref{fig:cloudy_sims}).
This metallicity, however, does not fit well the fluxes observed for arm 'AT', for which we therefore retain solar metallicity. More details on the metallicity choice can be found in Appendix \ref{sec:metal}.
A model that fits the observations must reproduce all the observed emission line ratios relative to H$\beta$. For each aperture we compare the following observed line ratios scaled to H$\beta$: [OIII], [NII], [SII], H$\alpha$, [OI], to those obtained from the simulations, and choose the combination of ionization parameter and hydrogen density that delivers the best fit. We consider a fit acceptable when the difference between the emission line ratios is less than a factor of two for every line ratio. For [OIII]$\lambda 5007$ \AA\ we require the model to match the observations within 50$\%$. From the models that meet these criteria, we choose the one with the smallest reduced $\chi^2$. Examples of these results along three different apertures, one per arm, are shown in Fig. \ref{fig:cloudy_chisqr}.
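The selection logic just described (factor-of-two agreement for the line ratios, 50\% for [OIII], then the minimum $\chi^2$) can be sketched as follows; the model grid and observed ratios here are dummy stand-ins for the actual Cloudy output, and the $\chi^2$ is a simple unreduced proxy:

```python
def acceptable(model, obs):
    """Factor-of-2 tolerance on all ratios, 50% on [OIII] (all scaled to Hbeta)."""
    for line, r_obs in obs.items():
        r_mod = model[line]
        if line == "OIII":
            if abs(r_mod - r_obs) > 0.5 * r_obs:
                return False
        elif not (0.5 <= r_mod / r_obs <= 2.0):
            return False
    return True

def chi2(model, obs):
    # Simple chi^2-like statistic, standing in for the reduced chi^2 of the fit
    return sum((model[k] - v) ** 2 / v for k, v in obs.items())

def best_model(grid, obs):
    """grid: list of (U, n_H, {line: ratio}) tuples; returns best-fit (U, n_H)."""
    ok = [(u, n, m) for u, n, m in grid if acceptable(m, obs)]
    if not ok:
        return None
    u, n, _ = min(ok, key=lambda t: chi2(t[2], obs))
    return u, n

# Dummy example: two grid points, only the first within tolerance
obs = {"OIII": 10.0, "NII": 1.2, "SII": 0.8}
grid = [(-3.0, 100, {"OIII": 9.0, "NII": 1.0, "SII": 0.9}),
        (-2.0, 300, {"OIII": 3.0, "NII": 0.2, "SII": 0.1})]
print(best_model(grid, obs))   # (-3.0, 100)
```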
Following previous EELR analyses \citep{keel+2012a} we assume a fully ionized gas, so that the hydrogen number density can be taken to equal the electron density (n$_{H}$ = n$_{e}$). The electron density can be constrained from the [SII]$\lambda6716/6731$ \AA\ emission line ratio (hereafter referred to as the [SII] ratio). We use the Python package \texttt{PyNeb}, which computes emission line emissivities \citep{luridiana+2012}, to obtain n$_{e}$ from the observed [SII] ratio, assuming a temperature of $10^4$ K \citep{osterbrock+2006}.
The $\chi^{2}$ maps for the U and $n_{H}$ parameters (Fig. \ref{fig:cloudy_chisqr}) show that for a given U the $n_{H}$ remains roughly constant. Considering that the required bolometric luminosity of the AGN is proportional to $U \times n_{H}$, and since $n_{H}$ is not well constrained by the Cloudy fit, we adopt the electron density from the [SII] ratio as $n_{H}$. However, for completeness, and given that the $n_{H}$ from the Cloudy fit and that from the [SII] ratio do not coincide at the same U value for every aperture, we calculate two different bolometric luminosities: one assuming the density from the [SII] ratio (hereafter 'model 1') and one assuming the hydrogen density from the best-fit Cloudy model (hereafter 'model 2').
Using the obtained ionization parameter and hydrogen density values, and assuming that the distance to the cloud is traced by the projected distance from the central source to the aperture, we can estimate $Q(H)$ for each aperture. Integrating our SED, we then use the obtained $Q(H)$ to estimate the bolometric luminosity required to ionize the gas at each point. \\
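The $Q(H)\rightarrow L_{bol}$ conversion amounts to integrating the broken power-law SED. A minimal numerical sketch is given below; the low-energy end (0.01 eV) and the high-energy cutoff (100 keV) are assumptions of this illustration, since the text does not specify them, and $Q(H)=10^{55}$ s$^{-1}$ is a placeholder:

```python
import math

H_ERG_S = 6.626e-27      # Planck constant [erg s]
EV_ERG = 1.602e-12       # 1 eV in erg

def sed_lnu(nu):
    """Broken power law L_nu (arbitrary normalization), continuous at the breaks."""
    nu0 = 13.6 * EV_ERG / H_ERG_S          # 13.6 eV
    nu1 = 500.0 * EV_ERG / H_ERG_S         # 0.5 keV
    if nu < nu0:
        return (nu / nu0) ** -0.5
    if nu < nu1:
        return (nu / nu0) ** -1.5
    return (nu1 / nu0) ** -1.5 * (nu / nu1) ** -0.8

def log_integral(f, lo, hi, n=5000):
    """Trapezoidal integration of f(nu) on a logarithmic frequency grid."""
    total = 0.0
    lg_lo, lg_hi = math.log(lo), math.log(hi)
    step = (lg_hi - lg_lo) / n
    for i in range(n):
        a = math.exp(lg_lo + i * step)
        b = math.exp(lg_lo + (i + 1) * step)
        total += 0.5 * (f(a) + f(b)) * (b - a)
    return total

nu_min = 0.01 * EV_ERG / H_ERG_S           # assumed low-energy end
nu_ion = 13.6 * EV_ERG / H_ERG_S           # hydrogen ionization edge
nu_max = 1.0e5 * EV_ERG / H_ERG_S          # assumed 100 keV cutoff

# L_bol / Q(H): bolometric energy per ionizing photon for this SED shape
lbol_per_q = (log_integral(sed_lnu, nu_min, nu_max)
              / log_integral(lambda nu: sed_lnu(nu) / (H_ERG_S * nu), nu_ion, nu_max))
q_h = 1e55                                  # illustrative Q(H) [photons/s]
print(f"L_bol/Q(H) = {lbol_per_q:.2e} erg; L_bol ~ {lbol_per_q * q_h:.2e} erg/s")
```

By construction the ratio cannot fall below the energy of one ionization-edge photon ($h\nu_0 \approx 2.2\times10^{-11}$ erg); for this SED shape it comes out a factor of several larger.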
In Fig. \ref{fig:lboldist} we show the bolometric luminosity as a function of distance to the nucleus, using the electron density from model 1 (blue shaded area) and the electron density from model 2 (red shaded area).
The errors are derived from Monte Carlo (MC) simulations, in which random noise is added to the observed emission line ratios. The noisy ratios are then fitted with the technique described above, and the errors are taken as the standard deviation (stdv) of the results. For model 2, we take the stdv of the U and n$_{H}$ parameters. For model 1, the noise is applied first to the [SII] ratio and the error is obtained as the stdv of the electron density; with this value we fix the density and fit the ionization parameter in the same manner as before.
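The error-propagation step can be sketched generically. The inversion function below is a toy stand-in, not the PyNeb curve, chosen only to mimic the steep [SII]-ratio-to-density response near a ratio of unity; all numbers are illustrative:

```python
import random
import statistics

def mc_error(value, frac_err, fit, n_trials=500, seed=1):
    """Propagate Gaussian noise on an observed quantity through a fit function.

    `fit` maps the noisy observable to the derived parameter; the quoted
    error is the standard deviation of the trial results.
    """
    rng = random.Random(seed)
    trials = [fit(rng.gauss(value, frac_err * value)) for _ in range(n_trials)]
    return statistics.mean(trials), statistics.stdev(trials)

# Toy stand-in for the [SII]-ratio -> n_e inversion (NOT the PyNeb curve):
# near ratio ~ 1.0 the density response is steep, so small ratio errors
# blow up into large density errors.
toy_inversion = lambda ratio: 10 ** (3.0 - 2.0 * (ratio - 1.0) / 0.4)
mean_ne, err_ne = mc_error(1.0, 0.05, toy_inversion)
print(f"n_e = {mean_ne:.0f} +/- {err_ne:.0f} cm^-3")
```

Even a 5\% ratio error yields a large fractional density error in this regime, which is the caveat discussed for the [SII] diagnostic later in the text.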
The results for both models show a clear trend of increasing luminosity with distance from the center. The L$_{bol}$ values derived from the two density estimates follow the same trend and overlap at some radii. For model 1 we see a change in luminosity of $\sim 124$ times between 1 kpc and 17 kpc, which, in terms of light travel time, corresponds to $5 \times 10^4$ years; for model 2 the change is $\sim 160$ times.
The maximum bolometric luminosity is observed in the 'AT' arm, and reaches $4 \times 10^{46}$ erg s$^{-1}$ at $\sim 17$ kpc from the center for model 1, and $10^{47}$ erg s$^{-1}$ for model 2.
The AGN in NGC 5972 has a reported L(15-55 keV) = $1.0 \times 10^{43}$ erg s$^{-1}$ \citep{marchesi+2017} and L(FIR) $< 5.5 \times 10^{43}$ erg s$^{-1}$ \citep{keel+2012a}, corresponding to an ionizing luminosity of L$_{ion} > 7.8\times10^{43}$ erg s$^{-1}$.
We estimate the present-day bolometric luminosity from the [OIII] emission line in the apertures closest to the center to be $\sim 2 \times 10^{44}$ erg s$^{-1}$, applying the correction factor of \citet[][]{heckman+2004}. Our results are in agreement with the values reported by \cite{keel+2012a} for this object, who estimate L$_{ion} < 9.8 \times 10^{46}$ erg s$^{-1}$ at 15-18 kpc, in contrast with the present-day L$_{ion} > 7.8 \times 10^{43}$ erg s$^{-1}$ for the AGN in its current accretion state.
\section{Discussion}
\label{sec:discussion}
\subsection{Morphology and kinematics}
The spatial resolution provided by the MUSE observations allows us to spatially resolve and characterize the EELR.
The most striking features of this EELR are the arms seen to the North and South of the center. Additionally, faint streams of gas reminiscent of tidal tails are observed towards the NE and SE. The spectra of these arms show that they are dominated by AGN-photoionized gas. The kinematic analysis reveals a complex structure, with a component that follows ordinary galaxy disk rotation reaching 120 km/s. The NW and SE arms, in addition, form a clearly distinct kinematic component. While the kinematics of the arms follow the sense of rotation of the disk, with blueshifted (redshifted) emission on the SE (NW), they show larger offsets from a simple rotation model. Given the inclination of the disk, this feature could represent an outflow if the blueshifted emission is on the near side of the galaxy. The velocities reached by this feature ($\sim 300$ km/s) are consistent with an outflow in a low- or moderate-power source. However, another possibility is that this feature is an additional component along the line of sight, due to extraplanar gas.
Finally, we identify a kinematically decoupled inner disk (radius 6\arcsec, peak velocity $\sim 200$ km/s). Inside the disk, in the inner 1\arcsec~ we observe a 50 km/s feature along PA 100\degree\ that could be an inflow if the NE side of the galaxy is the near side.
Given the presence of tidal features, and the inner disk that appears kinematically decoupled from the large-scale disk, it is possible that this galaxy experienced a merger or close-encounter event in the past. This event could have tidally disrupted the arms into the currently observed shape. However, the undisturbed distribution of stellar populations strongly argues against this event being a major merger.
\subsection{Luminosity history and variability}
Previous luminosity-history analyses of other sources have typically compared the luminosity of the current AGN to that of distant gas clouds $\sim 10^{4-5}$ light years from the center. In the case of \citet{gagne+2014} the analysis was made using a long-slit observation of the ``teacup'' galaxy, covering the change from the center to the EELR along the slit.
In the case of NGC 5972 the EELR extends from the center to $\sim 10^{4.7}$ ly. This allows us to carry out a tomographic analysis and thus trace the continuous change in luminosity over the past $\sim 10^{4}$ yr.
Using the photoionization code \texttt{CLOUDY}, we have created models that cover a range of ionization parameter, metallicity and hydrogen density, which we matched to the observed spectra from different apertures along the arms of ionized gas. To obtain a bolometric luminosity based on these models, some caveats must be taken into consideration:
a) \textit{Geometry:} we have assumed that the distance from the center to each ionized gas cloud corresponds to the projected distance in the plane of the sky between the two points (r$_{proj}$). However, it is possible that the ionized gas is not in the plane of the galaxy disk. In this case the true distance corresponds to r$_{proj}/\sin(i)$, where $i$ is the inclination of the cloud with respect to the galaxy plane. Considering this inclination, the difference in epochs between the currently observed L$_{bol}$ and that inferred from the cloud emission corresponds to:
$$ \Delta t = \frac{r_{proj}}{c\,\sin(i)} \left(1-\cos(i)\right)$$
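The size of this geometric correction is easy to gauge numerically; the 17 kpc distance matches the outermost 'AT' aperture, while the 30\degree\ inclination is purely an assumed example value:

```python
import math

KPC_CM = 3.0857e21       # 1 kpc in cm
C_CM_S = 2.9979e10       # speed of light [cm/s]
YR_S = 3.156e7           # 1 yr in s

def time_offset_yr(r_proj_kpc, incl_deg):
    """Epoch offset for a cloud tilted out of the sky plane:
    Delta t = r_proj / (c sin i) * (1 - cos i)."""
    i = math.radians(incl_deg)
    r = r_proj_kpc * KPC_CM
    return r / (C_CM_S * math.sin(i)) * (1.0 - math.cos(i)) / YR_S

# Illustrative: outermost aperture (~17 kpc) tilted 30 deg out of the plane
print(f"{time_offset_yr(17.0, 30.0):.1e} yr")   # ~1.5e4 yr
```

Even a moderate 30\degree\ tilt shifts the inferred epoch by $\sim 1.5\times10^{4}$ yr, comparable to the timescales quoted in this section, while the offset vanishes as $i \to 0$.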
b) \textit{Density:} In the case of model 1, where the gas density was derived from the [SII] emission line ratio, it is important to consider that for electron density values between $\sim 100-1000$ cm$^{-3}$ the slope in the conversion curve is steep \citep{osterbrock+2006} and thus small changes in the [SII] ratios can translate into largely different electron density values.
c) \textit{Metallicity:} In this analysis we assume abundances close to solar, which are sufficient to model the physical conditions of NLRs \citep[as suggested by][]{kraemer+2000}: solar values with mildly scaled N and S abundances for arms 'A', 'B' and 'C', whose line ratios do not fit within our grid of simulations at purely solar metallicity, and purely solar abundances for arm 'AT' (see Appendix \ref{sec:metal}).
Taking this into consideration, we obtain the bolometric luminosity change versus light travel time along the cloud, based on the best-fit CLOUDY model. To test consistency with previous analyses, in Figure \ref{fig:Q_plot} we show a comparison of the required rate of ionizing photons (Q) from our CLOUDY models with the values obtained by \cite{keel+2017} from recombination balance, which represent lower limits. There is good agreement between the two methods.
For model 1, the apertures located in the bright arms ('A'-'C') show an increase of $\sim 40$ times in inferred bolometric luminosity between 1.2 and 10 kpc. The fainter tidal tail, which covers between 11 and 17 kpc, shows that this difference can reach a factor of 125, with bolometric luminosities of $\sim 5\times 10^{46}$ erg/s at $\sim 17$ kpc. This external region was not covered previously in \cite{keel+2017}.
We compare the largest bolometric luminosities obtained from our analysis with the current AGN luminosity as derived from the WISE MIR and FIR luminosity, which corresponds to a bolometric luminosity of $2\times 10^{44}$ erg/s \citep{keel+2017}. This implies a decrease of 85 times when considering only the 'A'-'C' arms, over the past $\sim 3\times 10^{4}$ years, or of $\sim 250$ times when including the fainter 'AT' arm, over the past $5\times 10^{4}$ years. For model 2, we see a difference of 80 between 1.2 and 10 kpc and of 160 between 1.2 and 17 kpc, while the ratio between the largest and the current bolometric luminosity for this model is 790.
Support for order-of-magnitude variations in the AGN accretion state on $10^{5-6}$ yr timescales includes simulations \citep[][]{novak+2011,gabor+2013,yuan+2018}, observational arguments \citep[e.g.,][]{schawinski+2015,sartori+2018}, and theoretical models \citep[][]{martini+2003,king+2015}. It is possible that the observed short-timescale variability is caused by rapid AGN duty cycles \citep[{\it ``flickering''};][]{schawinski+2015}. Possible scenarios that could explain the short timescales for these cycles include:
a) {\it Feedback-regulated BH accretion:} simulations suggest changes in the character of the accretion over time, from well-separated sharp bursts to chaotic, stochastic accretion. These bursts are needed to prevent gas pile-up and can be caused by the interaction of radiation pressure and winds with the galactic gas. The bursts of activity are followed by a rapid shutdown on timescales of $\sim 10^{5}$ yr \citep[][]{novak+2011,ciotti+2010}.
b) {\it Chaotic cold accretion}, whereby the AGN flickering can be caused by cold infalling gas that tends to fragment and fall as large discrete clumps. These clouds can condense from a hot halo due to thermal instabilities, losing angular momentum and falling onto the SMBH on timescales of $\sim 10^{5}$ yr \citep[][]{gaspari+2013,king+2015}.
c) {\it Sharply truncated accretion disk:} \cite{inayoshi+2016} suggest, based on nuclear starburst (SB) models \citep{thompson+2005}, that the growth of SMBHs above a few $10^{10}$ M$_{\odot}$ is stunted by small-scale physical processes. If a high accretion rate is achieved, vigorous star formation in the inner $\sim 10-100$ pc can deplete most of the gas, causing the accretion rate to decrease rapidly (by a factor of $100-1000$).
In general, it remains unclear whether the main driver of the variability is the fueling mechanism at large scales or instabilities in the accretion disk.\\
We further compare our results to the framework for AGN variability presented by \cite{sartori+2018}, which links the variability over a wide range of timescales. We calculate the total variability of the AGN in NGC 5972 as the difference between the apertures closest to and farthest from the center, which translates into a magnitude difference $\Delta$m $= 9.8$ at a time lag $\tau = 5\times 10^{4}$ yr \citep[Eq. 1 in][]{sartori+2018}. Comparing this value with the structure function \citep[SF; Fig. 2 in][]{sartori+2018}, which for the Voorwerpjes covers $\Delta$m $\sim 6\times 10^{-1} - 6\times 10^{1}$ and time lags of $\sim 10^{4} - 10^{5}$ yr, we find that the total variability of NGC 5972 falls well within the Voorwerpjes region.
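For reference, the conversion from a luminosity ratio to a magnitude difference is the standard one; the sketch below simply inverts the quoted $\Delta$m $= 9.8$, which corresponds to a luminosity ratio of $\sim 8\times10^{3}$ between the extremes:

```python
import math

def delta_mag(l_bright, l_faint):
    """Magnitude difference for a luminosity ratio: dm = 2.5 log10(L1/L2)."""
    return 2.5 * math.log10(l_bright / l_faint)

# A factor ~8300 luminosity drop corresponds to dm ~ 9.8 mag,
# the total variability quoted above for NGC 5972.
print(round(delta_mag(8.3e3, 1.0), 1))   # 9.8
```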
Furthermore, we calculate $\Delta$m over the entire light travel time along the cloud, computing the magnitude difference between each radius (r$_{i}$) and the next (r$_{i+1}$), and obtain a distribution that mostly follows the SF from the light curve simulations of \cite{sartori+2018}, as can be observed in Fig. \ref{fig:SF}. Notably, this distribution fills a gap between data points from optical changing-look quasars, the quasar structure function, and Voorwerpjes derived from different methods.
\section{Summary and conclusions}
\label{sec:summary}
In this work we present integral field spectroscopic VLT/MUSE observations for the nearby active galaxy NGC 5972 which shows a prominent EELR. The combination of the uninterrupted presence of ionized gas from the nucleus out to $\sim 17$ kpc and the spatial resolution and coverage of the MUSE observations has allowed us to study in detail the characteristics of this object. Our findings can be summarized as follows:
\begin{enumerate}
\item We detect an EELR that extends over $\sim 11$ kpc, with a fainter tidal tail extending between $11-17$ kpc. The morphology of this region resembles a double-helix shape with highly filamentary structure. The analysis of the gas excitation through BPT diagnostic diagrams shows that it is consistent with AGN photoionization.
\item The kinematics disclosed by the emission lines show a complex scenario, with multiple components evidenced by the PV diagrams and the broad, often double-peaked spectral profiles. We find evidence for a component that indicates disk rotation. The offsets from the systemic velocity are higher in the arms, reaching 300 km/s in the SE and NW regions, compared with the disk rotation, which reaches velocities of only $\sim$120 km/s. These features can be interpreted as extraplanar gas connected to the tidal debris. A final component is observed as a kinematically decoupled inner disk, in the inner 6\arcsec, which contains an outflow along the minor axis reaching $\sim 180$ km/s and appears to be dragging gas rotating in the large-scale disk.
\item The faint tidal tails of ionized gas observed to the NE and SE may also hint at a past merger event. EELRs are often associated with events of this type, as a merger can create tidal tails that are then illuminated and ionized by the AGN.
\item We use the photoionization code \texttt{CLOUDY} to generate a grid of models covering a range of ionization parameter and hydrogen density values, which we fit to each aperture of an array that covers 1-17 kpc of the extended ionized gas. The bolometric luminosities derived from this analysis increase systematically with radius from the center.
This implies a decrease in AGN luminosity over time: by a factor of 40 to 120 for model 1 (gas density derived from the [SII] line ratio), and by a factor of 80-160 for model 2 (gas density fitted with Cloudy), over the past $3-5\times 10^{4}$ years. These variability amplitudes and timescales are in good agreement with previous luminosity history analyses \citep[e.g.,][]{lintott+2009,keel+2012a,gagne+2014} and AGN variability models \citep[e.g.,][]{sartori+2018}.
\end{enumerate}
The extension of the ionized cloud in NGC 5972 makes it an ideal laboratory for a comprehensive tomographic analysis of its EELR. It allows us to probe the AGN variability over a continuous $\sim 10^{4}$ yr timescale, thanks to the light travel time. This timescale is significantly longer than human timescales, and therefore provides a unique opportunity to fill in the timescale gap for studies of AGN variability \citep[e.g.,][]{sartori+2018}.
The dramatic change in luminosity observed in the AGN of NGC 5972, as well as in other similar objects \citep[e.g.,][]{lintott+2009,gagne+2014,keel+2017},
suggests a connection to similarly dramatic changes in the AGN accretion state.
The results presented are consistent with the scenario described in \cite{schawinski+2015}, where AGN duty cycles ($10^{7-9}$ yr) can be broken down into shorter ($10^{4-5}$ yr) phases. Probing longer timescales would require larger extensions of ionized gas and larger FOV coverage with MUSE.
\section*{Acknowledgements}
We acknowledge support from FONDECYT through Postdoctoral grants 3220751 (CF) and 3200802 (GV), and Regular grants 1190818 (ET, FEB) and 1200495 (ET, FEB); ANID grants CATA-Basal AFB-170002 (ET, FEB, CF), FB210003 (ET, FEB, CF) and ACE210002 (CF, ET and FEB); Millennium Nucleus NCN19\_058 (TITANs; ET, CF); and Millennium Science Initiative Program – ICN12\_009 (FEB).
WPM acknowledges support by Chandra grants GO8-19096X, GO5-16101X, GO7- 18112X, GO8-19099X, and Hubble grant HST-GO-15350.001-A. D.T. acknowledges support by DLR grants FKZ 50 OR 2203.
\bibliography{refs}{}
\bibliographystyle{aasjournal}
\appendix
\section{Nuclear Outflow}
\label{sec:outflow}
We perform a two-component Gaussian fit to the spaxels in the central 2\arcsec\ where we observe an outflow. The velocity maps and example spectra for each component (narrow and broad) are shown in Fig. \ref{fig:double_comp_outflow}. The outflow is extended along PA $\sim 100$\degree, which corresponds to the minor axis of the large-scale rotation. The flux is obscured on the E side, as shown by the magenta contours that mark the extinction. Red- and blueshifted velocities are observed in both the narrow and the broad component, with the broad component reaching larger velocities. The observed velocities, however, are asymmetric: the blueshifted region reaches velocities of 90 km/s and 180 km/s for the narrow and broad components, respectively, while the redshifted region shows velocities near systemic and of 50 km/s.
The outflow seems to extend along the same axis of the radio lobes \citep[][]{condon+1988}. It is possible that, at least partially, the receding outflow lies behind the galactic disk. Therefore, it could be affected by the extinction on the disk. In this scenario, the approaching section of the outflow would be, at least partially, above the disk in our line of sight. The velocity profiles extracted along PA 110\degree\ for both components (Fig. \ref{fig:outflow_vel_profile}) show that both follow similar patterns with the broad component reaching larger velocities, therefore it is possible that the main component of the outflow, represented by the broad component, is dragging along at least part of the gas that follows the main-disk rotation.
\section{Metallicity} \label{sec:metal}
The metallicity chosen for our final CLOUDY models comes from creating a grid of models with different metallicities. In Fig. \ref{fig:metallicity_bpt} we show, for an example aperture along each of arms 'A' and 'AT', models with solar and 1.5 times solar metallicity, as well as our custom metallicity in which only the N and S abundances were changed (scaled by 1.5 and 1.2 times their solar values) based on their positions on the BPT diagram. As can be observed in Fig. \ref{fig:metallicity_bpt}, for apertures in arm 'A' (and likewise for arms 'B' and 'C'), a simple increase or decrease of the overall metallicity was not enough to achieve a good fit, which is why we opted for scaling N and S. For arm 'AT', however, this change resulted in a worse fit, which is why we decided to keep the metallicity solar. It is important to remark that this change in metallicity yielded very similar U values but changed the density (as can be observed in Fig. \ref{fig:cloudy_sims}), a problem we avoid by adopting the density obtained directly from the [SII] ratio in model 1.
Title: Scalar Weak Gravity Conjecture and Inflationary Models
Abstract: In [arXiv:2208.09842], Yuennan and Channuie examined four inflation models, namely composite NJL inflation (NJLI), glueball inflation (GI), super Yang-Mills inflation (SYMI), and orientifold inflation (OI), from the perspective of the further refining dS swampland conjecture (FRSDC). They found that all models violate the dS swampland conjecture (DSC) but are compatible with the FRSDC through manual adjustment of its free parameters. In this article, we check each of the mentioned inflation models against two other conjectures of the swampland program: the scalar weak gravity conjecture (SWGC) and the strong scalar weak gravity conjecture (SSWGC), studying the simultaneous compatibility of each model with these two conjectures. Despite being consistent with the FRSDC, the models are not compatible with these conjectures in all regions; the conjectures are satisfied only in a specific region. Also, due to the presence of the constant parameter ($\phi_{0}$) in the higher-order derivatives, the (SYMI) and (OI) are, among all the models, the most compatible with the conjectures of the swampland program and can provide the greatest degree of satisfaction of all of them. They can therefore be suitable and accurate inflation models for a more profound examination of the development of the universe. We determined a particular region where these models are compatible with the (FRSDC), (SWGC), and (SSWGC) simultaneously.
https://export.arxiv.org/pdf/2208.13093
\begin{center}
\Large{\bf Scalar Weak Gravity Conjecture in Super Yang-Mills Inflationary Model}\\
\small \vspace{1cm} {\bf Jafar Sadeghi$^{\star}$\footnote {Email:[email protected]}}, \quad
{\bf Mohammad Reza Alipour $^{\star}$\footnote {Email:[email protected]}}, \quad
{\bf Saeed Noori Gashti$^{\star}$\footnote {Email:[email protected]}}, \quad
\\
\vspace{0.5cm}$^{\star}${Department of Physics, Faculty of Basic
Sciences,\\
University of Mazandaran
P. O. Box 47416-95447, Babolsar, Iran}\\
\small \vspace{1cm}
\end{center}
\newpage
\tableofcontents
\section{Introduction}
The swampland program has recently been put forward to test string theory by examining which effective low-energy theories are consistent with quantum gravity.
In recent years, many efforts have been made to develop a theory of everything, and string theory is perhaps the best-known candidate on this route.
Since string theory is formulated very explicitly, a variety of cosmological consequences can be expected from it.
As a result, a wide range of works in the literature takes an in-depth look at the cosmological implications of string theory\cite{b,e,i,t}.
String theory admits a vast number of possible vacua, which is one of the intriguing aspects of its cosmology; the set of these vacua forms the string theory landscape.
The important question in this regard is which effective low-energy theories are compatible with string theory\cite{b,e,i,t}.
For this purpose, the swampland program was introduced. It comprises many conjectures, including the weak gravity conjecture (WGC), the dS and AdS conjectures, the SWGC, the SSWGC, the TCC, etc.\cite{a,b,c,d,e,f,h,i,j,k,l,m,n,o}.
The collection of effective low-energy theories compatible with quantum gravity lives in the landscape. A broader area surrounds the landscape: the swampland, containing the theories that are incompatible with quantum gravity.
Many researchers regard string theory as the theory that determines quantum gravity.
Therefore, effective low-energy theories compatible with the swampland conjectures are aligned with quantum gravity, which can be of great help in the long-standing quest for quantum gravity. Much work on the swampland program already exists in the literature; see Refs.~\cite{a,b,c,d,e,f,h,i,j,k,l,m,n,o,p,q,r,s,t,tt,ttt,u,v,w,x,y,z,aa,bb,cc,dd,ee,ff,gg,hh,ii,jj,kk,ll} for further study.
The swampland program has played an important role in identifying phenomena consistent with quantum gravity, and it has been applied in various areas of physics, including black holes, inflation, and dark energy. Recently, many refinements have been made to the program that can help address cosmological problems; among them are the de Sitter and refined de Sitter swampland conjectures, which use derivatives of the scalar potential to identify models compatible with quantum gravity \cite{1,2}. One considers the four-dimensional theory of real fields $\varphi^i$ coupled to gravity, whose dynamics are controlled by a scalar potential $V(\varphi^j)$ and whose action is as follows \cite{2,3},
\begin{equation}\label{eq1}
S=\int_{4D} d^4x \sqrt{-g}\left[-\frac{1}{2}M_p^2 R+\frac{1}{2}g^{\mu\nu}h_{ij} \partial_{\mu}\varphi^i \partial_{\nu}\varphi^j-V\right]
\end{equation}
where $g$ is the determinant of the four-dimensional metric, $M_p$ is the Planck mass, $R$ is the Ricci scalar in four dimensions, and $h_{ij}$ is the metric of the field space. With the action \eqref{eq1} we can examine different phenomenological models, such as inflation.
Among the conjectures of the swampland program used to investigate cosmology are the de Sitter and refined de Sitter swampland conjectures, which state that effective theories of quantum gravity lying in the landscape must satisfy at least one of the following constraints \cite{4,5},
\begin{equation}\label{2}
|\nabla V|\geq\frac{c_{1}}{M_{p}}V, \hspace{12pt} \min(\nabla_{i}\nabla_{j}V)\leq -\frac{c_{2}}{M_{p}^{2}}V
\end{equation}
For $V>0$, the above conditions can be rewritten in terms of the slow-roll parameters as follows,
\begin{equation}\label{3}
\sqrt{2\epsilon_{V}}\geq c_{1} ,\hspace{12pt} \mathrm{or} \hspace{12pt}\eta_{V}\leq -c_{2}
\end{equation}
where $c_1$ and $c_2$ are both positive and of order one, i.e., $c_1=c_2=\mathcal{O}(1)$; the first inequality in Eq.~\eqref{2} is the original swampland conjecture. Recently, David Andriot and Christoph Roupec combined the two conditions of the refined de Sitter swampland conjecture into a single statement, called the further refining de Sitter swampland conjecture. It states that an effective low-energy theory of quantum gravity described by the action \eqref{eq1} must satisfy the following relation \cite{2,3},
\begin{equation}\label{4}
\bigg(M_{p}\frac{|\nabla V|}{V}\bigg)^{q}-aM_{p}^{2}\frac{\min(\nabla_{i}\nabla_{j}V)}{V}\geq b,
\end{equation}
where $a+b=1$, $a,b>0$, and $q>2$; here $a$, $b$, and $q$ are free constant parameters that constrain this conjecture. Many inflationary models have been investigated using it. Its advantage over the older formulation is that, unlike the refined dS conjecture, it is not in conflict with slow-roll single-field inflationary models \cite{3}.
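As a quick numerical illustration of the condition above, consider a simple quartic potential $V=\lambda\varphi^4$ (a hypothetical example, not one of the four models studied in this paper); for a single field with $V''>0$, $\min(\nabla_i\nabla_j V)$ reduces to $V''$:

```python
def frsdc_lhs(v, dv, d2v, a=0.5, b=0.5, q=3.0, m_p=1.0):
    """LHS of the further-refined dS conjecture,
    (M_p |V'|/V)^q - a M_p^2 V''/V, for a single-field potential with V > 0."""
    return (m_p * abs(dv) / v) ** q - a * m_p ** 2 * d2v / v

# Illustrative check for V = lam * phi^4 at phi = 1 M_p (reduced Planck units)
lam, phi = 1e-3, 1.0
v, dv, d2v = lam * phi**4, 4 * lam * phi**3, 12 * lam * phi**2
print(frsdc_lhs(v, dv, d2v) >= 0.5)   # True: LHS = 4^3 - 6 = 58 >= b
```

Note that $\lambda$ cancels out of both terms, so the condition depends only on $\varphi/M_p$ and the free parameters $a$, $b$, $q$.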
Another important conjecture of the swampland program is the weak gravity conjecture (WGC), which states that gravity is the weakest force.
Palti generalized the WGC and showed that scalar-field forces are stronger than gravity \cite{6,7}. Considering a particle $h$ with mass $m$ coupled to a light scalar $\varphi$, whose mass is a function of the scalar, the scalar weak gravity conjecture (SWGC) states that the mediated force is stronger than gravity; assuming $m^2=V^{\prime \prime}=\frac{\partial^2 V}{\partial \varphi^2}$, we have the following condition for the SWGC,
\begin{equation}\label{eq5}
(V^{(3)})^2\geq \frac{(V^{(2)})^2}{M_p^2},
\end{equation}
where the superscript in parentheses denotes the order of the derivative with respect to $\varphi$. Also, Eduardo Gonzalo and Luis E. Ibáñez proposed a strong version of the SWGC, the SSWGC \cite{8}, which states that the potential of any canonically normalized real scalar $V(\varphi)$ must satisfy, for any value of the field, the restriction:
\begin{equation}\label{eq6}
2(V^{(3)})^2 - V^{(2)}V^{(4)}\geq \frac{(V^{(2)})^2}{M_p^2}.
\end{equation}
In this article, we test the considered inflation models against the SWGC and SSWGC conjectures to find models compatible with quantum gravity.
According to the above explanations, the article is organized as follows.\\
In Section 2, we review the inflation models (NJLI), (GI), (SYMI) and (OI) in four subsections.
In Section 3, we confront these inflationary models with two conjectures of the swampland program, i.e., the SWGC and SSWGC. We discuss the compatibility or incompatibility of each model with the mentioned conjectures, determine the consistent regions, and compare the results. Finally, we present the conclusions in Section 4.
\section{Overview of inflationary models}
In this section, following \cite{a}, we briefly introduce four inflation models: composite NJL inflation (NJLI), glueball inflation (GI), super Yang-Mills inflation (SYMI), and orientifold inflation (OI). We review the results on each model's compatibility with the two swampland conjectures described in \cite{a}. In the next section, using the potential of each model, we check two further conjectures of the swampland program, namely the (SWGC) and (SSWGC). Finally, we present the results in full and identify the model most compatible with all the stated conjectures, and thus best suited for examining the development of the universe.
\subsection{Model I: NJLI}
The action of the inflationary model, in which the inflaton is non-minimally coupled to gravity, is defined as follows in the Jordan frame \cite{a,100},
\begin{equation}\label{7}
\begin{split}
&S_{J}=\int d^{4}x\sqrt{-g}\bigg(-\frac{1}{2}M_{p}^{2}R+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi-\frac{\xi R}{2}\Big[\varphi^{2}-\frac{\upsilon^{2}}{2}\Big]-V_{J}(\varphi)\bigg)\\
&V_{J}(\varphi)=-\frac{1}{2}m^{2}_{\varphi}\varphi^{2}+\frac{1}{2}\lambda\varphi^{4},
\end{split}
\end{equation}
where $\upsilon$ and $\varphi$ denote the vacuum expectation value and the inflaton field, respectively, and the index (J) indicates the Jordan frame. The above action can be brought to a minimally coupled form in the Einstein frame by applying a conformal transformation together with a new canonically normalized field. The conformal transformation is expressed in the following form \cite{100,101},
\begin{equation}\label{8}
\begin{split}
\widetilde{g}_{\mu\nu}=\Omega^{2}g_{\mu\nu}=\bigg(1+\frac{\xi(\varphi^{2}-\upsilon^{2}/2)}{M_{p}^{2}}\bigg)g_{\mu\nu}.
\end{split}
\end{equation}
The action in equation (7) is rewritten in the Einstein frame, where the index (E) labels Einstein-frame quantities,
\begin{equation}\label{9}
\begin{split}
S_{E}=\int d^{4}x\sqrt{-g}\bigg(-\frac{1}{2}M_{p}^{2}R+\frac{1}{2}\Omega^{-4}\Big(\Omega^{2}+\frac{6\xi\varphi^{2}}{M_{p}^{2}}\Big)g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi-U(\varphi)\bigg)\\
\end{split}
\end{equation}
where
\begin{equation}\label{10}
\begin{split}
&\Omega^{2}=\bigg(1+\frac{\xi(\varphi^{2}-\upsilon^{2}/2)}{M_{p}^{2}}\bigg)\\
&U(\varphi)\equiv\Omega^{-4}V_{J}(\varphi)
\end{split}
\end{equation}
According to \cite{a}, introducing a new canonically normalized scalar field, we will have,
\begin{equation}\label{11}
\begin{split}
\frac{1}{2}g^{\mu\nu}\partial_{\mu}\chi(\varphi)\partial_{\nu}\chi(\varphi)=\frac{1}{2}\big(\frac{d\chi}{d\varphi}\big)^{2}g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi,
\end{split}
\end{equation}
where
\begin{equation}\label{12}
\begin{split}
\big(\frac{d\chi}{d\varphi}\big)=\sqrt{\Omega^{-4}\Big(\Omega^{2}+\frac{6\xi\varphi^{2}}{M_{p}^{2}}\Big)}.
\end{split}
\end{equation}
In the limit $\xi\varphi^{2}\ll M_{p}^{2}$, i.e., for small field values, the potential for the new field reduces to that of the original field; this no longer holds for $\xi\varphi^{2}\gg M_{p}^{2}$. In the latter regime, the field $\varphi$ is expressed in terms of the new field $\chi$ in the following form \cite{a,100},
\begin{equation}\label{13}
\begin{split}
\varphi\simeq\frac{M_{p}}{\sqrt{\xi}}\exp\Big(\frac{\chi}{\sqrt{6}M_{p}}\Big).
\end{split}
\end{equation}
Thus, the effective potential is also expressed as follows,
\begin{equation}\label{14}
\begin{split}
U(\chi)\simeq\frac{\lambda M_{p}^{4}}{2\xi^{2}}\bigg(1+\exp\Big[-\frac{2\chi}{\sqrt{6}M_{p}}\Big]\bigg)^{-2}.
\end{split}
\end{equation}
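As a cross-check, Eq. (14) can be verified numerically: substituting the large-field solution (13) into $U=\Omega^{-4}V_{J}$ reproduces the quoted effective potential. In this sketch of ours the mass and vev terms are neglected (a large-field assumption) and the parameter values are arbitrary choices for the test, with $M_p=1$.

```python
import math

# Hypothetical parameter values for the check; Mp = 1.
lam, xi = 0.1, 50.0
s6 = math.sqrt(6)

def U_direct(chi):
    """Omega^{-4} V_J evaluated on the large-field solution (13),
    neglecting the mass and vev terms (large-field assumption)."""
    phi = math.exp(chi / s6) / math.sqrt(xi)
    V_J = 0.5 * lam * phi**4
    Omega2 = 1.0 + xi * phi**2
    return V_J / Omega2**2

def U_formula(chi):
    """The effective potential of Eq. (14)."""
    return (lam / (2 * xi**2)) * (1 + math.exp(-2 * chi / s6))**(-2)
```

The two evaluations agree to machine precision for any $\chi$, since $\xi\varphi^{2}=e^{2\chi/\sqrt{6}}$ makes the equality exact.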
The authors of \cite{a} confronted this inflationary model with the dS swampland conjecture and found strong tension, since $C_{1}=C_{2}\neq\mathcal{O}(1)$ \cite{a}. They therefore checked the model against another conjecture, the further refining dS swampland conjecture (FRDSSC), and showed that, by manually adjusting its free parameters $a$, $b$ and $q$, the model can be made compatible with it. In the next section, we will examine this model with respect to two other conjectures of the swampland program, i.e., the SWGC and SSWGC, and explain the results in detail.
\subsection{Model II: GI}
We give a brief description of this model. According to \cite{a}, the corresponding action, with a general non-minimal coupling to gravity, is expressed in the following form \cite{102,103},
\begin{equation}\label{15}
\begin{split}
S=\int d^{4}x\sqrt{-g}\bigg(-\frac{M_{p}^{2}+\xi\Lambda^{2}(\phi/\phi_{0})^{2}}{2}R+L_{GI}\bigg),
\end{split}
\end{equation}
where,
\begin{equation}\label{16}
\begin{split}
&L_{GI}=\varphi^{-3/2}\partial_{\mu}\varphi\partial^{\mu}\varphi-\frac{\varphi}{2}\ln\big(\frac{\varphi}{\Lambda^{4}}\big)\\
&\frac{\varphi}{\Lambda^{4}}=\big(\frac{\phi}{\phi_{0}}\big)^{4}\\
&\phi_{0}=4\sqrt{2}\Lambda,
\end{split}
\end{equation}
where $\Lambda$ is the mass scale and the parameter $\xi$ characterizes the coupling to gravity. According to the above explanations, the action in the Einstein frame is rewritten as follows \cite{a,102,103},
\begin{equation}\label{17}
\begin{split}
S=\int d^{4}x\sqrt{-g}\bigg(-\frac{1}{2}M_{p}^{2}R+\Omega^{-2}\bigg[1+\frac{3\xi^{2}\Lambda^{2}(\phi/\phi_{0})^{2}}{16M_{p}^{2}}\Omega^{-2}\bigg]\big(\frac{\Lambda}{\phi_{0}}\big)^{2}\partial_{\mu}\phi\partial^{\mu}\phi-\Omega^{-4}V_{GI}\bigg)
\end{split}
\end{equation}
where $\Omega^{2}=\big(M_{P}^{2}+\xi\Lambda^{2}(\phi/\phi_{0})^{2}\big)/M_{p}^{2}$. According to $\xi\neq 0$ and the large field limit, the mentioned equation is reduced to $\Omega^{2}\simeq \xi\Lambda^{2}(\phi/\phi_{0})^{2}/M_{p}^{2}$, and the potential is calculated \cite{a,102,103},
\begin{equation}\label{18}
\begin{split}
V_{GI}=2\Lambda^{4}(\phi/\phi_{0})^{4}\ln(\phi/\phi_{0})
\end{split}
\end{equation}
where $\phi_{0}\equiv4\sqrt{2}\Lambda$. According to the definition, we will consider a canonically normalized field $\chi$ associated with ($\phi$),
\begin{equation}\label{19}
\begin{split}
\frac{1}{2}\widetilde{g}^{\mu\nu}\partial_{\mu}\chi(\phi)\partial_{\nu}\chi(\phi)=\frac{1}{2}\big(\frac{d\chi}{d\phi}\big)^{2}\widetilde{g}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi
\end{split}
\end{equation}
where
\begin{equation}\label{20}
\begin{split}
\frac{1}{2}\big(\frac{d\chi}{d\phi}\big)^{2}=\Omega^{-2}\bigg[1+\frac{3\xi^{2}\Lambda^{2}(\phi/\phi_{0})^{2}}{16M_{p}^{2}}\Omega^{-2}\bigg]\big(\frac{\Lambda}{\phi_{0}}\big)^{2}
\end{split}
\end{equation}
Under the condition $\chi\propto \ln \phi$, the potential reduces to $\Omega^{-4}V_{GI}\propto \ln \phi$ \cite{a,102,103}. Hence, in terms of the canonically normalized field, we have the following action,
\begin{equation}\label{21}
\begin{split}
S_{E}=\int d^{4}x\sqrt{-g}\bigg(-\frac{1}{2}M_{p}^{2}R+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\chi\partial_{\nu}\chi-U_{GI}(\chi)\bigg),
\end{split}
\end{equation}
where
\begin{equation}\label{22}
\begin{split}
U_{GI}(\chi)=\Omega^{-4}V_{GI}(\varphi).
\end{split}
\end{equation}
According to \cite{a}, the slow-roll analysis, and the large field regime $\xi\Lambda^{2}(\phi/\phi_{0})^{2}\gg M_{p}^{2}$, the potential of this model in the Einstein frame is rewritten in terms of the field $\phi$,
\begin{equation}\label{23}
\begin{split}
U_{GI}(\phi)=\frac{2M_{p}^{4}}{\xi^{2}}\ln\big(\frac{\phi}{\phi_{0}}\big)
\end{split}
\end{equation}
As in the previous case, the authors of \cite{a} concluded that this model is incompatible with the dS conjecture and does not satisfy it. They then applied the FRDSSC to this model and proved complete compatibility between the model and this conjecture by manually adjusting its free parameters. We will likewise challenge this model with two other conjectures of the swampland program and describe the results in detail.
\subsection{Model III: SYMI}
We also briefly describe this model; see \cite{a} for further details. The Jordan-frame action for this model, with a general non-minimally coupled gravity structure, is expressed as \cite{102},
\begin{equation}\label{24}
\begin{split}
S_{J}=\int d^{4}x\sqrt{-g}\bigg(-\frac{M^{2}+N_{c}^{2}\xi\Lambda^{2}(\phi/\phi_{0})^{2}}{2}R+L_{SYM}\bigg)
\end{split}
\end{equation}
where
\begin{equation}\label{25}
\begin{split}
&L_{SYM}=-\frac{N_{c}^{2}}{\alpha}(\varphi\varphi^{\dagger})^{-2/3}\partial_{\mu}\varphi\partial^{\mu}\varphi^{\dagger}-\frac{4\alpha N_{c}^{2}}{9}(\varphi\varphi^{\dagger})^{2/3}\ln\big(\frac{\varphi}{\Lambda^{3}}\big)\ln\big(\frac{\varphi^{\dagger}}{\Lambda^{3}}\big)\\
&\frac{\varphi}{\Lambda^{3}}=\big(\frac{\phi}{\phi_{0}}\big)^{3}\\
&\phi_{0}=3N_{c}\big(\frac{2}{\alpha}\big)^{1/2}\Lambda .
\end{split}
\end{equation}
In the above equations, $\alpha$ is a constant parameter and $\Lambda$ is the mass scale. We focus on the real part of the inflaton field, i.e., $\varphi=\varphi^{\dagger}$ \cite{a,102}. The above action is rewritten in the Einstein frame \cite{102},
\begin{equation}\label{26}
\begin{split}
&S_{E}=\int d^{4}x\sqrt{-g}\bigg(-\frac{1}{2}M_{p}^{2}R+\frac{9N_{c}^{2}}{\alpha}\Omega^{-2}\bigg[1+\frac{\alpha N_{c}^{2}\xi^{2}}{3M_{p}^{2}}\Omega^{-2}\Lambda^{2}\big(\frac{\phi}{\phi_{0}}\big)^{2}\bigg]\\
&\times\big(\frac{\Lambda}{\phi_{0}}\big)^{2}\partial_{\mu}\phi\partial^{\mu}\phi-\Omega^{-4}V_{SYM}\bigg)
\end{split}
\end{equation}
Assuming $\xi\neq 0$ and $\Omega^{2}\simeq N_{c}^{2}\xi\Lambda^{2}(\phi/\phi_{0})^{2}/M_{p}^{2}$, and following the explanations in \cite{a}, we will have
\begin{equation}\label{27}
\begin{split}
V_{SYM}(\phi)=4\alpha N_{c}^{2}\Lambda^{4}\big(\frac{\phi}{\phi_{0}}\big)^{4}\ln^{2}\big(\frac{\phi}{\phi_{0}}\big)
\end{split}
\end{equation}
We introduce a canonically normalized field related to $\phi$ through the following relations \cite{a},
\begin{equation}\label{28}
\begin{split}
\frac{1}{2}\widetilde{g}^{\mu\nu}\partial_{\mu}\chi(\phi)\partial_{\nu}\chi(\phi)=\frac{1}{2}\big(\frac{d\chi}{d\phi}\big)^{2}\widetilde{g}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi
\end{split}
\end{equation}
where
\begin{equation}\label{29}
\begin{split}
\frac{1}{2}\big(\frac{d\chi}{d\phi}\big)^{2}=\frac{9N_{c}^{2}}{\alpha}\Omega^{-2}\bigg[1+\frac{\alpha N_{c}^{2}\xi^{2}}{3M_{p}^{2}}\Omega^{-2}\Lambda^{2}\big(\frac{\phi}{\phi_{0}}\big)^{2}\bigg]\big(\frac{\Lambda}{\phi_{0}}\big)^{2}
\end{split}
\end{equation}
Thus, we can rewrite the action in Einstein's frame in terms of the canonically normalized field as follows,
\begin{equation}\label{30}
\begin{split}
S_{E}=\int d^{4}x\sqrt{-g}\bigg(-\frac{1}{2}M_{p}^{2}R+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\chi\partial_{\nu}\chi-U_{SYM}(\chi)\bigg)
\end{split}
\end{equation}
where
\begin{equation}\label{31}
\begin{split}
U_{SYM}(\chi)=\Omega^{-4}V_{SYM}(\chi)
\end{split}
\end{equation}
According to \cite{a} and the slow-roll analysis of the potential, and taking into account the large field regime $N_{c}^{2}\xi \Lambda^{2}\big(\phi/\phi_{0}\big)^{2}\gg M_{p}^{2}$, the model's potential in the Einstein frame is calculated in terms of the field $\phi$ \cite{104},
\begin{equation}\label{32}
\begin{split}
U_{SYM}(\phi)=\frac{4\alpha}{N_{c}^{2}}\frac{M_{p}^{4}}{\xi^{2}}\ln^{2}\big(\frac{\phi}{\phi_{0}}\big).
\end{split}
\end{equation}
The authors of \cite{a} showed that this model, like the previous ones, violates the dS swampland conjecture while being compatible with the FRDSSC. In this article, we therefore challenge this model with further conjectures of the swampland program, so that we can identify the model most compatible with the whole swampland program.
\subsection{Model IV: OI}
To introduce the final model, we start from the Jordan-frame action with a non-minimal coupling to gravity; hence we have \cite{102},
\begin{equation}\label{33}
\begin{split}
S_{J}=\int d^{4}x\sqrt{-g}\bigg(-\frac{M^{2}+N_{c}^{2}\xi\Lambda^{2}\big(\frac{\phi}{\phi_{0}}\big)^{2}}{2}R+L_{OI}\bigg)
\end{split}
\end{equation}
where
\begin{equation}\label{34}
\begin{split}
&L_{OI}=-\frac{N_{c}^{2}}{\alpha_{OI}}(\varphi\varphi^{\dagger})^{-2/3}\partial_{\mu}\varphi\partial^{\mu}\varphi^{\dagger}-\frac{4\alpha_{OI}N_{c}^{2}}{9}(\varphi\varphi^{\dagger})^{2/3}\bigg[\ln\big(\frac{\varphi}{\Lambda^{3}}\big)\ln\big(\frac{\varphi^{\dagger}}{\Lambda^{3}}\big)-\beta\bigg]\\
&\frac{\varphi}{\Lambda^{3}}=\big(\frac{\phi}{\phi_{0}}\big)^{3}\\
&\phi_{0}=3N_{c}\big(\frac{2}{\alpha}\big)^{1/2}\Lambda
\end{split}
\end{equation}
where $M$ is a mass scale, $\beta=\mathcal{O}(1/N_{c})$ and $\varphi=\varphi^{\dagger}$. The above action in the Einstein frame takes the following form.
\begin{equation}\label{35}
\begin{split}
&S_{E}=\int d^{4}x\sqrt{-g}\bigg[-\frac{1}{2}M_{p}^{2}R+\frac{9N_{c}^{2}}{\alpha}\Omega^{-2}\bigg(1+\frac{\alpha N_{c}^{2}\xi^{2}}{3M_{p}^{2}}\Omega^{-2}\Lambda^{2}\big(\frac{\phi}{\phi_{0}}\big)^{2}\bigg)\big(\frac{\Lambda}{\phi_{0}}\big)^{2}\\
&\times\partial_{\mu}\phi\partial^{\mu}\phi-\Omega^{-4}V_{OI}\bigg]
\end{split}
\end{equation}
Here, taking into account conditions such as $\xi\neq 0$, and following \cite{a}, we will have,
\begin{equation}\label{36}
\begin{split}
V_{OI}(\phi)=4\alpha N_{c}^{2}\Lambda^{4}\big(\frac{\phi}{\phi_{0}}\big)^{4}\bigg[\ln^{2}(\frac{\phi}{\phi_{0}}\big)-\frac{\beta}{9}\bigg]
\end{split}
\end{equation}
A canonically normalized field related to $\phi$ can be introduced through the following equation,
\begin{equation}\label{37}
\begin{split}
\frac{1}{2}\widetilde{g}^{\mu\nu}\partial_{\mu}\chi(\phi)\partial_{\nu}\chi(\phi)=\frac{1}{2}\big(\frac{d\chi}{d\phi}\big)^{2}\widetilde{g}^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi,
\end{split}
\end{equation}
where
\begin{equation}\label{38}
\begin{split}
\frac{1}{2}\big(\frac{d\chi}{d\phi}\big)^{2}=\frac{9N_{c}^{2}}{\alpha}\Omega^{-2}\bigg(1+\frac{\alpha N_{c}^{2}\xi^{2}}{3M_{p}^{2}}\Omega^{-2}\Lambda^{2}\big(\frac{\phi}{\phi_{0}}\big)^{2}\bigg)\big(\frac{\Lambda}{\phi_{0}}\big)^{2}.
\end{split}
\end{equation}
According to the canonically normalized field, we have,
\begin{equation}\label{39}
\begin{split}
S_{E}=\int d^{4}x\sqrt{-g}\bigg(-\frac{1}{2}M_{p}^{2}R+\frac{1}{2}g^{\mu\nu}\partial_{\mu}\chi\partial_{\nu}\chi-U_{OI}(\chi)\bigg)
\end{split}
\end{equation}
where
\begin{equation}\label{40}
\begin{split}
U_{OI}(\chi)=\Omega^{-4}V_{OI}(\varphi)
\end{split}
\end{equation}
As for the previous models, assuming the slow-roll analysis of the potential, the large field regime $N_{c}^{2}\xi\Lambda^{2}\big(\phi/\phi_{0}\big)^{2}\gg M^{2}$, and following \cite{a}, the potential of the final model in the Einstein frame is calculated as follows,
\begin{equation}\label{41}
\begin{split}
U_{OI}(\phi)=\frac{4\alpha}{N_{c}^{2}}\frac{M_{P}^{4}}{\xi^{2}}\bigg[\ln^{2}\big(\frac{\phi}{\phi_{0}}\big)-\frac{\beta}{9}\bigg]
\end{split}
\end{equation}
Like the previous models, this final model satisfies the FRDSSC while being inconsistent with the dS swampland conjecture. We thus have four models, with different structures, all of which violate the dS swampland conjecture and satisfy the FRDSSC. We therefore challenge all of these models with further swampland conjectures, in order to single out a model compatible with all the conjectures of the swampland program, which can then be considered a suitable model for further investigations of the evolution of the universe.
\section{SWGC and SSWGC on inflation Models}
In this section, we apply the SWGC and SSWGC of the swampland program to the inflation models above. Since all of these models satisfy the FRDSSC, we intend to select, through further investigation, the model with the highest compatibility with all of these swampland conjectures. We will also explain the results in detail.
\subsection{Model I}
Using the SWGC and SSWGC conditions, equations (5) and (6), and the potential of Model I (NJLI) in equation (14), and setting $M_p=1$, we obtain,
\begin{equation}\label{eq42}
U^{(1)}(\chi)\simeq\frac{2\lambda}{\sqrt{6}\xi^2}\frac{e^{\frac{4\chi}{\sqrt{6}}}}{(1+e^{\frac{2\chi}{\sqrt{6}}})^3},
\end{equation}
\begin{equation}\label{eq43}
U^{(2)}(\chi)\simeq \frac{-2 \lambda}{3\xi^2}\frac{e^{\frac{4\chi}{\sqrt{6}}}(-2+e^{\frac{2\chi}{\sqrt{6}}})}{(1+e^{\frac{2\chi}{\sqrt{6}}})^4},
\end{equation}
\begin{equation}\label{eq44}
U^{(3)}(\chi)\simeq\frac{4\lambda}{3\sqrt{6}\xi^2}\frac{e^{\frac{4}{\sqrt{6}}\chi}(4-7e^{\frac{2\chi}{\sqrt{6}}}+
e^{\frac{4\chi}{\sqrt{6}}})}{(1+e^{\frac{2\chi}{\sqrt{6}}})^5},
\end{equation}
\begin{equation}\label{eq45}
U^{(4)}(\chi)\simeq \frac{-8 \lambda}{9\xi^2}\frac{e^{\frac{4}{\sqrt{6}}\chi}(-8+33e^{\frac{2\chi}{\sqrt{6}}}-18e^{\frac{4\chi}{\sqrt{6}}}
+e^{\frac{6\chi}{\sqrt{6}}})}{(1+e^{\frac{2\chi}{\sqrt{6}}})^6}.
\end{equation}
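The quoted expressions for $U^{(2)}$ and $U^{(3)}$ can be spot-checked against finite differences of Eq. (14). The short script below is our check, with the overall factors set to unity (they cancel in the comparison) and $u=e^{2\chi/\sqrt{6}}$, so that $e^{4\chi/\sqrt{6}}=u^{2}$.

```python
import math

lam, xi = 1.0, 1.0  # overall factors drop out of the comparison
s6 = math.sqrt(6)

def U(chi):  # Eq. (14)
    return (lam / (2 * xi**2)) * (1 + math.exp(-2 * chi / s6))**(-2)

def U2_analytic(chi):  # Eq. (43)
    u = math.exp(2 * chi / s6)
    return (-2 * lam / (3 * xi**2)) * u**2 * (-2 + u) / (1 + u)**4

def U3_analytic(chi):  # Eq. (44)
    u = math.exp(2 * chi / s6)
    return (4 * lam / (3 * s6 * xi**2)) * u**2 * (4 - 7*u + u**2) / (1 + u)**5

def d2(f, x, h=1e-3):
    """Central-difference second derivative."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def d3(f, x, h=1e-3):
    """Central-difference third derivative."""
    return (f(x + 2*h) - 2*f(x + h) + 2*f(x - h) - f(x - 2*h)) / (2 * h**3)
```

The first derivative can be verified in the same way.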
Substituting the above expressions into the SWGC condition, equation (5), we obtain the following relation,
\begin{equation}\label{eq46}
\frac{4 \lambda^2 e^{\frac{8\chi}{\sqrt{6}}}}{9\xi^4(1+e^{\frac{2\chi}{\sqrt{6}}})^{10}}\left[
-(2+e^{\frac{2\chi}{\sqrt{6}}}-e^{\frac{4\chi}{\sqrt{6}}})^2+\frac{2}{3}(4-7e^{\frac{2\chi}{\sqrt{6}}}+e^{\frac{4\chi}{\sqrt{6}}})^2\right] \geq 0
\end{equation}
The above relation is greater than zero if the following condition is met,
\begin{equation}\label{eq47}
(\frac{2}{\sqrt{6}}+1)e^{\frac{4\chi}{\sqrt{6}}}-(\frac{14}{\sqrt{6}}+1)e^{\frac{2\chi}{\sqrt{6}}}+(\frac{8}{\sqrt{6}}+2) \geq 0
\end{equation}
Now, using the change of variable $y=e^{\frac{2\chi}{\sqrt{6}}}$, a quadratic inequality is obtained in the following form,
\begin{equation}\label{eq48}
f(y)=(\frac{2}{\sqrt{6}}+1)y^2-(\frac{14}{\sqrt{6}}+1)y+(\frac{8}{\sqrt{6}}+2) \geq 0.
\end{equation}
First, we find the roots of $f(y)=0$ and its minimum point. We obtain,
\begin{equation}\label{eq49}
f(y)=0 \longrightarrow (y=1.129, y=2.568) \qquad \frac{\partial f(y)}{\partial y}=0 \longrightarrow y_{min}=1.849, f(y_{min})=-0.941 .
\end{equation}
Since the minimum value is negative, $f(y)<0$ in the interval $1.129<y<2.568$, so the SWGC is not satisfied in this range; equivalently, the SWGC is met for $\chi<0.148$ and $\chi>1.154$. Next, we check the SSWGC of equation (6). We will have,
\begin{equation}\label{eq50}
\frac{4\lambda^{2}\exp\big(\frac{8\chi}{\sqrt{6}}\big)}{27\xi^{4}(1+\exp\big(\frac{2\chi}{\sqrt{6}}\big))^{10}}\bigg[20-88\exp\big(\frac{2\chi}{\sqrt{6}}\big)+99\exp\big(\frac{4\chi}{\sqrt{6}}\big)-\exp\big(\frac{8\chi}{\sqrt{6}}\big)-10\exp\big(\sqrt{6}\chi\big)\bigg]\geq0.
\end{equation}
The above relation holds provided that $$F(\chi)=\bigg[20-88\exp\big(\frac{2\chi}{\sqrt{6}}\big)+99\exp\big(\frac{4\chi}{\sqrt{6}}\big)-\exp\big(\frac{8\chi}{\sqrt{6}}\big)-10\exp\big(\sqrt{6}\chi\big)\bigg]\geq0,$$ which we examine graphically.
From Figure 1, we find that $F(\chi)\geq 0$, and hence the SSWGC holds, for $\chi \leq 2.068$.
Therefore, comparing the two conjectures, we see that both are satisfied simultaneously for $\chi \leq 0.148$ or $1.154\leq \chi \leq 2.068$.
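The boundary values quoted in this subsection can be reproduced with a short script of ours: the roots and minimum of $f(y)$ in Eq. (48), their conversion back to $\chi$, and the positive root of $F(\chi)$ in Eq. (50). The bisection bracket $[1.5,3]$ is our assumption, guided by the crossing quoted from Figure 1.

```python
import math

s6 = math.sqrt(6)

# Quadratic f(y) of Eq. (48)
a, b, c = 2/s6 + 1, 14/s6 + 1, 8/s6 + 2
disc = math.sqrt(b*b - 4*a*c)
y1, y2 = (b - disc) / (2*a), (b + disc) / (2*a)
y_min = b / (2*a)
f_min = a*y_min**2 - b*y_min + c

# Convert the roots of y = exp(2 chi / sqrt(6)) back to chi
chi1, chi2 = (s6/2) * math.log(y1), (s6/2) * math.log(y2)

# F(chi) of Eq. (50); bisect for its positive root
def F(chi):
    u = math.exp(2 * chi / s6)
    return 20 - 88*u + 99*u**2 - 10*u**3 - u**4

lo, hi = 1.5, 3.0  # hand-chosen bracket around the crossing
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if F(mid) > 0:
        lo = mid
    else:
        hi = mid
chi_sswgc = 0.5 * (lo + hi)
```

This yields $y\approx 1.129,\ 2.568$, $f(y_{min})\approx -0.941$, $\chi\approx 0.148,\ 1.155$, and the SSWGC root $\chi\approx 2.068$, matching the quoted values.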
\subsection{Model II}
We follow a similar procedure for all models to determine their compatibility with the mentioned conjectures. Using equations (5) and (6) together with the potential of Model II (GI) in equation (23), one can calculate,
\begin{equation}\label{eq51}
U_{GI}^{(1)}(\phi)=\frac{2}{\xi^{2}\phi}
\end{equation}
\begin{equation}\label{eq52}
U_{GI}^{(2)}(\phi)=-\frac{2}{\xi^{2}\phi^{2}}
\end{equation}
\begin{equation}\label{eq53}
U_{GI}^{(3)}(\phi)=\frac{4}{\xi^{2}\phi^{3}}
\end{equation}
\begin{equation}\label{eq54}
U_{GI}^{(4)}(\phi)=-\frac{12}{\xi^{2}\phi^{4}}
\end{equation}
We first consider the SWGC. Substituting the above relations into equation (5), one can obtain
\begin{equation}\label{eq55}
\frac{4}{\xi^{4}\phi^{6}}(4-\phi^{2})\geq0
\end{equation}
According to the above relation, the SWGC is satisfied when $\phi<2$.
Next, we examine the SSWGC,
\begin{equation}\label{eq56}
\frac{4}{\xi^{4}\phi^{6}}(2-\phi^{2})\geq0
\end{equation}
According to the above relation, when $\phi<\sqrt{2}$, the SSWGC will be satisfied.
Therefore, the SWGC and SSWGC are simultaneously satisfied when $\phi<\sqrt{2}$.
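Both statements are easy to verify numerically from the derivatives (52)-(54); a minimal check of ours with $\xi=1$, since the overall factor does not affect the sign of either condition:

```python
# Derivatives (52)-(54) of U_GI, with xi = 1.
def U2(phi): return -2.0 / phi**2
def U3(phi): return  4.0 / phi**3
def U4(phi): return -12.0 / phi**4

def swgc(phi):   # Eq. (5): (U''')^2 >= (U'')^2
    return U3(phi)**2 >= U2(phi)**2

def sswgc(phi):  # Eq. (6): 2 (U''')^2 - U'' U'''' >= (U'')^2
    return 2 * U3(phi)**2 - U2(phi) * U4(phi) >= U2(phi)**2
```

Sampling just below and above each quoted boundary confirms the crossings at $\phi=2$ and $\phi=\sqrt{2}$.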
\subsection{Model III}
We now apply the conjectures to Model III (SYMI); using equations (5), (6) and (32), we have
\begin{equation}\label{eq57}
U^{(1)}_{SYM}(\phi)=\frac{8\alpha}{N_{c}^{2}\xi^{2}\phi}\ln\big(\frac{\phi}{\phi_{0}}\big)
\end{equation}
\begin{equation}\label{eq58}
U^{(2)}_{SYM}(\phi)=\frac{8\alpha}{N_{c}^{2}\xi^{2}\phi^{2}}\big(1-\ln\big(\frac{\phi}{\phi_{0}}\big)\big)
\end{equation}
\begin{equation}\label{eq59}
U^{(3)}_{SYM}(\phi)=\frac{8\alpha}{N_{c}^{2}\xi^{2}\phi^{3}}\big(-3+2\ln\big(\frac{\phi}{\phi_{0}}\big)\big)
\end{equation}
\begin{equation}\label{eq60}
U^{(4)}_{SYM}(\phi)=\frac{8\alpha}{N_{c}^{2}\xi^{2}\phi^{4}}\big(11-6\ln\big(\frac{\phi}{\phi_{0}}\big)\big)
\end{equation}
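Expressions (57)-(60) can be spot-checked by differentiating Eq. (32) numerically. In the sketch below (ours) the parameter values are arbitrary, since they only fix the overall scale of the potential.

```python
import math

# Arbitrary parameter values; they only set the overall scale of U.
alpha, Nc, xi, phi0 = 1.0, 3.0, 1.0, 3.0
A = 4 * alpha / (Nc**2 * xi**2)

def U(phi):   # Eq. (32)
    return A * math.log(phi / phi0)**2

def U1(phi):  # Eq. (57); note 8 alpha / (Nc^2 xi^2) = 2 A
    return 2 * A * math.log(phi / phi0) / phi

def U2(phi):  # Eq. (58)
    return 2 * A * (1 - math.log(phi / phi0)) / phi**2

def U3(phi):  # Eq. (59)
    return 2 * A * (-3 + 2 * math.log(phi / phi0)) / phi**3

def d1(f, x, h=1e-4):
    """Central-difference first derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)
```

Chaining `d1` down the list reproduces each successive analytic derivative.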
First, we consider the SWGC; one can obtain,
\begin{equation}\label{eq61}
\frac{64\alpha^{2}}{N_{c}^{4}\xi^{4}\phi^{6}}F(\phi)\geq0,\hspace{12pt}F(\phi)=\bigg[\big(3-2\ln\big(\frac{\phi}{\phi_{0}}\big)\big)^{2}-\phi^{2}\big(1-\ln\big(\frac{\phi}{\phi_{0}}\big)\big)^{2}\bigg].
\end{equation}
Next, we discuss the SSWGC and reach the following equation,
\begin{equation}\label{eq62}
\frac{64\alpha^{2}}{N_{c}^{4}\xi^{4}\phi^{6}}G(\phi)\geq0,\hspace{12pt}G(\phi)=\bigg[7-\phi^{2}+(2\phi^{2}-7)\ln\big(\frac{\phi}{\phi_{0}}\big)+(2-\phi^{2})\big(\ln\big(\frac{\phi}{\phi_{0}}\big)\big)^{2}\bigg].
\end{equation}
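Since the allowed regions are read off from plots, they can also be bracketed numerically: both conjectures hold exactly where $\min(F,G)\geq 0$, so scanning that quantity for sign changes and refining by bisection locates the boundaries. The sketch below is ours, for the $\phi_{0}=3$ case quoted later for figure 3(a); the scan range, grid step and tolerances are our choices, and offsets of a few hundredths from the plot-read values are expected.

```python
import math

phi0 = 3.0  # the case of figure 3(a)

def F(phi):
    L = math.log(phi / phi0)
    return (3 - 2 * L)**2 - phi**2 * (1 - L)**2

def G(phi):
    L = math.log(phi / phi0)
    return 7 - phi**2 + (2 * phi**2 - 7) * L + (2 - phi**2) * L**2

def allowed(phi):
    """Both conjectures hold exactly where min(F, G) >= 0."""
    return min(F(phi), G(phi))

# Scan a grid for sign changes of min(F, G), then refine by bisection.
bounds = []
xs = [0.5 + 0.01 * i for i in range(1200)]
for x0, x1 in zip(xs, xs[1:]):
    if allowed(x0) * allowed(x1) < 0:
        lo, hi = x0, x1
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if allowed(lo) * allowed(mid) <= 0:
                hi = mid
            else:
                lo = mid
        bounds.append(0.5 * (lo + hi))
```

The three boundaries come out near 2.41, 6.54 and 8.94, consistent with the plot-read values 2.394, 6.556 and 8.948 to within a few hundredths.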
For the two conjectures to be met, we must have $F(\phi)\geq 0$ and $G(\phi)\geq 0$, and we need to find their common points to see in which intervals of $\phi$ the conjectures are valid. Since $F(\phi)$ and $G(\phi)$ are complicated expressions, we examine them graphically to find the regions compatible with these conjectures.
According to the above plots, both functions are positive, and hence both conjectures are valid, over ranges that depend on $\phi_0$. As seen in the figures, for $\phi_0 \leq 2.37$ the two curves meet the $\phi$ axis at only one point, and the two conjectures are compatible in a single interval: in figure $2(a)$ for $\phi \leq 6.141$, and in figure $2(b)$ for $\phi \leq 7.184$.
For $\phi_0>2.37$, the two curves intersect the $\phi$ axis at three points, and the two conjectures are compatible in two regions. For example, in figure $2(c)$ the two conjectures are compatible in the intervals $\phi \leq 2.701$ and $5.142 \leq \phi \leq 7.822$. Different values of $\phi_0$ thus give different allowed ranges and regions.
For $\phi_0=3$, shown in figure $3(a)$, the two conjectures are satisfied for $\phi \leq 2.394$ and $6.556\leq\phi \leq 8.948$; for $\phi_0=4$, they are met for $\phi\leq 2.141$ and $9.559\leq \phi \leq 11.675$. The potential of the fourth model, equation (41), has the same structure in the Einstein frame as that of model III in equation (32), and the SWGC and SSWGC involve only higher-order derivatives of the potential, as is apparent in equations (5) and (6), so the constant shift $-\beta/9$ drops out.
Therefore, the fourth model yields the same results as model III, and the last two models satisfy the FRDSSC, SWGC, and SSWGC.
These two models are thus the most compatible with the swampland program, and hence with quantum gravity, and can be considered desirable models for investigating the evolution of the universe.
They are also favorable inflation models from the standpoint of string theory, so their other cosmological applications can be investigated more profoundly and compared with the latest observational data.
\section{Concluding remarks}
In this article, we examined four inflation models, namely composite NJL inflation (NJLI), Glueball inflation (GI), super Yang-Mills inflation (SYMI), and Orientifold inflation (OI), against two conjectures of the swampland program: the scalar weak gravity conjecture (SWGC) and the strong scalar weak gravity conjecture (SSWGC). All of these models violate the dS swampland conjecture (DSC) but are compatible with the FRDSSC after manual adjustment of its free parameters, and we studied the simultaneous compatibility of each model with the two new conjectures. Despite being consistent with the FRDSSC, none of the models is compatible with the other conjectures of the swampland program in all regions; the conjectures are only satisfied in specific areas. Owing to the presence of the constant parameter $\phi_{0}$ in the higher-order derivatives, the (SYMI) and (OI) are, among all the models, the most compatible with the conjectures of the swampland program, and they can be suitable and accurate inflation models for a more profound examination of the development of the universe. We determined the particular regions in which these models are compatible with the FRDSSC, SWGC, and SSWGC simultaneously.\\
Several questions remain: What are the consequences of these conjectures for other models and theories? Are there inflationary models that satisfy all the conjectures in all regions? Is it possible to extend these conjectures so that they become consistent with any inflationary theory? We leave the examination of these questions to future work.\\\\
Conflict of Interest\\
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.\\
Data Availability Statement\\
Data sharing is not applicable to this article as no datasets were generated or analyzed during
the current study.\\
Title:
CCAT-prime: The Optical Design for the Epoch of Reionization Spectrometer
Abstract: The Epoch of Reionization Spectrometer (EoR-Spec) will be an instrument
module for the Prime-Cam receiver on the CCAT-prime Collaboration's Fred Young
Submillimeter Telescope (FYST), a 6-m primary mirror Crossed Dragone telescope.
With its Fabry-Perot interferometer (FPI), EoR-Spec will step through
frequencies between 210 and 420 GHz to perform line intensity mapping of the
158 $\mu$m [CII] line in aggregates of star-forming galaxies between redshifts
of 3.5 and 8 to trace the evolution of structure in the universe during the
epoch of reionization. Here we present the optical design of the module
including studies of the optical quality and other key parameters at the image
surface. In order to achieve the required resolving power (R$\sim$100) with the
FPI, it is important to have a highly collimated beam at the Lyot stop of the
system; the optimization process to achieve this goal with four lenses instead
of three as used in other Prime-Cam modules is outlined. As part of the
optimization, we test the effect of replacing some of the aspheric lenses with
biconic lenses in this Crossed Dragone design and find that the biconic lenses
tend to improve the image quality across the focal plane of the module.
https://export.arxiv.org/pdf/2208.09521
\keywords{CCAT-prime, Epoch of Reionization Spectrometer, optical design, biconic lenses, Fred Young Submillimeter Telescope, Fabry-Perot Interferometer, line intensity mapping}
\section{INTRODUCTION}
\label{sec:intro} %
The Epoch of Reionization Spectrometer (EoR-Spec) \cite{Cothard_EoRSpec_2020}\textsuperscript{,} \cite{Nikola_EoRSpec_2022} is an instrument module that will occupy one of the six outer optics tube locations in the Prime-Cam receiver on the CCAT-prime Collaboration’s Fred Young Submillimeter Telescope (FYST) \cite{ccat_science_2021}, a 6-m primary mirror Crossed Dragone telescope. \cite{Niemack_telescope_2016}\textsuperscript{,} \cite{Parshley2018}
EoR-Spec uses a Fabry-Perot interferometer (FPI) composed of silicon-substrate-based (SSB) mirrors \cite{Bugao_EoRSpecMirrors_2022} located at the Lyot stop of the optical system to perform line intensity mapping of the 158 $\mu$m [CII] line to study the evolution of structure between redshifts 3.5 and 8. These measurements will enable studies of the formation of the first galaxies near the end of the epoch of reionization at higher redshifts and near the peak star-forming epoch at lower redshifts. The FPI steps through frequencies between 210 and 420 GHz and illuminates one broadband, non-polarization sensitive detector array centered around 370 GHz (0.8 mm) and two arrays centered around 260 GHz (1.1 mm). EoR-Spec is slated to begin science operations on FYST in late 2024.
In order to make the best use of this instrument, the design of the cold refracting silicon optical elements that reimage the secondary focus of the telescope onto the focal plane of the instrument module needs to be optimized for high image quality and a well-collimated beam at the location of the FPI.
Section~\ref{sec:design} provides an overview of the design and section~\ref{sec:performance} quantifies its performance. The use of a biconic lens in this design and its benefits are discussed in section~\ref{sec:biconic} before section~\ref{sec:conclusion} outlines possible avenues for future improvement.
\section{OPTICAL DESIGN}
\label{sec:design}
The optical design\footnote{The optical design and optimization were performed in Zemax OpticsStudio \cite{zemaxwebsite}.} for the EoR-Spec instrument module evolved from the three-lens Simons Observatory (SO) instrument module optical design described in Ref.~\citenum{Dicker2018}, which is identical to the design of the 280 GHz module for Prime-Cam. As for the SO optics design, the goals of the optimization were excellent image quality across the focal plane, a consistent Lyot stop for all fields, a focal plane size near 275 mm that illuminates only the mechanical extent of our detector arrays, and angles of incidence on the image plane of less than two degrees to appropriately match the size of our detector array feedhorns.
In addition to these constraints, this design needs to be diffraction-limited across a wider range of frequencies than the previous design due to the wider range of frequencies covered by the FPI. For this instrument, it is also important to have a highly collimated beam at the Lyot stop of the system to achieve the required resolving power of R$\sim$100 with the FPI. The fabrication of AR coatings for the silicon lenses used in this module is significantly simplified if the sag of the surface of each lens is 14 mm or smaller \cite{Datta_ARcoating_2013}. The final sags for each surface are shown in Table~\ref{sag_table}.
Moving lens two closer to lens one produced a more highly collimated beam, and the addition of a fourth lens compared to the SO design maintained a high level of image quality across the focal plane for the relevant frequencies. To improve the image quality further, lens three was changed from an aspheric lens to a biconic lens. Lenses two and three were changed to convex-convex to accommodate the sag requirements. Fig.~\ref{fig:raytrace} shows a ray trace of the optics design with these modifications.
\begin{table}
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Surface & Sag (mm)\\
\hline
Lens 1 & 12.08 \\
\hline
Lens 2 Front & 3.00 \\
\hline
Lens 2 Back & 13.35 \\
\hline
Lens 3 Front & 12.40 \\
\hline
Lens 3 Back & 9.97 \\
\hline
Lens 4 & 5.81 \\
\hline
\end{tabular}
\caption{The absolute value of the sags for the different lens surfaces in the design. Lenses 1 and 4 are planar-convex, while lenses 2 and 3 are convex-convex.}
\label{sag_table}
\end{center}
\end{table}
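For reference, the sag of a rotationally symmetric conic surface with vertex radius of curvature $R$ and conic constant $k$ at radial height $r$ is $z=r^{2}/\big(R\,(1+\sqrt{1-(1+k)\,r^{2}/R^{2}})\big)$. The helper below evaluates it; the radii and aperture in the usage note are hypothetical illustrations, not the actual lens prescriptions, which are not listed here.

```python
import math

def sag(r, R, k=0.0):
    """Sag of a conic surface with vertex radius R and conic constant k
    at radial height r (all lengths in mm)."""
    return r**2 / (R * (1 + math.sqrt(1 - (1 + k) * r**2 / R**2)))
```

For example, a hypothetical sphere with $R=200$ mm evaluated at $r=75$ mm has a sag of about 14.6 mm, just over the 14 mm coating limit, while relaxing to $R=210$ mm brings it to about 13.8 mm.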
There are seven possible locations for instrument modules in Prime-Cam. EoR-Spec will be located in one of the outer six instrument module locations. In order to determine which location would have the best optical quality for the needs of this instrument, the optimization simultaneously tests the same optics in six different locations, or configurations, within the receiver, as shown in Fig.~\ref{fig:sixtubesreceiver}. Each tube is rotated to ensure that the x and y axes for the biconic lens always remain in the same orientation with the x axis pointing radially towards the center of the telescope.
In addition to helping us to determine the best location for EoR-Spec in Prime-Cam, testing all six tubes also allows us to simulate the effect of scanning the telescope in elevation. Unlike SO, which uses a co-rotator to keep the receiver in the same orientation relative to the telescope mirrors as the telescope aperture scans in elevation, the effective location of each instrument module in Prime-Cam rotates as the elevation angle changes. The different module locations in the optical design provide information on how the optical properties of each location change as the telescope tilts 60 degrees in elevation. Since 60 degrees is larger than the maximum elevation change that the telescope can scan through during operation, these tubes provide an upper bound on the changes to the optical properties of each optics tube due to the changing elevation of the telescope during observations.
\section{OPTICAL PERFORMANCE OF NOMINAL DESIGN}
\label{sec:performance}
The Strehl ratio, defined as the ratio of the peak intensity of the point spread function in the real system to the peak intensity of the point spread function of the system with aberrations removed \cite{zemaxdocs}, is used to quantify the image quality of the full design, including the telescope mirrors and the cold lenses. The goal of the optimization was to produce a diffraction-limited design, which we define as a Strehl ratio greater than 0.8. As shown in Fig.~\ref{fig:strehl11mm} and Fig.~\ref{fig:strehl086mm} for an elevation of 60 degrees, the design is diffraction-limited for most locations across 80-95\% of the focal plane at 1.1 mm (with much of the remaining area above 0.7) and 30-50\% at 0.8 mm. This meets our requirements for one 0.8 mm detector array and two 1.1 mm detector arrays.
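The diffraction-limited fractions quoted here come from maps of the Strehl ratio sampled across the focal plane; a minimal sketch of that bookkeeping, with illustrative sample values rather than numbers from the actual design:

```python
import numpy as np

def diffraction_limited_fraction(strehl_map, threshold=0.8):
    """Fraction of sampled focal-plane points whose Strehl ratio exceeds
    the diffraction-limited criterion (Strehl > threshold)."""
    strehl = np.asarray(strehl_map, dtype=float)
    return np.count_nonzero(strehl > threshold) / strehl.size

# Illustrative Strehl samples across a focal plane (not from the design)
samples = [0.95, 0.91, 0.88, 0.82, 0.79, 0.86, 0.90, 0.74, 0.85, 0.93]
print(diffraction_limited_fraction(samples))  # → 0.8
```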
Two locations achieve greater than 90\% coverage at 1.1 mm while simultaneously covering at least one array's worth of focal plane array at 0.8 mm.
To measure the collimation of the beam, we calculate the F/\# at the Lyot stop for many fields across our field of view.
For each field, we trace rays from that field to the extreme $+$x (far right), $-$x (far left), $+$y (top), and $-$y (bottom) locations on the Lyot stop, compute the angle between the extreme rays in the x direction and between the extreme rays in the y direction via a dot product, convert each angle to an F/\# using
\begin{equation}
F/\# = \frac{1}{2 \tan(\theta)},
\end{equation}
and average the F/\# values obtained from the x and y directions to estimate the F/\# for that field. A higher F/\# indicates a more collimated beam.
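The per-field F/\# estimate described above can be sketched as follows; the ray direction vectors are illustrative, not traced from the real optics:

```python
import numpy as np

def f_number(theta_rad):
    """F/# from the full angle between two extreme rays: F/# = 1/(2 tan(theta))."""
    return 1.0 / (2.0 * np.tan(theta_rad))

def field_f_number(ray_px, ray_mx, ray_py, ray_my):
    """Average F/# for one field from the direction vectors of the extreme
    rays at the +x/-x and +y/-y edges of the Lyot stop."""
    def angle(u, v):
        u = u / np.linalg.norm(u)
        v = v / np.linalg.norm(v)
        return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    fx = f_number(angle(ray_px, ray_mx))   # extreme rays in x
    fy = f_number(angle(ray_py, ray_my))   # extreme rays in y
    return 0.5 * (fx + fy)

# Illustrative rays diverging by 0.005 rad in each plane (-> F/# of ~100)
px = np.array([np.sin(0.0025), 0.0, np.cos(0.0025)])
mx = np.array([-np.sin(0.0025), 0.0, np.cos(0.0025)])
py = np.array([0.0, np.sin(0.0025), np.cos(0.0025)])
my = np.array([0.0, -np.sin(0.0025), np.cos(0.0025)])
print(round(field_f_number(px, mx, py, my), 1))  # → 100.0
```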
In order to achieve the target FPI resolving power of 100, we aim for an F/\# of 100 for each field at the Lyot stop. The results for each configuration are shown in Fig.~\ref{fig:FNumLyot}.
For locations 3-6, which also have the best image quality, around 40\% of the fields have an F/\# greater than 100 and 75\% of the fields have an F/\# greater than 50. The maximum F/\# for any field is around 250, while the minimum near the edges of the focal plane is near 20.
The design also meets all other optical requirements for the cryostat and detector design outlined in Ref.~\citenum{Dicker2018}, including an underfilled primary mirror and a well-established Lyot stop that is the pupil-defining surface for all fields. The re-imaging optics set the focal plane size to be around 275 mm in diameter to maximize the area filled by the detector arrays. In order to ensure that the light couples well to the feedhorns on the detector array, the chief ray angles across the focal plane are restricted to two degrees or smaller, as shown in Fig.~\ref{fig:chiefrayimage} for a single configuration.
\section{BICONIC LENS COMPARISON}
\label{sec:biconic}
Prior to changing lens three to a biconic lens, it was difficult to obtain a design with good image quality that also had a high F/\# at the Lyot stop for most fields in the field of view. The right column of Fig.~\ref{fig:biconiccomp} shows the best design with a good F/\# obtained using aspheric lenses alone. That design shows a strong radial dependence, with high Strehl ratios along the axis running radially towards the center of the receiver and poor image quality away from that line.
The left column of Fig.~\ref{fig:biconiccomp} shows the same design after lens three is changed to a biconic lens and that design is optimized for image quality. Since lens three comes after the Lyot stop, this change did not alter the collimation of the beam at the stop, but it did significantly improve the image quality. The fraction of the focal plane area filled by detector arrays with a Strehl ratio greater than 0.8 is 37.1\% and 24.4\% higher at 1.1 mm and 0.8 mm, respectively, than the design that used aspheric lenses alone.
We also explored the possibility of changing lens four to a biconic lens, but such designs did not show a significant improvement of the image quality in limited testing. Future designs for similar FPI-based spectrometers in Prime-Cam or future receivers could continue to explore the conversion of another lens to a biconic lens. Further, more detailed studies of making lens four biconic could lead to an improvement of image quality. Another option would be to change lens two to a biconic lens, which would also impact the collimation of the beam at the stop.
\section{CONCLUSION}
\label{sec:conclusion}
In order to use EoR-Spec's FPI to make quality measurements of the 158 $\mu$m [CII] line to probe structure formation during the epoch of reionization, it is important to have a cold optical design that balances a highly collimated beam at the Lyot stop of the system with excellent image quality across the detector arrays on the focal plane. Other mechanical constraints on the sag of the lenses and the size of the focal plane must also be met. By utilizing three aspheric lenses and one biconic lens, this optical design ensures the majority of the focal plane is diffraction-limited near 1.1 mm and sufficient area is diffraction-limited near 0.8 mm for the single detector array centered at that frequency in EoR-Spec. It also produces a highly collimated beam for a significant portion of the field of view of the instrument across all six possible locations for EoR-Spec in Prime-Cam. In the coming year, machining of the lenses for the cold optics of EoR-Spec will begin in order to prepare the instrument for calibration and early science in 2024.
\acknowledgments %
The CCAT-prime project, FYST and Prime-Cam instrument have been supported by generous contributions from the Fred M. Young, Jr. Charitable Trust, Cornell University, and the Canada Foundation for Innovation and the Provinces of Ontario, Alberta, and British Columbia. The construction of the FYST telescope was supported by the Gro{\ss}ger{\"a}te-Programm of the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG) under grant INST 216/733-1 FUGG, as well as funding from Universit{\"a}t zu K{\"o}ln, Universit{\"a}t Bonn and the Max Planck Institut f{\"u}r Astrophysik, Garching.
The construction of EoR-Spec is supported by NSF grant AST-2009767. ZBH acknowledges support from a NASA Space Technology Graduate Research Opportunities Award. MDN acknowledges support from NSF grant AST-2117631. SKC acknowledges support from NSF award AST-2001866.
\bibliography{main} %
\bibliographystyle{spiebib} %
|
Title:
Li distribution, kinematics and detailed abundance analysis among very metal-poor stars in the Galactic halo from the HESP-GOMPA survey |
Abstract: We present a study on the detailed elemental abundances of newly identified
bright very metal-poor stars with the detection of lithium, initially observed
as part of the SDSS/MARVELS pre-survey. These stars were selected for
high-resolution spectroscopic follow-up as part of the HESP-GOMPA survey. In
this work, we discuss the Li abundances detected for several stars in the
survey, which include main-sequence stars, subgiants, and red giants. Different
classes of stars are found to exhibit very similar distributions of Li, which
point towards a common origin. We derive a scaling relation for the depletion
of Li as a function of temperature for giants and main-sequence stars; the
majority of the samples from the literature were found to fall within 1sigma
(0.19 and 0.12 dex/K for giants and dwarfs respectively) of this relationship.
We also report the existence of a slope of the Li abundances as a function of
distances from the Galactic plane, indicating mixed stellar populations. Most
Li-rich stars are found to be in or close to the Galactic plane. Along with Li,
we have derived detailed abundances for C, odd-Z, alpha-, Fe-peak, and
neutron-capture elements for each star. We have also used astrometric
parameters from Gaia-EDR3 to complement our study, and derived kinematics to
differentiate between the motions of the stars; those formed in situ and
accreted. The stellar population of the Spite plateau, including additional
stars from the literature, is found to have significant contributions from
stars formed in situ and through accretion. The orbits for the program stars
have also been derived and studied for a period of 5 Gyr backward in time.
| https://export.arxiv.org/pdf/2208.13912 |
\title{Li distribution, kinematics and detailed abundance analysis among very metal-poor stars in the Galactic halo from the HESP-GOMPA survey}
\correspondingauthor{Avrajit Bandyopadhyay}
\email{[email protected], [email protected]}
\author[0000-0002-8304-5444]{Avrajit Bandyopadhyay}
\affiliation{Aryabhatta Research Institute of Observational Sciences, Nainital 263001, India}
\author{Thirupathi Sivarani}
\affiliation{Indian Institute of Astrophysics, Bangalore, India}
\author[0000-0003-4573-6233]{Timothy C. Beers}
\affiliation{Department of Physics and Astronomy and JINA Center for the Evolution of the Elements, University of Notre Dame, Notre Dame, IN 46556 USA}
\author{A. Susmitha}
\affiliation{Indian Institute of Astrophysics, Bangalore, India}
\author[0000-0002-4638-1035]{Prasanta K Nayak}
\affiliation{Tata Institute of Fundamental Research, Colaba, Mumbai, 400005, India}
\author[0000-0002-4331-1867]{Jeewan C Pandey}
\affiliation{Aryabhatta Research Institute of Observational Sciences, Nainital 263001, India}
\keywords{Galaxy: halo --- stars: abundances --- stars: Population II ---stars: individual --- nucleosynthesis }
\section{Introduction} \label{sec:intro}
The discovery of large numbers of very metal-poor (VMP; [Fe/H] $< -2.0$) stars has provided great opportunities to study the pristine conditions that existed in the early Universe when these old stellar objects were formed \citep{beers2005,firststars6,frebelandnorris,frebelrev18}. Among the many studies that can be conducted with these stars, the detection and measurement of lithium are of particular importance. Lithium is the only element in the periodic table, apart from H and He, that owes its origin (at least in part) to Big-Bang nucleosynthesis; all other elements can be produced in stellar interiors or other exotic stellar events.
Lithium is also a very fragile element, easily destroyed when exposed to high temperatures. This can be inferred from the observed depletion of stellar Li content as a star ascends the giant branch: convective channels opened during this phase mix the stellar atmosphere with Li-depleted matter from the stellar interior. This so-called evolutionary mixing largely depletes lithium, lowering its observed absolute abundance, $A$(Li).
The pioneering study of \citet{spitenspite} reported the abundance of Li for a sample of unevolved, old stars in the halo and disc of the Milky Way. A constant abundance of Li, $A$(Li) = 2.2, was obtained, subsequently referred to as the ``Spite Li plateau''. Over time, it came to be recognized that this level is substantially lower than the cosmological prediction of $A$(Li) = 2.7, based on the baryon density determined by the CMB measurements of the WMAP satellite \citep{spergel2003,coc2004}. This discrepancy demonstrates the existence of physical processes that have depleted Li in metal-poor main-sequence turnoff (MSTO) stars. Since then, there have been many efforts to understand the Li plateau and solve the Li problem (e.g., \citealt{Pinsonneault, ryan2002, korn2006, piau2006, firststars7}, among many others). A small, but statistically significant, slope of the Li plateau was discovered by \citet{ryan1999} as more stars with Li detections were studied. The decreasing trend of Li abundance with decreasing metallicity was confirmed by \cite{bonifacio2007} and \citet{sbordone2010}. Extremely metal-poor (EMP; [Fe/H] $<-3.0$) stars were also found to have Li abundances lower than the Spite plateau (e.g., \citealt{bonifacio2015}, and references therein), causing the ``breakdown'' \citep{aoki2009} or ``meltdown'' of the Spite plateau \citep{sbordone2010}.
Apart from Li, the abundances of other important elements among VMP and EMP stars are of considerable interest for constraining the pollution of their natal gas clouds by previous stellar generations. They also provide valuable constraints for improvements in models of stellar nucleosynthesis. Additional discoveries of, in particular, bright stars with [Fe/H] $<-2.0$, with or without chemical anomalies, are crucial for a better understanding of the nature of nucleosynthetic events in the early Universe.
In this paper, we report Li abundances for 12 metal-poor stars (including 10 VMP stars and 1 EMP star), 9 of which are studied for the first time. We have included three stars with measured Li reported earlier, and use them for investigations of their kinematics. The kinematics of these three stars were not reported previously and are included here for the sake of completeness of this chemo-dynamical study of Li abundances in stars from the HESP-GOMPA survey. In Section 2, we describe the target selection and details of the high-resolution spectroscopic observations. Derivations of stellar parameters and the measurement of Li abundances are described in Section 3. Implications of these measurements, possible correlations with atmospheric parameters and other abundances, and the kinematics of our sample, supplemented by literature studies, are described in Section 4. Section 5 presents a brief summary and conclusions.
\section{Observations, target selection, and analysis}
High-resolution ($R \sim 30,000$ \& $60,000$) spectroscopic observations of our program stars were carried out as a part of the HESP-GOMPA (Hanle Echelle SPectrograph -- Galactic survey Of metal-poor stArs) survey, using the HESP \citep{sriram2018} on the 2-m Himalayan Chandra Telescope (HCT) at the Indian Astronomical Observatory (IAO). The targets were selected from the spectroscopic pre-survey for MARVELS \citep{ge2015}, which was carried out as a part of SDSS-III \citep{eisenstein}. This offers the chance to identify bright metal-poor halo stars which could be studied at high spectral resolution using moderate-aperture telescopes. %
We have used synthetic spectral fitting of the pre-survey data to identify the most metal-poor stars. Furthermore, metal-poor candidates with weak CH $G$-bands were given preference for the high-resolution follow-up observations, in order to exclude carbon-rich stellar populations. We have obtained high-resolution data for 60 metal-poor stars, out of which Li could be measured for the 12 program stars listed in Table 1. This paper includes 9 new stars with measured Li abundances; however, the abundance table for \sdssfiftythree has not been included, as this object is a CEMP-no star that will be discussed in an upcoming paper on CEMP stars (Bandyopadhyay et al., in prep.). Abundances for the remaining 8 stars are discussed below. Complete details for the others, including all of the observed stars in the HESP-GOMPA survey, will be presented in a separate paper (Bandyopadhyay et al., in prep.). The stars were observed at a spectral resolving power of $R \sim$ 30,000, spanning a wavelength range of 380 nm to 1000 nm. The coordinates and observation details, including duration of observation, signal-to-noise ratios, $V$ magnitudes, and radial velocities for the program stars, are listed in Table 1.
\begin{table*}
\tabcolsep7.0pt $ $
\begin{center}
\caption{Observational Details for the Program Stars}
\begin{tabular}{ccccccccr}
\hline\hline
Star name &Object &RA &DEC &Exp & $SNR$ &$V$ mag &RV\\
& &J(2000) &J(2000) & (sec) & & &(km/s) \\
\hline
SDSS J002400.64+320311.4 &SDSS J0024+3203 &00 24 00.64 &32 03 11.40 &7200 &70.3 &11.58 & $-$434.0\\%
SDSS J031522.0+212324.6 &SDSS J0315+2123 &03 15 22.00 &21 23 24.60 &7200 &55.4 &11.35 & $-$49.0 \\%
SDSS J064301.86+593430.9 &SDSS J0643+5934 &06 43 01.86 &59 34 30.90 &8100 &71.6 &11.44 & 52.2\\%
SDSS J064655.60+411620.5 &SDSS J0646+4116 &06 46 55.60 &41 16 20.50 &9600 &43.1 &11.14 & $-$285.0\\%%
SDSS J065252.76+410506.0 &SDSS J0652+4105 &06 52 52.76 &41 05 06.00 &8100 &68.0 &11.36 &98.5\\%
SDSS J102411.84+415146.7 &SDSS J1024+4151 &10 24 11.84 &41 51 46.70 &7200 &49.5 &11.83 &194.0\\%
SDSS J114658.70+234357.2 &SDSS J1146+2343 &11 46 58.70 &23 43 57.20 &8100 &49.1 &11.06 & $-$9.5 \\%
SDSS J134144.60+474128.9 &SDSS J1341+4741 &13 41 44.60 &47 41 28.90 &7200 &47.0 &12.38 & $-$190.5\\%%
SDSS J172548.56+420241.9 &SDSS J1725+4202 &17 25 48.56 &42 02 41.90 &8100 &53.0 &11.66 & $-$266.5\\%
SDSS J193344.73+452410.9 &SDSS J1933+4524 &19 33 44.73 &45 24 10.90 &8100 &65.5 &11.48 &157.0\\
SDSS J193712.01+502455.5 &SDSS J1937+5024 &19 37 12.01 &50 24 55.50 &7200 &130.0 &10.44 & $-$184.0\\%%
SDSS J195344.22+422249.9 &SDSS J1953+4222 &19 53 44.22 &42 22 49.90 &7200 &245.0 &9.23 & $-$308.1\\% will not publish in this paper
\hline
\end{tabular}
\end{center}
\end{table*}
Data reduction was carried out using the IRAF echelle package, along with the publicly available data reduction pipeline for HESP\footnote{https://www.iiap.res.in/hesp/}, developed by Arun Surya. A cross-correlation analysis with a synthetic template spectrum was carried out to obtain the radial velocity (RV) for each star. The calculated RVs are listed in Table 1.
We have employed one-dimensional LTE stellar atmospheric models (ATLAS9; \citealt{castellikurucz}) and the spectral synthesis code TURBOSPECTRUM \citep{alvarezplez1998} for determining the abundances of the individual elements present in each spectrum. We have considered the equivalent widths of the absorption lines present in the spectra that are less than 120 m{\AA}, as they are on the linear part of the curve of growth. Version 12 of the TURBOSPECTRUM code for spectrum synthesis and abundance estimates was used for the analysis. The Kurucz database \footnote{http://kurucz.harvard.edu/linelists.html} was used for the compilation of the linelist. We have adopted the hyperfine splitting provided by \cite{mcwilliam1998}, along with Solar isotopic ratios.
The stellar atmospheric parameters of the program stars were derived iteratively. First estimates of the effective temperature were made using the photometric colour $V-K$. Gaia photometry and SED fitting \citep{bayo2008vosa} were also used to derive values of $T_{\rm eff}$ and $\log g$. A grid of stellar models was prepared for a wide range of $T_{\rm eff}$, $\log g$, and [Fe/H]. Abundances from the clean Fe I and Fe II lines were measured for each spectrum by equivalent-width analysis. The best fit was chosen such that the Fe I abundances do not vary with excitation potential and similar abundances are obtained from the Fe I and Fe II lines. The temperature estimates were then refined using the wings of the H$\alpha$ line, which are sensitive to small variations in temperature. We measured the FWHM of telluric and ThAr lines to broaden the synthetic spectra with a Gaussian profile appropriate for the HESP resolution ($R \sim 30{,}000$). The $\log g$ estimated from the Fe I/Fe II lines was assumed when computing the line profile for the Balmer-line analysis. Corrections for non-LTE effects in the effective temperatures estimated from Balmer lines were also incorporated in the adopted values. Colour temperatures for the stars were also derived as a consistency check. The different estimates of $T_{\rm eff}$ are listed in Table 2. Similarly, $\log g$ was determined by spectral fitting of the wings of the Mg I triplet in the 5173\,{\AA} region. Examples of the fitting for the H$\alpha$ and Mg wings are shown in Figure 1. Independent estimates of $\log g$ were obtained by the other methods listed in Table 3: (i) the ionization-equilibrium method using Fe I and Fe II abundances, and (ii) the parallax from Gaia, as described below. The surface gravity is calculated using the relation
\begin{equation}
\log(g/g_{\odot}) = \log(M/M_{\odot}) + 4\log(T_{\rm eff}/T_{{\rm eff},\odot}) + 0.4(M_{\rm bol} - M_{{\rm bol},\odot}).
\end{equation}
The $V$ magnitudes have been taken from SIMBAD, and the parallaxes are taken from \textit{Gaia}\footnote{https://gea.esac.esa.int/archive/} whenever possible. We have used evolutionary tracks\footnote{http://pleiadi.pd.astro.it/} to estimate the masses of the stars, which are found to be close to 0.8 M$_{\odot}$ for metal-poor stars. The finally adopted stellar parameters are listed in Table 4, where we have taken the estimates of $\log g$ from the Mg wings, owing to their sensitivity to small changes in $\log g$, and $T_{\rm eff}$ from the Fe I lines, as a large number of clean Fe I lines could be measured for every star. The parameters were consistent within the typical uncertainties of $\sim$150 K in temperature and 0.25 dex in $\log g$. However, we also report a discrepancy between the colour temperatures and the spectroscopic temperatures for the three stars SDSS J1341+4741, SDSS J1725+4202, and SDSS J1933+4524, as shown in Table 2. %
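The parallax-based surface gravity can be evaluated directly from the relation above; a minimal sketch, assuming solar reference values $T_{{\rm eff},\odot} = 5772$ K, $M_{{\rm bol},\odot} = 4.74$, and $\log g_{\odot} = 4.44$ (the 0.8 M$_{\odot}$ giant inputs are illustrative):

```python
import math

TEFF_SUN = 5772.0   # K (assumed solar reference)
MBOL_SUN = 4.74     # mag (assumed solar reference)
LOGG_SUN = 4.44     # cgs

def logg_from_parallax(mass_msun, teff_k, mbol):
    """log g from log(g/g_sun) = log(M/M_sun) + 4 log(Teff/Teff_sun)
       + 0.4 (Mbol - Mbol_sun)."""
    return (LOGG_SUN + math.log10(mass_msun)
            + 4.0 * math.log10(teff_k / TEFF_SUN)
            + 0.4 * (mbol - MBOL_SUN))

# Illustrative metal-poor giant: 0.8 M_sun, Teff = 4800 K, Mbol = 0.0
print(round(logg_from_parallax(0.8, 4800.0, 0.0), 2))  # → 2.13
```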
Errors in the derived abundances primarily depend on the signal-to-noise ratio (SNR) of the observed spectra and deviations in the values of the adopted stellar parameters. We have used the relation given by \cite{cayrel1988} to calculate the uncertainty in the abundances due to the SNR. The typical uncertainties in the derived stellar parameters are taken to be $\sim$150 K for temperature and 0.25 dex for $\log g$.
\begin{table}
\begin{center}
\caption{Estimates of Effective Temperature (K) for the Program Stars}
\begin{tabular}{crrrrrrrrrrr}
\hline\hline
Object &H-$\alpha$ &Fe~I &$V-K$ &$J-H$ &$J-K$\\% &$SED$\\
\hline
SDSS J0024+3203 &5700 &5700 &5737 &5672 &5875\\% &gaia \\
SDSS J0315+2123 &5450 &5400 &5570 &5287 &5264\\% &gaia \\
SDSS J0643+5934 &4800 &4900 &4618 &4843 &4801\\% &gaia\\
SDSS J0646+4116 &5100 &5150 &5065 &5179 &5144\\% &gaia\\
SDSS J0652+4105 &4900 &5000 &5060 &5108 &5060\\% &gaia\\
SDSS J1024+4151 &4800 &4800 &4655 &4782 &4823\\% &gaia\\
SDSS J1146+2343 &5200 &5100 &5825 &5273 &5365\\% &gaia\\
SDSS J1341+4741 &5450 &5450 &5927 &5438 &5749\\% &gaia\\
SDSS J1725+4202 &5300 &5400 &6274 &5803 &6012\\% &gaia\\
SDSS J1933+4524 &5850 &5800 &6249 &6038 &6249\\% &gaia\\
SDSS J1937+5024 &4950 &4800 &4738 &4702 &4908\\% &gaia\\
SDSS J1953+4222 &5900 &6000 &6136 &5847 &5874\\% &gaia\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Different estimates of $\log g$ for the Program Stars}
\begin{tabular}{crrrrrrrrrrr}
\hline\hline
Object &Fe I/Fe II &Mg wings &\textit{Gaia} parallax\\
\hline
SDSS J0024+3203 &3.80 &3.75 &3.94 \\
SDSS J0315+2123 &4.50 &4.50 &4.27\\
SDSS J0643+5934 &2.25 &2.50 &2.28\\
SDSS J0646+4116 &2.25 &2.25 &2.46\\
SDSS J0652+4105 &2.75 &2.50 &2.49\\
SDSS J1024+4151 &2.50 &2.50 &2.38\\
SDSS J1146+2343 &3.10 &3.00 &3.12\\
SDSS J1341+4741 &2.60 &2.50 &2.97\\
SDSS J1725+4202 &3.75 &3.50 &3.90\\
SDSS J1933+4524 &4.40 &4.50 &4.32\\
SDSS J1937+5024 &1.50 &1.50 &1.97\\
SDSS J1953+4222 &3.75 &4.00 &3.97\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Adopted stellar parameters for the Program Stars}
\begin{tabular}{crrrrrrrrrrr}
\hline\hline
Object & $T_{\rm eff}$ (K) & $\log g$ (cgs) & $\xi$ &[Fe/H] &$A$(Li)\\
\hline
SDSS J0024+3203 &5700 &3.75 &1.50 &$-$2.45 &2.00\\%0
SDSS J0315+2123 &5400 &4.50 &1.00 &$-$2.30 &1.80\\%0
SDSS J0643+5934 &4900 &2.50 &1.50 &$-$2.90 &0.80\\%0.01
SDSS J0646+4116 &5150 &2.25 &1.50 &$-$1.90 &1.00\\
SDSS J0652+4105 &5000 &2.50 &1.50 &$-$2.56 &1.75\\%0.01
SDSS J1024+4151 &4800 &2.50 &1.50 &$-$2.25 &1.05\\%0.01
SDSS J1146+2343 &5100 &3.00 &1.00 &$-$2.60 &1.15\\%0.01
SDSS J1341+4741 &5450 &2.50 &1.80 &$-$3.20 &1.95\\
SDSS J1725+4202 &5400 &3.50 &1.20 &$-$2.50 &1.90\\%0.0
SDSS J1933+4524 &5800 &4.50 &1.80 &$-$1.80 &2.25\\%0.0
SDSS J1937+5024 &4800 &1.50 &1.50 &$-$2.20 &1.00\\
SDSS J1953+4222 &6000 &4.00 &1.75 &$-$2.25 &2.05\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Abundances}
The results of our abundance analysis for 8 of the program stars are provided in Tables 4-11 below. Here we discuss details of this analysis for various classes of elements.
\subsection{Lithium}
Lithium abundances were derived from the strong absorption features at 6707.76\,{\AA} and 6707.98\,{\AA}, using the method of spectrum synthesis. The continuum level for the observed spectra is estimated locally around the Li doublet. The observed spectra were fit iteratively with the synthetic spectra for different values of Li abundance, and the best fit was adopted for each star, keeping the Li abundances as the only free parameter in the synthesis. Examples of the spectral synthesis for Li are shown in Figure~\ref{fig:lith-line}.
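The iterative fit can be sketched as a grid search over trial abundances; this toy model replaces the TURBOSPECTRUM synthesis with a schematic Gaussian line whose depth grows linearly with $A$(Li), so only the fitting logic, not the physics, is represented:

```python
import numpy as np

def gaussian_line(wave, center, depth, sigma=0.1):
    """Schematic absorption feature in normalized flux (not a real synthesis)."""
    return 1.0 - depth * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def best_fit_abundance(wave, obs_flux, abundances, depth_of):
    """Grid search: 'synthesize' the Li 6707.8 A feature for each trial A(Li)
    and keep the abundance that minimizes the chi-square residual."""
    chi2 = [np.sum((obs_flux - gaussian_line(wave, 6707.8, depth_of(a))) ** 2)
            for a in abundances]
    return abundances[int(np.argmin(chi2))]

# Toy curve of growth: depth grows linearly with A(Li) (illustrative only)
depth_of = lambda a: 0.1 * a
wave = np.linspace(6707.0, 6708.6, 200)
obs = gaussian_line(wave, 6707.8, depth_of(2.0))   # mock "observed" star, A(Li) = 2.0
grid = np.arange(0.5, 3.01, 0.05)
print(round(best_fit_abundance(wave, obs, grid, depth_of), 2))  # → 2.0
```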
Errors in the abundance analysis of Li primarily originate from uncertainties in estimates of effective temperature. A difference of $\sim$ 150\,K is found to alter the Li abundance by 0.14 dex, on average. For the determination of the abundances of neutral species like Li~I, uncertainties in surface gravity play a minimal role.
\subsection{The light and ${\alpha}$-elements}
Abundances of carbon could be derived for all of the program stars from the molecular CH $G$-band in the 4315\,{\AA} region. Most of the stars have low C, with [C/Fe] ranging from $-$0.53 to +0.22. Corrections to the measured carbon abundances for evolutionary effects were computed following \cite{placco2014}; they were found to be minimal (0.00--0.01 dex) for the program stars and have been incorporated in the final C abundances reported in the tables. The poor signal-to-noise ratio (SNR) in the region of the CN band at 3883\,{\AA} did not allow precise abundances for N, while the O lines at 6300\,{\AA} and 6363\,{\AA} were too weak and dominated by telluric contamination, which prevented a meaningful derivation of O abundances for most of the stars. Oxygen could be derived for \ten and was found to be enhanced, with [O/Fe] = +1.56.
Among the odd-Z elements, Na and Al could be detected and measured for all of the program stars. The Na abundances were determined using the D1 and D2 lines at 5890\,{\AA} and 5896\,{\AA}; the Al abundances were measured from the resonance line at 3961.5\,{\AA}. NLTE corrections for both elements, based on \cite{andrievskyal, andrievskyna}, were also implemented, as reported in Tables 4-11. The mean abundances are shown in comparison to samples from \citet{cayrel2004} and \citet{cohen2004} in Figure \ref{fig:cohen}. In this study, Si abundances could be derived for 5 of the 11 stars in our sample. These abundances are mostly based on the line at 410.29 nm (and also 390.55 nm in a few stars with strong blending from the CH line), which falls on the wings of the H$\delta$ line in a region with a very poor signal-to-noise ratio. The average Si abundance in our study agrees well with \cite{cayrel2004}, as demonstrated in Figure 4, but the Si abundances of \cite{cohen2004} are lower because spectral synthesis was used to determine Si in their C-rich stars; synthesis yields Si abundances substantially lower than those obtained with the standard analysis, as indicated in Tables 4--7 and the lower panel of Figure 4 of \cite{cohen2004}. Similarly, their carbon is higher, primarily due to evolutionary effects. Spectral synthesis is known to yield more accurate abundances for weak lines and low-SNR spectra, and a mixture of both methods has been used in this study. The uncertainties in the abundances of each element are also indicated in Figure 4.
The $\alpha$-elements are produced in different astrophysical sites, such as the hydrostatic burning phases in the shells of massive stars, oxygen and neon burning in Type II supernovae, and hypernovae. Among the $\alpha$-elements, Mg, Ca, and Ti abundances could be derived for all of the program stars, but meaningful Si abundances could only be derived for a few stars due to the poor SNR towards the blue end of the spectra. Several lines of Mg, Ca, and Ti could be detected in the spectra; the method of equivalent widths was employed to determine the abundances for the stronger lines, while spectral synthesis was used to determine abundances from the weaker and blended features.
Uncertainties in the derived abundances primarily depend on the signal-to-noise ratio (SNR) of the observed spectrum and deviations in the values of the adopted stellar parameters. We have used the relation given by \cite{cayrel1988} to calculate the uncertainty in the abundances due to the SNR. Uncertainties due to possible temperature and $\log (g)$ deviations were derived using two different model spectra, the first differing in temperature by $\sim$150 K and the second one deviating in $\log (g)$ by 0.25 dex. The final values of the abundance errors were obtained by adding the uncertainties arising from all three sources in quadrature. However, the errors in the relative abundance ratios are less sensitive to the errors in the model parameters and mainly depend on the SNR.
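The quadrature combination of the three error sources described above can be sketched as follows (the per-line error values are illustrative):

```python
import math

def total_abundance_error(sigma_snr, sigma_teff, sigma_logg):
    """Combine independent abundance-error terms (SNR via Cayrel 1988, a
    ~150 K Teff offset, and a 0.25 dex log g offset) in quadrature."""
    return math.sqrt(sigma_snr**2 + sigma_teff**2 + sigma_logg**2)

# Illustrative per-element error terms (dex)
print(round(total_abundance_error(0.10, 0.14, 0.05), 3))  # → 0.179
```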
\subsection{The Fe-peak elements}
The Fe-peak elements (Sc, Fe, Cr, Mn, Co, Ni, and Zn) are synthesized during complete and incomplete Si burning phases in pre-supernovae, as well as during the explosive phase of a Type II supernova. Iron abundances were derived on the basis of several Fe I and Fe II lines; a difference of 0.25 dex was noted, which is in agreement with other analyses of metal-poor stars. The iron abundance of the program stars varies from [Fe/H] = $-1.80$ to $-3.20$, with a mean value of [Fe/H] = $-2.40$. The abundances of Cr were measured from Cr I lines, which are known to suffer from strong NLTE effects \citep{lai2008, bonifacio2009}; a mean difference of 0.35 dex was obtained between the Cr I and Cr II lines in the current sample. Manganese abundances were primarily measured using the resonance triplet near 4030\,{\AA}, but the poor quality of the spectra in that region led to larger errors. The NLTE corrections for the Mn triplet region increase with decreases in metallicity \citep{bergemann2008,bergemann2019}. The mean value for the present sample is [Mn/Fe] = $-0.37$. \zerosix is very strongly depleted in Mn, with [Mn/Fe] = $-1.00$. Cobalt is a product of complete Si burning, and it tracks the iron content of the star, with the expected scatter due to observational uncertainties. The mean abundance of Co for the present sample is [Co/Fe] = +0.01. The nucleosynthesis pattern for the program stars, in comparison to the mean abundances of giant stars from \cite{cayrel2004} and dwarf stars from \cite{cohen2004}, is shown in Figure~\ref{fig:cohen}. As seen in the figure, the derived abundances agree well with the mean abundances from these samples.
\subsection{The n-capture elements}
Out of the several neutron-capture elements that could be measured in our spectra, abundances of Sr and Ba could be derived for all of the program stars by the method of spectral synthesis. Both the lines at 4077\,{\AA} and 4215\,{\AA} were used to derive the Sr abundances, while the line at 4554\,{\AA} was used to derive the Ba abundance. The other strong Ba line at 4934\,{\AA} was avoided, as analysis of this line is extremely difficult and yields large errors due to Fe blends found in the wings of this line \citep{gallagher2010}. The average abundances of Sr and Ba for the present sample are [Sr/Fe] = +0.26 and [Ba/Fe] = +0.25. However, several additional n-capture elements could be detected in \sixfiftytwo, which exhibits a large and uniform enhancement of n-capture elements. The first-peak species Sr, Y, and Zr could be detected among the lighter n-capture elements, with an average value of [ls/Fe] = +0.68. Among the heavier n-capture elements, Ba, La, Nd, and Eu could be measured, with an average value of [hs/Fe] = +0.90. The values of [hs/ls] = +0.22 and [Ba/Eu] = $-0.23$ indicate that \sixfiftytwo could have received contributions from both the $r$-process and the $s$-process. However, the strong presence of Li, with $A$(Li) = 1.75, along with a low abundance of carbon ([C/Fe] = $-0.12$, corrected for evolutionary effects), rules out $s$-process enrichment via mass transfer from a binary companion or winds from a massive star \citep{susmitha2021}. Hence, the n-capture material in \sixfiftytwo is more likely of $r$-process origin. The possibility of an $i$-process origin \citep{hampel2016,den2017} needs to be further explored as well.
\subsection{Kinematics}
To obtain the stars' kinematics, we required distances, proper motions, and radial velocities. Fortunately, Gaia provides a unique opportunity to estimate the proper motions of the stars with high precision, which we have adopted from Gaia-EDR3 \citep{gaia_edr3}. Distances are obtained from the catalogue of \citet{bailerjones2021}, where these authors estimated distances from Gaia-EDR3 parallaxes using probabilistic methods. The radial velocities were derived from the observed spectra, as listed in Table 1. We used the Astropy package to convert the observed coordinates into the Galactocentric frame and determined Galactocentric positions (X, Y, Z) and velocities ($V_x$, $V_y$, $V_z$), where the XY plane is the Galactic plane. For this conversion, we adopted a Galactocentric distance for the Sun of 8.2 kpc \citep{dehnenbinney98,mcmillan2017}, a distance of the Sun from the Galactic plane of 0 kpc, and solar velocity components of (12.9, 245.6, 7.78) km/s \citep{meingast21}.
We also derived the orbital characteristics of the observed program stars \citep{pinto2021}. In our computation, $R$ denotes the in-plane Galactocentric distance. The parameter $V_R$ is the velocity component along $R$, while $V_z$ is the vertical component of the velocity of the stars. The parameter $L_z$ represents the $z$-component of the angular momentum, and $L_{\perp}$ denotes the perpendicular component of the angular momentum. The parameter $V_{\phi}$ corresponds to the azimuthal velocity and is given by $L_z/R$. All of the computed velocities and angular momenta for the program stars are listed in Table 3. The velocities are listed in km/s, whereas the angular momenta are listed in multiples of 10$^2$ kpc km/s. Following \cite{dimatteo2020}, assuming a clockwise rotation of the disc, a negative value of $V_{\phi}$ represents prograde motion, whereas a positive value of $V_{\phi}$ represents retrograde motion. As shown in Figure 5, the dotted black line separates prograde from retrograde motions. Our program stars are evenly distributed in both regions; four stars towards the bottom left of the diagram likely belong to the disc population. \sixfiftytwo, marked in red, also belongs to this group. The other stars on the Spite plateau from the literature are shown with green open circles. The blue semi-circle shows the expected location of the disc stars in this plane.
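The derived quantities follow directly from the Galactocentric positions and velocities. The sketch below is a minimal NumPy illustration, not the actual analysis code; the sign conventions are chosen so that the values reproduce Table 3 (checked here against the row for SDSS J0652+4105):

```python
import numpy as np

def kinematics(X, Y, Z, Vx, Vy, Vz):
    """Return (V_R, V_phi, L_z, L_perp) from Galactocentric
    positions (kpc) and velocities (km/s).

    With the disc rotating clockwise, a negative V_phi = L_z/R
    corresponds to prograde motion, matching the text's convention.
    """
    R = np.hypot(X, Y)               # in-plane Galactocentric distance (kpc)
    V_R = (X * Vx + Y * Vy) / R      # radial velocity along R (km/s)
    L_z = X * Vy - Y * Vx            # z-component of angular momentum (kpc km/s)
    V_phi = L_z / R                  # azimuthal velocity (km/s)
    L_x = Y * Vz - Z * Vy            # remaining angular-momentum components
    L_y = Z * Vx - X * Vz
    L_perp = np.hypot(L_x, L_y)      # perpendicular component (kpc km/s)
    return V_R, V_phi, L_z, L_perp

# SDSS J0652+4105 (Table 3): approx (93.2, -223.6, -20.6e2, 1.15e2);
# small differences arise from the rounding of the tabulated inputs.
print(kinematics(-9.21, 0.09, 0.35, -91.0, 224.5, 12.6))
```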
\begin{table*}
\tabcolsep6.0pt $ $
\begin{center}$ $
\caption{Kinematics for the Program Stars}
\begin{tabular}{crrrrrrrrrr}
\hline\hline
Object &X &Y &Z & $V_x$ & $V_y$ & $V_z$ & $V_R$ & $L_z$ & $V_{\phi}$ & $L_{\perp}$\\
& (kpc) & (kpc) & (kpc) & (km/s) & (km/s) & (km/s) & (km/s) & (10$^2$ kpc km/s) & (km/s) & (10$^2$ kpc km/s) \\
\hline
SDSS J0024+3203 &$-$8.19 &0.19 &$-$0.12 &292.7 &$-$83.0 &152.4 &$-$294.6 &6.2 & 76.0 &12.12 \\
SDSS J0315+2123 &$-$8.38 &0.09 &$-$0.17 &27.2 &158.4 &35.9 &$-$25.5 &$-$13.3 &$-$158.7 &2.98 \\
SDSS J0643+5934 &$-$9.00 &0.56 &0.56 &$-$96.5 &131.8 &13.2 &104.3 &$-$11.8 &$-$125.8 &0.96 \\
SDSS J0646+4116 &$-$9.14 &0.09 &0.31 &216.6 &$-$71.7 &$-$208.3 &$-$217.4 &6.3 &69.4 &18.36 \\
SDSS J0652+4105 &$-$9.21 &0.09 &0.35 &$-$91.0 &224.5 &12.6 &93.2 &$-$20.6 &$-$223.6 &1.15 \\
SDSS J1024+4151 &$-$9.06 &0.04 &1.46 &$-$278.6 &$-$5.6 &56.5 &278.5 &0.6 &7.1 &1.05 \\
SDSS J1341+4741 &$-$8.13 &0.16 &0.40 &202.4 &$-$79.2 &$-$49.9 &$-$204.0 &6.1 &75.1 &3.25 \\
SDSS J1725+4202 &$-$8.02 &0.17 &0.12 &$-$218.0 &7.8 &$-$7.1 &218.1 &$-$0.2 &$-$3.1 &0.84 \\
SDSS J1933+4524 &$-$8.05 &0.23 &0.05 &273.3 &340.2 &72.6 &$-$263.2 &$-$28.0 &$-$348.1 &5.98 \\
SDSS J1937+5024 &$-$7.96 &1.12 &0.27 &157.4 &58.6 &$-$81.0 &$-$147.6 &$-$6.4 &$-$80.1 &6.10 \\
SDSS J1953+4222 &$-$8.07 &0.12 &0.02 &$-$121.0 &$-$45.9 &33.2 &120.3 &3.8 &47.8 &2.65 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Results and Discussion}
\subsection{Lithium distribution in the metal-poor regime}
We demonstrate the distribution of Li abundances as a function of temperature for different stellar families in Figure 6. The stars have been further categorized into giants and dwarfs for the EMP, CEMP-no, and CEMP-$s$ classes. The literature data were compiled from the SAGA database \citep{sudasaga}, and the program stars with detections of Li are marked with red diamonds. As noted by several studies of Li abundances in metal-poor stars \citep{spitenspite,bonifacio2007,litospite}, the plateau is observed for warmer dwarf stars with $T_{\rm eff}$ $>$ 5800\,K. However, the scatter tends to increase as the temperature decreases from 6500\,K to 5700\,K.
The identical distribution of Li across the EMP and CEMP-no stars indicates that the ISM was well-mixed during the epochs of their formation. Hence, Li cannot be used as a yardstick to differentiate between these stellar populations. Lithium is often depleted in CEMP-$s$ stars; they are not considered for deriving the fits in Figure 6, as the Li in these stars is often accreted through mass transfer from a companion AGB star. Mass transfer from a low-mass AGB star would produce large amounts of C and deplete Li, along with the production of $s$-process-enhanced material. There are models in which AGB stars could produce Li through the Cameron-Fowler mechanism \citep{cameronfowler1971}: the outer convective envelope comes into contact with the H-burning shell, where \textsuperscript{3}He is produced by proton-proton reactions; the \textsuperscript{3}He is burned to \textsuperscript{7}Be via \textsuperscript{3}He($\alpha$,$\gamma$)\textsuperscript{7}Be under convective conditions; and the \textsuperscript{7}Be is then swept up to the stellar surface, where it decays to \textsuperscript{7}Li by electron capture. Only three of our sample stars with measurable Li are MSTO stars, which is not adequate to test the consistency of the slope for our sample. Black and blue dots are used in this figure for the literature sample to homogeneously differentiate between dwarfs (black) and giants (blue) for all classes of stars. The two outliers among the EMP stars, shown as pink filled circles, are CS22893-010 \citep{roederer2014} and C1012254-203007 \citep{ruchti2011}.
We have also quantified the trends of Li abundance with temperature for the RGB and MS stars \textit{(stars with $\log g$ values greater than 4 are considered to be MS stars)}; definite trends are present. A strong correlation is obtained for the RGB stars, with a Pearson correlation coefficient of 0.89, while the dwarf MS stars exhibit a weaker correlation coefficient of 0.60. The probability of no correlation is less than 10\textsuperscript{-5} for both the RGB and dwarf MS stars. The best fits for these two populations are shown as blue and black solid lines, respectively, in Figure 6. About 85$\%$ of the stars fall within the 1$\sigma$ width of the best fits shown as solid lines in the plot. The errors for the Li abundances are taken to be 0.05 dex, while those for temperature are taken to be 150 K. The empirical relations governing the best fits for the trends of A(Li) with $T_{\rm eff}$ in the giant and dwarf stars are given in relations $(i)$ and $(ii)$ below, respectively:
\begin{equation*}
\begin{split}
A{\rm (Li)} & = 0.00108\, T_{\rm eff} - 4.524~~~ (i)\\
A{\rm (Li)} & = 0.00037\, T_{\rm eff} - 0.392~~~ (ii)
\end{split}
\end{equation*}
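As a quick numerical check, relations $(i)$ and $(ii)$ can be evaluated directly (plain Python using only the printed coefficients; the function names are ours):

```python
def A_li_giant(teff):
    """Relation (i): best-fit A(Li) for RGB stars at effective temperature teff (K)."""
    return 0.00108 * teff - 4.524

def A_li_dwarf(teff):
    """Relation (ii): best-fit A(Li) for dwarf MS stars."""
    return 0.00037 * teff - 0.392

# A cool giant is strongly Li-depleted, while a warm dwarf sits
# near the Spite plateau:
print(round(A_li_giant(5000), 2))  # 0.88
print(round(A_li_dwarf(6300), 2))  # 1.94
```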
Figure 7 shows the trend of Li abundance with metallicity. The cosmological value of Li and the observed value of the Spite plateau are shown in black lines. A small slope can be seen, and the scatter tends to increase for MS stars as metallicity decreases. The lowest-metallicity stars have lower values of $A$(Li), which reach the Spite plateau at [Fe/H] $> -3.50$, albeit with a large scatter. Our small sample could not yield significant results, but detection of Li in additional stars would provide better opportunities to understand the evolution and (possible) depletion of Li in the early Universe.
\subsection{Lithium abundances in halo and globular cluster stars}
\citet{pasquini} found a trend of Li abundances with other elements, such as Na, O, and N, for the MSTO stars in NGC 6752. $A$(Li) was found to correlate with [O/Fe] and anti-correlate with [Na/Fe] and [N/Fe]. \citet{bonifacio2007} confirmed the $A$(Li)-[Na/Fe] anti-correlation in 47 Tucanae. However, no such trend was noticed among the halo stars. In Figure 8, the halo stars have been divided into dwarfs (black dots) and giants (blue dots). The depletion of $A$(Li) due to evolutionary mixing can be seen in all four panels. No trends could be seen for $A$(Li) with [Na/Fe] (as reported in a few GCs), [Ca/Fe] (representative of the $\alpha$-element abundances), [Cr/Fe] (an Fe-peak element), or [Ba/Fe] (an n-capture element). We have also marked the abundances of the GC escapees \citep{ban_gce} and the CEMP-no star \citep{bandyopadhyay} with red filled triangles and pink diamonds, respectively, in each of the panels.
Lithium could also be detected for both of the GC escapees of this study, \zerosixgc and \nineteengc \citep{ban_gce}, and is found to be normal. Lithium is a fragile element, which is completely destroyed in a temperature range much lower than that required for the operation of the Mg-Al cycle. Thus, the presence of Li in second-generation stars indicates a heavy dilution of the gas processed by p-capture reactions with unprocessed gas that still preserves the standard Population II lithium abundance \citep{dantona2019}. Lithium has been measured in several Galactic GCs \citep{dorazi2015, dantona2019}; the Li abundances exhibit a similar distribution as normal metal-poor halo stars.
\subsection{Li abundance as a function of distance from the Galactic plane}
When combined with stars from the literature, we have 337 stars with Li, [Fe/H], and RV information. In Figure 7, we showed that there is a plateau with a negative slope in the distribution of $A$(Li) with [Fe/H]. We wanted to examine whether there is any correlation between the distribution of stars with respect to distance from the Galactic plane and their $A$(Li) or [Fe/H] abundances. As the present sample, combined with the stars from the literature, consists of metal-poor Population II stars, they are expected to be found more often in the halo.
Figure 9 shows the relationship between $A$(Li) and absolute distance from the Galactic plane, $|{\rm Z}|$, colour-coded to indicate [Fe/H]. The figure suggests that the Galactic disc ($|{\rm Z}| < 2$ kpc) is mostly populated with relatively metal-rich stars, while a slight trend can be noticed between the stars' distance from the Galactic plane and their Li abundances. The Li abundances tend to decrease as the stars' distances from the Galactic plane increase. Most Li-rich stars are found to be in or close to the Galactic plane. The new sample of stars in this study is shown as filled diamonds with colours corresponding to their metallicities. They are found to populate the region within 2 kpc of the Galactic plane.
\subsection{Light-element abundances}
The program stars exhibit the typical odd-even nucleosynthesis pattern exemplified by low Na, high Mg, and low Al \citep{truran1971,umeda2000,hegerandwoosley2002}. The trends for the abundances of the odd-Z and $\alpha$-elements are shown in Figure 10. The giants and main-sequence stars exhibit a similar distribution for the light elements. They show the expected enhancement in $\alpha$-elements for halo stars, with an average [$\alpha$/Fe] = +0.41. The $\alpha$-elements Mg, Ca, and Ti exhibit a consistent over-abundance, but a large scatter is observed in the case of Si. The star \sixfiftytwo (shown in red in Figure 10), apart from being rich in n-capture elements, also exhibits a higher abundance of the odd-Z element Na along with the $\alpha$-elements. The light-element abundances of the program stars are found to agree with previous investigations of metal-poor halo stars, as shown in Figure 10. The data for the metal-poor halo stars were compiled from the SAGA database \citep{sudasaga}. The two GC escapees, marked as filled blue diamonds, exhibit elevated abundances of Al, as seen in the top right panel. Si could be determined for only 5 of the 11 stars, as shown in the bottom panel.
\subsection{Heavier-element abundances}
The heavier elements show the expected behaviour with respect to variation in metallicity. The Fe-peak elements appear to closely track the Fe abundances (see Figure 11). Cobalt exhibits the typical decline with increasing metallicity, the Ni and Zn abundances are slightly enhanced, and Mn is mostly depleted, as seen in other VMP stars. The decreasing trend of Cr with metallicity observed in metal-poor stars is due to the NLTE effects on neutral Cr, which vary with metallicity \citep{bergemanncescutti2010}. For our sample, Cr varies by 0.3 dex over a 1.4 dex range in metallicity. A large dispersion in Sc abundances was expected from chemical evolution models that include significant contributions from a few supernovae with different masses, but this is not found for the current sample. The well-studied abundance trends for the Fe-peak elements in metal-poor stars are demonstrated in Figure 11. The sample size of this study is not adequate to study the variation of individual elements with metallicity over a wide range, but the program stars were found to follow the general trends. The large excess of Ni and Zn in a few stars could be attributed to their progenitors being massive stars exploding as Type II supernovae \citep{nakamura1999, nomoto2013}. The uncertainties in the abundances of the program stars are also marked in Figure 11.
The distributions of the neutron-capture elements Sr and Ba are shown in Figure 12. Strontium belongs to the first-peak or lighter n-capture elements, while Ba belongs to the heavier n-capture elements. Large enhancements or depletions in n-capture elements are not found among our program stars, with the exception of \sixfiftytwo. The ratio of light-to-heavy n-capture elements depends largely on the mass and nature of the progenitors. Since the contribution of the $s$-process is minimal at lower metallicities, the origin of these elements is expected to be from the $r$-process. Following \cite{tsuji1,tsuji2,susmitha} and \cite{siegel2019} for the $r$-process origin, the heavier element Ba is produced primarily by neutron star mergers (NSMs) or collapsars, whereas the lighter element Sr can be synthesized in NSMs as well as Type II supernovae. Thus, an excess of one over the other indicates the dominance of either NSMs or core-collapse SNe, and thus provides valuable information about the nature of the progenitors for the origin of the $r$-process \citep{ban_rp}. For the present sample, the distribution of [Sr/Ba] as a function of [Fe/H] is shown in the bottom panel of Figure 12. From inspection, the scatter is quite low, with a mean value of $\langle$[Sr/Ba]$\rangle$ = 0, suggesting that the program stars were polluted evenly by both progenitor populations during their star-forming epochs.
\subsection{Kinematics}
To study the assembly history of the Milky Way, it is important to classify the origin of the stars based on their kinematics. The distribution of the Li-rich and Li-poor stars in the $L_z$ vs. $L_{\perp}$ diagram can provide important clues towards the evolution of Li in the Milky Way. Following \cite{dimatteo2020}, stars with $L_z < -10$ kpc km/s are dominated by those formed in situ, while the stars in the region $L_{\perp} > 13$ kpc km/s and $L_z > -10$ kpc km/s are primarily accreted. However, the region $L_{\perp} < 13$ kpc km/s and $L_z > -10$ kpc km/s contains the stars from the Gaia-Sausage-Enceladus structure \citep{helmi2018,haywood2018}, as well as the disc stars with kinematics similar to the halo, and is therefore called the `mixed zone'. The three regions are shown by red dashed lines in Figure 13, while the previously discussed classification of prograde and retrograde motion stars is shown by the black dashed line. Four stars in the sample are found to have formed in situ, whereas one star is seen to be accreted. The rest of the sample belongs to the mixed zone. The positions of the VMP and EMP stars on the Spite plateau from the SAGA database \citep{sudasaga} are shown by green circles in Figure 13. The filled circles indicate the stars on the Spite plateau with $A$(Li) $>$ 2.05, while the open green circles show the stars that are slightly depleted from the Spite plateau. The majority of the stars belong to the mixed zone, with very few populating either the in-situ or accreted zones. The Li-rich stars are primarily found towards the bottom of the mixed zone. Thus, the Li population on the Spite plateau has significant contributions from both stars formed in situ and those that were accreted. However, only two of the Li-rich stars on the Spite plateau are found in the accreted zone, which is dominated by Li-depleted stars.
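The zone assignment just described reduces to two threshold tests. A hypothetical helper (our naming; thresholds exactly as quoted above, in the same units as printed) might look like:

```python
def classify_zone(L_z, L_perp):
    """Assign a star to a kinematic zone following the criteria of
    Di Matteo et al. (2020) as quoted in the text.
    L_z and L_perp must be in the same units as the quoted thresholds."""
    if L_z < -10:
        return "in situ"    # dominated by stars formed in situ
    if L_perp > 13:
        return "accreted"   # primarily accreted stars
    return "mixed"          # GSE stars plus disc stars with halo-like kinematics

print(classify_zone(-20, 5))  # in situ
print(classify_zone(5, 20))   # accreted
print(classify_zone(5, 5))    # mixed
```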
We have also calculated the trajectories of the stars in our sample for the in-situ, accreted, and mixed zones of the classification shown in Figure 13. The orbits are computed using $gala$ \footnote{The routine $gala$ is an Astropy-affiliated package for Galactic dynamics. $gala$ provides functionality for representing analytic mass models that are commonly used in Galactic dynamics contexts for numerically integrating stellar orbits (e.g., Chapter 3 of Binney and Tremaine 2008). The gravitational potential models are defined by specifying parameters such as mass, scale radii, or shape parameters. Once defined, they can be used in combination with numerical integrators provided in $gala$ to compute orbits. $gala$ comes with a pre-defined, multi-component, but simple model for the Milky Way that can be used for orbit integrations.}. The trajectories were calculated for a time period of 5 Gyr. The orbit of the in-situ n-capture-rich star \sixfiftytwo is shown in Figure 14. The three panels indicate the motions in the XY, XZ, and YZ planes. The star is found to have almost no motion along the Z-axis, and thus was clearly formed in situ, as expected. Similarly, the orbit of the accreted star \zerosixgc is shown in Figure 15. However, the trajectory appears very similar to those of stars formed in situ, which could be possible, as \zerosixgc is also likely to be a globular cluster escapee. The mixed-zone program stars appear to have open orbits and hence are most likely to have been accreted rather than formed in situ. The orbits for a few representative cases of the stars in the mixed zone are shown in Figure 16.
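The actual orbits are integrated with $gala$'s Milky Way model, as described in the footnote. The underlying idea (pick a potential, evaluate accelerations, step a symplectic integrator) can be sketched without the library; the spherical logarithmic potential below is an illustrative stand-in giving a flat rotation curve at $v_0$, not the multi-component model used in the analysis:

```python
import numpy as np

def integrate_orbit(w0, dt, n_steps, v0=220.0):
    """Leapfrog integration in a spherical logarithmic potential
    Phi = 0.5 * v0^2 * ln(r^2), whose circular speed is v0 at all radii.
    gala's integrate_orbit plays this role in the actual analysis.
    Units: kpc and km/s; dt is in kpc/(km/s) (~0.98 Gyr per unit).
    w0 = [x, y, z, vx, vy, vz]; returns the positions along the orbit."""
    def acc(x):
        r2 = np.dot(x, x)
        return -v0**2 * x / r2      # a = -grad(Phi)
    x = np.array(w0[:3], float)
    v = np.array(w0[3:], float)
    orbit = [x.copy()]
    a = acc(x)
    for _ in range(n_steps):        # kick-drift-kick leapfrog step
        v_half = v + 0.5 * dt * a
        x = x + dt * v_half
        a = acc(x)
        v = v_half + 0.5 * dt * a
        orbit.append(x.copy())
    return np.array(orbit)

# A circular orbit at 8 kpc with tangential speed 220 km/s:
orb = integrate_orbit([8.0, 0.0, 0.0, 0.0, 220.0, 0.0], dt=1e-3, n_steps=5000)
r = np.linalg.norm(orb, axis=1)
```

Over many orbital periods the radius stays essentially constant, which is the long-term stability a symplectic scheme such as leapfrog is chosen to preserve.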
\section{Conclusion}
A sample of 9 stars in the domain of very metal-poor stars with weak molecular carbon CH $G$-bands, selected from the low-resolution SDSS/MARVELS pre-survey, has been observed at high spectral resolution to study their detailed abundances. The stars show typical $\alpha$-element enhancements and the odd-even nucleosynthesis pattern for the light elements. The Fe-peak elements mostly track the iron content; the observed trends are consistent with other metal-poor stars. Lithium could be detected and measured in all the program stars, and several belong to the Spite plateau. The depletion of Li is observed as the stars ascend the giant branch. The trends for depletion in Li with temperature are quantified; 85\% of the stars were found to fall within the 1$\sigma$ width (0.19 and 0.12 dex for giants and dwarfs, respectively) of the best fit. A small slope of the Spite plateau at the metal-poor end was also found. When the Li abundance was studied as a function of distance from the Galactic plane, the Li-rich MSTO stars were found not to be preferentially located at larger distances; most Li-rich stars are in or close to the Galactic plane. The stars have also been classified on the basis of their motion into prograde and retrograde samples. The program stars, along with the Spite plateau population in the literature, are divided into those that were likely formed in situ, those that were accreted, and those in the mixed zone. The orbits for the program stars have also been derived and studied for a period of 5 Gyr backwards in time. The mixed zone is found to be the most populated; thus, neither in-situ formation nor accretion alone dominates, and the mixed zone can be considered an important contributor to the population of the Spite plateau.
\section{Acknowledgements}
We thank the staff of IAO, Hanle, and CREST, Hosakote, that made these observations possible. The facilities at IAO and CREST are operated by the Indian Institute of Astrophysics, Bangalore. T.C.B. acknowledges partial support from grant PHY 14-30152 (Physics Frontier Center/JINA-CEE), awarded by the U.S. National Science Foundation (NSF). We also thank the anonymous referee for the comments which improved the quality of our paper.
\begin{table}
\tabcolsep3.0pt $ $
\begin{center}
\caption{Elemental-abundance determinations for SDSS J0024+3203} %
\begin{tabular}{cccccccc}
\hline\hline
Elements &Species & $N_{lines}$ & A(X) & Solar & [X/H] & [X/Fe] & $\sigma$ \\
\hline
Li &Li I &1 &2.00 \\
C &CH & \dots &6.00 &8.43 &$-$2.43 &0.02 &synth \\
Na$^b$ &Na I &2 &4.22 &6.21 &$-$1.99 &0.46 &synth \\
Mg &Mg I &4 &5.64 &7.59 &$-$1.95 &0.50 &synth\\
Al$^b$ &Al I &1 &3.90 &6.43 &$-$2.53 &$-$0.08 &synth \\
Si &Si I &2 &5.46 &7.51 &$-$2.05 &0.40 &0.09 \\
Ca &Ca I &8 &4.37 &6.32 &$-$1.95 &0.50 &0.06\\
Sc &Sc II &5 &1.03 &3.15 &$-$2.12 &0.33 &0.06\\
Ti &Ti I &7 &3.14 &4.93 &$-$1.79 &0.66 &0.09\\
&Ti II &6 &3.03 &4.93 &$-$1.90 &0.55 &0.05\\
Cr &Cr I &3 &3.21 &5.62 &$-$2.41 &0.04 &0.12\\
&Cr II &2 &3.52 &5.62 &$-$2.10 &0.35 &0.07\\
Mn &Mn I &4 &2.62 &5.42 &$-$2.80 &$-$0.35 &0.11\\
Co &Co I &2 &2.73 &4.93 &$-$2.20 &0.25 &0.06\\
Ni &Ni I &3 &4.16 &6.20 &$-$2.04 &0.41 &synth\\
Zn &Zn I &2 &2.71 &4.56 &$-$1.85 &0.60 &0.07\\
Sr &Sr II &2 &1.00 &2.83 &$-$1.83 &0.62 &synth \\
Ba &Ba II &2 &0.25 &2.25 &$-$2.00 &0.45 &synth \\
Eu$^u$ &Eu II &1 &$-$1.0 &0.52 &$-$1.52 &0.93 &synth \\
\hline
\end{tabular}
\end{center}
$\sigma$ indicates the random error.
\newline
$^b$ Values obtained after applying NLTE corrections.
\newline
$^u$ indicates an upper limit.
\end{table}
\begin{table}
\tabcolsep3.0pt $ $
\begin{center}
\caption{Elemental-abundance determinations for SDSS J0315+2123} \label{c6t4}
\begin{tabular}{crrrrrrr}
\hline\hline
Elements &Species & $N_{lines}$ & A(X) & Solar & [X/H] & [X/Fe] & $\sigma$ \\
\hline
Li &Li I &1 &1.80 \\
C &CH & \dots &6.10 &8.43 &$-$2.33 &$-$0.03 &synth \\
Na$^b$ &Na I &2 &3.74 &6.21 &$-$2.47 &$-$0.17 &synth \\
Mg &Mg I &4 &5.85 &7.59 &$-$1.74 &0.56 &synth\\
Si &Si I &1 &5.77 &7.51 &$-$1.74 &0.56 &synth \\
Ca &Ca I &8 &4.24 &6.34 &$-$2.10 &0.20 &0.08\\
Sc &Sc II &5 &1.04 &3.15 &$-$2.11 &0.19 &0.04\\
Ti &Ti I &7 &3.11 &4.93 &$-$1.82 &0.48 &0.09\\
&Ti II &6 &2.98 &4.93 &$-$1.95 &0.35 &0.13\\
Cr &Cr I &3 &3.08 &5.62 &$-$2.54 &$-$0.22 &0.08\\
&Cr II &2 &3.19 &5.62 &$-$2.43 &$-$0.13 &0.09\\
Mn &Mn I &4 &2.60 &5.42 &$-$2.82 &$-$0.52 &0.12\\
Co &Co I &2 &2.70 &4.93 &$-$2.23 &0.07 &0.06\\
Ni &Ni I &3 &4.26 &6.20 &$-$1.94 &0.36 &synth\\
Cu &Cu I &2 &2.32 &4.19 &$-$1.87 &0.43 &0.12 \\
Zn &Zn I &2 &3.09 &4.56 &$-$1.47 &0.83 &synth\\
Sr &Sr II &2 &0.40 &2.83 &$-$2.43 &$-$0.03 &synth \\
Ba &Ba II &2 &0.25 &2.25 &$-$2.00 &$-$0.30 &synth \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\tabcolsep3.0pt $ $
\begin{center}
\caption{Elemental-abundance determinations for SDSS J0643+5934} \label{c6t9}
\begin{tabular}{crrrrrrr}
\hline\hline
Elements &Species & $N_{lines}$ & A(X) & Solar & [X/H] & [X/Fe] & $\sigma$ \\
\hline
Li &Li I &1 &0.80 \\
C &CH & \dots &5.75 &8.43 &$-$2.68 &0.22 &synth \\
Na$^b$ &Na I &2 &3.54 &6.21 &$-$2.67 &0.23 &synth \\
Mg &Mg I &4 &5.03 &7.59 &$-$2.56 &0.34 &synth\\
Al$^b$ &Al I &1 &2.70 &6.43 &$-$3.73 &$-$0.83 &0.09 \\
Ca &Ca I &8 &3.61 &6.32 &$-$2.71 &0.19 &0.06\\
Sc &Sc II &5 &0.47 &3.15 &$-$2.68 &0.22 &0.11\\
Ti &Ti I &7 &2.42 &4.93 &$-$2.51 &0.39 &0.10\\
&Ti II &6 &2.36 &4.93 &$-$2.57 &0.33 &0.07\\
Cr &Cr I &3 &2.54 &5.62 &$-$3.08 &$-$0.18 &0.08\\%%
&Cr II &2 &3.02 &5.62 &$-$2.60 &0.30 &0.08\\%%
Mn &Mn I &4 &1.52 &5.42 &$-$3.90 &$-$1.00 &0.12\\%%
Co &Co I &2 &2.32 &4.93 &$-$2.61 &0.29 &0.11\\
Ni &Ni I &3 &3.61 &6.20 &$-$2.59 &0.31 &synth\\
Zn &Zn I &2 &1.86 &4.56 &$-$2.70 &0.20 &0.06\\
Sr &Sr II &2 &0.00 &2.83 &$-$2.83 &0.07 &synth \\
Ba &Ba II &2 &$-$0.50 &2.25 &$-$2.75 &0.15 &synth \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\tabcolsep3.0pt $ $
\begin{center}
\caption{Elemental-abundance determinations for SDSS J0652+4105} \label{c5t5}
\begin{tabular}{ccrrrrrr}
\hline\hline
Elements &Species & $N_{lines}$ & A(X) & Solar & [X/H] & [X/Fe] & $\sigma$ \\
\hline
Li &Li I &1 &1.75 & & & &synth \\
C &CH & &5.75 &8.43 &$-$2.68 &$-$0.13 &synth \\
Na$^b$ &Na I &2 &4.23 &6.21 &$-$1.98 &0.57 &synth \\
Mg &Mg I &5 &5.62 &7.59 &$-$1.97 &0.58 &synth \\
Al$^b$ &Al I &1 &2.96 &6.43 &$-$3.47 &$-$0.92 &0.18\\
Ca &Ca I &11 &4.14 &6.32 &$-$2.18 &0.37 &0.08\\
Sc &Sc II &5 &1.00 &3.15 &$-$2.15 &0.40 &0.10\\
Ti &Ti I &4 &2.94 &4.93 &$-$1.99 &0.56 &0.15\\
&Ti II &13 &2.78 &4.93 &$-$2.15 &0.40 &0.11\\
Cr &Cr I &6 &3.24 &5.62 &$-$2.38 &0.17 &0.16\\%%
&Cr II &1 &3.69 &5.62 &$-$1.93 &0.62 &0.09\\%%
Mn &Mn I &5 &2.71 &5.42 &$-$2.71 &$-$0.16 &0.12\\
Co &Co I &2 &2.39 &4.93 &$-$2.54 &0.01 &0.08\\
Ni &Ni I &4 &3.82 &6.20 &$-$2.38 &0.17 &synth \\
Zn &Zn I &1 &2.57 &4.56 &$-$1.99 &0.56 &synth \\
Sr &Sr II &2 &1.00 &2.83 &$-$1.83 &0.72 &synth\\
Y &Y II &2 &0.25 &2.21 &$-$1.96 &0.59 &synth \\
Zr &Zr II &3 &0.75 &2.59 &$-$1.84 &0.71 &synth \\
Ba &Ba II &2 &0.50 &2.25 &$-$1.75 &0.80 &synth \\
La &La II &2 &$-$0.87 &1.11 &$-$1.98 &0.57 &synth\\
Nd &Nd II &2 &0.0 &1.42 &$-$1.42 &1.13 &synth \\
Eu &Eu II &1 &$-$1.0 &0.52 &$-$1.52 &1.03 &synth \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\tabcolsep3.0pt $ $
\begin{center}
\caption{Elemental-abundance determinations for SDSS J1024+4151} \label{c6t12}
\begin{tabular}{crrrrrrrrrrr}
\hline\hline
Elements &Species & $N_{lines}$ & A(X) & Solar & [X/H] & [X/Fe] & $\sigma$ \\
\hline
Li &Li I &1 &1.05 \\
C &CH & \dots &6.00 &8.43 &$-$2.43 &$-$0.18 &synth \\
O &O I & \dots &8.00 &8.69 &$-$0.69 &1.56 &synth \\
Na$^b$ &Na I &2 &3.90 &6.21 &$-$2.31 &$-$0.06 &synth \\
Mg &Mg I &4 &5.76 &7.59 &$-$2.03 &0.22 &synth\\
Al$^b$ &Al I &1 &2.89 &6.43 &$-$3.54 &$-$1.29 &synth \\
Ca &Ca I &8 &3.68 &6.32 &$-$2.64 &0.46 &0.09\\
Si &Si I &2 &5.42 &7.51 &$-$2.09 &0.14 &0.16 \\
Sc &Sc II &5 &1.08 &3.15 &$-$2.23 &0.12 &0.08\\
Ti &Ti I &7 &3.42 &4.93 &$-$1.51 &0.74 &0.12\\
&Ti II &6 &3.11 &4.93 &$-$1.82 &0.43 &0.09\\
Cr &Cr I &3 &3.25 &5.62 &$-$2.37 &$-$0.12 &0.13\\
&Cr II &2 &3.50 &5.62 &$-$2.12 &0.13 &0.09\\
Mn &Mn I &4 &2.51 &5.42 &$-$2.91 &$-$0.66 &0.10\\
Co &Co I &2 &2.46 &4.93 &$-$2.47 &$-$0.22 &0.06\\
Ni &Ni I &3 &4.22 &6.20 &$-$1.98 &0.27 &synth\\
Cu &Cu I &1 &2.89 &4.56 &$-$1.67 &0.58 &synth\\
Zn &Zn I &2 &2.89 &4.56 &$-$1.67 &0.58 &0.11\\
Sr &Sr II &2 &0.75 &2.83 &$-$2.08 &0.17 &synth \\
Ba &Ba II &2 &0.50 &2.25 &$-$1.75 &0.50 &synth \\
Eu$^u$ &Eu II &1 &$-$0.75 &0.52 &$-$1.27 &0.98 &synth \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\tabcolsep3.0pt $ $
\begin{center}
\caption{Elemental-abundance determinations for SDSS J1146+2343} \label{c6t2}
\begin{tabular}{crrrrrrr}
\hline\hline
Elements &Species & $N_{lines}$ & A(X) & Solar & [X/H] & [X/Fe] & $\sigma$ \\
\hline
Li &Li I &1 &1.15 \\
C &CH & \dots &6.00 &8.43 &$-$2.43 &$-$0.17 &synth \\
Na$^b$ &Na I &2 &3.67 &6.21 &$-$2.64 &$-$0.04 &synth \\
Mg &Mg I &4 &5.39 &7.59 &$-$2.00 &0.60 &synth\\
Al$^b$ &Al I &1 &2.90 &6.43 &$-$3.53 &$-$0.93 &synth \\
Ca &Ca I &8 &4.10 &6.32 &$-$2.22 &0.38 &0.06\\
Sc &Sc II &5 &0.73 &3.15 &$-$2.42 &0.18 &0.01\\
Ti &Ti I &7 &2.83 &4.93 &$-$2.10 &0.50 &0.03\\
&Ti II &6 &2.62 &4.93 &$-$2.31 &0.29 &0.04\\
Cr &Cr I &3 &3.07 &5.62 &$-$2.55 &0.05 &0.05\\
&Cr II &2 &3.49 &5.62 &$-$2.13 &0.47 &0.05\\
Mn &Mn I &4 &2.84 &5.42 &$-$2.58 &$-$0.02 &0.02\\
Co &Co I &2 &2.36 &4.93 &$-$2.57 &0.03 &0.01\\
Ni &Ni I &3 &4.00 &6.20 &$-$2.20 &0.40 &synth\\
Zn &Zn I &2 &2.57 &4.56 &$-$1.99 &0.61 &0.05\\
Sr &Sr II &2 &0.75 &2.83 &$-$2.08 &0.52 &synth \\
Ba &Ba II &2 &0.25 &2.25 &$-$2.00 &0.60 &synth \\
Eu$^u$ &Eu II &1 &$-$1.25 &0.52 &$-$1.77 &0.83 &synth \\
\hline
\end{tabular}
\end{center}
$\sigma$ indicates the random error.
\newline
$^b$ Values obtained after applying NLTE corrections.
\newline
$^u$ indicates an upper limit.
\end{table}
\begin{table}
\tabcolsep3.0pt $ $
\begin{center}
\caption{Elemental-abundance determinations for SDSS J1725+4202} %
\begin{tabular}{crrrrrrr}
\hline\hline
Elements &Species & $N_{lines}$ & A(X) & Solar & [X/H] & [X/Fe] & $\sigma$ \\
\hline
Li &Li I &1 &1.90 \\
C &CH & \dots &6.00 &8.43 &$-$2.43 &0.07 &synth \\
Na$^b$ &Na I &2 &3.92 &6.21 &$-$2.29 &0.21 &synth \\
Mg &Mg I &4 &5.53 &7.59 &$-$2.06 &0.44 &synth\\
Al$^b$ &Al I &1 &3.37 &6.43 &$-$3.06 &$-$0.56 &synth \\
Ca &Ca I &8 &4.28 &6.32 &$-$2.02 &0.48 &0.08\\
Sc &Sc II &5 &0.63 &3.15 &$-$2.52 &$-$0.02 &0.09\\
Ti &Ti I &7 &2.88 &4.93 &$-$2.05 &0.45 &0.08\\
&Ti II &6 &2.87 &4.93 &$-$2.06 &0.44 &0.04\\
Cr &Cr I &3 &3.03 &5.62 &$-$2.59 &$-$0.09 &0.07\\
&Cr II &2 &3.46 &5.62 &$-$2.16 &0.34 &0.11\\
Mn &Mn I &4 &2.60 &5.42 &$-$2.92 &$-$0.32 &0.15\\
Co &Co I &2 &2.54 &4.93 &$-$2.39 &0.11 &0.09\\
Ni &Ni I &3 &4.08 &6.20 &$-$2.12 &0.38 &synth\\
Zn &Zn I &2 &2.71 &4.56 &$-$1.85 &0.65 &synth\\
Sr &Sr II &2 &0.75 &2.83 &$-$2.08 &0.42 &synth \\
Ba &Ba II &2 &0.00 &2.25 &$-$2.25 &0.25 &synth \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\tabcolsep3.0pt $ $
\begin{center}
\caption{Elemental-abundance determinations for SDSS J1933+4524} \label{c6t8}
\begin{tabular}{cccccrrr}
\hline\hline
Elements &Species & $N_{lines}$ & A(X) & Solar & [X/H] & [X/Fe] & $\sigma$ \\
\hline
Li &Li I &1 &2.25 \\
C &CH & \dots &6.50 &8.43 &$-$1.93 &$-$0.13 &synth \\
Na$^b$ &Na I &2 &4.23 &6.21 &$-$1.98 &$-$0.18 &synth \\
Mg &Mg I &4 &5.95 &7.59 &$-$1.64 &0.16 &synth\\
Al$^b$ &Al I &1 &3.85 &6.43 &$-$2.58 &$-$0.78 &synth \\
Ca &Ca I &8 &4.86 &6.32 &$-$1.46 &0.34 &0.08\\
Sc &Sc II &5 &1.62 &3.15 &$-$1.53 &0.27 &0.09\\
Ti &Ti I &7 &3.60 &4.93 &$-$1.33 &0.47 &0.08\\
&Ti II &6 &3.75 &4.93 &$-$1.18 &0.62 &0.04\\
Cr &Cr I &3 &4.18 &5.62 &$-$1.44 &0.36 &0.07\\
&Cr II &2 &4.15 &5.62 &$-$1.47 &0.33 &0.11\\
Mn &Mn I &4 &3.59 &5.42 &$-$1.83 &$-$0.03 &0.15\\
Co &Co I &2 &2.97 &4.93 &$-$1.96 &$-$0.16 &0.09\\
Ni &Ni I &3 &4.52 &6.20 &$-$1.68 &0.12 &synth\\
Zn &Zn I &2 &3.18 &4.56 &$-$1.38 &0.42 &synth\\
Sr &Sr II &2 &1.60 &2.83 &$-$1.23 &0.57 &synth \\
Ba &Ba II &2 &0.95 &2.25 &$-$1.30 &0.50 &synth \\
Eu$^u$ &Eu II &1 &$-$0.50 &0.52 &$-$1.02 &0.78 &synth \\
\hline
\end{tabular}
\end{center}
\end{table}
\bibliographystyle{aasjournal}
\bibliography{ms_li_apj}
|
Title:
Spin-Flavor Precession Phase Effects in Supernova |
Abstract: If the neutrino has a large magnetic moment, then a phase effect may appear
in its spin-flavor precession inside the supernova. It differs from the
ordinary flavor oscillation phase effect in two aspects: It can develop even if
there is only one partially adiabatic resonance and it affects a large part
the neutrino energy spectrum. We examine the spin-flavor precession phase
effect both analytically and numerically for the Majorana neutrinos in a core
collapse supernova. Our analytical approach is based on the assumption that
spin-flavor precession and Mikheev-Smirnov-Wolfenstein resonances are
completely decoupled. Where this decoupling assumption fails, we present only
numerical results. We show that the sensitive phase dependence of the survival
probabilities can be treated as an uncertainty which smears the final neutrino
energy spectra to be observed at Earth.
| https://export.arxiv.org/pdf/2208.06926 |
\title{Spin-Flavor Precession Phase Effects in Supernova}
\author{T.~Bulmus}
\email{[email protected]}
\affiliation{Mimar Sinan Fine Arts University, Sisli, Istanbul, 34380, Turkey}
\author{Y.~Pehlivan}
\email{[email protected]}
\affiliation{Mimar Sinan Fine Arts University, Sisli, Istanbul, 34380, Turkey}
\date{\today}
\medskip
\pacs{14.60.Pq, %
95.85.Ry, %
97.60.Bw, %
13.40.Em %
}
\keywords{Neutrino magnetic moment, phase effects, supernova}
\preprint{}
\section{INTRODUCTION}
\label{sec:INTRODUCTION}
A neutrino's anomalous magnetic moment causes its spin to precess around a
magnetic field \cite{Pauli:1941zz, Lee:1977tib, SHROCK1982359}. Coupled with the
ordinary flavor evolution in vacuum, this gives rise to the phenomenon of
spin-flavor precession (SFP) \cite{Fujikawa:1980yx}. Ordinarily, the Standard
Model predicts the neutrino magnetic moment to be of the order of $10^{-20}
\mu_B$ where $\mu_B$ denotes the Bohr magneton \cite{Fujikawa:1980yx,
Balantekin:2013sda}. This is too small to be consequential in most settings.
However, it is possible that the neutrino magnetic moment is larger than the
Standard Model prediction \cite{Bell:2005kz, Babu:2020ivd}. The current experimental
upper bound on the neutrino magnetic moment is of the order of $10^{-11} \mu_B$
\cite{Zyla:2020zbs}, but somewhat more stringent bounds follow from
astrophysical arguments \cite{Raffelt:1999gv}. For a recent review of neutrino
electromagnetic properties, see Ref. \cite{Giunti:2014ixa}. Here, we assume that
the neutrino magnetic moment is of the order of $10^{-16} \mu_B$ or larger.
If the magnetic field is perpendicular to the velocity of the neutrino, then
spin precession causes the neutrino's helicity to oscillate. In matter, positive and
negative helicity neutrinos interact differently and SFP is modified
\cite{Voloshin:1986ty, Okun:1986na, Lim:1987tk, Akhmedov:1988uk}. In particular,
each interaction forces the neutrino back into a flavor state so that SFP is
suppressed if the density is high. But, under the right conditions, the effects
of the matter interactions and the vacuum oscillations can cancel each other.
Around this cancellation region even a relatively weak magnetic field can cause
significant helicity transformation. This is known as the SFP resonance
\cite{Lim:1987tk, Akhmedov:1988uk}. It is analogous to the
Mikheev-Smirnov-Wolfenstein (MSW) resonance which happens when neutrinos undergo
ordinary flavor oscillations in a medium, and a similar cancellation leads to
significant flavor transformation even for a very small mixing angle
\cite{Wolfenstein:1977ue, Mikheev:1986wj, 1986PhRvL..57.1275P}. The phase effect
that we consider here appears when the SFP resonance is partially adiabatic.
Adiabaticity refers to the situation where the external conditions affecting a
system change slowly in comparison to the system itself. For the problem at
hand, this means that the distance scales over which the matter density and the
magnetic field change should be much longer than the distance scale over which
the neutrino oscillates. Adiabatic evolution can be described in terms of the
energy eigenstates of the Hamiltonian in a simple way: An initial state which is
nearly an energy eigenstate evolves approximately into the same eigenstate at
later times. The adiabaticity condition comes closest to being violated in the
resonance region, where the energy eigenvalues approach each other. If it is
violated, one speaks of partial adiabaticity. In the partially adiabatic case, even
if the initial state is approximately an energy eigenstate, it evolves into a
superposition of the two approaching energy eigenstates. This superposition is
described by the Landau-Zener jumping probability \cite{1932PhyZS...2...46L,
1932RSPSA.137..696Z, 1981PhRvA..23.3107R}. See Refs. \cite{Kuo:1989qe,
Smirnov:2004zv} for reviews in the context of ordinary flavor evolution.
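The Landau-Zener jumping probability invoked above can be checked numerically on a schematic two-level crossing. The sketch below is purely illustrative (the Hamiltonian, parameter values, and units are not taken from the supernova system): it integrates the Schr\"odinger equation through a linear crossing and compares the diabatic survival probability with the analytic Landau-Zener result.

```python
import numpy as np

# Toy two-level Landau-Zener crossing (illustrative, not the supernova system):
#   H(t) = [[v t / 2, delta], [delta, -v t / 2]]
# The analytic probability of remaining in the initial diabatic state is
#   P_LZ = exp(-2 pi delta^2 / v).
def landau_zener_numeric(delta=0.25, v=1.0, t_max=60.0, dt=0.002):
    H = lambda t: np.array([[v * t / 2, delta], [delta, -v * t / 2]], dtype=complex)
    rhs = lambda t, y: -1j * (H(t) @ y)          # i d(psi)/dt = H psi
    psi = np.array([1.0, 0.0], dtype=complex)    # start in diabatic state 1
    t = -t_max
    for _ in range(int(round(2 * t_max / dt))):  # fourth-order Runge-Kutta
        k1 = rhs(t, psi)
        k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
        k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
        k4 = rhs(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return abs(psi[0]) ** 2                      # diabatic survival probability

p_numeric = landau_zener_numeric()
p_analytic = np.exp(-2 * np.pi * 0.25**2 / 1.0)
```

For these parameters the numerical survival probability agrees with the Landau-Zener formula at the percent level; the residual difference comes from starting and stopping the integration at a finite distance from the crossing.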
If the neutrino is nearly in an energy eigenstate when it is produced, then the
above description is sufficient: The evolution of the neutrino is uniquely
determined by the initial energy eigenstate that it occupies, and the
Landau-Zener jumping probability. But the same cannot be said if the neutrino is
not initially in an energy eigenstate. This can happen, for example, near the
center of a core collapse supernova. Such a supernova emits a large flux of
neutrinos from the proto-neutron star forming at its center where both the
matter density and the magnetic field are expected to be high
\cite{2002RvMP...74.1015W, Kotake:2005zn, Janka:2006fh}. The magnetic field
drives the energy eigenstates close to spin projection eigenstates along itself,
and the high matter density drives them close to flavor eigenstates. Therefore a
neutrino born with a particular flavor and helicity is necessarily a
superposition of energy eigenstates. Each energy eigenstate component evolves
separately as described above, and they develop a relative phase between them.
If the neutrino subsequently passes through a partially adiabatic SFP resonance,
then each component separately evolves into a superposition according to the
Landau-Zener formulation. In this case, the relative phase that they develop up
until resonance point determines the final state of the neutrino and affects the
neutrino signal to be observed at Earth.
The situation described above is already familiar to neutrino physicists from
the \emph{phase effect} in ordinary flavor oscillations in matter. This effect
appears when neutrinos go through two subsequent partially adiabatic MSW
resonances when, for example, the shock wave in a supernova creates a sharp
local dip in matter density. Without a large neutrino magnetic moment, the
neutrino produced at the center of the supernova is nearly in an energy
eigenstate, but it evolves into a superposition of two eigenstates at the first
partially adiabatic MSW resonance. These two eigenstates then develop a relative
phase until the second partially adiabatic MSW resonance. The final state of
the neutrino after the second resonance depends sensitively on this relative
phase. This phenomenon was examined in detail in Ref. \cite{Dasgupta:2005wn}.
The main message of the present paper is that if the neutrino possesses a large
magnetic moment, then a phase effect appears between its production site and the
first partially adiabatic SFP resonance it encounters. We call this the
\emph{SFP phase effect}. The main difference between the SFP phase effect and
the ordinary phase effect examined in Ref. \cite{Dasgupta:2005wn} is in their
ubiquity: The ordinary phase effect requires the neutrino to go through two
partially adiabatic MSW resonances. For this reason, its effect is limited to
part of the neutrino energy spectrum depending on the particulars of the shock
wave propagation. The SFP phase effect does not require a second partially adiabatic
resonance and therefore affects a larger part of the neutrino energy spectrum,
independently of the shock wave propagation. Nevertheless, the presence of a
subsequent adiabatic or partially adiabatic MSW resonance is also examined in
this paper.
In this paper, we assume that neutrinos are of Majorana nature and examine the SFP
phase effect in a core collapse supernova, where it can be potentially important.
We show that the sensitivity of the final state on the relative phases
effectively creates uncertainties in the neutrino survival probabilities. We
derive explicit analytical formulas to calculate the size of these
uncertainties. This analytical treatment is valid for any density and magnetic
field distribution as long as SFP and MSW resonances are decoupled in the sense
that each can be locally treated as a two-dimensional quantum mechanical
problem. In order to highlight a few basic features of the SFP phase effect, we
run illustrative simulations using an exponentially decreasing matter density and a
magnetic field which decreases with the square of the distance from the center.
We mimic the effect of a shock wave by decreasing the central density with
the post-bounce time. We show that the decoupling approximation (and our
analytical treatment with it) tends to fail at later post bounce times due to
the spreading of SFP resonance with dropping central density. This is especially
true if the neutrino magnetic moment is large or the magnetic field is strong.
We also run simulations with realistic density distributions at different
post-bounce times to demonstrate how the uncertainties in the survival and
transition probabilities may smear the final neutrino energy spectra to be
observed at Earth.
In Section II, we review the oscillations, precession and interactions of
Majorana neutrinos inside the supernova. In Section III we discuss SFP and MSW
resonances, focusing in particular on the conditions under which they can be
treated as two separate two-level problems. In Section IV, we discuss the
adiabatic evolution of the neutrinos inside the supernova, and their subsequent
decoherence on their way to Earth. In Section V we consider the hypothetical
case of the zero vacuum mixing angle. Although this case is not of practical
importance, it removes the MSW resonance from the picture and allows us to focus
on the phase effect between the production point and the SFP resonance. In
Section VI we consider the non-zero mixing angle case by including the MSW
resonance. In Section VII we present our discussion and conclusions.
\section{Neutrinos inside the supernova}
\label{sec:systemDynamics}
We work in the effective two flavor mixing scheme with 1-3 mixing parameters
which is the relevant part of the full mixing parameter space for supernova.
See, for example, Ref. \cite{Ahriche:2003wt} for three flavor effects. We
denote the negative helicity flavor degrees of freedom by $\ket{\nu_e}$ and
$\ket{\nu_x}$. They respectively correspond to the electron flavor and an
orthogonal flavor combination. Corresponding positive helicity states are
respectively denoted by $\ket{\bar\nu_e}$ and $\ket{\bar\nu_x}$. Majorana
neutrinos are their own antiparticles. However, as far as their production and
detection are concerned, a positive helicity Majorana neutrino behaves very
similar to a positive helicity Dirac antineutrino. For this reason, it is
conventional to refer to Majorana neutrinos with positive helicities as
antineutrinos. We also adopt this convention, but we refer to all degrees of
freedom as neutrinos when no distinction is necessary.
A magnetic field turns a Majorana neutrino into a Majorana antineutrino by
flipping its helicity. The resulting effect is described by the
Hamiltonian\footnote{We use outer product forms rather than matrix
representations because writing down $4\times 4$ matrices with uncertainties is
impractical in the two-column format.}
\begin{equation}
\label{Hmu}
H_{\mu}\mkern-1mu(r)\!=\! \mu B(r)
\qty(\dyad{\nu_e}{\bar\nu_x}\!+\!\dyad{\bar\nu_x}{\nu_e}\!-\!\dyad{\nu_x}{\bar\nu_e}
\!-\!\dyad{\bar\nu_e}{\nu_x}).
\end{equation}
This Hamiltonian only includes flavor off-diagonal terms because flavor-diagonal
terms vanish identically due to the reality condition of the Majorana spinors
\cite{Giunti:2014ixa, Pehlivan:2014zua}. $B(r)$ in Eq. (\ref{Hmu}) denotes the component of the
magnetic field perpendicular to the neutrino's direction of motion. For the
neutrinos that will be detected in experiments, it depends on the orientation of the
supernova with respect to Earth in addition to the particulars of the supernova
dynamics. For this reason, it is difficult to be specific about $B(r)$.
Moreover, it is always the $\mu B(r)$ combination which appears in equations. We
find that, for the purposes of this paper, the important parameters are the
values of $\mu B$ on the surface of the neutrinosphere and around the SFP
resonance region. In particular, how the magnetic field changes in between these
two points is less important. For definiteness, we use a magnetic field which
decreases with the distance $r$ from the center of the supernova as
\cite{deGouvea:2012hg, deGouvea:2013zp, Kharlanov:2020cti, Sasaki:2021bvu}
\begin{equation}
\label{B value}
B(r)=B_0\left(\frac{r_{\mbox{\footnotesize{mag}}}}{r}\right)^2
\end{equation}
where $r_{\mbox{\footnotesize{mag}}}=50$ km. Neutrino flavor evolution starts
from the surface of the proto-neutron star which we also take to be at $R=50$
km. Therefore, $B_0$ is effectively the magnetic field on the surface of the
neutrinosphere. Its value can range from a conservative $10^{12} \text{G}$ to
extreme values such as $10^{16} \text{G}$ \cite{Kotake:2005zn, Sawai:2005pr}.
For example, taking $B_0=10^{15} \text{G}$ and $\mu=3\times 10^{-16} \mu_B$
leads to an energy separation of
\begin{equation}
\label{muB value}
\mu B_0=1.7\times 10^{-9} \mbox{ eV}
\end{equation}
between the helicity eigenstates on the surface of the neutrinosphere. This is
larger than the typical separation of $10^{-11}$ eV between neutrino mass
eigenstates in vacuum but smaller than the typical separation of $10^{-8}$ eV
between the flavor eigenstates in matter. We discuss these figures below.
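The quoted splitting in Eq. (\ref{muB value}) follows directly from the Bohr magneton expressed in eV/G; a minimal numerical check (CODATA value, rounded):

```python
# Check of Eq. (muB value): mu * B_0 for mu = 3e-16 mu_B and B_0 = 1e15 G.
# Bohr magneton: mu_B ~ 5.788e-5 eV/T = 5.788e-9 eV/G (CODATA, rounded).
mu_B_in_eV_per_G = 5.788e-9
mu = 3e-16 * mu_B_in_eV_per_G    # neutrino magnetic moment in eV/G
B0 = 1e15                        # neutrinosphere field in Gauss
muB0 = mu * B0                   # energy splitting in eV
print(f"mu*B0 = {muB0:.2e} eV")  # ~1.7e-9 eV, as quoted in the text
```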
Neutrinos interact with other particles and with each other in the supernova
\cite{Flowers:1976kb, fuller&mayle}. Here, we ignore the neutrino-neutrino
interactions\footnote{Neutrino-neutrino interactions turn the neutrinos
streaming out of a supernova into a self interacting many-body system
\cite{Pantaleone:1992eq, Pantaleone:1992xh} with non-linear behavior. Without a
large neutrino magnetic moment, the effects of the neutrino-neutrino
interactions on neutrino flavor evolution is extensively studied. See Refs.
\cite{Duan:2010bg, Chakraborty:2016yeg} for reviews. For normal mass hierarchy
(NH) these interactions generally have minimal effect on the flavor evolution.
For inverted mass hierarchy (IH) they cause different flavors to swap parts of
their energy spectra with each other in the first few hundred km. With a large
neutrino magnetic moment, the possible effects are more complicated. See, for
example, Refs. \cite{deGouvea:2012hg, deGouvea:2013zp, Pehlivan:2014zua,
Kharlanov:2020cti, Sasaki:2021bvu}. We comment on how they can affect our
results in Conclusions.}. The interactions with the other particles can be
further limited to the forward scattering alone because these are the only terms
which add up coherently \cite{Wolfenstein:1977ue}. At the MeV energy scale
relevant for the supernova, all weak interactions can be treated within the
Fermi four point model in terms of the Fermi coupling constant $G_F$. With these
considerations, the Hamiltonian describing the vacuum oscillations and
interactions in an unpolarized neutral medium can be written as
\begin{equation}
\label{Hnunu}
\begin{split}
H_{\nu \leftrightarrow \nu}(r)
=&\qty(-\tfrac{\delta m^2}{2E_\nu}\cos{2\theta}+\tfrac{\sqrt{2}G_Fn(r)}{m_n}\tfrac{3Y_e-1}{2})
\dyad{\nu_e}{\nu_e}\\
+&\qty(\tfrac{\delta m^2}{2E_\nu} \cos{2\theta}-\tfrac{\sqrt{2}G_Fn(r)}{m_n}\tfrac{1-Y_e}{2})
\dyad{\nu_x}{\nu_x}\\
+&\,\tfrac{\delta m^2}{2E_\nu} \sin{2\theta}\,
\left(\dyad{\nu_e}{\nu_x}+\dyad{\nu_x}{\nu_e}\right)
\end{split}
\end{equation}
for neutrinos and
\begin{equation}
\label{Hnubarnubar}
\begin{split}
H_{\bar\nu \leftrightarrow \bar\nu}(r)
=&\qty(\tfrac{\delta m^2}{2E_\nu}\cos{2\theta}+\tfrac{\sqrt{2}G_Fn(r)}{m_n}\tfrac{1-Y_e}{2})
\dyad{\bar\nu_x}{\bar\nu_x} \\
+&\qty(-\tfrac{\delta m^2}{2E_\nu}\cos{2\theta}-\tfrac{\sqrt{2}G_Fn(r)}{m_n}\tfrac{3Y_e-1}{2})
\dyad{\bar\nu_e}{\bar\nu_e}\\
+&\,\tfrac{\delta m^2}{2E_\nu} \sin{2\theta}\,
\left(\dyad{\bar\nu_e}{\bar\nu_x}+\dyad{\bar\nu_x}{\bar\nu_e}\right)
\end{split}
\end{equation}
for antineutrinos. Here, the terms involving the vacuum mixing angle $\theta$
represent the mixing of flavor eigenstates. $E_\nu$ denotes the neutrino
energy, and $\delta m^2$ denotes the squared difference of the two neutrino
masses. In NH we have $\delta m^2>0$ whereas in IH we have $\delta m^2<0$. For
the $13$ mixing parameters adopted here, $\sin 2\theta=0.29$ and
\begin{equation}
\label{vacuum value}
\frac{|\delta m^2|}{2E_\nu}=\frac{6.4\times 10^{-11}\mbox{eV}}{E_\nu/10 \mbox{
MeV}},
\end{equation}
which is the energy separation between mass eigenstates in vacuum.
In Eqs. (\ref{Hnunu}) and (\ref{Hnubarnubar}), the environment is characterized
by its density $n(r)$ and its electron fraction $Y_e$. The electron fraction is
defined as the ratio of electron and baryon number densities. $m_n$ denotes the
average baryon mass. In our calculations we assume slightly neutron rich
conditions by taking $Y_e=0.45$. The density profile that we use is based on the
$6M_{\odot}$ helium core presupernova model of Ref. \cite{1987ESOC...26..325N}.
We adopt this as the density distribution at the shock bounce at $t=0$. At later
times, the shock wave modifies the density profile. We mimic this by
parametrically changing the $t=0$ matter density as described in Ref.
\cite{Fogli:2003dw} for up to post-bounce time $t=5$ s. We do this by using the same
shock wave speed and contact discontinuity parameters as in Ref.
\cite{Fogli:2003dw}. The resulting density distributions are shown in Fig.
\ref{fig:baryonProfileShock} with solid lines. These are the density
distributions that we later use for our realistic calculations of the
neutrino energy spectrum reaching Earth. But before that, we run
some illustrative simulations by fitting these density
distributions to the functional form
\begin{equation}
\label{density profile}
n(r)=n_0 e^{-r/r_{\mbox{\footnotesize{mat}}}}.
\end{equation}
We find that, at all post bounce times and for the first few thousand
kilometers, Eq. (\ref{density profile}) fits the density distributions
reasonably well with $r_{\mbox{\footnotesize{mat}}}=200$ km and with different
central densities at different post bounce times. At $t=0$, the central density
is $n_0=1.0\times 10^{10} \mbox{ g/cm}^3$. At later times we have
\begin{equation}
\label{fits}
n_0=
\begin{cases}
1.8\times 10^7 \mbox{g/cm}^3 & t=1 \mbox{ s}, \\
4.4\times 10^6 \mbox{g/cm}^3 & t=2 \mbox{ s}, \\
2.3\times 10^6 \mbox{g/cm}^3 & t=3 \mbox{ s}, \\
1.5\times 10^6 \mbox{g/cm}^3 & t=4 \mbox{ s}, \\
1.0\times 10^6 \mbox{g/cm}^3 & t=5 \mbox{ s}.
\end{cases}
\end{equation}
The fitted densities are also shown in Fig. \ref{fig:baryonProfileShock} with
dashed lines. The practical reason for using the fitted distributions in
illustrative simulations is the necessity of running a large number of
simulations in order to discuss various features of the phase effect. But
another rationale is the fact that the SFP phase effect actually takes place in
the density scale: the results mostly depend on the value of the magnetic field
at a particular density rather than how the density and magnetic field are
spread over the physical space. In other words, it is the interplay between the
matter profile and the magnetic field profile that matters. It is in that sense
that Eqs. (\ref{B value}) and (\ref{density profile}) serve as a basis for
illustration. For example, Fig. \ref{fig:baryonProfileShock} clearly shows
that, in the realistic case, the density decreases more slowly and the
resonances take place in outer regions where the magnetic field is weaker.
For this reason, a sizable spin-flavor phase effect requires either a larger
magnetic field or a larger neutrino magnetic moment in the realistic case than
in the illustrative case. Another shortcoming associated with using the
exponential fits is that this approach reduces the post-bounce supernova
dynamics to one variable, which is the central density given in Eq.
(\ref{fits}). In particular, with the fitted densities neutrinos pass through
MSW resonance only once whereas with actual densities some neutrinos pass it
three times. However, as we discuss further in our Conclusions, additional resonances can
only serve to increase the uncertainties associated with the SFP phase effect.
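To see how the exponential fit places the SFP resonance in physical space, one can solve the resonance condition $\tfrac{\delta m^2}{2E_\nu}\cos 2\theta = \tfrac{\sqrt{2}G_F n}{m_n}(1-2Y_e)$ for $r$. The sketch below uses rounded constants and $E_\nu = 10$ MeV; all values are illustrative.

```python
import math

# SFP resonance radius implied by the fit n(r) = n0 exp(-r/r_mat):
# solve (delta m^2 / 2E) cos(2 theta) = sqrt(2) G_F (n/m_n)(1 - 2 Y_e) for r.
G_F_hbarc3 = 8.96e-38   # G_F (hbar c)^3 in eV cm^3 (rounded)
m_n = 1.6749e-24        # average baryon mass in g
Y_e = 0.45
r_mat = 200.0           # km, from the exponential fit
A = 6.4e-11             # delta m^2 / 2E at E = 10 MeV, in eV (Eq. vacuum value)
cos2t = math.sqrt(1 - 0.29**2)

# Resonance density in g/cm^3
rho_res = A * cos2t * m_n / (2**0.5 * G_F_hbarc3 * (1 - 2 * Y_e))

def r_resonance(n0):
    return r_mat * math.log(n0 / rho_res)   # km

r_t1 = r_resonance(1.8e7)   # t = 1 s central density
r_t5 = r_resonance(1.0e6)   # t = 5 s central density
```

With these numbers the resonance sits at roughly $1500$ km at $t=1$ s and moves inward to roughly $1000$ km by $t=5$ s, consistent with the behavior described in the text.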
On the surface of the proto-neutron star, the energy separation between flavor
eigenstates created by the interactions is of the order of
\begin{equation}
\label{numerical densities}
7.8 \times 10^{-9} \mbox{ eV} \leq\frac{\sqrt{2}G_Fn_0}{m_n}(1-2Y_e)
\leq 1.4\times10^{-7} \mbox{ eV},
\end{equation}
where the numerical values show its decrease from $t=1$ s to $t=5$ s. Comparing
Eq. (\ref{numerical densities}) with Eq. (\ref{muB value}), one might think that
the importance of the SFP phase effect increases at later post-bounce times.
However, as we discuss below, the situation is less straightforward.
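The bracketing values in Eq. (\ref{numerical densities}) can be reproduced from the fitted central densities of Eq. (\ref{fits}); the sketch below uses rounded constants and recovers the quoted numbers to within a few percent.

```python
# Matter-induced splitting sqrt(2) G_F (n_0/m_n)(1-2Y_e) at the fitted central
# densities for t = 1 s and t = 5 s. Rounded constants, eV/cm/g units.
G_F = 1.1664e-23     # Fermi constant in eV^-2
hbar_c = 1.9733e-5   # eV cm
m_n = 1.6749e-24     # average baryon mass in g
Y_e = 0.45

def matter_splitting(rho_g_cm3):
    n = rho_g_cm3 / m_n                                  # baryons per cm^3
    return 2**0.5 * G_F * hbar_c**3 * n * (1 - 2 * Y_e)  # in eV

low = matter_splitting(1.0e6)   # t = 5 s: ~7.6e-9 eV (text quotes 7.8e-9)
high = matter_splitting(1.8e7)  # t = 1 s: ~1.4e-7 eV
```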
\section{The Resonances}
Since neutrinos of all flavors are emitted from the neutrinosphere, it is better
to work with a density operator rather than individual initial states. We denote
by $\hat{\rho}(E_\nu,r)$ the normalized density operator which describes all
neutrinos and antineutrinos with energy $E_\nu$ which are at a distance $r$ from
the center of the supernova. The energy dependence of $\hat\rho(E_\nu,r)$ comes
in part from vacuum oscillations as shown in Eq. (\ref{vacuum value}), and in
part from the energy dependence of emission from the neutrinosphere. For
brevity, we suppress the energy dependence of the density operator unless it is
necessary for discussion. But whenever we write $\hat\rho(r)$, a specific
neutrino energy is always implied.
We assume that all neutrinos represented by the density operator $\hat\rho(r)$
are emitted from the surface of the neutrinosphere at the same time and travel
radially outward at the speed of light to reach $r$. This is a simplification
because in reality every point on the surface of the neutrinosphere emits
neutrinos uniformly in every direction pointing outward. Even in a spherically
symmetric supernova, which we assume to be the case, neutrinos would travel by
slightly different distances and experience slightly different conditions
depending on their direction of emission. This simplification is called the single
angle approximation\footnote{The geometry of this approximation is studied in
detail in Ref. \cite{Duan:2006an}. The focus of this reference is the
neutrino-neutrino interactions, but its geometrical treatment of neutrinos
emitted in all directions from a spherical source can be generally applied.} and
effectively reduces the problem to the one dimensional evolution equation
\begin{equation}
\label{eqn:EoM}
i\dv{}{r}\hat{\rho}(r) = \comm{H(r)}{\hat{\rho}(r)}.
\end{equation}
Here the flavor evolution is described in terms of the distance $r$ because
neutrinos essentially travel with the speed of light. Most of the non-trivial
flavor evolution takes place by the time neutrinos reach the low density outer
layers of the supernova, which takes only a fraction of a second. This is much
shorter than the time scale with which the supernova background changes. For
this reason, one can take a snapshot of the supernova at the post-bounce time
$t$, put this information into the Hamiltonian $H(r)$ in Eq. (\ref{eqn:EoM})
and solve it to find the flavor evolution of the neutrinos that are emitted at
time $t$. In this sense, Eq. (\ref{eqn:EoM}) depends on the post-bounce time
$t$. But this dependence is not explicitly shown in our equations. We solve Eq.
(\ref{eqn:EoM}) for the post bounce times from $t=1$ s to $t=5$ s, and clearly
label the corresponding results.
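Eq. (\ref{eqn:EoM}) can be integrated with any standard ODE scheme. The sketch below evolves a schematic two-level density operator with a fourth-order Runge-Kutta step (the Hamiltonian profile and units are purely illustrative, not a supernova model); since the right-hand side is a commutator, the trace and purity of $\hat\rho$ should be preserved along the evolution, which provides a useful numerical check.

```python
import numpy as np

# RK4 integration of i d(rho)/dr = [H(r), rho] for a schematic 2-level system.
# The Hamiltonian profile below is illustrative, not a supernova model.
def evolve(rho, r0, r1, n_steps, H):
    f = lambda r, p: -1j * (H(r) @ p - p @ H(r))   # commutator right-hand side
    dr = (r1 - r0) / n_steps
    r = r0
    for _ in range(n_steps):
        k1 = f(r, rho)
        k2 = f(r + dr / 2, rho + dr / 2 * k1)
        k3 = f(r + dr / 2, rho + dr / 2 * k2)
        k4 = f(r + dr, rho + dr * k3)
        rho = rho + dr / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += dr
    return rho

# Decaying "matter" term plus constant mixing, in arbitrary units.
H = lambda r: np.array([[np.exp(-r), 0.3], [0.3, -np.exp(-r)]])
rho0 = np.diag([1.0, 0.0]).astype(complex)       # pure initial flavor state
rho_out = evolve(rho0, 0.0, 10.0, 4000, H)
```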
The total Hamiltonian $H(r)$ includes the vacuum oscillations of neutrinos, their
interactions with the matter background, and the effect of the magnetic field.
It can be written as
\begin{equation}
\label{distant decomposition}
H(r) = H_{\nu \leftrightarrow \nu}(r) + H_{\bar\nu \leftrightarrow \bar\nu}(r) +
H_{\mu}(r)
\end{equation}
with the individual terms given by Eqs. (\ref{Hmu}), (\ref{Hnunu}) and
(\ref{Hnubarnubar}). The evolution described by Eqs. (\ref{eqn:EoM}) and
(\ref{distant decomposition}) is not difficult to solve numerically. However,
much insight can be gained by analytically examining its evolution under
adiabatic conditions. Adiabaticity is quantified in terms of the energy
spectrum of the Hamiltonian. We denote the local energy eigenvalues of the
total Hamiltonian in Eq. (\ref{distant decomposition}) by $E_i(r)$ and the
corresponding energy eigenstates by $|r_i\rangle$. In other words, at every
point $r$, we have
\begin{equation}
\label{local eigenstates}
\hat H (r)= \sum_{i=1}^4 E_i(r)\dyad{r_i}{r_i}.
\end{equation}
We order the eigenvalues such that $E_1$ is the largest and $E_4$ is the
smallest. The adiabaticity condition can be expressed as
\begin{equation}
\label{adiabaticity condition}
|\mel{r_i}{\dv{H(r)}{r}}{r_j}|\ll|E_i(r)-E_j(r)|.
\end{equation}
As long as the adiabaticity requirement is met, the evolution is determined by
the energy spectrum of the Hamiltonian. The adiabatic approximation tells us
that if the initial state of a neutrino on the neutrinosphere ($r=R$) is an energy
eigenstate $|R_i\rangle$, then it evolves into
\begin{equation}
\label{adiabatic evolution}
|R_i\rangle \rightarrow e^{-i\int_R^r E_i(r)dr} |r_i\rangle
\end{equation}
at a later time.
Fig. \ref{eigenvalues} shows the eigenvalues of the Hamiltonian as functions of
the logarithm of the density. Since the eigenvalues are plotted against density,
the actual density distribution is irrelevant. The two points where the
eigenvalues approach each other pairwise are the SFP and MSW resonance points.
These are the points at which the adiabaticity condition comes closest to being
violated. As the density decreases, the first resonance occurs between $E_4$ and
$E_3$ for NH, and between $E_1$ and $E_2$ for IH. On the other hand, the density
is almost always large enough to force neutrinos into the flavor eigenstates so
that each energy eigenstate significantly projects on a particular flavor
eigenstate. The dominant flavor contents of the energy eigenstates are indicated
in Fig. \ref{eigenvalues}. In particular, noting that $3Y_e-1<1-Y_e$ for the
neutron rich conditions ($Y_e<1/2$), and substituting the numerical values given
in Eqs. (\ref{muB value}), (\ref{vacuum value}), and (\ref{numerical
densities}), into the Hamiltonians (\ref{Hnunu}) and (\ref{Hnubarnubar}), we
find
\begin{equation}
\label{energy eigenkets at R}
\left(\ket{R_1},\ket{R_2},\ket{R_3},\ket{R_4}\right)\approx
\left(\ket{\bar\nu_x},\ket{\nu_e},\ket{\bar\nu_e},\ket{\nu_x}\right)
\end{equation}
in both hierarchies on the neutrinosphere. In that sense, one can loosely say
that the SFP resonance occurs between $\bar\nu_e-\nu_x$ in NH, and between
$\nu_e-\bar\nu_x$ in IH. Indeed, one can re-write the Hamiltonian in Eq.
(\ref{distant decomposition}) in the following decomposition:
\begin{equation}
\label{early decomposition}
H(r) = H_{e \leftrightarrow \bar x}(r) + H_{x \leftrightarrow \bar e}(r) + H_{\theta}.
\end{equation}
Here, $H_{e \leftrightarrow \bar x}(r)$ and $H_{x \leftrightarrow \bar e}(r)$
are the parts of the Hamiltonian in Eq. (\ref{distant decomposition}) which live in
the orthogonal $\nu_e-\bar\nu_x$ and $\bar\nu_e-\nu_x$ subspaces, respectively.
They are given by
\begin{align}
\label{H_exbar}
H_{e \leftrightarrow \bar x}(r)
=-&\qty(\tfrac{\delta m^2}{4E_\nu}\cos{2\theta}\!-\!\tfrac{\sqrt{2}G_F n(r)}{m_n}\tfrac{3Y_e-1}{2})
\dyad{\nu_e}{\nu_e}\nonumber\\
+&\qty(\tfrac{\delta m^2}{4E_\nu}\cos{2\theta}\!+\!\tfrac{\sqrt{2}G_F n(r)}{m_n}\tfrac{1-Y_e}{2})
\dyad{\bar\nu_x}{\bar\nu_x}\nonumber\\
+& \, \mu B \, \left(\dyad{\nu_e}{\bar\nu_x} + \dyad{\bar\nu_x}{\nu_e}\right)
\end{align}
and
\begin{align}
\label{H_xebar}
H_{\bar e \leftrightarrow x}(r)
=-&\qty(\tfrac{\delta m^2}{4E_\nu} \cos{2\theta}\!+\!\tfrac{\sqrt{2}G_F n(r)}{m_n}\tfrac{3Y_e-1}{2})
\dyad{\bar\nu_e}{\bar\nu_e}\nonumber\\
+&\qty(\tfrac{\delta m^2}{4E_\nu} \cos{2\theta}\!-\!\tfrac{\sqrt{2}G_F n(r)}{m_n}\tfrac{1-Y_e}{2})
\dyad{\nu_x}{\nu_x}\nonumber\\
-& \, \mu B \, \left(\dyad{\nu_x}{\bar\nu_e}+\dyad{\bar\nu_e}{\nu_x}\right).
\end{align}
These two Hamiltonians describe flavor transition in two orthogonal channels.
The term $H_\theta$ in Eq. (\ref{early decomposition}) is given by
\begin{equation*}
\label{Htheta}
H_{\theta}=
\tfrac{\delta m^2}{2E_\nu} \sin{2\theta} \qty(\dyad{\nu_e}{\nu_x}+\dyad{\nu_x}{\nu_e}
+\dyad{\bar\nu_e}{\bar\nu_x}+\dyad{\bar\nu_x}{\bar\nu_e})
\end{equation*}
and it couples these two orthogonal channels. Since $H_{\theta}$ is proportional
to the small term $\tfrac{\delta m^2}{2E_\nu}\sin 2\theta$, its effect can be
ignored near the neutrinosphere. In this case, the spin-flavor evolution
proceeds through the decoupled $\nu_e-\bar\nu_x$ and $\bar\nu_e-\nu_x$ channels.
This is what we see on the high density part of Fig. \ref{eigenvalues} where the
large energy separation between the upper and lower pairs of eigenvalues forbids
the transition between them. In this decoupling approximation, SFP resonance
occurs when the diagonal elements of the Hamiltonians in Eqs. (\ref{H_exbar}) or
(\ref{H_xebar}) become equal. This happens when \cite{Lim:1987tk,
Akhmedov:1988uk}
\begin{equation}
\label{sf resonance}
\frac{\delta m^2}{2E_\nu} \cos 2\theta = \pm
\frac{\sqrt{2}G_Fn(r)}{m_n}(1-2Y_e),
\end{equation}
where $-$ sign is for $H_{e \leftrightarrow \bar x}(r)$ and $+$ sign is for
$H_{\bar e \leftrightarrow x}(r)$. Clearly, for $Y_e<0.5$ adopted in this paper,
the condition in Eq. (\ref{sf resonance}) can hold only for the former in NH,
and only for the latter in IH. In the rest of this paper, we focus on NH. When
the effect of the $H_{\theta}$ is included the location of the SFP resonance
shifts as discussed in Ref. \cite{Friedland:2005xh}. But for the small mixing
angle that we use, this is inconsequential.
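The flavor content claimed in Eq. (\ref{energy eigenkets at R}) can be verified by diagonalizing the $4\times4$ Hamiltonian numerically at neutrinosphere-like values. The sketch below uses the NH scales quoted in the text at $E_\nu = 10$ MeV (all numbers illustrative) in the flavor basis $(\nu_e, \nu_x, \bar\nu_e, \bar\nu_x)$.

```python
import numpy as np

# 4x4 Hamiltonian of Eq. (distant decomposition) in the flavor basis
# (nu_e, nu_x, nubar_e, nubar_x) at neutrinosphere density; check the flavor
# ordering of Eq. (energy eigenkets at R). Scales as quoted in the text.
A = 6.4e-11          # delta m^2 / 2E in eV (Eq. vacuum value, E = 10 MeV)
V = 1.0e-7           # sqrt(2) G_F n / m_n at the neutrinosphere, eV
muB = 1.7e-9         # Eq. (muB value)
Ye = 0.45
s2t, c2t = 0.29, np.sqrt(1 - 0.29**2)

H = np.zeros((4, 4))
H[0, 0] = -A * c2t + V * (3 * Ye - 1) / 2      # nu_e
H[1, 1] = A * c2t - V * (1 - Ye) / 2           # nu_x
H[2, 2] = -A * c2t - V * (3 * Ye - 1) / 2      # nubar_e
H[3, 3] = A * c2t + V * (1 - Ye) / 2           # nubar_x
H[0, 1] = H[1, 0] = A * s2t                    # vacuum mixing, neutrinos
H[2, 3] = H[3, 2] = A * s2t                    # vacuum mixing, antineutrinos
H[0, 3] = H[3, 0] = muB                        # nu_e <-> nubar_x
H[1, 2] = H[2, 1] = -muB                       # nu_x <-> nubar_e

vals, vecs = np.linalg.eigh(H)
order = np.argsort(vals)[::-1]                 # E1 largest ... E4 smallest
dominant = [int(np.argmax(np.abs(vecs[:, i]))) for i in order]
# Expect (nubar_x, nu_e, nubar_e, nu_x), i.e. basis indices [3, 0, 2, 1]
```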
In the decoupling limit, Eq. (\ref{H_xebar}) tells us that the lower two
eigenstates which undergo SFP resonance are combinations of $\ket{\nu_x}$ and
$\ket{\bar\nu_e}$. This can be expressed in terms of an effective mixing angle:
\begin{equation}
\begin{split}
\label{local eigenkets SFP}
\ket{r_3}&=\cos\theta_B(r) \ket{\nu_x} + \sin\theta_B(r) \ket{\bar\nu_e}, \\
\ket{r_4}&=-\sin\theta_B(r) \ket{\nu_x}+ \cos\theta_B(r) \ket{\bar\nu_e}.
\end{split}
\end{equation}
Here the effective mixing angle $\theta_B(r)$ is defined in
the range $0\leq \theta_B(r) \leq \pi/2$ with
\begin{equation}
\label{theta SFP}
\tan{2\theta_B(r)}=\frac{2\mu B(r)}{\tfrac{\delta
m^2}{2E_\nu}\cos2\theta-\tfrac{\sqrt{2}G_Fn(r)}{m_n}(1-2Y_e)}.
\end{equation}
Well above the resonance density, $\theta_B(r)\simeq\pi/2$ so that the
eigenstates are $\ket{r_3}\approx\ket{\bar\nu_e}$ and
$\ket{r_4}\approx-\ket{\nu_x}$ as expected from Fig. \ref{eigenvalues}. Well
below the resonance density $\theta_B(r)\simeq 0$ in which case the flavor
contents of eigenstates are switched as is also shown in Fig. \ref{eigenvalues}.
At the resonance density where $\theta_B=\pi/4$, the energy eigenstates are
maximal mixtures of flavor eigenstates. In that sense
\begin{equation}
\label{sin 2theta}
\sin^2 \! 2\theta_B(r)=4\sin^2 \! \theta_B(r) \cos^2 \! \theta_B(r)
\end{equation}
can be used as a measure of the width of the resonance because it will be different
from zero as long as energy eigenstates are mixtures of flavor eigenstates. This
quantity is plotted in Fig. \ref{resonance widths} against density at different
post bounce times. The solid blue line is for $t=1\mbox{ s}$, the red dashed
line is for $t=3\mbox{ s}$, and the green dash-dotted line is for $t=5\mbox{ s}$. The
points at which $\sin^2\!2\theta_B=1$ (i.e., the resonance points) coincide
because the plot is in density scale and SFP resonance occurs at a specific
density. But the physical location of the SFP resonance moves closer to the
center of the supernova at later post bounce times as the overall density drops.
This can also be seen in Fig. \ref{fig:baryonProfileShock} where the horizontal
line representing the SFP resonance crosses the density distributions at
increasingly smaller radii. While this is true for both the realistic and the
fitted density distributions, the latter is used in Fig. \ref{resonance widths}.
An important observation is that the SFP resonance region, i.e. the region for
which $\sin^2\!2\theta_B(r)$ is significantly different from zero, becomes
increasingly wider in density scale at later post-bounce times. This happens
because, as the resonance moves inward, it occurs in a region where the magnetic
field is stronger. According to Eq. (\ref{theta SFP}) a stronger magnetic field
field is stronger. According to Eq. (\ref{theta SFP}) a stronger magnetic field
at a given density means a larger $\theta_B(r)$. The widening of SFP resonance
with post-bounce time will be important in what follows.
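The widening can be made quantitative directly from Eq. (\ref{theta SFP}): writing $A$ for the vacuum term and $V$ for the matter term, $\sin^2 2\theta_B = (2\mu B)^2/[(A-V)^2 + (2\mu B)^2]$ stays above half maximum over the interval $|A-V| < 2\mu B$, so a larger $\mu B$ at the resonance density directly translates into a wider resonance. A sketch in arbitrary units (all scales illustrative):

```python
import numpy as np

# Width of the SFP resonance from Eq. (theta SFP):
#   sin^2(2 theta_B) = (2 mu B)^2 / [ (A - V)^2 + (2 mu B)^2 ],
# where A = (delta m^2 / 2E) cos(2 theta) and V is the matter term.
A = 1.0                                        # arbitrary units

def sin2_2thetaB(V, muB):
    return (2 * muB) ** 2 / ((A - V) ** 2 + (2 * muB) ** 2)

V = np.linspace(0.0, 2.0, 4001)                # scan around resonance V = A
narrow = sin2_2thetaB(V, muB=0.02)             # weak field at resonance
wide = sin2_2thetaB(V, muB=0.2)                # ten times stronger field

frac_above_half = lambda y: np.mean(y > 0.5)   # fraction of scan at > half max
```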
There is no similar widening effect for the MSW resonance. MSW resonance occurs
in the outer regions where the magnetic field is weaker. In accordance with the
decomposition in Eq. (\ref{distant decomposition}), neutrino and antineutrino
oscillations decouple if $H_\mu(r)$ can be ignored. In this case, Fig.
\ref{eigenvalues} tells us that the dynamics of $\ket{r_1}-\ket{r_4}$ decouple
from the dynamics of $\ket{r_2}-\ket{r_3}$ in both hierarchies. With this
approximation, a resonance occurs when the diagonal elements of $H_{\nu
\leftrightarrow \nu}(r)$ or $H_{\bar\nu \leftrightarrow \bar\nu}(r)$ become
equal, i.e. when
\begin{equation}
\label{msw resonance}
\frac{\delta m^2}{2E_\nu} \cos 2\theta = \pm
\frac{\sqrt{2}G_Fn(r)}{m_n}Y_e.
\end{equation}
Here the $+$ sign is for neutrinos and the $-$ sign is for antineutrinos.
The resonance condition in Eq. (\ref{msw resonance}) holds only for neutrinos
in NH, and only for antineutrinos in IH. Since we focus on NH,
the resonating eigenstates can be written as
\begin{equation}
\begin{split}
\label{local eigenkets MSW}
\ket{r_2}&=\cos\theta_M(r) \ket{\nu_e} + \sin\theta_M(r) \ket{\nu_x}, \\
\ket{r_3}&=- \sin\theta_M(r) \ket{\nu_e} + \cos\theta_M(r) \ket{\nu_x}
\end{split}
\end{equation}
in terms of an effective matter mixing angle $\theta_M(r)$ defined in the range
$0\leq \theta_M(r)\leq \pi/2$ and given by
\begin{equation}
\label{theta MSW}
\tan{2\theta_M(r)}=\frac{\tfrac{\delta m^2}{2E_\nu}\sin2\theta}{\tfrac{\delta
m^2}{2E_\nu}\cos2\theta-\frac{\sqrt{2}G_Fn(r)}{m_n}Y_e}.
\end{equation}
The quantity $\sin^2\!2\theta_M(r)$ is similarly a measure of the resonance
width and is plotted in Fig. \ref{resonance widths}. Like the SFP resonance, the
MSW resonance occurs at the same position in density scale but physically moves
inward with time. However, unlike the SFP resonance its width is fixed in
density scale. This is because at a given density the value of $\theta_M(r)$
depends only on the vacuum mixing parameters. This is referred to as
universality; see, for example, Ref. \cite{Smirnov:2003da}. The MSW resonance is
universal in the sense that its particulars depend only on the vacuum mixing
parameters and the density, not on how this density is spread in physical space.
In that sense SFP resonance is not universal because it depends on how the
magnetic field changes with respect to the density.
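As an illustration, the effective matter mixing angle of Eq. (\ref{theta MSW}) and the width measure $\sin^2\!2\theta_M$ can be evaluated numerically. The sketch below is not part of our simulation code; the constants and sample parameters are illustrative only.

```python
import numpy as np

G_F = 1.166e-23   # Fermi constant in eV^{-2} (illustrative value)
M_N = 939.6e6     # neutron mass in eV

def theta_M(dm2, E, theta, n, Ye):
    """Effective matter mixing angle of Eq. (theta MSW), in [0, pi/2].

    dm2: delta m^2 in eV^2, E: neutrino energy in eV,
    n: baryon number density in eV^3, Ye: electron fraction.
    """
    A = dm2 / (2.0 * E)                      # vacuum term delta m^2 / 2E
    V = np.sqrt(2.0) * G_F * n * Ye / M_N    # matter potential
    # arctan2 keeps 2*theta_M in [0, pi], so theta_M lies in [0, pi/2]
    return 0.5 * np.arctan2(A * np.sin(2 * theta), A * np.cos(2 * theta) - V)

def msw_width(dm2, E, theta, n, Ye):
    """sin^2(2 theta_M): reaches 1 exactly at the MSW resonance density."""
    return np.sin(2.0 * theta_M(dm2, E, theta, n, Ye)) ** 2
```

At the resonance density the denominator of Eq. (\ref{theta MSW}) vanishes and \texttt{msw\_width} returns $1$; as $n\to 0$ it approaches $\sin^2 2\theta$ regardless of how the density is distributed in space, which is the universality property discussed above.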
The above description relies on the decoupling of the $\ket{r_1}-\ket{r_2}$ and
$\ket{r_3}-\ket{r_4}$ dynamics in the SFP resonance region, and the decoupling
of $\ket{r_1}-\ket{r_4}$ and $\ket{r_2}-\ket{r_3}$ dynamics in the MSW resonance
region. These two approximations are not independent. They are both true when
SFP and MSW resonances are well separated, and they both fail when these
resonances are not well separated. The eigenstate $\ket{r_3}$ which enters both
resonances is the culprit here. We write $\ket{r_3}$ as in the first line of Eq.
(\ref{local eigenkets SFP}) in the former decoupling, and as in the second line
of Eq. (\ref{local eigenkets MSW}) in the latter decoupling. Obviously both
equations can be true only if $\sin^2\!2\theta_B(r)$ drops nearly to zero by the
time $\sin^2\!2\theta_M(r)$ starts to become different from zero. Otherwise
$\ket{r_3}$ has to be a combination of $\ket{\nu_x}$, $\ket{\bar\nu_e}$, and
$\ket{\nu_e}$. Therefore there is only one decoupling approximation, and it is
the decoupling of the SFP and MSW resonances as quantified by their widths. Fig.
\ref{resonance widths} tells us that in general the decoupling approximation can
be expected to be valid at early post-bounce times, but fail at late post-bounce
times. The degree to which it fails depends on how the magnetic field changes
with respect to matter density, and how large the neutrino magnetic moment is
because SFP resonance width is directly controlled by the value of $\mu B(r)$
around the resonance region.
\section{Evolution and Decoherence}
In this paper, we present two kinds of results. First, we present the survival
probabilities of electron antineutrinos. This serves to illustrate the
appearance of the SFP phase effect and its relative importance under different
settings. We work with electron antineutrinos because they are relatively easy
to detect at water Cherenkov detectors, and because they undergo SFP resonance
in NH. To find their survival probabilities, we start by taking the initial
density operator as
\begin{equation}
\label{box}
\hat \rho (E_\nu,R)= \dyad{\bar\nu_e}{\bar\nu_e}
\end{equation}
and then calculate the corresponding density operator $\hat\rho(E_\nu,r)$ at
$r$. The survival probability is then given by its diagonal matrix element
\begin{equation}
\label{general survival probability}
P_{\bar\nu_e\to\bar\nu_e}(E_\nu,r) = \mel{\bar\nu_e}{\hat\rho(E_\nu,r)}{\bar\nu_e}.
\end{equation}
Here we temporarily reintroduce the neutrino energy $E_\nu$ into our notation to
emphasize the energy dependence of the survival probability. We find
$\hat\rho(E_\nu,r)$ and the corresponding survival probability numerically by
solving the evolution equation given in Eq. (\ref{eqn:EoM}). We also calculate
them analytically by using the decoupling approximation and the Landau-Zener
jumping probabilities.
Second, we consider the mixed ensemble of neutrinos as appropriate for the
cooling period of the proto-neutron star. For this, we take the initial
density operator to be diagonal in the flavor basis, i.e., in the form
\begin{equation}
\label{initial R in flavor basis}
\hat \rho (E_\nu,R)= \sum_{\alpha=e,\bar e, x, \bar x}
\rho_{\alpha\alpha}(E_\nu,R)\ket{\nu_\alpha}\bra{\nu_\alpha},
\end{equation}
where $\rho_{\alpha\alpha}(E_\nu,R)$ is the neutrino energy spectrum emitted
from the neutrinosphere in arbitrary units. Here, $\alpha= e,\bar e, x, \bar x$
and we use $\nu_{\bar \alpha}$ to mean $\bar\nu_\alpha$. We again calculate the
corresponding density operator $\hat\rho(E_\nu,r)$ at $r$ both numerically and
analytically as described above. Corresponding neutrino energy spectra at $r$
are given by its diagonal elements
\begin{equation}
\label{distributions}
\rho_{\alpha\alpha}(E_\nu,r)=\mel{\nu_\alpha}{\hat\rho(E_\nu,r)}{\nu_\alpha}.
\end{equation}
In both cases, analytical calculations start by expressing the initial density
operator in the energy eigenbasis at $R$ as
\begin{equation}
\label{initial R in eigenbasis}
\hat \rho (R)= \sum_{i,j=1}^4 \rho_{ij}(R)\ket{R_i}\bra{R_j}
\end{equation}
with $\rho_{ij}(R)=\mel{R_i}{\hat\rho(R)}{R_j}$ denoting the corresponding
components. Here, we again drop the energy dependence from our notation. We
always use Greek indices to refer to the flavor eigenbasis as in Eq.
(\ref{initial R in flavor basis}) and Latin indices to refer to the energy
eigenbasis as in Eq. (\ref{initial R in eigenbasis}).
Let us first assume that the adiabaticity condition holds through both
resonances. We consider the partially adiabatic evolution in the next two
sections. In the adiabatic case, the evolution is completely determined by Eq.
(\ref{adiabatic evolution}). The density operator evolves into
\begin{equation}
\label{later rho in eigenbasis}
\hat \rho (r)= \sum_{i,j=1}^4 e^{-i\int_R^r(E_i-E_j)dr} \rho_{ij}(R) \ket{r_i}\bra{r_j}
\end{equation}
at a later $r$. As the neutrinos reach the surface of the supernova, the
Hamiltonian is reduced to the vacuum term alone and the energy eigenstates
reduce to the mass eigenstates. Which energy eigenstate reduces to which mass
eigenstate depends on the hierarchy. We have
\begin{equation}
\label{eigenstates in vacuum}
\begin{split}
\ket{r_1}\!,\ket{r_2}\!,\ket{r_3}\!,\ket{r_4} \! \xrightarrow{n(\!r\!)\to 0} \!
\ket{\bar\nu_2}\!,\ket{\nu_2}\!,\ket{\nu_1}\!,\ket{\bar\nu_1}
& \!\mbox{ in NH,}\\
\ket{r_1}\!,\ket{r_2}\!,\ket{r_3}\!,\ket{r_4} \! \xrightarrow{n(\!r\!)\to 0} \!
\ket{\nu_1}\!,\ket{\bar\nu_1}\!,\ket{\bar\nu_2}\!,\ket{\nu_2}
& \!\mbox{ in IH.}\\
\end{split}
\end{equation}
These are also indicated in Fig. \ref{eigenvalues}.
After that, neutrinos travel a long distance to Earth and during this time they
decohere. Decoherence happens because neutrino mass eigenstates travel with
different speeds in vacuum and a gap opens up between them over long distances.
When neutrino mass eigenstates do not overlap in physical space, they cannot
interfere and the flavor oscillations stop \cite{GIUNTI199287}. The standard
mathematical formulation of neutrino oscillations, which we also use here, is
based on the assumption that neutrinos are plane waves with infinite wave packet
size. The formulation of decoherence requires the finite size of the
wave packet to be taken into account. In this case, the off-diagonal elements of
the density matrix in the mass basis pick up an exponential factor
$e^{-(r/r_{\mbox{\tiny coh}})^2}$ \cite{Hansen:2016klk}. The coherence length
$r_{\mbox{\tiny coh}}$ depends on the wave packet size, which in turn depends
on the circumstances of the neutrino's creation. For a physically intuitive
discussion, see Section 8 of Ref. \cite{Giunti}. For supernova neutrinos the
coherence length can be estimated to be of the order of $10^6$ km \cite{Giunti,
Nussinov:1976uw}. For this reason, the plane wave formulation is a good
approximation inside the star. But even for a galactic supernova, neutrinos have
to travel several kiloparsecs (about $10^{17}$ km) to reach Earth. One can
therefore take the $r\to\infty$ limit, where $e^{-(r/r_{\mbox{\tiny coh}})^2}\to 0$.
Thus the decoherence of neutrinos over long distances can be implemented in
practice by removing the off-diagonal elements of the density operator in the
mass basis. As a result, in the fully adiabatic case the neutrinos arriving at
Earth are described by
\begin{equation}
\label{adiabatic rho earth}
\hat\rho(\infty) = \sum_{i=1}^4 \rho_{ii}(R) \ket{r_i} \bra{r_i}
\end{equation}
with $\ket{r_i}$ given by Eq. (\ref{eigenstates in vacuum}) according to mass
hierarchy.
In particular, for the initial condition given in Eq. (\ref{box}) one finds
\begin{equation}
\label{rho ebar -> infinity adiabatic}
\hat\rho(\infty)=\sin^2\!\theta_B(R)\dyad{\nu_1}{\nu_1}+\cos^2\!\theta_B(R)\dyad{\bar\nu_1}{\bar\nu_1}.
\end{equation}
Here we used Eq. (\ref{local eigenkets SFP}) to calculate $\rho_{ii}(R)$ in
terms of the effective mixing angle $\theta_B(R)$ on the surface of the
neutrinosphere. Using the fact that $\bra{\nu_1}\ket{\bar\nu_e}=0$ and
$\bra{\bar\nu_1}\ket{\bar\nu_e}=\cos\theta$, the survival probability of an
initial $\bar\nu_e$ is found from Eq. (\ref{general survival probability}) as
\begin{equation}
\label{Pebarebar}
P_{\bar\nu_e\to\bar\nu_e}(\infty)
=\left(\frac{1}{2}+\frac{1}{2}\cos2\theta_B(R)\right)\cos^2\!\theta,
\end{equation}
which is the standard result. See Ref. \cite{1992PhR...211..167P}, for
example.
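For concreteness, the fully adiabatic result of Eq. (\ref{Pebarebar}) can be coded directly; this is a minimal sketch, not our production code.

```python
import numpy as np

def p_survival_adiabatic(theta_B_R, theta):
    """Eq. (Pebarebar): adiabatic nu_bar_e survival probability at Earth.

    theta_B_R: effective magnetic mixing angle at the neutrinosphere R,
    theta: vacuum mixing angle.
    """
    return (0.5 + 0.5 * np.cos(2.0 * theta_B_R)) * np.cos(theta) ** 2
```

Since $\tfrac{1}{2}+\tfrac{1}{2}\cos 2\theta_B(R)=\cos^2\theta_B(R)$, this is just the $\ket{r_4}$ population at production multiplied by $|\bra{\bar\nu_1}\ket{\bar\nu_e}|^2=\cos^2\theta$.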
\section{Zero mixing angle}
\subsection{General treatment}
Here and in the next section, we consider the partially adiabatic evolution of
neutrinos. In this section we take $\theta=0$, which removes the MSW resonance
from the picture and allows us to focus on the phase effects between the
production point and the partially adiabatic SFP resonance. $\theta\neq 0$ case
is discussed in the next section where additional phase effects from the MSW
resonance also enter into the picture.
The adiabaticity of SFP resonance is quantified by \cite{Lim:1987tk,
1992PhR...211..167P, Nunokawa:1996gp}
\begin{equation}
\label{gamma B}
\Gamma_B=\left(
\frac{(\mu B)^2}
{\sqrt{2}G_F \frac{1}{m_n}\left\lvert\tfrac{dn(r)}{dr}\right\rvert(1-2Y_e)}
\right)_{\mbox{\footnotesize res}},
\end{equation}
where the subscript ``res'' indicates that the expression should be calculated
at the SFP resonance. The Landau-Zener approximation tells us that the probability
that the system jumps between the resonating energy eigenstates $\ket{r_3}$ and
$\ket{r_4}$ in NH is
\begin{equation}
\label{Landau-Zener probability}
P_B=e^{-2\pi \Gamma_B}.
\end{equation}
Therefore the evolution is described by
\begin{equation}
\label{Landau-Zener transfer}
\begin{pmatrix}
\ket{r_3} \\ \ket{r_4}
\end{pmatrix}
\to
\begin{pmatrix}
\sqrt{1-P_B} & -e^{-i\alpha}\sqrt{P_B} \\
e^{i\alpha}\sqrt{P_B} & \sqrt{1-P_B}
\end{pmatrix}
\begin{pmatrix}
\ket{r_3} \\ \ket{r_4}
\end{pmatrix}
\end{equation}
through the resonance\footnote{Strictly speaking, Eqs. (\ref{gamma
B})-(\ref{Landau-Zener transfer}) assume that the magnetic field is constant
around the resonance region. This is not the case for our magnetic field
profile. But the comparison between our numerical and analytical results
indicates that this variation can be ignored here.}. Here $\alpha$ is called the
Stokes phase \cite{PhysRevA.50.843, PhysRevA.55.R2495}. If $P_B\approx 0$ then
no jumping occurs from one eigenstate to the other which is the adiabatic limit.
In this limit the Stokes phase becomes irrelevant. The Stokes phase is also
irrelevant in the opposite (sudden) limit where $P_B\approx 1$ because it can be
included in the definition of the local eigenstates. But if $0<P_B<1$, the
Stokes phase should be included in the calculation. However, as we discuss
below, it becomes important only when the state entering the resonance is a
combination of the energy eigenstates $\ket{r_3}$ and $\ket{r_4}$.
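A sketch of the Landau-Zener step of Eqs. (\ref{gamma B})-(\ref{Landau-Zener transfer}), assuming the adiabaticity parameter has already been evaluated at the resonance; the helper names are ours, and the off-diagonal phases are written as conjugates so that the transfer matrix is unitary.

```python
import numpy as np

def p_jump(gamma_B):
    """Landau-Zener jumping probability, Eq. (Landau-Zener probability)."""
    return np.exp(-2.0 * np.pi * gamma_B)

def lz_transfer(gamma_B, alpha=0.0):
    """2x2 transfer matrix through the SFP resonance acting on the
    (|r3>, |r4>) amplitudes; alpha is the Stokes phase."""
    PB = p_jump(gamma_B)
    return np.array([[np.sqrt(1.0 - PB), -np.exp(-1j * alpha) * np.sqrt(PB)],
                     [np.exp(1j * alpha) * np.sqrt(PB), np.sqrt(1.0 - PB)]])
```

In the adiabatic limit $\Gamma_B\gg 1$ the matrix reduces to the identity and $\alpha$ drops out; in the sudden limit $\Gamma_B\to 0$ it becomes a pure phase-weighted swap of the two eigenstates.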
Neutrinos evolve adiabatically until the SFP resonance point that we denote by
$r_B$. Before the resonance the density operator is given by Eq. (\ref{later rho
in eigenbasis}). After the resonance, it is given by
\begin{eqnarray}
\label{rho after SFP resonance}
\hat \rho (r)
&=&\!\!\rho_{11}(R)\dyad{r_1}{r_1} +\rho_{22}(R) \dyad{r_2}{r_2}\nonumber\\
&+&\!\! \left((1-P_B)\rho_{33}(R)+P_B\rho_{44}(R)\right)\ket{r_3}\bra{r_3}\nonumber\\
&+&\!\!\left(P_B\rho_{33}(R)+(1-P_B)\rho_{44}(R)\right)\ket{r_4}\bra{r_4}\\
&+&\!\!\left(e^{i\phi_B}\! \sqrt{P_B(1\!-\!P_B)} \rho_{34}(R) \! +
\!\mbox{cc}\right)\left(\dyad{r_3}{r_3}\!-\!\dyad{r_4}{r_4}\right)\nonumber\\
&+&\!\!(\dots)
e^{-i\int_{r_B}^r(E_3-E_4)dr} \dyad{r_3}{r_4} + \mbox{hc}\nonumber
\end{eqnarray}
in accordance with Eq. (\ref{Landau-Zener transfer}). Here, $\rho_{ij}(R)$ are
the matrix elements of the density operator in the energy eigenbasis at $R$
defined in Eq. (\ref{initial R in eigenbasis}). The last line of this result is
off-diagonal in the energy eigenbasis and fluctuates very fast around zero. Here
hc denotes the Hermitian conjugate. The coefficients of these terms are not
shown explicitly because they eventually vanish due to decoherence as
explained in the previous section. For this reason, we focus on the first four
lines, which are diagonal. These terms change smoothly as the energy eigenstates
slowly vary with external conditions and determine the survival probability over
long distances. The first three of these lines represent the ``classical'' outcome
in the sense that they depend only on the Landau-Zener transition probabilities.
In contrast, the fourth line is an ``interference'' term which depends on the
relative phase between $\ket{r_3}$ and $\ket{r_4}$ acquired from the production
point $R$ to the SFP resonance point $r_B$. This dependence comes through the
phase $\phi_B$ which is defined by
\begin{equation}
\label{phi_B}
\phi_B=-\int_R^{r_B}(E_3(r)-E_4(r))dr +\alpha.
\end{equation}
Notice that this is a fixed phase: it does not lead to oscillations. It
represents the cumulative effects of the oscillations from the production point
$R$ to the SFP resonance point $r_B$, and determines the precise state which
enters the resonance. This phase is important only when the terms that
multiply it in the fourth line are different from zero, i.e., when
$\rho_{34}(R)\neq 0$ and $0<P_B<1$. If these conditions are satisfied then the
interference term affects the survival probabilities in $r\to\infty$ limit and
creates the SFP phase effect. The phase $\phi_B$ depends sensitively on the
details of how the neutrino evolves from the production point to the resonance
point. But, as explained at the beginning of Section III, neutrinos represented
by $\hat\rho(r)$ would be subject to slightly different evolution conditions. In
practice $\phi_B$ will be different for every neutrino. Therefore, it should be
treated as an \emph{uncertainty} by taking $-1\leq e^{i\phi_B}\leq 1$.
We illustrate this point by considering the survival probability of an initial
$\bar\nu_e$. For this, we start with the density operator given in Eq.
(\ref{box}) for which Eq. (\ref{local eigenkets SFP}) gives
$\rho_{11}(R)=\rho_{22}(R)=\rho_{12}(R)=0$ and
\begin{equation}
\label{rho(R) for initial ebar}
\begin{split}
&\rho_{33}(R)=\sin^2\!\theta_B(R), \quad \rho_{44}(R)=\cos^2\!\theta_B(R),\\
&\rho_{34}(R)=\sin\theta_B(R)\cos\theta_B(R).
\end{split}
\end{equation}
Substituting this in Eqs. (\ref{later rho in eigenbasis}) and (\ref{rho after
SFP resonance}), and taking the matrix element as described in Eq. (\ref{general
survival probability}) leads to
\begin{align}
\label{P ebar with SFP}
P_{\bar\nu_e\to\bar\nu_e}\!(r)&\!=\!
\left[(1\!-\!P_B)\!\sin^2\!\theta_B(R)\!+\!P_B\!\cos^2\!\theta_B(R)\right]\abs{\bra{r_3}\ket{\bar\nu_e}}^2\nonumber\\
&+\!\left[P_B\!\sin^2\!\theta_B(R)\!+\!(1\!-\!P_B)\!\cos^2\!\theta_B(R)\right]\abs{\bra{r_4}\ket{\bar\nu_e}}^2\nonumber\\
&\!\!\!\!\!\!\pm\!\sqrt{P_B(1\!-\!P_B)}\sin
2\theta_B(R)(\abs{\bra{r_3}\ket{\bar\nu_e}}^2\!-\!\abs{\bra{r_4}\ket{\bar\nu_e}}^2)\nonumber\\
&\!\!\!\!\!\!+\mbox{terms oscillating around zero}.
\end{align}
This formula is valid with $P_B=0$ before the resonance and with $P_B$ given by
Eq. (\ref{Landau-Zener probability}) after the resonance. The first two lines
come from the classical probability part of Eq. (\ref{rho after SFP resonance})
and the third line comes from its interference term. Here we treat the
interference term as an uncertainty as explained above, which brings in the
$\pm$ sign to the third line. The last line contains fast oscillations around
zero which we do not explicitly show. This formula tells us that the survival
probabilities are expected to oscillate around a value which is in the region
described by the first three lines.
Fig. \ref{errors} shows the survival probability of a $15$ MeV electron
antineutrino as a function of distance with the exponentially fitted density
distribution at $5$ s post-bounce time under slightly different evolution
conditions. These conditions are created by slightly varying the radius of the
neutrinosphere $R$ and the distance scale $r_{\mbox{\footnotesize mag}}$ with
which the magnetic field decreases. The rows correspond to $R/\mbox{km}=49.95$,
$50.00$, $50.05$ and the columns correspond to $r_{\mbox{\footnotesize
mag}}/\mbox{km}=49.95$, $50.00$, $50.05$. The variation of $R$ mimics possible
decoupling at slightly different radii from the neutron star, or traveling
slightly different distances due to being emitted at different angles. The
variation of $r_{\mbox{\footnotesize mag}}$ mimics experiencing slightly
different magnetic field profiles due to changing conditions or traveling at
different angles with it. The grey lines in Fig. \ref{errors} show the
solutions obtained numerically by solving the evolution equation given in Eq.
(\ref{eqn:EoM}). The thick red curves show the analytical result obtained from
the first two lines (i.e., classical probability part) of Eq. (\ref{P ebar with
SFP}). These curves are the same in every panel because the small variations
mentioned above are almost irrelevant for the values of $P_B$ and $\theta_B(R)$.
The red shaded regions represent the analytical uncertainty obtained from the
third line (i.e., interference part) of Eq. (\ref{P ebar with SFP}), which is
likewise the same in every panel. The SFP resonance takes place around $850$ km.
Before the resonance we have no uncertainty region because $P_B=0$. As expected,
the numerical solution oscillates around the classical average represented by
the red line. After the resonance $P_B\neq 0$ and we have an uncertainty region.
We see that the numerical solution sometimes ends up oscillating above the red
line, and sometimes below it. However, the average of the oscillations is always
within the red shaded uncertainty region as predicted by Eq. (\ref{P ebar with
SFP}). This can be seen better in the insets where the low density parts are
enlarged.
\subsection{At \texorpdfstring{$r\to\infty$ limit}{r to infinity limit}}
The decoherence over long distances can be implemented by discarding the
off-diagonal terms of the density operator given in Eq. (\ref{rho after SFP
resonance}), which removes all oscillations. Using Eq. (\ref{eigenstates in
vacuum}) for NH we find the following analytical expression for the density
operator in $r\to\infty$ limit:
\begin{eqnarray}
\label{rho at infinity after SFP}
\hat\rho(\infty)\!\!
&=&\!\!\rho_{11}(R)\dyad{\bar\nu_2}{\bar\nu_2} +\rho_{22}(R) \dyad{\nu_2}{\nu_2}\\
&+&\!\!\left((1-P_B)\rho_{33}(R)+P_B\rho_{44}(R)\right)\ket{\nu_1}\bra{\nu_1}\nonumber\\
&+&\!\!\left(P_B\rho_{33}(R)+(1-P_B)\rho_{44}(R)\right)\ket{\bar\nu_1}\bra{\bar\nu_1}\nonumber\\
&\pm&\!\!
2\sqrt{P_B(1-P_B)}\abs{\rho_{34}(R)}(\dyad{\nu_1}{\nu_1}-\dyad{\bar\nu_1}{\bar\nu_1}).\nonumber
\end{eqnarray}
In particular, we find the analytical expression for the limiting survival
probability of an initial $\bar\nu_e$ from Eqs. (\ref{rho(R) for initial ebar})
and (\ref{rho at infinity after SFP}) as
\begin{align}
\label{P ebar -> infinity with SFP}
P_{\bar\nu_e\to\bar\nu_e}(\infty)
&=\left(P_B\sin^2\!\theta_B(R)+(1-P_B)\cos^2\!\theta_B(R)\right)\nonumber\\
&\pm\sqrt{P_B(1-P_B)}\sin 2\theta_B(R).
\end{align}
Here we also use Eq. (\ref{eigenstates in vacuum}) with
$\bra{\nu_1}\ket{\bar\nu_e}=0$ and $\bra{\bar\nu_1}\ket{\bar\nu_e}=1$ since the
mixing angle is taken to be zero.
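The band of Eq. (\ref{P ebar -> infinity with SFP}) is straightforward to evaluate; the helper below is our own sketch, with the interference term treated as an uncertainty as explained above.

```python
import numpy as np

def p_survival_band(PB, thetaB_R):
    """Eq. (P ebar -> infinity with SFP) for theta = 0: returns the
    (lower, upper) limits of the predicted survival probability band."""
    central = PB * np.sin(thetaB_R)**2 + (1.0 - PB) * np.cos(thetaB_R)**2
    spread = np.sqrt(PB * (1.0 - PB)) * np.sin(2.0 * thetaB_R)
    return central - spread, central + spread
```

For $P_B=0$ the band collapses to the adiabatic value $\cos^2\theta_B(R)$; for $P_B=1/2$ and $\theta_B(R)=\pi/4$ it spans the full interval $[0,1]$.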
However, in our numerical simulations the oscillations do not die over long
distances because our equation of motion given in Eq. (\ref{eqn:EoM}) does not
take decoherence into account. We approach this situation as follows: First, we
run the simulation until the density is low enough to be considered vacuum. This
happens somewhere between a few thousand and several thousand kilometers, depending on
the post-bounce time and the neutrino energy. Once we observe that the survival
probability starts to oscillate steadily around a fixed average value, we stop
the simulation and remove the off diagonal elements of the resulting density
operator in mass basis. Doing this brings the survival probability to the
average value around which the numerical result steadily oscillates in vacuum.
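The stripping of the off-diagonal elements can be sketched as follows, where \texttt{rho} is assumed to be already rotated to the mass basis:

```python
import numpy as np

def decohere(rho):
    """Project a density matrix (given in the mass basis) onto its diagonal.

    This implements the r -> infinity limit: the oscillating off-diagonal
    terms average to zero, while the trace (total probability) is preserved.
    """
    return np.diag(np.diag(rho))
```

Applying this projection brings the survival probability to the average value around which the numerical result oscillates in vacuum.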
We show the $\bar\nu_e$ survival probability at $r\to\infty$ limit as a function
of energy for the exponentially fitted density distribution at $t=5$ s in Fig.
\ref{errors_energy}. The solid red line is the classical probability result
obtained from the first line of Eq. (\ref{P ebar -> infinity with SFP}) and the
red shaded region is the uncertainty from the second line of the same equation.
Each black dot in this figure represents a numerical run. To obtain them, we
divide the energy range into $250$ bins, and run $9$ simulations for each bin
with the same $R$ and $r_{\mbox{\footnotesize mag}}$ parameters as in Fig.
\ref{errors}. But unlike in Fig. \ref{errors} where we stop at $1600$ km, in
Fig. \ref{errors_energy} we continue the simulations until the vacuum is reached
and obtain the numerical value of the limiting survival probability as described
above. The result of each run is shown with a black dot in Fig.
\ref{errors_energy}. Note that, if we used the same set of external conditions
for each energy bin, the survival probability would display fast oscillations
with energy. But, since even those neutrinos with the same energy are likely to
experience slightly different external conditions as discussed above, ours is a
more generic approach. We also note that $2250$ numerical simulations were
carried out to generate this figure. Using exponential fits for the density
profiles substantially reduces the running time. Fig. \ref{errors_energy} tells
us that the survival probability of an initial $\bar\nu_e$ can change by as much
as $30\%$ depending on the energy under the adopted conditions. But in each
numerical run the limiting survival probability always falls into the range
predicted by Eq. (\ref{P ebar -> infinity with SFP}).
\section{Non-zero mixing angle}
When the mixing angle is not zero, neutrinos go through both SFP and MSW
resonances. For $Y_e>1/3$, which is most often the case, MSW resonance takes
place after the SFP resonance. The adiabaticity of the MSW resonance is
quantified by the parameter
\begin{equation}
\label{gamma M}
\Gamma_M=\left(
\frac{(\tfrac{\delta m^2}{2E}\sin 2\theta)^2}
{\sqrt{2}G_F \frac{1}{m_n}\left\lvert\tfrac{dn(r)}{dr}\right\rvert Y_e}
\right)_{\mbox{\footnotesize res}},
\end{equation}
where the subscript ``res'' indicates that the expression should be calculated
at the MSW resonance. The Landau-Zener approximation tells us that the jumping probability
between the resonating energy eigenstates $\ket{r_2}$ and $\ket{r_3}$ in NH is
\begin{equation}
\label{Landau-Zener probability MSW}
P_M=e^{-2\pi \Gamma_M}.
\end{equation}
Therefore the evolution through the MSW resonance is described by
\begin{equation}
\label{Landau-Zener transfer MSW}
\begin{pmatrix}
\ket{r_2} \\ \ket{r_3}
\end{pmatrix}
\to
\begin{pmatrix}
\sqrt{1-P_M} & -e^{-i\beta}\sqrt{P_M} \\
e^{i\beta}\sqrt{P_M} & \sqrt{1-P_M}
\end{pmatrix}
\begin{pmatrix}
\ket{r_2} \\ \ket{r_3}
\end{pmatrix},
\end{equation}
where $\beta$ is the Stokes phase. For a completely adiabatic ($P_M=0$) or
completely nonadiabatic ($P_M=1$) resonance the Stokes phase is unimportant.
But in the partially adiabatic case with $0<P_M<1$, it should be taken into
account.
In between the SFP and MSW resonances the density operator is given by Eq.
(\ref{rho after SFP resonance}). After the MSW resonance Eq. (\ref{Landau-Zener
transfer MSW}) must also be applied to it which results in a complicated generic
form. Here we do not reproduce the full result because it is not relevant for
our purposes. Instead, we give its $r\to\infty$ limit which we obtain by
removing its off-diagonal components and using Eq. (\ref{eigenstates in vacuum}).
The result is
\begin{eqnarray}
\label{rho at infinity theta}
&\hat \rho&\!\!\! (\infty)=\rho_{11}(R)\dyad{\bar\nu_2}{\bar\nu_2}
\\&+&\!\!\!\!
\begin{multlined}[t][0.9\columnwidth]
[(1\!-\!P_M)\rho_{22}(R)\!+\!P_M((1\!-\!P_B)\rho_{33}(R)\!+\!P_B\rho_{44}(R))]\\ \times \dyad{\nu_2}{\nu_2}
\end{multlined}
\nonumber
\\&+&\!\!\!\!
\begin{multlined}[t][0.9\columnwidth]
[P_M \rho_{22}(R)\!+\!(1\!-\!P_M)((1\!-\!P_B)\rho_{33}(R)\!+\!P_B\rho_{44}(R))] \\ \times \dyad{\nu_1}{\nu_1}
\end{multlined}
\nonumber
\\&+&\!\!\![P_B \rho_{33}(R)\!+\!(1\!-\!P_B) \rho_{44}(R) ]\dyad{\bar\nu_1}{\bar\nu_1}
\nonumber
\\&\pm &\!\!\!\!
\begin{multlined}[t][0.9\columnwidth]
2\sqrt{(1\!-\!P_B) P_B}\,\abs{\rho_{34}(R)}\\ \times[(1\!-\!P_M)\dyad{\nu_1}{\nu_1}\!+\!P_M\dyad{\nu_2}{\nu_2}\!-\!\dyad{\bar\nu_1}{\bar\nu_1}]
\end{multlined}
\nonumber
\\&\pm &\!\!\!\!
\begin{multlined}[t][0.9\columnwidth]
2\sqrt{(1\!-\!P_M)P_M}[\sqrt{P_B}\,\abs{\rho_{24}(R)}\!+\!\sqrt{1\!-\!P_B}\,\abs{\rho_{23}(R)}]
\\\times(\dyad{\nu_2}{\nu_2}\!-\!\dyad{\nu_1}{\nu_1}).
\nonumber
\end{multlined}
\end{eqnarray}
The first four lines of this equation involve only the probabilities and
represent the classical result. The last two lines give the uncertainty due to
the phases. In addition to the one given in Eq. (\ref{phi_B}), other phases also
enter the result: the relative phases acquired by $\ket{r_3}$ and by $\ket{r_4}$
with respect to $\ket{r_2}$ from production to the SFP resonance, the relative
phase between $\ket{r_2}$ and $\ket{r_3}$ from the SFP resonance to the MSW
resonance, and the Stokes phase $\beta$ associated with the MSW resonance.
These phases show up in combinations so that there are only two uncertainty
terms in the final result. Note that Eq. (\ref{rho at infinity theta}) reduces
to Eq. (\ref{rho at infinity after SFP}) when the MSW resonance is fully
adiabatic, i.e., when $P_M=0$. In particular, for an initial electron
antineutrino, the limiting survival probability can be calculated by using Eq.
(\ref{rho(R) for initial ebar}) and the relations $\bra{\nu_1}\ket{\bar\nu_e}=0$
and $\bra{\bar\nu_1}\ket{\bar\nu_e}=\cos\theta$. The result is
\begin{align}
\label{rho ebar -> infinity theta}
P_{\bar\nu_e\to\bar\nu_e}(\infty)
&\!=\!\left(P_B\sin^2\!\theta_B(R)\!+\!(1\!-\!P_B)\cos^2\!\theta_B(R)\right)\cos^2\theta \nonumber\\
&\pm\sqrt{P_B(1\!-\!P_B)}\sin 2\theta_B(R)\cos^2\theta,
\end{align}
which reduces to Eq. (\ref{P ebar -> infinity with SFP}) when $\theta=0$. This
formula does not contain $P_M$. This is expected because we focus on NH in which
case antineutrinos do not experience the MSW resonance. However, from Fig.
\ref{eigenvalues} we see that the component of the initial $\bar\nu_e$ which
turns into $\nu_x$ in the SFP resonance later experiences the MSW resonance, where it can
partially or fully turn into $\nu_e$. Therefore we expect the
$\bar\nu_e\to\nu_e$ transition probability to involve both $P_B$ and $P_M$.
Substituting Eq. (\ref{rho(R) for initial ebar}) into Eq. (\ref{rho at infinity
theta}) and taking the matrix element $\mel{\nu_e}{\hat\rho(\infty)}{\nu_e}$
leads to the result
\begin{align}
\label{rho ebar -> e infinity theta}
&\begin{multlined}[t][0.95\columnwidth]
P_{\bar\nu_e\to\nu_e}(\infty)
=\left((1-P_B)\sin^2\!\theta_B(R)+P_B\cos^2\!\theta_B(R)\right)\\
\times\left((1-P_M)\cos^2\theta+P_M\sin^2\theta\right)
\end{multlined}
\nonumber\\
&
\quad\quad
\quad
\pm\sqrt{P_B(1\!-\!P_B)}\sin 2\theta_B(R)\left((1\!-\!P_M)\cos^2\theta+P_M\sin^2\theta\right).
\end{align}
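The diagonal (classical) part of Eq. (\ref{rho at infinity theta}) can also be evaluated directly from the initial populations $\rho_{33}(R)=\sin^2\theta_B(R)$ and $\rho_{44}(R)=\cos^2\theta_B(R)$ of Eq. (\ref{rho(R) for initial ebar}); the sketch below (our own helpers) tracks how these populations are distributed over the vacuum states in NH.

```python
import numpy as np

def vacuum_populations(PB, PM, thetaB_R):
    """Classical populations of the vacuum states at r -> infinity for an
    initial nu_bar_e in NH, following Eq. (rho at infinity theta)."""
    r33, r44 = np.sin(thetaB_R)**2, np.cos(thetaB_R)**2
    x = (1.0 - PB) * r33 + PB * r44   # |r3> population after the SFP resonance
    return {"nu2": PM * x,            # |r2> branch of the MSW crossing -> |nu_2>
            "nu1": (1.0 - PM) * x,    # |r3> -> |nu_1>
            "nubar1": PB * r33 + (1.0 - PB) * r44}  # |r4> -> |nubar_1>

def p_ebar_survival(PB, PM, thetaB_R, theta):
    """Central value of Eq. (rho ebar -> infinity theta): only |nubar_1>
    overlaps with |nubar_e>, with weight cos^2(theta)."""
    return vacuum_populations(PB, PM, thetaB_R)["nubar1"] * np.cos(theta)**2
```

As expected, the survival probability is independent of $P_M$, and the three populations sum to one because $\rho_{11}(R)=\rho_{22}(R)=0$ for an initial $\bar\nu_e$.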
Fig. \ref{random initial for 1e16} shows the survival probability of an electron
antineutrino at $r\to\infty$ limit as a function of neutrino energy for
$\mu=1\times 10^{-16}\mu_B$ and for the exponentially fitted density
distributions at different post-bounce times. The solid red lines
show the average survival probability and the red shaded areas show the uncertainty
region, both calculated from Eq. (\ref{rho ebar -> infinity theta}). The black
dots represent the results obtained by numerically solving Eq. (\ref{eqn:EoM})
under very similar but slightly different conditions as described in the context
of Fig. \ref{errors_energy}. But this time, for each neutrino energy we solve
the evolution equation three times. In each run we choose both $R$ and
$r_{\mbox{\tiny mag}}$ randomly and independently in the interval $(49.95 \mbox{
km}, 50.05 \mbox{ km})$. As can be seen, all numerical results are within the
uncertainty bounds predicted by Eq. (\ref{rho ebar -> infinity theta}).
Fig. \ref{random initial for 5e16} shows the same results for $\mu=5\times
10^{-16}\mu_B$. Here we see a few differences from Fig. \ref{random initial for
1e16}. First, the survival probability of $\bar\nu_e$ drops to significantly
lower values. This is because for larger $\mu$ the SFP resonance is more
adiabatic according to Eq. (\ref{gamma B}), which makes the $\bar\nu_e \to \nu_x$
transition more efficient. Second, some numerical results are slightly out of
the analytical uncertainty region at late times. This is because our analytical
treatment is based on the decoupling approximation in which SFP and MSW
resonances can be treated as two separate two-level problems within the
Landau-Zener formalism. Due to the widening of SFP resonance examined in Section
III, this approach eventually fails at late times, especially for larger $\mu B$
values.
Fig. \ref{random initial for 1e15} shows our results for an even larger magnetic
moment of $\mu=10\times 10^{-16} \mu_B$. In this figure two things can be
observed: First, the uncertainty predicted by our analytical approach becomes
zero at late post-bounce times. This is because the SFP resonance moves inward and takes place
under a stronger magnetic field. The $\mu B$ value eventually becomes large enough
to render the SFP resonance completely adiabatic at late times. The MSW resonance is
also adiabatic for the model that we work with. In this case, the predicted
uncertainty vanishes. In other words, since both resonances are adiabatic, the
analytical approach suggests that there is no phase effect. However, (and this
is the second observation) we see that the numerical results still indicate the
presence of a phase effect. This is due to the failure of the decoupling scheme.
As discussed in Section III, the SFP resonance becomes wider at late post-bounce
times. This widening effect is even more pronounced for large $\mu$. We
conclude that if SFP and MSW resonances overlap, the phase effect may still be
there even when individual resonances seem to be adiabatic from a
straightforward application of the Landau-Zener approach.
The important question is how the neutrino energy spectra to be observed at
Earth would be affected by the SFP phase effect. To answer this, we start with
Fermi-Dirac type initial neutrino energy distributions\footnote{If the neutrino
magnetic moment is large, then the neutrino spectra emerging from the
proto-neutron star may be different from Fermi-Dirac type due to neutrino
electromagnetic interactions inside the core. See, for example, Refs.
\cite{Dar:1987yv, Kuznetsov:2009we, Alok:2022ovy}. But, we do not consider this
effect here.} on the proto-neutron star surface, and evolve them using realistic
density profiles, i.e., the solid lines shown in Fig.
\ref{fig:baryonProfileShock}. As discussed before (see the discussion following
Eq. (\ref{fits})), the realistic density profiles decrease more slowly than the
analytical fits and the resonances take place in outer regions where the
magnetic field is weaker. As a result, magnetic moments of the order of
$10^{-16} \mu_B$ used in the illustrative discussion create smaller amounts of
smearing with realistic density profiles. However, a stronger magnetic moment
can offset this effect and create larger smearing.
Fig. \ref{fermi dirac} shows our results for $\mu=5\times10^{-15} \mu_B$ with
the realistic density profiles. In this figure, the panels show
post-bounce times $t=1,3,5$ s from top to bottom. The dashed lines show the
initial neutrino flux on the proto-neutron star surface in arbitrary units.
They correspond to Fermi-Dirac distributions with $kT_{\nu_e}=3.0$ MeV,
$kT_{\bar\nu_e}=5.0$ MeV, and $kT_{\nu_x}=kT_{\bar\nu_x}=7.0$ MeV where
$T_{\nu_\alpha}$ denotes the temperature for the $\nu_\alpha$ flavor and $k$
denotes the Boltzmann constant. Red, black, blue, and green colors respectively
correspond to $\nu_e,\nu_x,\bar\nu_e$, and $\bar\nu_x$ flavors. The dots
represent the expected distribution at Earth. We obtain them by solving the evolution
described by Eq. (\ref{eqn:EoM}) for the realistic density profiles up
until the vacuum and then by applying decoherence in $r\to\infty$ limit. As
before, for each energy bin we do this three times by choosing $R$ and
$r_{\mbox{\tiny mag}}$ randomly and independently in the interval $(49.95 \mbox{
km}, 50.05 \mbox{ km})$. Unlike in previous figures, the shaded regions in
Fig. \ref{fermi dirac} do not represent analytically calculated uncertainty
ranges because we are using realistic density profiles rather than their
analytical fits. Instead, these regions are drawn to guide the eye about the
spreading of the points. The color coding for the dots and the shaded regions
is the same as that for the initial distributions.
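The initial distributions quoted above can be sketched numerically. The snippet below is a minimal sketch that builds unit-normalized Fermi-Dirac number spectra, $\phi(E) \propto E^2/(e^{E/kT}+1)$, for the quoted temperatures; the overall flux normalization is arbitrary, as in the figure.

```python
import numpy as np

def fermi_dirac_flux(E, kT):
    """Unnormalized Fermi-Dirac number flux, phi(E) ~ E^2 / (exp(E/kT) + 1)."""
    return E**2 / (np.exp(E / kT) + 1.0)

E = np.linspace(0.1, 60.0, 600)                        # neutrino energy grid, MeV
temps = {"nu_e": 3.0, "anti_nu_e": 5.0, "nu_x": 7.0}   # kT, MeV

spectra = {}
for flavor, kT in temps.items():
    phi = fermi_dirac_flux(E, kT)
    spectra[flavor] = phi / phi.sum()   # normalize to unit total (arbitrary units)

# The distributions differ most from each other around 5-15 MeV,
# which is where uncertainties in the survival probabilities have
# the strongest impact on the observed spectra.
peaks = {fl: float(E[np.argmax(s)]) for fl, s in spectra.items()}
```

The spectral peak shifts to higher energies with increasing temperature, so the heavy-lepton flavors peak above the electron flavors, as assumed in the figure.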
Fig. \ref{fermi dirac} shows a sizable amount of smearing in expected neutrino
fluxes at late times. As discussed in illustrative examples with the fitted
density profiles, the phase effect makes no impact at the early time of $t=1$ s
because the matter density is relatively high and flavor eigenstates are close
to energy eigenstates. However, at later times decreasing density near the
neutrinosphere drives energy eigenstates away from flavor eigenstates. As a
result, neutrinos experience the SFP phase effect and end up with significantly
smeared energy distributions. This is particularly true in the mid-energy
region, i.e., around $5-15$ MeV. This is due to the fact that the original
distributions differ most from each other in this energy region. After all, any
uncertainties in the survival and transition probabilities make the strongest
impact where the distributions differ most.
\section{Discussion and Conclusions}
In this paper we discussed the phase effects caused by neutrino magnetic
moment in a core collapse supernova. We assumed that neutrinos are of Majorana
type and have a magnetic moment which is larger than the Standard Model
prediction. The large magnetic moment shifts the energy eigenstates away from
the flavor eigenstates so that each neutrino is emitted from the neutrinosphere
as a superposition of energy eigenstates. We argued that the relative phases
developed by these energy eigenstates before partially adiabatic SFP and/or MSW
resonances should be treated as uncertainties in the survival probabilities over
long distances.
We approached the problem both analytically and numerically. Our analytical
approach is based on the assumption of complete decoupling between SFP and MSW
resonances. We argued that the size of the uncertainty depends on two factors:
(i) How far away the initial state of the neutrino is from being a pure energy
eigenstate and (ii) how far away the resonances are from being completely
adiabatic or completely nonadiabatic. We derived explicit analytical formulas
to estimate the uncertainties in the presence of one SFP and one MSW resonance.
Our Eq. (\ref{rho at infinity theta}) is the generic result whereas Eqs.
(\ref{rho ebar -> infinity theta}) and (\ref{rho ebar -> e infinity theta}) are
particular cases. The first factor above is represented by the nondiagonal
elements of the initial density matrix in energy eigenbasis. These are
$\rho_{ij}(R)$ in the generic result and $\sin^2\!\theta_B(R)$ in the particular
cases. The second factor is captured by the terms $P_B(1-P_B)$ and $P_M(1-P_M)$.
If the neutrino magnetic moment is as small as the Standard Model predicts, then
neutrinos will be closer to being pure energy eigenstates at production. At the
same time the SFP resonance will be almost completely nonadiabatic ($P_B\simeq
1$), in which case no flavor transformation happens in this resonance and its
presence is physically unimportant. As a result both factors (i) and (ii) above
would work together to reduce SFP phase effect to zero. However, the phase
effect does not simply grow with increasing $\mu B$. A large value of $\mu B$
always enhances factor (i) above by moving the energy eigenstates away from
flavor eigenstates. It also enhances factor (ii) by making the SFP somewhat more
adiabatic ($P_B\lesssim 1$). But an even larger $\mu B$ makes the SFP resonance
too adiabatic ($P_B \gtrsim 0$), in which case the phase effect once again
disappears because it is proportional to $P_B(1-P_B)$. In order to discuss and
illustrate these details, we used simple exponential fits for supernova density
profiles. We showed that analytically predicted uncertainties are larger for the
intermediate value of $\mu=5\times10^{-16}\mu_B$ than they are for both
$\mu=1\times10^{-16}\mu_B$ and $\mu=10\times10^{-16}\mu_B$. In fact for the
latter, analytically predicted uncertainty is (incorrectly) zero at late times
because the SFP resonance is almost completely adiabatic, but we still observe
an uncertainty in numerical calculations. Here the issue is the overlap between
the SFP and the MSW resonances, which is more likely for a larger $\mu B$. This
overlap breaks down our analytical approach and brings us to the regime of a
three dimensional quantum mechanical problem, which is more difficult to solve
analytically.
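The nonmonotonic dependence on $\mu B$ described above can be made concrete with a short numerical sketch. The Landau-Zener form of the hopping probability and the schematic scaling $\gamma \propto (\mu B)^2$ are assumptions used only for illustration here; all profile-dependent prefactors are omitted.

```python
import numpy as np

def landau_zener_hop(gamma):
    """Landau-Zener hopping probability P = exp(-pi*gamma/2) for
    adiabaticity parameter gamma."""
    return np.exp(-np.pi * gamma / 2.0)

# Schematic scaling: the SFP adiabaticity parameter grows as (mu*B)^2
# (prefactors depend on the density and field profiles, omitted here).
mu_B = np.linspace(0.0, 3.0, 301)   # mu*B in arbitrary units
P_B = landau_zener_hop(mu_B**2)

# The phase-effect uncertainty is proportional to P_B*(1 - P_B): it
# vanishes for a fully nonadiabatic (P_B ~ 1) or fully adiabatic
# (P_B ~ 0) resonance and peaks for a partially adiabatic one.
uncertainty_factor = P_B * (1.0 - P_B)
i_max = int(np.argmax(uncertainty_factor))
```

The factor $P_B(1-P_B)$ is maximal at $P_B = 1/2$, i.e., at intermediate $\mu B$, which mirrors the finding that the uncertainty is largest for the intermediate magnetic moment.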
Our analytical treatment included one SFP and one MSW resonance. However, for
a realistic density profile, some neutrinos go through more resonances (see Fig.
\ref{fig:baryonProfileShock}). With the inclusion of these additional
resonances, the survival probabilities may change in any way but the
uncertainties around them can only increase. This is because once
the neutrino is produced, its energy eigenstate components evolve independently
and always continue to accumulate relative phases.
If the additional resonances are partially adiabatic, then they mix the
energy eigenstates and increase the uncertainties. If not, then the
uncertainties remain the same. Another issue is the fact that a realistic density
profile pushes the resonances to the outer regions where the magnetic field
is weaker. For this reason, a sizable SFP phase effect is possible only for a larger
neutrino magnetic moment.
We carried out numerical calculations for the realistic density distributions
with shock wave profile, and for a neutrino magnetic moment of
$5\times10^{-15}\mu_B$. The results presented in Fig. \ref{fermi dirac} indicate
that the SFP phase effect can significantly smear the neutrino energy spectra
arriving at Earth. This effect may be observable if it exceeds the
intrinsic detector response smearing due to finite energy resolution. See, for
example, Ref. \cite{Jana:2022tsa} for the sensitivity of DUNE and
Hyper-Kamiokande to the magnetic moment of supernova neutrinos. If this happens,
then the telltale sign would be the growth of the smearing a few
seconds after the bounce, which is a feature that we observe in all of our
numerical simulations. In our analytical discussion, we argued that this is due
to the dropping central density. A full analysis of the observability of the SFP
phase effect should take into account several factors, including not only the
size of the neutrino magnetic moment, the density profile and the magnetic field
profile, but also the electron fraction. Here we assumed that the electron
fraction is constant at $0.45$ but a larger value, for example, moves SFP
resonance inward where the magnetic field is stronger. In this case, a sizable
smearing may be expected for a smaller value of neutrino magnetic moment. A
thorough analysis of observability is left for a future publication. The purpose
of the present paper is to demonstrate that, beyond astrophysical model
dependency and detection issues, there is fundamental uncertainty associated
with the nature of the SFP in a supernova.
In this paper we omitted neutrino-neutrino interactions, which are an important
aspect of neutrino flavor evolution in supernovae. We plan to include them in our
future work, but here we note that the neutrino-neutrino interaction potential
causes additional resonances which give rise to spectral splits
\cite{Raffelt:2007cb,Raffelt:2007xt}. If these resonances are partially
adiabatic \cite{Ekinci:2021miy}, then it is likely that they enhance the SFP
phase effect in the same way as the additional resonances discussed above.
\vspace*{5mm}
\noindent T. B. acknowledges the 2214A travel fellowship from the
Scientific and Technological Research Council of Turkey (T{\"{U}}B{\.{I}}TAK)
and thanks the GSI Helmholtz Centre for Heavy Ion Research for their
hospitality, where part of this work was carried out. Numerical calculations
reported in this paper were partially performed at T{\"{U}}B{\.{I}}TAK
ULAKB{\.{I}}M, High Performance and Grid Computing Center (TRUBA resources).
This work was supported in part by a grant from Mimar Sinan Fine Arts University
under project number 2018/48.
\bibliography{TB_nuMag}
|
Title:
Deepest Sensitivity to Wavelike Dark Photon Dark Matter with SRF Cavities |
Abstract: Wavelike, bosonic dark matter candidates like axions and dark photons can be
detected using microwave cavities commonly referred to as haloscopes.
Traditionally, haloscopes consist of tunable copper cavities operating in the
TM$_{010}$ mode, but ohmic losses have limited their performance. In contrast,
superconducting radio frequency (SRF) cavities can achieve quality factors of
$\sim 10^{10}$, perhaps five orders of magnitude better than copper cavities,
which would lead to more sensitive dark matter detectors. In this paper, we
first derive that the scan rate of a haloscope experiment is proportional to
the loaded quality factor $Q_L$, even if the cavity bandwidth is much narrower
than the dark matter halo lineshape. We then present a proof-of-concept search
for dark photon dark matter using a nontunable ultrahigh quality SRF cavity. We
exclude dark photon dark matter with kinetic mixing strengths of $\chi >
1.8\times 10^{-16}$ for a dark photon mass of $m_{A^{\prime}} = 5.37\mu$eV,
achieving the deepest exclusion to wavelike dark photons by almost an order of
magnitude.
| https://export.arxiv.org/pdf/2208.03183 |
\preprint{APS/123-QED}
\title{Deepest Sensitivity to Wavelike Dark Photon Dark Matter with SRF Cavities}%
\author{R. Cervantes}%
\email[Correspondence to: ]{[email protected]}
\affiliation{Fermi National Accelerator Laboratory, Batavia IL 60510}
\author{C. Braggio}
\affiliation{Dip. di Fisica e Astronomia, Universit\`{a} di Padova, 35100 Padova, Italy}
\affiliation{INFN - Sezione di Padova, 35100 Padova, Italy}
\author{B. Giaccone}
\affiliation{Fermi National Accelerator Laboratory, Batavia IL 60510}
\author{D. Frolov}
\affiliation{Fermi National Accelerator Laboratory, Batavia IL 60510}
\author{A. Grassellino}
\affiliation{Fermi National Accelerator Laboratory, Batavia IL 60510}
\author{R. Harnik}
\affiliation{Fermi National Accelerator Laboratory, Batavia IL 60510}
\author{O. Melnychuk}
\affiliation{Fermi National Accelerator Laboratory, Batavia IL 60510}
\author{R. Pilipenko}
\affiliation{Fermi National Accelerator Laboratory, Batavia IL 60510}
\author{S. Posen}
\affiliation{Fermi National Accelerator Laboratory, Batavia IL 60510}
\author{A. Romanenko}
\affiliation{Fermi National Accelerator Laboratory, Batavia IL 60510}
\date{\today}%
\emph{Introduction.}---There is overwhelming evidence that 84.4\% of the matter in the universe is made out of dark matter (DM)~\cite{Rubin:1982kyu, 10.1093/mnras/249.3.523,1998gravitational_lensing,10.1093/mnras/stw3385, Markevitch_2004, 2020Planck, Zyla:2020zbs}. The $\Lambda$CDM model describes dark matter as feebly interacting, nonrelativistic, and stable on cosmological timescales. Not much else is known about the nature of dark matter, particularly which particles beyond the Standard Model it is made of.
The dark photon (DP) is a compelling dark matter candidate. It is a spin-1 gauge boson associated with a new Abelian U(1) symmetry and is one of the simplest possible extensions to the Standard Model (SM)~\cite{essig2013dark, PhysRevD.104.092016, PhysRevD.104.095029}. The dark photon, having the same quantum numbers as the SM photon, generically interacts with the SM photon through kinetic mixing~\cite{HOLDOM198665, HOLDOM1986196} described by the Lagrangian
\begin{align}
\mathcal{L} = -\frac{1}{4}(F_1^{\mu \nu}F_{1\mu \nu} +F_2^{\mu \nu}F_{2\mu \nu} - 2\chi F_1^{\mu \nu}F_{2\mu \nu} - 2 m_{\Ap}^{2} A^{\prime 2}),
\end{align}
where $F_1^{\mu \nu}$ is the electromagnetic field tensor, $F_2^{\mu \nu}$ is the dark photon field tensor, $\chi$ is the kinetic mixing strength, $m_{\Ap}$ is the DP mass, and $\Ap$ is the DP gauge field.
If both $m_{\Ap}$ and $\chi$ are sufficiently small, then it is stable on cosmological timescales~\cite{PhysRevD.78.115012}. The dark photon is then an attractive dark matter candidate. If its mass is less than an eV, the DPDM is in the wavelike regime, where it is best described as a coherent wave oscillating at the frequency of its rest mass rather than a collection of particles. The dark matter kinetic energy distribution sets the degree of coherence of wavelike dark matter to be of order $v_\mathrm{DM}^2\sim 10^{-6}$~\cite{PhysRevD.42.3572, 10.1046/j.1365-8711.2003.06165.x}.
Several mechanisms could produce a relic of cosmic dark photons. One simple example is the displacement of the DP field through quantum fluctuations during inflation~\cite{PhysRevD.93.103520}. These fluctuations in the DP field serve as the initial displacement for dark photon field oscillations, which commence once the Universe's expansion rate falls below the DP mass. Other mechanisms are possible and are described in~\cite{PhysRevD.104.095029, Arias_2012}.
Dark photon dark matter (DPDM) can be detected through its mixing with the SM photon. If dark photons oscillate into SM photons inside a microwave cavity with a large quality factor, then a feeble EM signal accumulates inside the cavity, which can be read by ultralow noise electronics. This type of detector is called a haloscope and is often deployed to search for axionic DM~\cite{PhysRevLett.51.1415}. The SM photon frequency $f$ is related to the dark photon energy $E_{\Ap}$ by $f = E_{\Ap} \approx m_{\Ap}$.
The dark photon signal power is~\cite{PhysRevD.104.092016, OrpheusPRD, Kim_2020, PhysRevD.32.2988}
\begin{align}
& P_{S} = P_0 \betaterm L(f, f_0, \ql)\label{eqn:dp_power} \\
& P_{0} = \begin{cases}
\eta \chi^2 m_{\Ap} \rho_{\Ap} \veff \ql, & \text{if $\ql << \qdm$}\\
\eta \chi^2 m_{\Ap} \rho_{\Ap} \veff \qdm, & \text{if $\ql >> \qdm$}\\
\end{cases}
\label{eqn:dp0_power} \\
& V_{eff} = \frac{\left (\int dV \vb{E}(\vec{x}) \vdot \vb{\Ap}(\vec{x})\right )^2}{\int dV \epsilon_r |\vb{E}(\vec{x})|^2|\vb{\Ap}(\vec{x})|^2}\label{eqn:veff}
\end{align}
where $\eta$ is a signal attenuation factor, $\rho_{\Ap}$ is the local density of dark matter, $\veff$ is the effective volume of the cavity, $\ql$ is the cavity's loaded quality factor, $\qdm$ is the dark matter ``quality factor'' related to the dark matter coherence time, and $\beta$ is the cavity coupling coefficient. The Lorentzian term is $L(f, f_0, \ql) = 1/(1+4\Delta^2)$, where $\Delta \equiv \ql (f-f_0)/f_0$ is a detuning factor that depends on the SM photon frequency $f$, cavity resonant frequency $f_0$, and $\ql$. $\veff$ is the overlap between the dark photon field $\vb{\Ap}(\vec{x})$ and the dark photon-induced electric field $\vb{E}({\vec{x}})$. Equations~\ref{eqn:dp_power}, \ref{eqn:dp0_power}, and \ref{eqn:veff} assume that the cavity size is much smaller than the DP de Broglie wavelength.
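A rough numerical evaluation of the signal-power expressions above illustrates the power levels involved. The unit conversions below are standard; the specific parameter values are those of the search reported later in the Letter and the commonly assumed $\qdm = 10^6$ and $\rho_{\Ap} = 0.45$ GeV/cm$^3$.

```python
HBAR_EV_S = 6.582e-16   # hbar in eV*s
EV_TO_J   = 1.602e-19   # J per eV
CM_INV_EV = 5.068e4     # 1 cm in units of 1/eV (from hbar*c = 197.3 eV*nm)

def dp_signal_power(chi, m_ap_ev, rho_gev_cm3, veff_ml, q_l,
                    q_dm=1e6, eta=1.0, beta=1.0):
    """On-resonance dark photon signal power in watts: P_0 times the
    beta/(beta+1) coupling factor, with the Lorentzian term set to 1."""
    q_eff = min(q_l, q_dm)                       # P_0 saturates once Q_L >> Q_DM
    rho_ev4 = rho_gev_cm3 * 1e9 / CM_INV_EV**3   # energy density in eV^4
    veff_ev3 = veff_ml * CM_INV_EV**3            # volume in 1/eV^3
    p_nat = (eta * chi**2 * m_ap_ev * rho_ev4 * veff_ev3 * q_eff
             * beta / (beta + 1.0))              # power in eV^2 (natural units)
    return p_nat * EV_TO_J / HBAR_EV_S           # eV^2 -> W

# Parameters of this search, at the quoted exclusion chi = 1.8e-16
p_s = dp_signal_power(chi=1.8e-16, m_ap_ev=5.37e-6, rho_gev_cm3=0.45,
                      veff_ml=669.0, q_l=8.7e9, eta=0.56, beta=0.68)
# p_s is a few times 1e-24 W, comparable to the thermal noise in a 0.1 Hz bin
```

The `min(q_l, q_dm)` branch encodes the piecewise form of $P_0$: once $\ql >> \qdm$, the signal power saturates at the $\qdm$ value.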
The dark photon mass is unknown, so haloscopes must be tunable to search through the $\chi$ vs. $m_{\Ap}$ parameter space. Thus the scan rate is a key figure of merit for haloscope experiments. Most of the haloscope literature has focused on the case where $\ql << \qdm$ because copper cavities have been traditionally used. However, superconducting niobium cavities with $\ql \sim \num{1e10}$~\cite{PhysRevApplied.13.034032} are readily available for DPDM haloscope searches, and superconducting cavities resistant to multi-Tesla magnetic fields with $\ql > \qdm$ will soon be readily available~\cite{ALESINI2021164641, https://doi.org/10.48550/arxiv.2201.10733, PhysRevApplied.17.054013, PhysRevApplied.17.L061005}.
This Letter first derives that the haloscope scan rate is proportional to $\ql$, even in the case where $\ql >> \qdm$ and the DP signal power saturates (Eq.~\ref{eqn:dp_power}). This conclusion strongly motivates the pursuit of ultrahigh Q haloscopes. This Letter then reports a DPDM search using a nontunable \SI{1.3}{GHz} cavity with $\ql \sim \num{1e10}$. The search demonstrates superior sensitivity enabled by the ultrahigh quality factor and achieves the deepest exclusion to wavelike DPDM to date by almost an order of magnitude.
\emph{The scan rate for an ultrahigh Q haloscope.}---The haloscope scan rate is strongly dependent on the SNR, where $\snr = (P_S/P_n)\sqrt{b \Delta t}$~\cite{doi:10.1063/1.1770483, Peng:2000hd}. $P_n$ is the noise power, $b$ is the frequency bin width, and $\Delta t$ is the integration time. $P_n$ is the combination of the cavity's blackbody radiation and the receiver's Johnson noise. The noise power can be expressed as $P_n= k_b b T_n$, where $k_b$ is the Boltzmann constant, and $T_n$ is the system noise temperature referenced to the cavity.
It is common for a microwave haloscope experiment to implement an isolator at nearly the same temperature as the cavity. For such a system, $T_n$ is constant and independent of the cavity detuning $\Delta$ and cavity coupling $\beta$~\cite{ALKENANY201711}.
If $\ql >> \qdm$, the cavity width is smaller than the dark matter halo lineshape width $\dfdm$. The resulting dark matter signal will follow the Lorentzian cavity response with bandwidth $\Delta f_c = f_0/\ql$. Fortunately, a haloscope is sensitive to a distribution of possible dark photon rest masses corresponding to the cavity resonant frequency $f_c$. In other words, a single cavity tuning step can probe the entire dark matter lineshape bandwidth, and the tuning steps need only to be comparable to $\dfdm$.
The frequency bin width $b$ is typically chosen to be comparable to the dark matter signal bandwidth. Typical haloscope experiments use copper cavities with $\ql << \qdm$, so $b \sim \dfdm = f_0/\qdm$. However, if $\ql >> \qdm$, the signal bandwidth is the same as the cavity bandwidth and $b \sim f_0/\ql$. This means that the noise power is inversely proportional to $\ql$, i.e., $P_n \sim k_b \left(f_0/\ql\right) T_n$. The higher $\ql$ is, the lower the noise power.
An estimate of the integration time can be obtained by rearranging the SNR equation ${\Delta t = 1/b \left(\snr \times P_n/P_S \right)^2}$. The tuning steps are $\Delta f \sim f_0/\qdm$. Putting all this together, the scan rate for a dark photon haloscope consisting of an ultrahigh Q microwave cavity, i.e., $\ql >> \qdm$ is
\begin{align}
\dv{f}{t} = \frac{\Delta f}{\Delta t} \sim \ql \qdm \left (\frac{\eta \chi^2 m_{\Ap} \rho_{\Ap} \veff \beta}{\snr T_n(\beta+1)}\right )^2. \label{eqn:scan_rate}
\end{align}
The scan rate equation, Equation~\ref{eqn:scan_rate}, happens to be the same whether $\ql >> \qdm$ or $\ql << \qdm$~\cite{OrpheusPRL}. In both cases, the scan rate is directly proportional to $\ql$~\footnote{Ref.~\cite{Kim_2020} also addresses the scan rate for ultrahigh Q haloscopes, but our treatment differs in a few major ways. For reasons explained in the text, we set the tuning step to $\Delta f \sim f_0/\qdm$ instead of $\Delta f \sim f_0/\ql$ and $b \sim f_0/\ql$ instead of $b \sim f_0/\qdm$. Finally, the derived $T_n$ in Ref.~\cite{Kim_2020} does not appear to apply to systems ubiquitous to microwave haloscope experiments that implement circulators between the cavity and first-stage amplifier. For such a system, $T_n$ is independent of $\beta$~\cite{ALKENANY201711}. This independence is recognized in Fig.~5 but is not reflected in their equations.}.
As a comparison, ADMX uses copper cavities with $\ql \sim \num{8e4}$~\cite{PhysRevLett.127.261803}, whereas niobium SRF cavities can achieve $\ql \sim \num{1e9}- \num{1e11}$ depending on the temperature and cavity treatment~\cite{PhysRevApplied.13.034032}. This suggests that SRF cavities can increase the instantaneous scan rate of haloscope experiments by as much as a factor of $\num{1e5}$.
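This scaling can be checked by evaluating the scan-rate expression above for the two quality factors. The sketch below works in natural units and holds every other parameter fixed (illustrative values only), so the speedup reduces to the ratio of loaded quality factors.

```python
KB_EV_K   = 8.617e-5    # Boltzmann constant, eV/K
CM_INV_EV = 5.068e4     # 1 cm in 1/eV

def scan_rate_natural(q_l, q_dm, eta, chi, m_ap_ev, rho_veff_ev, beta, snr, t_n_ev):
    """df/dt ~ Q_L*Q_DM*(eta*chi^2*m*rho*Veff*beta/(SNR*T_n*(beta+1)))^2,
    in natural units (eV^2); the same form holds for Q_L << Q_DM and Q_L >> Q_DM."""
    return q_l * q_dm * (eta * chi**2 * m_ap_ev * rho_veff_ev * beta
                         / (snr * t_n_ev * (beta + 1.0)))**2

common = dict(q_dm=1e6, eta=0.56, chi=2e-16, m_ap_ev=5.37e-6,
              rho_veff_ev=(0.45e9 / CM_INV_EV**3) * (669.0 * CM_INV_EV**3),
              beta=0.68, snr=2.0, t_n_ev=5.0 * KB_EV_K)

rate_cu  = scan_rate_natural(q_l=8e4,   **common)  # ADMX-like copper cavity
rate_srf = scan_rate_natural(q_l=8.7e9, **common)  # this SRF cavity
speedup = rate_srf / rate_cu   # equals the ratio of loaded Qs, ~1e5
```

Because every parameter other than $\ql$ cancels in the ratio, the factor-of-$\sim 10^5$ speedup quoted in the text follows directly from the quality factors alone.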
\emph{Dark Photon Dark Matter Search with an SRF Cavity.}---The Superconducting Quantum Materials and Systems (SQMS) Center, hosted by Fermilab, performs a wide range of multidisciplinary experiments with SRF cavities. An experiment studying the material properties of an SRF cavity for temperatures below \SI{1}{K} was underway when it was decided that a fixed-frequency dark photon dark matter search should be performed with the same cavity. This proof-of-principle search, using a cavity with ${\ql \approx \num{8e9}}$, a HEMT amplifier, and the standard haloscope analysis, is enough to demonstrate superior sensitivity to wavelike dark photons compared to previous searches.
The haloscope consists of a TESLA-shaped single-cell niobium cavity~\cite{Aune:2000gb} with TM$_{010}$ resonant frequency ${f_0 \approx \SI{1.3}{GHz}}$. The cavity is made of fine-grain bulk niobium with a high residual resistivity ratio of $\simeq 300$. The cavity volume is \SI{3875}{mL}, and the effective volume calculated from Equation~\ref{eqn:veff} is $\veff = \SI{669}{mL}$, assuming a randomly polarized DP field. Electromagnetic coupling to the cavities is performed using axial pin couplers at both ends of the beam tubes.
The cavity underwent heat treatments in a custom-designed oven to remove the niobium pentoxide (Nb$_2$O$_5$) and to mitigate two-level system (TLS) dissipation~\cite{PhysRevLett.119.264801, PhysRevApplied.13.014024}. The single-cell \SI{1.3}{GHz} cavity is heat treated at $\sim 450$\degree C in vacuum for eight hours.
The cavity is cooled to $\approx\SI{45}{mK}$ using a BlueFors XLD 1000 dilution refrigerator. A double-layer magnetic shielding around the entire cryostat is used, and magnetometers placed directly on the outside cavity surfaces indicate that the DC ambient magnetic field level is shielded to below \SI{2}{mG}.
A diagram of the microwave electronics is shown in Fig.~\ref{fig:electrtonics}. A series of attenuators on the cavity input line attenuates the room-temperature noise. The power from the cavity is first amplified by a cryogenic HEMT amplifier (LNF-LNC0.3\_14A~\cite{lnf_0p314}). At \SI{1.3}{GHz}, the amplifier noise temperature is \SI{4.9 \pm 0.5}{K} and the gain is about \SI{36}{dB}. These values are obtained from the manufacturer's calibration. The uncertainty in the amplifier noise temperature comes from private correspondence with Low Noise Factory, and additional references include~\cite{1603866_tamp_uncertainty, 5540248_tamp_uncertainty}.
Between the HEMT and SRF cavity, there is a series of three microwave isolators (QuinStar QCY-G0110151AS circulators with the third port terminated) and a low pass filter (Mini-Circuits VLFX-1350+). The isolators prevent the HEMT amplifiers from injecting noise into the cavity. According to the manufacturer datasheets, the combined insertion loss is at maximum \SI{2.5}{dB}.
The signal is further amplified at room temperature using a Fairview FMAM1028 amplifier. The signal is then injected into the appropriate measurement device (spectrum analyzer, network analyzer, or phase noise analyzer).
The cavity's resonant frequency is identified using a self-excited loop~\cite{Fong2011SELFEO, delayen1978phase}. The thermal noise from the output of the cavity is amplified, phase-shifted, and fed back into the input of the cavity. The phase shifting is performed with an ATM P2506. A power splitter feeds the cavity's output power to the spectrum analyzer to monitor the response to the self-excited loop. The peak of the power spectrum corresponds to the cavity resonance.
The cavity's loaded quality factor $\ql$ is measured using a decay measurement~\cite{padamsee}. This decay measurement is implemented with a Keysight ENA Network Analyzer E5080B. At the resonant frequency, the network analyzer injects a \SI{15}{dBm} signal with a bandwidth of \SI{1}{kHz} into the input transmission line. Port 2 of the VNA measures the absolute power from the cavity output line. The network analyzer source is then turned off, and the output power is observed to decay over several seconds until it reaches an equilibrium. The measured power is fitted using $P_t = A\exp(-t/\tau_L)$, and $\ql = 2\pi f_0 \tau_L$. Three decay measurements were taken. Of the three trials, the smallest loaded quality factor measured is $\ql = \num{8.7e9}$.
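The decay analysis amounts to an exponential fit. A minimal sketch, with a synthetic noiseless ringdown standing in for the measured trace, is:

```python
import numpy as np

def q_from_decay(t, p_meas, f0):
    """Fit P(t) = A*exp(-t/tau_L) to a ringdown trace and return
    Q_L = 2*pi*f0*tau_L (log-linear least squares)."""
    slope, _ = np.polyfit(t, np.log(p_meas), 1)
    return 2.0 * np.pi * f0 * (-1.0 / slope)

# Synthetic ringdown with the parameters of this measurement:
# Q_L = 8.7e9 at f0 = 1.3 GHz corresponds to tau_L ~ 1.07 s
f0 = 1.3e9
tau_true = 8.7e9 / (2.0 * np.pi * f0)
t = np.linspace(0.0, 5.0, 500)        # seconds
p = np.exp(-t / tau_true)             # noiseless decay for illustration
q_l = q_from_decay(t, p, f0)          # recovers ~8.7e9
```

An ultrahigh $\ql$ at GHz frequencies implies a decay time of order a second, which is why the ringdown is observable over several seconds with ordinary instrumentation.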
The antenna external quality factors are measured beforehand in a separate test stand following the procedure outlined in Reference~\cite{doi:10.1063/1.4903868}. From the measured external quality factors $Q_e$ and the measured $\ql$, the cavity coupling coefficient of the cavity output port can be determined using $\beta = \left(\ql/Q_e\right)/\left(1-\ql/Q_e \right)$. From the measured $\ql$ and $Q_e$, $\beta = 0.68\pm 0.11$.
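Given $\ql$ and $Q_e$, the coupling follows directly from the relation above. In the sketch below, $Q_e$ is not quoted in the text and is back-computed from the reported $\beta = 0.68$ purely for illustration.

```python
def coupling_coefficient(q_l, q_e):
    """Output-port coupling coefficient beta = (Q_L/Q_e)/(1 - Q_L/Q_e)."""
    r = q_l / q_e
    return r / (1.0 - r)

# Q_e ~ 2.15e10 is an assumed value consistent with the reported beta
beta = coupling_coefficient(8.7e9, 2.15e10)   # ~0.68
```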
For this proof-of-principle measurement, there is no tuning mechanism. A single power spectrum is measured. In the absence of a discovery, an exclusion on the kinetic mixing strength $\chi$ is determined from the measured power spectrum, the system noise temperature, and cavity properties. The relevant properties for determining the dark photon signal power and system noise temperature are shown in Table~\ref{tab:operating_parameters}.
\begin{table}[htp]
\begin{tabular}{c|c|c|c|c|c|c|c}
$\eta$ & $\beta$ & $\veff$ & $m_{\Ap}$ & $\Delta t$ & $\ql$ & $T_n$ & b \\ \hline
\num{0.56} & \num{0.68 \pm 0.11} & \SI{669}{mL} & \SI{5.3}{\mu eV} & \SI{1000}{s}& \num{8.7e9} & \SI{5.0 \pm 0.5}{K} & \SI{0.1}{Hz}
\end{tabular}
\caption{Operating parameters for the dark photon dark matter search with the SQMS SRF cavity.}
\label{tab:operating_parameters}
\end{table}
The signal attenuation factor $\eta$ is determined from cascaded insertion loss of the three isolators and low-pass filter in between the cavity and cryogenic amplifier (Fig.~\ref{fig:electrtonics}). From the manufacturer datasheets, the maximum combined loss is \SI{2.5}{dB}. This leads to a signal attenuation factor of $\eta=0.56$. Since this is derived from cascading the maximum loss of four devices, this value is treated as a conservative lower bound.
For modeling the system noise temperature, the cryogenic electronics in Fig.~\ref{fig:electrtonics} can be approximated as a cavity connected to the first-stage amplifier by a transmission line. The system noise temperature is then
\begin{align}
T_n = T_{cav} + T_{amp}
\label{eqn:orpheus_tsys}
\end{align}
where $T_{cav}$ is the physical temperature of the cavity and $T_{amp}$ is the noise temperature of the cryogenic amplifier. Boson statistics need not be considered in the Rayleigh-Jeans limit ($k_b T_{cav} >> hf$). The three isolators and the low-pass filter are not included in the thermal model because they are at the same temperature as the cavity. The electronics beyond the cryogenic amplifier are also not included in the thermal model because their contribution is suppressed by the Friis cascade equation~\cite{friis, pozar}. The amplifier noise temperature $T_{amp} = \SI{4.9\pm0.5}{K}$ dominates the system noise temperature. The cavity temperature, measured to be $T_{cav} = \SI{45\pm5}{mK}$, is much less than the uncertainty in $T_{amp}$.
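The Friis suppression of the downstream stages can be illustrated with a short sketch. The HEMT values are from the text; the room-temperature stage's 300 K noise temperature and 30 dB gain are assumed values for illustration only.

```python
def friis_noise_temp(stages):
    """Cascaded noise temperature T = T1 + T2/G1 + T3/(G1*G2) + ...
    for stages given as (noise_temp_K, linear_gain) pairs."""
    t_total, g_running = 0.0, 1.0
    for t_stage, g_stage in stages:
        t_total += t_stage / g_running
        g_running *= g_stage
    return t_total

# Cryogenic HEMT: 4.9 K noise, 36 dB gain (from the text). The
# room-temperature stage (300 K noise, 30 dB gain) is assumed.
g_hemt = 10.0 ** (36.0 / 10.0)
t_chain = friis_noise_temp([(4.9, g_hemt), (300.0, 10.0 ** 3.0)])
t_n = 0.045 + t_chain   # add T_cav = 45 mK; ~5.0 K, dominated by the HEMT
```

Even a 300 K second stage contributes only $\sim 0.08$ K after division by the HEMT gain, which justifies dropping everything beyond the cryogenic amplifier from the thermal model.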
The detector sensitivity is estimated from the operating parameters shown in Table~\ref{tab:operating_parameters}. The equation for sensitivity in $\chi$ is
\begin{align}
\chi &= \sqrt{\frac{\beta+1}{\beta}\frac{\snr \times T_n}{\eta m_{\Ap}\rho_{\Ap}V_{eff}\qdm}}\left ( \frac{\Delta f_c}{\Delta t} \right )^{1/4}\label{eqn:dp_power_estimate}.
\end{align}
Equation~\ref{eqn:dp_power_estimate} is derived from rearranging the SNR equation. For this estimate, the bandwidth $b$ is set to the cavity bandwidth, $\rho_{\Ap} = \SI{0.45}{GeV/cm^3}$, and $\qdm = \num{1e6}$. The SNR is also chosen to be two as this approximates a 90\% exclusion limit. The parameters in Table~\ref{tab:operating_parameters} are converted to natural units, and the sensitivity is estimated to be $\chi = \num{2.1e-16}$.
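The sensitivity expression above can be evaluated directly from the table parameters. The sketch below works in natural units, converting frequency bandwidths with $h$ and times with $\hbar$; the $2\pi$ convention chosen for this conversion shifts the result at the $\mathcal{O}(1)$ level.

```python
import math

HBAR_EV_S = 6.582e-16   # hbar, eV*s
H_EV_HZ   = 4.136e-15   # Planck constant, eV/Hz
KB_EV_K   = 8.617e-5    # Boltzmann constant, eV/K
CM_INV_EV = 5.068e4     # 1 cm in 1/eV

# Operating parameters from the table, converted to natural units
snr   = 2.0
eta   = 0.56
beta  = 0.68
t_n   = 5.0 * KB_EV_K                    # system noise temperature, eV
m_ap  = 5.37e-6                          # dark photon mass, eV
rho   = 0.45e9 / CM_INV_EV**3            # local DM density, eV^4
veff  = 669.0 * CM_INV_EV**3             # effective volume, 1/eV^3
q_dm  = 1e6
df_c  = (1.3e9 / 8.7e9) * H_EV_HZ        # cavity bandwidth f0/Q_L, eV
dt    = 1000.0 / HBAR_EV_S               # integration time, 1/eV

chi = math.sqrt((beta + 1.0) / beta * snr * t_n
                / (eta * m_ap * rho * veff * q_dm)) * (df_c / dt) ** 0.25
# chi ~ 2e-16, consistent with the quoted sensitivity estimate
```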
For the dark photon search, power from the cavity is measured using the Rohde \& Schwarz FSW-26 Signal and Spectrum Analyzer. The cavity $\ql = \num{8.7e9}$, so the frequency resolution needs to be $b \sim \SI{100}{mHz}$. This sub-Hertz resolution is achieved using the spectrum analyzer's I/Q-analyzer mode. For these measurements, a sample rate of \SI{312}{Hz} and a sweep time of \SI{10}{s} are used. The spectrum's center frequency is set to the cavity's resonant frequency. The frequency bin size is \SI{100}{mHz}. After the data is recorded, the spectrum is truncated even further so that the span is about \SI{31}{Hz} centered around the cavity resonance. On this frequency scale, the power fluctuations are Gaussian without the need for the application of a low pass filter (the Savitzky-Golay filter is typical of haloscope analysis~\cite{PhysRevD.96.123008, PhysRevD.103.032002}). The resulting power spectrum is shown in Fig.~\ref{fig:power}.
The manufacturer's calibrated $T_{amp}$ is corroborated with the measured power $P_m$ in Fig.~\ref{fig:power}. Because $T_n$ is dominated by $T_{amp}$, $T_{amp} \approx P_m/(k_b b G_{sys})$, where $G_{sys}$ is the system gain and $b = \SI{0.1}{Hz}$. $T_{amp}\sim \SI{5}{K}$ is obtained if $G_{sys} = \SI{73}{dB}$. This $G_{sys}$ is consistent with the cascaded amplifier gain of \SI{76}{dB} (Fig.~\ref{fig:electrtonics}) and a reasonable insertion loss of \SI{3}{dB} from the cables, connectors, and a \SI{1300}{MHz} bandpass filter (Lorch 6BC-1300/75-S) installed immediately before the spectrum analyzer (not shown).
Once the power spectrum is measured, the standard haloscope analysis is applied to either find a spectrally narrow power excess consistent with a dark photon signal or to exclude parameter space. The procedure for deriving the exclusion limits follows the procedure developed by ADMX and HAYSTAC~\cite{PhysRevD.64.092003, PhysRevD.96.123008, PhysRevD.103.032002}, and is adapted for dark photon searches~\cite{PhysRevD.104.095029, PhysRevD.104.092016, OrpheusPRD}.
There are a few important deviations from the standard haloscope analysis for this search. First, only one spectrum was measured, so combining many spectra at different RF frequencies is not necessary. Second, the frequency range of interest is narrow enough such that the measured power is unaffected by the frequency-dependent gain variation of the electronics. So, the Savitzky-Golay filter is not needed to remove this gain variation. This is advantageous because the Savitzky-Golay filter is known to attenuate the dark matter signal by as much as 10\%-20\%~\cite{PhysRevD.96.123008, PhysRevD.103.032002}.
Third, most of the data points in Fig.~\ref{fig:power} used to verify the noise's Gaussianity and determine the statistical parameters are well outside the cavity bandwidth. Fortunately, in the absence of a dark matter signal, the statistical distribution of the power fluctuations is the same inside and outside of the cavity bandwidth.
Fourth, past haloscope experiments with $\ql \ll \qdm$ typically convolved the spectra with the dark matter halo lineshape to account for the signal being spread across multiple bins. For this search, $\ql \gg \qdm$, so the signal will have the Lorentzian shape of the cavity response. The spectrum is therefore convolved with the cavity lineshape $L(f, f_0, \ql) = 1/(1+4\Delta^2)$.
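This convolution can be sketched as follows. The detuning convention $\Delta = \ql\,(f - f_0)/f_0$ is an assumption chosen so that $L$ has FWHM $f_0/\ql$, consistent with a loaded cavity resonance; the spectrum here is a random stand-in for the measured power fluctuations.

```python
import numpy as np

# Sketch of convolving a measured power spectrum with the cavity lineshape
# L = 1/(1 + 4*Delta^2). Assumption: Delta = Q_L*(f - f0)/f0, giving a
# Lorentzian of FWHM f0/Q_L.
f0, Q_L = 1.3e9, 8.7e9
bin_hz = 0.1                              # 100 mHz bins, as in the measurement
f = f0 + np.arange(-155, 156) * bin_hz    # ~31 Hz span centered on resonance

delta = Q_L * (f - f0) / f0
L = 1.0 / (1.0 + 4.0 * delta**2)
kernel = L / L.sum()                      # normalize so the convolution preserves total power

rng = np.random.default_rng(0)
spectrum = rng.normal(0.0, 1.0, f.size)   # stand-in for the measured power fluctuations
filtered = np.convolve(spectrum, kernel, mode="same")
```

With these parameters the cavity bandwidth ($\sim 0.15$ Hz) spans only one or two of the 100 mHz bins, so the kernel is extremely narrow.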
Fifth, a single measurement is sensitive to a range of dark photon masses. Thus the excluded power on resonance is convolved with the dark matter halo lineshape. This convolution was also performed in other dark photon searches with $\ql > \qdm$~\cite{PhysRevLett.126.141302}. When performing this convolution, it should be noted that the photon frequency (which corresponds to the cavity frequency) is fixed, and it is the dark photon mass that varies.
No spectrally narrow power excess with an $\snr > 4$ is found in the measured power spectrum. The excluded parameter space using a 90\% confidence limit is shown in Fig.~\ref{fig:dp_limits}. The derived limit assumes dark photon dark matter is randomly polarized and that the dark photon energy distribution follows the standard halo model. The excluded kinetic mixing strength is $\chi_{90\%} = \num{1.8e-16}$ for a dark photon mass of $m_{\Ap} = \SI{5.370}{\mu eV}$. This is consistent with the expected sensitivity estimated from Equation~\ref{eqn:dp_power_estimate}, and it is the deepest exclusion to date of wavelike dark photon dark matter, by almost an order of magnitude.
\emph{Outlook and the Potential of SRF Cavities for Axion Dark Matter Searches}---The exclusion in Fig.~\ref{fig:dp_limits} is an impressive demonstration of how SRF cavities can benefit dark matter searches. But an honest dark matter search with a finite probability of making a discovery requires that the haloscope tune through a broader range of frequencies. SQMS is currently developing experiments using tunable SRF cavities that will search through a broader range of parameter space.
In addition to dark photons, there is a growing interest in dark matter axions. Axions are particularly well motivated because they solve the strong CP problem~\cite{PhysRevLett.38.1440}. Axion haloscope searches require multi-Tesla magnetic fields to be sensitive enough to the QCD axion. The scan rate for axion haloscope searches is still directly proportional to $\ql$. Unfortunately, the performance of superconductors degrades under an external magnetic field. Achieving high quality factors is thus a very active area of research~\cite{ALESINI2021164641, https://doi.org/10.48550/arxiv.2201.10733, PhysRevApplied.17.054013, PhysRevApplied.17.L061005}, and it seems likely that axion haloscopes with $\ql > \num{1e7}$ are achievable in the near future.
This experiment also demonstrates that axion haloscopes with $\ql \sim \num{1e10}$ are worth striving for. Applying a hypothetical \SI{8}{T} magnetic field to the dark photon data in Fig.~\ref{fig:power} would have led to an exclusion on the axion-photon coupling constant at $\gagg \sim \num{4e-16}$, well below DFSZ coupling ($\gagg = \num{8e-16}$).
Despite decades of searching for the axion, only a small fraction of the QCD axion parameter space has been explored. Perhaps a combination of ultrahigh Q cavities, sub-SQL metrology~\cite{Backes2021, PhysRevLett.126.141302}, multiwavelength detector designs~\cite{OrpheusPRL, Brun2019, PhysRevLett.118.091801, PhysRevD.98.035006, PhysRevLett.128.231802, PhysRevApplied.9.014028, PhysRevApplied.14.044051, PhysRevLett.123.141802}, and innovations in multi-Tesla continuous magnets will enable experiments to probe most of the post-inflation QCD axion parameter space within the next few decades.
\FloatBarrier
\emph{Acknowledgements}---The authors thank Asher Berlin, Yonatan Kahn, Akash Dixit, and Benjamin Brubaker for fruitful discussions regarding scanning strategies and data analysis with ultrahigh Q cavities. The authors thank Andrew Penhollow and Theodore C. Ill for the cavity assembly. This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359%
\bibliography{orpheus_thesis}%
|
Title:
Starspots, chromospheric emission lines, and flares of zero-age main-sequence stars |
Abstract: Zero-age main-sequence (ZAMS) stars are considered to have enormous starspots
and show strong chromospheric emission lines because of their strong surface
magnetic field. We discuss the dynamo activities of ZAMS stars with respect to
their periodic light variation caused by a starspot and with respect to the
strength of the chromospheric emission lines. The light curves of $33$ ZAMS
stars in IC 2391 and IC 2602 were obtained from \textit{TESS} photometric data.
The light curves can be grouped into the following four categories: single
frequency, possible shape changer, beater, and complex variability. The
amplitudes of the light curves are $0.001-0.145\,\mathrm{mag}$, similar to
those of ZAMS stars in Pleiades. The starspot coverages are $0.1-21\%$. We
found that the light variations and Ca\,\emissiontype{II} emission line
strength of ZAMS stars in IC 2391, IC 2602, and the Pleiades cluster are as
large as those of the most active superflare stars and two orders of magnitude larger than
those of the Sun, and are located on the extensions of the superflare stars.
These results suggest that superflare stars link the properties of the Sun to
those of the ZAMS stars of ages between $30$ and $120\,\mathrm{Myr}$. ZAMS
stars with a single frequency or possible shape change in the light curve tend
to have both large light variation, indicating large spot coverage, and
saturated Ca\,\emissiontype{II} emission line strength. ZAMS stars with beat or
complex variability have small spot coverage and a faint Ca\,\emissiontype{II}
emission line. We also detected $21$ flares in the \textit{TESS} light curves
of $12$ ZAMS stars in IC 2391 and IC 2602, where most of these stars have
saturated chromospheric Ca\,\emissiontype{II} emission lines. The energies of
the flares are estimated to be $\sim 10^{33}-10^{35}\,\mathrm{erg}$, which is
comparable with the energy of a superflare.
| https://export.arxiv.org/pdf/2208.05175 |
\title{ Starspots, chromospheric emission lines, and flares of zero-age main-sequence stars }
\author{Mai Yamashita${}^1$, Yoichi Itoh${}^1$, Yumiko Oasa${}^2$}%
\altaffiltext{}{${}^1$Nishi-Harima Astronomical Observatory, Center for Astronomy, University of Hyogo, 407-2 Nishigaichi, Sayo, Sayo, Hyogo 679-5313 }
\altaffiltext{}{${}^2$Faculty of Education, Saitama University, 255 Shimo-Okubo, Sakura, Saitama, Saitama, Japan}
\email{[email protected]}
\KeyWords{stars: chromospheres --- stars: activity --- techniques: photometric}%
\input{c1} %
\input{c2} %
\clearpage
\input{c3} %
\input{c4} %
\input{c5} %
\begin{ack}
We wish to thank Dr. Notsu Y. and Dr. Namekata K. for comments. This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). M. Yamashita was supported by a scholarship from the Japan Association of University Women and would like to acknowledge them. This research was supported by a grant from the Hayakawa Satio Fund
awarded by the Astronomical Society of Japan. Y. I. is supported by JSPS KAKENHI grant number 17K05390.
\end{ack}%
|
Title:
Exploring the Limits of Synthetic Creation of Solar EUV Images via Image-to-Image Translation |
Abstract: The Solar Dynamics Observatory (SDO), a NASA multi-spectral decade-long
mission that has been daily producing terabytes of observational data from the
Sun, has been recently used as a use-case to demonstrate the potential of
machine learning methodologies and to pave the way for future deep-space
mission planning. In particular, the idea of using image-to-image translation
to virtually produce extreme ultra-violet channels has been proposed in several
recent studies, as a way to both enhance missions with less available channels
and to alleviate the challenges due to the low downlink rate in deep space.
This paper investigates the potential and the limitations of such a deep
learning approach by focusing on the permutation of four channels and an
encoder--decoder based architecture, with particular attention to how
morphological traits and brightness of the solar surface affect the neural
network predictions. In this work we want to answer the question: can synthetic
images of the solar corona produced via image-to-image translation be used for
scientific studies of the Sun? The analysis highlights that the neural network
produces high-quality images over three orders of magnitude in count rate
(pixel intensity) and can generally reproduce the covariance across channels
within a 1% error. However, the model performance drastically diminishes in
correspondence with extremely energetic events like flares, and we argue
that the reason is related to the rareness of such events, which poses a
challenge to model training.
| https://export.arxiv.org/pdf/2208.09512 |
\newcommand{\vdag}{(v)^\dagger}
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}
\newcommand{\SubItem}[1]{
{\setlength\itemindent{18pt} \item[*] #1}
}
\shorttitle{Synthetic image generation}
\shortauthors{Salvatelli et al.}
\graphicspath{{./}{figures/}}
\begin{document}
\title{Exploring the Limits of Synthetic Creation of Solar EUV Images via Image-to-Image Translation}
\correspondingauthor{Valentina Salvatelli}
\email{[email protected]}
\author[0000-0002-3232-4101]{Valentina Salvatelli}
\affiliation{Microsoft Research, Cambridge CB12FB, UK}
\affiliation{Frontier Development Lab, Mountain View, CA 94043, USA}
\affiliation{SETI Institute, Mountain View, CA 94043, USA}
\author[0000-0001-5190-442X]{Luiz F. G. dos Santos}
\affiliation{Shell Global Solutions International B.V., Grasweg 31, 1031 HW Amsterdam, The Netherlands}
\affiliation{nextSource Inc, New York, NY 10018, USA}
\author[0000-0002-2180-1013]{Souvik Bose}
\affiliation{Rosseland Center for Solar Physics, University of Oslo,P.O. Box 1029 Blindern, NO-0315 Oslo, Norway}
\affiliation{Institute of Theoretical Astrophysics, University of Oslo,P.O. Box 1029 Blindern, NO-0315 Oslo, Norway}
\affiliation{Lockheed Martin Solar \& Astrophysics Laboratory, Palo Alto, CA 94304, USA}
\affiliation{Bay Area Environmental Research Institute, NASA Research Park, Moffett Field, CA 94035, USA}
\author{Brad Neuberg}
\affiliation{Frontier Development Lab, Mountain View, CA 94043, USA}
\affiliation{SETI Institute, Mountain View, CA 94043, USA}
\affiliation{Planet, San Francisco, CA 94107, USA}
\author[0000-0003-2110-9753]{Mark C. M. Cheung}
\affiliation{Lockheed Martin Solar \& Astrophysics Laboratory, Palo Alto, CA 94304, USA}
\author[0000-0002-6203-5239]{Miho Janvier}
\affiliation{Universit\'e Paris-Saclay, CNRS, Institut d'astrophysique spatiale, Orsay, France}
\author[0000-0002-9672-3873]{Meng Jin}
\affiliation{SETI Institute, Mountain View, CA 94043, USA}
\affiliation{Lockheed Martin Solar \& Astrophysics Laboratory, Palo Alto, CA 94304, USA}
\author[0000-0002-2733-2078]{Yarin Gal}
\affiliation{OATML Group, Department of Computer Science, University of Oxford, UK}
\author[0000-0001-9854-8100]{At{\i}l{\i}m G\"{u}ne\c{s} Bayd{\rlap{\.}\i}n}
\affiliation{Department of Computer Science, University of Oxford, Oxford OX1 3QD, UK}
\keywords{Sun: activity, UV radiation, and general - Techniques: image processing, GPU computing - Methods: data analysis, telescopes - Open-source software}
\section{Introduction}
Since its launch in 2010, NASA's Solar Dynamics Observatory~\citep[SDO;][]{SDO_primary} has monitored the evolution of the Sun. SDO data has enabled researchers to track the evolution of the Sun's interior plasma flows over solar cycle 24 and beyond. It has also continuously monitored the evolution of the solar corona, capturing dynamical evolution at time-scales of seconds and minutes. This capability is due to the suite of four telescopes on the Atmospheric Imaging Assembly~\citep[AIA;][]{AIA} instrument, which captures full-Sun images at two ultraviolet (UV) bands, seven extreme UV (EUV) bands, and one visible band. The seven EUV channels are designed to capture photons from emission lines in highly ionized metals in plasmas at transition region (TR; $10^5$ K $\lesssim T \lesssim 10^6$ K) and coronal temperatures ($T \gtrsim 10^6$ K). This combination of channels with sensitivity to different temperatures allows researchers to track how transition regions and coronal plasmas heat and cool~\citep[e.g.,][]{Cheung:2015}, and to use these thermal histories to test theories of coronal heating and of flares.
The high spatial resolution ($\sim 1.5\arcsec$, $4096\times 4096$ pixels), high cadence (12 s for EUV channels) full-disk observing capability is possible because of SDO's ground system providing a sustained downlink rate of $\sim~67$ Mbps. The collection of continuous data, over more than one solar cycle, provides not only numerous opportunities to perform data-driven scientific studies but also research with the potential to help optimize future solar physics missions.
For instance, the idea of using SDO images for image-to-image translation has been explored in several papers, most notably by \cite{Baso2018, SDOML, Szenicereaaw6548, Park_2019, salvatelli2019}. Image-to-image translation can potentially provide a way to enhance the capabilities of solar telescopes with fewer channels or less telemetry than is available to SDO. The \emph{SDO image translation problem} can be defined as follows: given a set of $N$ (nearly) contemporaneous images taken in different EUV channels, can a model be developed which maps the $N$ input images to the image of a missing (not in input) EUV channel?
Notably, \cite{Lim_2021} adopted a widely used image translation method \citep[Pix2Pix,][]{pix2pix} to tackle the SDO image-translation problem and to understand which subset of channels can better translate other channels. They trained and evaluated models for all combinations of input channels for both $N=2$ and $N=3$ variants of the problem, and compared global image quality metrics to pick out the channel combinations that perform the best. For some channel combinations, the reported pixel-to-pixel correlation coefficient approaches unity.
In this paper, we build on the method presented in \cite{salvatelli2019} for one single channel and we delve deeper into the opportunities and the limitations of applicability of such ``virtual telescopes''. We focus on a permutation of a subset of channels (4 out of 10) and we explore in greater detail the quality of this synthetic generation on a number of scientifically motivated metrics (figures of merit) and in relation to periods and regions with different levels of solar activity.
Together with this paper we also open source the code used for the analysis~\cite{salvatelli_valentina_2022_6954828}\footnote{\href{https://zenodo.org/record/6954828.YumocezMJ-U}{Zenodo: ML pipeline for Solar Dynamics Observatory (SDO) data}}, which can be used by the community to train and evaluate similar models on the publicly available SDO dataset released by \cite{SDOML}.
\section{Data}
\label{sec.data}
The work presented in this project is based on data from SDO's AIA. The AIA instrument takes full-disk,
$4096 \times 4096$ pixel imaging observations of the solar photosphere, chromosphere, and corona in two UV channels and in seven extreme UV (EUV) channels. The original SDO dataset was processed in \cite{SDOML} into a machine-learning ready dataset of $\sim6.6$ TB (hereafter SDOML) that we leveraged for the current work.
The SDOML dataset is a subset of the original SDO data ranging from 2010 to 2018. Images are spatially co-registered, have identical angular resolutions, are corrected for the instrumental degradation over time and have exposure corrections applied. All the instruments are temporally aligned. AIA images in the SDOML dataset are available at a sampling rate of 6 min. The $512\times512$ pixel full-disk images have a pixel size of $\thicksim 4\farcs8$.
The images are saved in single-precision floating point to preserve the high dynamic range ($\gtrsim 14$ bits per channel per pixel). For numerical performance purposes, the images of each channel are re-scaled by a per-channel constant factor which is approximately the average count rate for that channel. The per-channel constant factors can be found in Tab.~\ref{tab:average_channels}.
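The per-channel rescaling (and the optional square-root variant used later in the experiments) can be sketched as follows. The constants below are illustrative placeholders, not the values of Tab.~\ref{tab:average_channels}.

```python
import numpy as np

# Sketch of the per-channel scaling: division by a constant roughly equal to the
# channel's average count rate, optionally followed by a square root.
# The constants are illustrative placeholders (assumption), not the real values.
channel_const = {"94": 10.0, "171": 2000.0, "193": 3000.0, "211": 1000.0}

def scale(x, ch, root=False):
    s = x / channel_const[ch]
    return np.sqrt(s) if root else s

def unscale(x, ch, root=False):
    s = x ** 2 if root else x
    return s * channel_const[ch]

x = np.array([50.0, 500.0, 5000.0])          # stand-in count rates
x_scaled = scale(x, "171", root=True)
x_back = unscale(x_scaled, "171", root=True)  # round-trips back to x
```

The round trip matters because synthetic images must be unscaled before any physical interpretation of count rates.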
\section{Methodology}
\label{sec.meth}
Our approach of synthesizing solar EUV images is to perform image translation from multiple input channels to one single output channel. For the development of this work we focused on the permutations of four channels (94, 171, 193, 211 \AA). These channels are sensitive to coronal plasmas at different temperatures~\citep{Cheung:2015}.
To perform the image translation we used a deep neural network \citep[DNN, ][]{goodfellow2016deep}; more specifically, we adopted a U-Net architecture \citep{Unet15}, an encoder--decoder with skip connections that was first designed for image segmentation on medical images. We used the Adam optimizer \citep{Optimizer} and Leaky ReLU~\citep{Maas13rectifiernonlinearities} activations, and implemented the code using the open source library PyTorch \citep{paszke_2017}. The full details of the adopted architecture are given in Fig.~\ref{fig.u-net}.
We limit the number of channels to four due to computational resource constraints. For the training and inference of the architecture presented above we used $4 \times$ NVIDIA Tesla T4s. We trained each model for 600 epochs.
For comparison we also experimented with a simpler baseline model, described by the following equation:
\begin{equation}
\label{Eq.liner_model}
Y_{\rm pred} = \alpha X_{1} + \beta X_{2} + \gamma X_{3} + \delta
\end{equation}
where $Y_{\rm pred}$ is the reconstructed pixel of the output channel, $X_i$ are the pixel values of the input channels; $\alpha$, $\beta$, $\gamma$ are the weights and $\delta$ the bias of the linear combination of the channels. $\alpha$, $\beta$, $\gamma$, $\delta$ are trainable parameters of the model.
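The linear baseline can be sketched numerically. As an illustration (an assumption; the paper trains $\alpha$, $\beta$, $\gamma$, $\delta$ as model parameters), the four coefficients are recovered here in closed form by least squares on synthetic per-pixel data.

```python
import numpy as np

# Sketch of the linear baseline Y_pred = alpha*X1 + beta*X2 + gamma*X3 + delta.
# Synthetic stand-in data; coefficients fitted by least squares (assumption:
# any standard fitting procedure recovers the same four parameters).
rng = np.random.default_rng(1)
n_pix = 10_000
X = rng.lognormal(size=(n_pix, 3))                    # stand-in input channel pixels X1..X3
true_w = np.array([0.5, -0.2, 1.3])
y = X @ true_w + 0.1 + 0.01 * rng.normal(size=n_pix)  # stand-in output channel pixels

A = np.hstack([X, np.ones((n_pix, 1))])               # append a column of ones for the bias delta
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
alpha, beta, gamma, delta = coef
y_pred = A @ coef                                     # reconstructed output channel
```

Because the model is linear per pixel, it cannot capture the non-linear couplings between EUV channels that the comparison in the Results section shows to be essential.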
The metrics we use to evaluate the accuracy of our results for each permutation are:
\begin{itemize}
\item The difference between predicted and ground truth images in the form of normalized mean squared error (NMSE; Eq.~\ref{nmse}) and normalized root mean squared error (NRMSE; Eq.~\ref{rnmse}). \\
\begin{equation}
{\rm NMSE}(y, \hat{y}) = \frac{\sum_{i=1}^{N}(y_i-\hat{y}_i)^2}{\sum_{i=1}^{N}{y_i}^2}
\label{nmse}
\end{equation}
\begin{equation}
{\rm NRMSE}(y, \hat{y}) = \frac{\sqrt{\frac{\sum_{i=1}^{N}(y_i-\hat{y}_i)^2}{N}}}{\overline{y}}
\label{rnmse}
\end{equation}
\item The structural similarity index \citep[SSIM;][]{Wang04imagequality}, a metric commonly used in computer vision to compute similarity between images, measuring the difference in terms of visually perceived texture and morphology. Identical images have SSIM equal to 1.
\item The average of NRMSE and SSIM, as described in Eq.~\ref{metric}. Lower values mean better performance in this metric. \\
\begin{equation}
{\rm Err}(y, \hat{y}) = \frac{{\rm NRMSE}(y, \hat{y}) + \left[1-|{\rm SSIM}(y, \hat{y})|\right]}{2}
\label{metric}
\end{equation}
\item The average pixel-to-pixel Pearson correlation coefficient.
\end{itemize}
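The first three metrics above can be implemented directly from their definitions. In this sketch the SSIM value is taken as an input (e.g. from \texttt{skimage.metrics.structural\_similarity}) rather than re-implemented.

```python
import numpy as np

# Minimal implementations of the evaluation metrics listed above.
def nmse(y, y_hat):
    """Normalized mean squared error: sum of squared errors over sum of y^2."""
    return np.sum((y - y_hat) ** 2) / np.sum(y ** 2)

def nrmse(y, y_hat):
    """Root mean squared error normalized by the mean of the ground truth."""
    return np.sqrt(np.mean((y - y_hat) ** 2)) / np.mean(y)

def err(y, y_hat, ssim_val):
    """Combined metric: average of NRMSE and (1 - |SSIM|). Lower is better."""
    return 0.5 * (nrmse(y, y_hat) + (1.0 - abs(ssim_val)))

y = np.array([1.0, 2.0, 3.0, 4.0])        # toy ground truth
y_hat = np.array([1.1, 1.9, 3.2, 3.8])    # toy prediction
```

For identical images all three metrics vanish (taking SSIM $=1$), which matches the conventions stated in the list.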
In order to assess how much the DNN is able to learn the physical correlations between channels and to correctly reproduce them in the synthetic images, we also evaluate the difference between the real and the synthetic covariance of the channels. To better understand the error, in addition to the standard covariance we compute the neighborhood covariance. In this case the output is a map of the same size as the input images, where each value in the map corresponds to the covariance over a squared patch of size $20\times20$ pixels centered on the pixel, as described in Eq.~\ref{eq.patch_covariance}. \\
\begin{equation}
\label{eq.patch_covariance}
cov_{patch} = \frac{\sum_{i}^{N}[(y_{i}-\bar{y})(\hat{y}_{i}-\bar{\hat{y}})]}{N-1}
\end{equation}\\
where $N$ is the total number of pixels in the patch.
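The neighborhood covariance map can be sketched with a straightforward sliding window. The patch size is reduced here to keep the toy example fast (the paper uses $20\times20$), and border pixels without a full patch are left as NaN (an implementation choice, not stated in the text).

```python
import numpy as np

# Sketch of the neighborhood covariance of Eq. (patch covariance): for every
# pixel, the sample covariance of real and synthetic images over a square
# patch centered on it.
def patch_covariance(y, y_hat, size=5):
    h = size // 2
    out = np.full(y.shape, np.nan)        # borders without a full patch stay NaN
    for i in range(h, y.shape[0] - h):
        for j in range(h, y.shape[1] - h):
            p = y[i - h:i + h + 1, j - h:j + h + 1].ravel()
            q = y_hat[i - h:i + h + 1, j - h:j + h + 1].ravel()
            out[i, j] = np.sum((p - p.mean()) * (q - q.mean())) / (p.size - 1)
    return out

rng = np.random.default_rng(2)
img = rng.random((32, 32))
cov_map = patch_covariance(img, img)      # identical images: covariance == patch variance
```

When the real and synthetic images are identical, each interior value reduces to the local sample variance, which gives a quick sanity check of the implementation.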
Each model has been trained on $6,444$ images ($1,611$ timestamps, one image per channel for each timestamp) in the intervals January $1^{st}$ 2011 to July $31^{st}$ 2011 and January $1^{st}$ 2012 to July $31^{st}$ 2012. For testing, $2,668$ images ($667$ timestamps) have been used, taken in the intervals August $1^{st}$ 2011 to October $31^{st}$ 2011 and August $1^{st}$ 2012 to October $31^{st}$ 2012. Each timestamp is at least $6$ hours apart from the closest ones. These time ranges have been selected to ensure we were testing on images significantly different from the training ones. Only timestamps for which all the channels of interest were available have been included in the above datasets.
\section{Experiments}
\label{sec.exp}
For this analysis we trained eight models using the data and architecture described in Sec.~\ref{sec.data} and Sec.~\ref{sec.meth}, two models for each of the four channel permutations. For each channel permutation we trained (1) a model where the input data was scaled by a constant factor (cf. Tab.~\ref{tab:average_channels}) and (2) a model where the square root of the input data was taken, in addition to the constant scaling. The second scaling technique is meant to explore the impact of pixels with extreme ranges on the training. Each model has been evaluated by studying both the aggregated performance on the full test data and the performance on specific timestamps, namely timestamps in the neighborhood of the Valentine's Day flare (2011-02-15 01:50:00 UT) and in a quiet day of the same month (2011-02-10 00:00:00 UT). The focus of these experiments is to evaluate the robustness of the image-to-image translation approaches in normal and extreme conditions of the Sun's activity. For comparison, we also trained four linear models, one for each of the four channel permutations, using Eq.~\ref{Eq.liner_model} and input scaled by a constant factor.
\section{Results}
\label{results}
\begin{table}
\begin{tabular}{lcc|cc|cc|cc}
\toprule
\textbf{Deep Neural Network} & \textbf{211\_sqr} & \textbf{211} & \textbf{193\_sqr} & \textbf{193} & \textbf{171\_sqr} & \textbf{171} & \textbf{94\_sqr} & \textbf{94} \\
\midrule
NMSE & 0.010024 & 0.008748 & 0.013414 & 0.013015 & 0.015270 & 0.010151 & 0.009482 & 0.013643 \\
NRMSE & 0.195127 & 0.182286 & 0.225717 & 0.222332 & 0.240829 & 0.196360 & 0.189773 & 0.227641 \\
$|$1 - SSIM $|$ & 0.040844 & 0.046189 & 0.022866 & 0.024522 & 0.030636 & 0.034892 & 0.114447 & 0.138455 \\
(NRMSE + $|$1 - SSIM$|$)/2 & 0.117985 & 0.114237 & 0.124292 & 0.123427 & 0.135732 & 0.115626 & 0.152110 & 0.183048 \\
\bottomrule
\end{tabular}
\caption{Performance of the DNN on different permutations of input/output channels in the set (94, 171, 193, 211 \AA{}) and for different scalings of the input data. In every column the input channels are all but the one indicated in the column name, which corresponds to the output channel. Each value is the mean over the whole test dataset. For each metric in this table lower is better. For 94 \AA{} the similarity index is higher than for the other channels; this can be explained by the fact that the average value in this channel is higher and the metric is affected by the absolute values. See Sec.~\ref{sec.meth} for an explanation of the metrics.}
\label{Tab-permutation}
\end{table}
\begin{table}
\centering
\begin{tabular}{ccccc}
\toprule
\textbf{Deep Neural Network} & \multicolumn{4}{c}{Model output} \\ \cline{2-5}
{Scaling} & 211~\AA & 193~\AA & 171~\AA & 94 ~\AA \\
\midrule
Non Root & $0.994 \pm 0.004$ & $0.991 \pm 0.006$ & $0.993 \pm 0.003$ & $0.991 \pm 0.003$ \\
Root & $0.993 \pm 0.004$ & $0.996 \pm 0.004$ & $0.990 \pm 0.005$ & $0.994 \pm 0.004$ \\
\bottomrule
\end{tabular}
\caption{DNN model. Average pixel-to-pixel Pearson correlation coefficient, mean and standard deviation over the full test dataset for permutations of input/output channels in the set (94, 171, 193, 211 \AA{}). For each channel combination the average pixel-to-pixel Pearson correlation coefficient was calculated for both trained models, with and without root scaling. The observed results are impressive: in all cases the correlation is above $0.99$.}
\label{Tab-correlation}
\end{table}
\begin{table}
\centering
\begin{tabular}{lc|c|c|c}
\toprule
\textbf{Linear Model} & \textbf{211} & \textbf{193} & \textbf{171} & \textbf{094} \\
\midrule
NMSE & 0.749594 & 0.742833 & 0.741476 & 0.875264 \\
NRMSE & 1.687336 & 1.679708 & 1.678174 & 1.823300 \\
1 - SSIM & 0.588910 & 0.441623 & 0.490644 & 0.976495 \\
(NRMSE + $|$1 - SSIM$|$)/2 & 1.138123 & 1.060665 & 1.084409 & 1.399897 \\
\bottomrule
\end{tabular}
\caption{For comparison with Tab.~\ref{Tab-permutation}, performance of the linear model on different permutations of input/output channels in the set (94, 171, 193, 211 \AA{}) for standard (no square root) scaling. The DNN consistently improves on these results by one order of magnitude in each of these metrics. The comparison demonstrates that non-linear patterns between channels are important for a correct reconstruction of the images.}
\label{Tab-permutation-linear}
\end{table}
In Tab.~\ref{Tab-permutation} we explore the permutations of three input channels and one output channel and the effect of applying a root scaling transformation to the input images. In addition, in Tab.~\ref{Tab-correlation} we show the pixel-by-pixel correlation for each of the permutations. We found that the same architecture produces similar reconstruction errors and correlation values over all the channels, with a NMSE of about 0.01. We observe that the similarity index of 94 \AA{} is worse by an order of magnitude with respect to the other channels; this can be explained by the fact that SSIM is not a normalized metric and the average test value for this channel is higher than for the others (see Appendix, Tab.~\ref{tab:average_scaled_channels}). The results are remarkable: for example, for 94 \AA{} the peak emission lies at a considerably higher temperature than the input channels \citep[see Fig.~1 of][]{Cheung:2015}, which makes the reconstruction a particularly challenging task. These results are in agreement with the results in \cite{salvatelli2019} and \cite{Lim_2021}. Please note that the values reported in Tab.~1 of \cite{salvatelli2019} are not normalized. The square-root scaling model shows roughly equivalent performance to the model with no square root applied to the input data, except for the channel 94~\AA{}.
It is interesting to compare the results in Tab.~\ref{Tab-permutation} with those in Tab.~\ref{Tab-permutation-linear}, where the same set of metrics is computed for the linear model. The DNN consistently improves by one order of magnitude over the linear model performance. This result clearly displays the value of using a DNN over a simpler model for the synthesis of the image. The comparison also demonstrates the strength of the non-linearity between EUV channels and the fact that it cannot be neglected for a meaningful reconstruction.
In order to further evaluate the performance of both models, we calculate in Tab.~\ref{Tab-correlation} the average pixel-to-pixel Pearson correlation for pixels inside the solar disk for each channel combination. In agreement with the results in Tab.~\ref{Tab-permutation}, the average pixel-to-pixel correlation shows that both models have a remarkable performance: none of the channel combinations has a correlation lower than 0.99. These results outperform all the channel combinations presented in \cite{Lim_2021}, which tried several combinations of EUV channel translations using the DL method ``Model B'' from \cite{Park_2019} and \cite{pix2pix}.
Notably, \cite{Lim_2021} did not report on other metrics we can use to compare the quality of the corresponding synthetic images. We demonstrate in the following analysis that the high visual quality of the images and the excellent pixel-to-pixel Pearson correlation values are not enough to guarantee the absence of artifacts which may impact the scientific utility of the synthetic images. This is illustrated in Fig.~\ref{fig.combined_plots}, Tab.~\ref{Tab-covariance211_flare} and Fig.~\ref{fig.covariance_plots}. Whether the discrepancies between real and synthetic images are sufficiently small to neglect clearly depends on the science case. For this reason, we argue that metrics such as the covariance between real and synthetic images and the accuracy by intensity should be standard metrics to consider when reporting on models for the synthesis of solar images.
While useful to evaluate the overall performance of the algorithm, the aggregated metrics do not provide insights about the range of validity of the algorithm and the reasons behind its errors. Both to understand how to possibly improve the model and to clarify what could be a concrete use of the algorithm in future missions, it is helpful to evaluate the prediction uncertainty at different intensities. For all the permutations, in Fig.~\ref{fig.combined_plots} we show the uncertainty on the predicted count rate (top) and the pixel distributions (bottom) as a function of the real count rate. These plots highlight three important factors:
\begin{itemize}
\item The algorithm does well over about three orders of magnitude of true count rate (intensity), and its error increases substantially when trying to predict the highest and lowest count rates. This means the global metrics would be much more favorable if these extreme pixels were removed. This behavior also implies the algorithm could be used with confidence for applications that do not require accuracy on the most extreme count rate values.
\item The difficulty in predicting the pixels with the highest and the lowest count rates is not surprising when looking at the count rate distributions (histograms in Fig.~\ref{fig.combined_plots}). The tails of the distributions, where the model's error and uncertainty increase, are severely underrepresented. This implies the image-to-image translation algorithm has not been trained, or has been trained only in a very limited way, on pixels with these count rate values. This observation also provides a clear indication of which strategies can improve the algorithm performance, i.e., techniques to compensate for the magnitude imbalance rather than larger architectures.
\item Applying root scaling to the input images during the training tends to improve the results for low count rate pixels and reduces the uncertainty on the prediction. Some channels (193, 211 \AA{}) are more positively impacted than others by this change. This behavior is explained by the fact root scaling improves the sensitivity to small values during the training. We hypothesize that further exploration of different scaling strategies for the training can also be a way to extend the accuracy of the algorithm over more orders of magnitude.
\end{itemize}
Examples of the resulting recovered images when adopting the DNN architecture described in Sec.~\ref{sec.meth} and a model with root scaled input, is given in Fig.~\ref{fig.gt_real_diff_quiet_root} and Fig.~\ref{fig.gt_real_diff_active_root}. The root scaling is reverted in the illustrated images. The first are example of reconstructions on a quiet day, where the Sun shows less activity, while the second are during the well known Valentine's Day flare. In these figures, the first column corresponds to the original images, while the second column corresponds to the ones generated by the DNN. Based on visual inspection, the synthetic image reproduces the morphology of coronal loops in the ground truth image for channels 211 and 171 \AA{}, and the prediction is instead a bit less realistic for 193 \AA{} for both quiet and active days. Clearly during the quiet day the all three channels have better performance than in the Valentine's day. It also interesting to observe that 94 \AA{} is the best performing channel during the quiet day, but the worst performing channel during the active day. This aligns to the results showed at Tab.\ref{Tab-permutation} and \ref{Tab-correlation}. It is unsurprising since the input AIA channels 94, 171 and 193 \AA{} channels have sensitivity to the plasma observed in the 211 \AA{} channel. This outperforms previous results in \cite{Park_2019}, where a conditional generative adversarial network (CGAN) had been trained to translate HMI magnetograms to AIA images.
In the third column of Fig.~\ref{fig.gt_real_diff_quiet_root} and Fig.~\ref{fig.gt_real_diff_active_root} we included the residuals relative to the real image and in the fourth column of the same figures we display the differences between the real and generated images. Dark blue and bright red correspond to the regions where the differences are the largest, and can be seen to be located where the active regions (shown as the brightest regions in the original and generated images) are.
Interestingly, the model well reconstructs coronal holes (CH) in both the active and quiet Sun cases described above, despite the low signal in these regions. This could be due to the fact that the physics of these regions is easier to model than active region coronal loops as the field lines are open and have relatively simpler configuration. A quantitative comparison between CH and full-disk is shown in Fig.~\ref{fig.CH_analysis_193} for channel 193 \AA{} (for the quiet Sun data represented in Fig.~\ref{fig.gt_real_diff_quiet_root}), where CHs are most distinctly visible due to their contrast. The segmentation mask identifies the CH regions based on the simple but robust adaptive intensity threshold technique \citep[similar to the technique employed in][]{2012SoPh..281..793R,2015SoPh..290.1355R}, and the histograms show the difference between the ground truth and the predicted intensities (on a pixel-by-pixel basis) for pixels both within the CH boundaries and the full-disk. It is to be noted that the segmentation mask is constructed for both the predicted and ground-truth images independently using the same intensity threshold criterion. Clearly, the predicted AIA intensities are well constrained not just over the full disk but also on the relatively quieter CH areas.
\begin{table}
\centering
\begin{tabular}{l|c|cccc}
\toprule
Timestamp & Channel & True Cov & Pred Cov & Diff & \%Diff \\
\midrule
2011-2-15-0-0 & 94 & 0.278 & 0.256 & 0.022 & 7.9 \\
2011-2-15-1-0 & 94 & 0.262 & 0.246 & 0.016 & 5.9 \\
2011-2-15-2-0 & 94 & 13.9 & 92.3 & -78.5 & -565 \\
2011-2-15-3-0 & 94 & 1.69 & 1.54 & 0.150 & 8.9 \\
2011-2-15-4-0 & 94 & 0.392 & 0.375 & 0.017 & 4.4 \\ \\
2011-2-15-0-0 & 171 & 0.117 & 0.115 & 0.002 & 2.1 \\
2011-2-15-1-0 & 171 & 0.114 & 0.112 & 0.002 & 1.9 \\
2011-2-15-2-0 & 171 & 1.29 & 13.1 & -11.8 & -913 \\
2011-2-15-3-0 & 171 & 0.186 & 0.178 & 0.008 & 4.3 \\
2011-2-15-4-0 & 171 & 0.139 & 0.136 & 0.003 & 2.3 \\ \\
2011-2-15-0-0 & 193 & 0.048 & 0.047 & 0.001 & 1.4 \\
2011-2-15-1-0 & 193 & 0.047 & 0.047 & 0.001 & 1.3 \\
2011-2-15-2-0 & 193 & 0.191 & 0.605 & -0.414 & -216 \\
2011-2-15-3-0 & 193 & 0.065 & 0.063 & 0.003 & 4.0 \\
2011-2-15-4-0 & 193 & 0.055 & 0.054 & 0.001 & 2.1 \\
\bottomrule
\end{tabular}
\caption{Errors in reconstructing the covariance between 211 \AA~ and the other 3 channels when using the synthetically produced image for 211 \AA{} in correspondence of a highly energetic event (Valentine's Day flare on 2011-2-15:1:50:00 UT). Interestingly the reconstructed covariance has a much higher error than what seen in a quiet period, cf. Tab.~\ref{Tab-covariance211_quiet}, at least 1h before the flare has been detected.}
\label{Tab-covariance211_flare}
\end{table}
\begin{table}
\centering
\begin{tabular}{l|c|cccc}
\toprule
Timestamp & Channel & True Cov & Pred Cov & Diff & \%Diff \\
\midrule
2011-2-13-0-0 & 94 & 0.1506 & 0.1504 & 0.0002 & 0.1 \\
2011-2-13-1-0 & 94 & 0.1672 & 0.1654 & 0.0018 & 1.1 \\
2011-2-13-2-0 & 94 & 0.1601 & 0.1588 & 0.0013 & 0.8 \\
2011-2-13-3-0 & 94 & 0.1713 & 0.1718 & -0.0004 & -0.3 \\
2011-2-13-4-0 & 94 & 0.1652 & 0.1650 & 0.0002 & 0.1 \\ \\
2011-2-13-0-0 & 171 & 0.1213 & 0.1210 & 0.0002 & 0.2 \\
2011-2-13-1-0 & 171 & 0.1261 & 0.1254 & 0.0007 & 0.5 \\
2011-2-13-2-0 & 171 & 0.1227 & 0.1223 & 0.0004 & 0.3 \\
2011-2-13-3-0 & 171 & 0.1241 & 0.1244 & -0.0002 & -0.2 \\
2011-2-13-4-0 & 171 & 0.1226 & 0.1219 & 0.0007 & 0.6 \\ \\
2011-2-13-0-0 & 193 & 0.0449 & 0.0448 & 0.0000 & 0.1 \\
2011-2-13-1-0 & 193 & 0.0470 & 0.0468 & 0.0002 & 0.4 \\
2011-2-13-2-0 & 193 & 0.0439 & 0.0439 & -0.0000 & -0.1 \\
2011-2-13-3-0 & 193 & 0.0465 & 0.0468 & -0.0003 & -0.7 \\
2011-2-13-4-0 & 193 & 0.0471 & 0.0470 & 0.0001 & 0.2 \\
\bottomrule
\end{tabular}
\caption{Errors in reconstructing the covariance between 211 \AA{} and the other 3 channels when using the synthetically produced image for 211 \AA{} in correspondence of a quiet period few days before Valentine's Day flare. The percentage difference is below 1\% for all the channels.}
\label{Tab-covariance211_quiet}
\end{table}
In Tab.~\ref{Tab-covariance211_flare} and Tab.~\ref{Tab-covariance211_quiet} we report the reconstruction error on the covariance between channels, over four hours, for the case 94, 171, 193 \AA{} to 211 \AA{} in correspondence of a flare and on a normally quiet day. Not surprisingly, in light of the results above, the reconstructed covariance has great accuracy (less than 1\% of error) on a quiet day but its error increases in several orders of magnitude in correspondence of the extreme event. The results reported in Tab.~\ref{Tab-covariance211_flare} and Tab.~\ref{Tab-covariance211_quiet} are obtained using the model without square root scaling, the most sensitive to extreme values. They should therefore be interpreted as an upper bound on the error that a similar image translation would have.
With the aim of better understanding the source of error, in addition to the standard covariance, we compute a covariance map with spatial mean on a rolling squared window of $20\times20$ pixels, see Eq.~\ref{eq.patch_covariance} for definition. The resulting covariance map in correspondence of a flare is shown in Fig.~\ref{fig.covariance_plots}. The map clearly shows the error of the model is localised in the area of the flare and it does not affect the rest of the map, in agreement with the localized reconstruction error shown in Fig.~\ref{fig.gt_real_diff_active_root}. This result confirms the results of the ``virtual telescope" would be accurate for most of the pixels, also in presence of an extremely energetic event, but for the specific area where the event happens. Similar results hold for the covariance in other channel permutations.
Incidentally, the above covariance result suggests an increase in its reconstruction error could also be used as a method for early detection of flares as the error starts to increase before the actual flare's event. Variations in reconstruction errors are commonly used in machine learning as anomaly detection methods (e.g. ~\cite{An2015VariationalAB, anomaly_detection}. While directly detecting an increase in the data count could be found to be more effective, the sensitivity to non-linearity of the reconstruction task could produce a stronger or complementary signal that we think is interesting to consider in future work.
\section{Concluding remarks}
\label{section:conlusions}
In this study, we analyzed the performance of an image-to-image translation DNN model in accurately reconstructing extreme ultra-violet images from a solar telescope, focusing on the permutations of four channels. We found that the reconstruction error is extremely accurate over three orders of magnitude in pixel intensity (count rate) and it rapidly increases when considering extremely low and high range of intensities. This behavior is explained by the pixel count rate distribution in the training set, the rarer the value the more difficult for the DNN to provide an accurate prediction. Similarly, when looking at the reconstruction error on the covariance at different times, we found the model can synthetically predict the covariance with less than 1\% of error on quiet days but its performance is severely affected in correspondence of flares, in the active regions.
The results show that a virtual telescope would produce accurate estimations on a range of intensities but, if built following the methodology here described, would not be able to accurately reproduce extremely energetic events like flares. How and in which limit the reconstruction error for such specific events could be improved is an area of research that we leave for future work. The rareness of flare events poses a challenge in training machine learning algorithms to accurately reproduce such events. Based on the results above, we think adopting oversampling techniques and different scaling strategies would improve at least in some measure the performance. To overcome this challenge, other strategies like automatic detection of anomalies could also be adopted in combination with image-to-image translation, in the design of a virtual solar telescope.
In this paper, we did not explore the dependence of model performance from spatial resolution. In principle smaller subpixel scales could have information that improve the global performance of image synthesis and we think this is an important question to be addressed in future work. Importantly, we expect the deterioration of the synthetic accuracy for rare events to happen regardless of the adopted scale because it is caused by the scarcity of examples for training.
\hspace{2cm}
\textbf{Acknowledgments}
This project has been initiated during the 2019 NASA Frontier Development Lab (FDL) program, a public/private partnership between NASA, SETI and industry partners including Lockheed Martin, IBM, Google Cloud, NVIDIA Corporation and Intel. We thank all our FDL mentors for useful discussion in the early stage of the projec, as well as the SETI Institute for their support during the program and beyond. L.F.G.S acknowledges support from NASA under Grant No. 80NSSC20K1580. M.C.M.C. and M.J. acknowledge support from NASA’s SDO/AIA (NNG04EA00C) contract to the LMSAL. S.B. gratefully acknowledges support from NASA contracts NNG09FA40C (IRIS) and 80NSSC20K1272. We thank the NASA’s Living With a Star Program, which SDO is part of, with AIA, and HMI instruments on-board.
Software: We acknowledge for CUDA processing cuDNN \citep{cudnn}, for data analysis and processing we used Numpy \citep{numpy}, Pandas \citep{pandas}, SciPy \citep{scipy} and scikit-learn\citep{scikit-learn}. All plots were done using Matplotlib \citep{matplotlib}.
\clearpage
\appendix
\section{Scaling units for each AIA channel}
\label{section:appendix_average}
\begin{table}[htb!]
\centering
\begin{tabular}{cc}
\toprule
AIA channel (\AA) & Scaling unit [DN/s/pixel] \\
\midrule
94 & 10 \\
171 & 2000 \\
193 & 3000 \\
211 & 1000 \\
\bottomrule
\end{tabular}
\caption{Table of AIA channel scaling units.}
\label{tab:average_channels}
\end{table}
\begin{table}[htb!]
\centering
\begin{tabular}{cc}
\toprule
AIA channel (\AA) & $\overline{Y_{test}}$ \\
\midrule
94 & 26 \\
171 & 0.13 \\
193 & 0.087 \\
211 & 0.26 \\
\bottomrule
\end{tabular}
\caption{Table of average values over the test set after scaling by channel}
\label{tab:average_scaled_channels}
\end{table}
\section{Code description}
\label{section:App_A_codebase}
In this appendix we describe the modular software used to produce the analysis and made freely available online on GitHub under GPL licence. Users are invited to consult the code documentation for additional detail.
\begin{itemize}
\item \textit{src/sdo} - contains all the modules required to run the pipeline plus additional functionalities that can be used as standalone library to interact with the SDO-ML dataset v1.
\item \textit{config} - contains some configuration templates.
\item \textit{scripts} - contains some analysis scripts specific to the paper, they can be used to reproduce the results.
\item \textit{notebooks} - contains some notebooks specific to the paper that can be used to reproduce some of the plots in the paper and some examples to show how to use some functionalities (e.g. how to use the dataloader to load timestamps of interest).
\end{itemize}
The most relevant modules under \textit{src} are:
\begin{itemize}
\item \textit{src/sdo/datasets/sdo\_dataset.py} this module contains the \textit{SDO\_Dataset} class, a custom Dataset class compatible with \textit{torch.utils.data.DataLoader}. It can be used to flexibly load a train or test dataset from the SDO local folder. Data can be selected according to the 3 criteria:
\SubItem{asking for a specific range of years and a specific frequency in months, days, hours, minutes}
\SubItem{passing a file that contains all the timestamps of interest}
\SubItem{passing two timestamps ranges and a desired step}
This class assumes a pre-computed inventory of the SDO dataset exists.
\item \textit{src/sdo/pipelines/virtual\_telescope\_pipeline.py} this module contains the \textit{VirtualTelescopePipeline} class, the class that contains all the training and test logic of the modeling approach. This class also handles the metrics logging and the files saving. Beyond being used for reproducing the results of this work, this class can be used as example of how to integrate the dataloader above with other PyTorch models for a different set of experiments.
\item \textit{src/sdo/parse\_args.py} this module contains the description of all the parameters that can be passed as input to the pipeline and their default values.
\end{itemize}
\section{Additional Figures}
In this appendix we report some additional results not included in the main text.
\clearpage
\bibliography{virtual_telescope}{}
\bibliographystyle{aasjournal}
|
Title:
Anomalies in Physical Cosmology |
Abstract: The $\Lambda$CDM cosmology passes demanding tests that establish it as a good
approximation to reality. The theory is incomplete, of course, and open issues
are being examined in active research programs. I offer a review of less widely
discussed anomalies that might also point to hints to a still better
cosmological theory if more closely examined.
| https://export.arxiv.org/pdf/2208.05018 |
\begin{keywords}
{cosmological tests; astrophysical anomalies; Local Supercluster; supermassive black holes; cosmic structure}
\end{keywords}
\section{\bf Introduction}\label{intro}
\noindent{\it The world is so full of a number of things,}
\noindent{\it I'm sure we should all be as happy as kings.}
{\qquad\qquad\qquad\qquad Robert Louis Stevenson}\medskip
\noindent In the empiricist philosophy of this essay\footnote{This is a much revised version of the unpublished sentiments expressed in the draft paper, Peebles (2021).} experiments and observations have two key functions: the puzzles they reveal inspire theories, and the theories are judged by the degree of empirical success of their predictions. The relative importance of the two functions can be quite different in different cases. Phenomena played a small though certainly significant role in forming Einstein's intuition about a philosophically satisfactory theory of gravity, the general theory of relativity. It has been said that general relativity was ``there, to be recognized,'' essentially by pure thought. But in the empiricist philosophy Einstein's great accomplishment is to be particularly celebrated because the theory Einstein found passes many tests that make the empirical case that general relativity is a remarkably useful approximation to the reality whose nature we seek to discover. Phenomenology played a much greater role when Maxwell gave up seeking a mechanical model for the ether and took the intuitive step of completing the field equations that pass such broad empirical tests. Quantum physics, and the standard model for particle physics, grew in an intermediate manner, a combination of brilliant ideas and helpful phenomenology. The same is true of our present physical cosmology, which grew out of suggestive phenomenology and intuitive leaps, some of which failed while others prove to be remarkably successful. The result, the $\Lambda$CDM theory, agrees with a broad range of well-checked predictions. But the thought that there is an even better classical cosmology to be found is encouraged by intuition, as in the feeling that the stark simplicity of the dark sector of the theory surely is inadequate, and from curious issues in the phenomenology, which is the topic of this paper.
To reduce the chance of misunderstanding I emphasize that the empirical case that the $\Lambda$CDM theory is a good approximation to reality remains compelling. But I argue in this paper that we have empirical evidence that there is a still better theory to be found.
The goal of an improved cosmology, and the far greater great goal of a full reconciliation of the quantum and relativity principles, might be approached by strokes of insight, following Einstein's example, or by incremental advances, following Maxwell: searches for improvements inspired by anomalies. By this I mean empirical evidence that seems likely to disagree with what is expected from standard and accepted theories and ideas. An anomaly might prove to be only apparent, resolved by improved empirical evidence or a better appreciation of the predictions of the theory we already have. Past experience suggests that others will prove to be real, and will be a valuable stimulus to the exploration of new or previously neglected ideas about physical theory. This includes cosmology, and maybe even unification of the basic principles of physics. But a final theory, if there is such a thing, cannot be empirically established, only checked to the extent that world economies can afford.
Another consideration follows from the phenomenon of multiples in scientific discovery: apparently independent appreciations of the same idea (as reviewed by Merton 1961). Examples are common; examples in the development of the present standard cosmology are reviewed in Peebles (2020a; 2022a). Some are results of advances in technology or theory that suggest ideas that are in the literature, and could be appreciated by more than one person. Other examples are results of communication by many means, including nonverbal hints to directions of thinking, which can cause growing community recognition of ideas that are ``there, to be recognized,'' in previously neglected phenomena. This invites the hopeful thought that there are still other phenomena that might be recognized to be useful hints to the improvement of physics, if examined more closely. The thought is particularly relevant for cosmology because the universe has room for an immense number things. I argue that some seem to me to call for closer attention.
Research programs tend to pursue systematic explorations of scientific issues that the community agrees are important. An example in cosmology is the issue of cores and cusps in the distributions of dark and baryonic matter in the centers of galaxies. The issue is important, a guide to the establishment of constraints on the nature of dark matter and the theory of galaxy formation. Research in progress is examining the theory and observations. This is good science: close attention to a specific question might reveal something interesting. But it is good science also to cast about for more obscure issues that seem curious but have not yet been examined as carefully as might be useful. My purpose in this essay is to offer thoughts about anomalies in cosmology that are not so widely discussed, and might aid advances in this subject if given more attention.
Section~\ref{sec:anomalies} is a commentary on more familiar anomalies in physical science that seem relevant to cosmology. The subjects of the next three sections are anomalies, real or apparent, that are less well advertised for the most part. Section~\ref{sec:distributions} reviews open issues in the large-scale distributions of radio galaxies, AGNs and clusters of galaxies. The curious properties of the Local Void are considered in Section~\ref{sec:localvoid}, and apparent anomalies in the properties of galaxies are reviewed in Section~\ref{sec:galaxies}. Section~\ref{SummaryRemarks} presents an overview of what I consider to be the main points of this essay.
We must consider first the reliability of the basis by which phenomena are judged to be anomalous within accepted thinking. The empirical case that the relativistic hot big bang $\Lambda$CDM cosmology is a useful approximation to reality is outlined in Section~\ref{sec:EmpiricalBasis}. This theory is not exact, but it does not seem likely to be far off. Thus my assessments of issues and anomalies in cosmology take it that the tests have persuasively established that the $\Lambda$CDM theory is a good approximation to reality that likely requires refinement.
The anthropic principle is another consideration by which theories are motivated and their anomalies evaluated and maybe resolved. The line of thinking has been particularly influential in research in physical cosmology, and so deserves mention here. What is more, we must expect that social constructions, aided by the anthropic principle, will be forced on the scientific community eventually, if the research community survives that long, as theory outstrips the ability to test predictions. But I argue in Section~\ref{sec:Anthropic} that the anthropic approach is of doubtful use for the present purpose because we have immense room between the extremes of the unapproachably large and unapproachably small scales for continued empirical exploration.
My citations to the literature are limited to a few introductory papers and recent review articles, which I hope aids readability while indicating the present state of thinking about the science and the evidence. I do not comment on issues in cosmology where I have no thoughts to add to what already is in the literature.\footnote{\label{fn:S0s}Choices of interesting issues to explore in what we aim to be the advance of objective science must be subjective. The standard cosmology predicts a small but important primeval abundance of the lithium isotope $^7$Li. The prediction is clear, but I am not comfortable assessing the astrophysics of production and depletion of lithium. Planar alignments of satellites of galaxies might be significant, but I worry about our powerful ability to see patterns even in noise. Thinking about the future of the universe, maybe a big crunch or big rip, offers great adventures of the mind, but it is only empirically interesting if there is something to observe, maybe remnants of the last phase of a cyclic universe. I am particularly uneasy about my lack of attention to the S0 galaxies that are common in clusters, and present but not common among nearby galaxies. An S0 looks somewhat like a spiral galaxy whose arms have been erased leaving a disk of stars usually with a relatively large stellar halo. S0s may have readily interpretable things to teach us about cosmic structure formation, but I have not grasped them.} The declarative sentences in this essay are my opinions. For other reviews of independent selections of issues and anomalies in cosmology I refer to Abdalla, Abell{\'a}n, Aboubrahim, et al. (2022) and Perivolaropoulos and Skara (2022). A different philosophy informs the assessment of the empirical situation in cosmology by Subir Sarkar (2022).
\subsection{Empirical Basis for the Standard Cosmology}\label{sec:EmpiricalBasis}
To help clarify discussions of tests and anomalies we need a definition of the standard $\Lambda$CDM cosmology. The version used in this paper is discussed in Section~\ref{sec:definition}. Section~\ref{sec:tests} outlines the tests that establish this cosmology as a useful approximation to reality.
\subsubsection{Definition}\label{sec:definition}
Einstein's cosmological principle, or assumption, is that the universe is close homogeneous and isotropic in the large-scale average. To be more explicit about the role of this assumption in the standard $\Lambda$CDM theory used in this paper I offer the following definition. The theory applies the standard physics of matter, radiation, and Einstein's general theory of relativity with its cosmological constant to a cosmologically flat universe that is a spatially stationary, isotropic, random process with a close to scale-invariant power law power spectrum of Gaussian and adiabatic departures from homogeneity. This trimmed-down theory has eight free parameters (the density parameters in ordinary matter, dark matter, the CMB, and neutrinos with negligible rest masses; with Hubble's constant, the primeval Gaussian process amplitude and power law index, and the optical depth for scattering of the CMB by intergalactic plasma).
The mathematician and statistician Jerzy Neyman (1962) pointed out that the rational statement of Einstein's cosmological principle is that the universe is assumed to be a realization of a ``stationary stochastic process,'' or as it is put here a stationary random process. The process is what a theory with given parameters is supposed to predict. A measurement to test the theory and constrain the parameters uses a sample, which is considered a realization of the process. A fair sample is a realization that has statistical properties that are usefully close to the predictions of the process. Analyses of what is arguably a fair sample test whether the theoretical process is an adequate approximation to reality.
Neyman remarked that the empirical evaluation of a deterministic classical theory is necessarily indeterministic, because no finite sample can determine parameters to arbitrarily tight precision. To my knowledge Neyman was the first to apply this thinking to our strictly limited sample of the universe, our sample of the stationary random process of the cosmological principle. We can add another reason for the indeterministic nature of physical science: our theories are incomplete. A notable example is the inconsistency of the quantum and relativity principles (Sec.~\ref{sec:QM&GR}).
To make progress we must add to the definition of the $\Lambda$CDM theory the less specific provision that the realization of the random process that is our observable universe is close enough to a fair sample to allow reliable computations of testable predictions. Aspects of this situation are discussed in Section~\ref{sec:theacausaluiverse}. A related provision is that the primeval power law power spectrum of the primeval mass distribution must be truncated to avoid a divergence of spacetime curvature fluctuations. The measured scalar spectral index, $n_s\simeq 0.96$, is associated with primeval spacetime curvature fluctuations that scale with size $r$ as $\delta\phi\propto r^{(1-n_s)/2}\sim r^{0.02}$. The power spectrum has to be truncated at some large scale, but it will be assumed that the wavenumber at the truncation doesn't much matter. Variants of this standard picture, such as a Tilted Universe (Turner 1991), in which we might not have a fair sample, are not explicitly considered here.
Other authors use different definitions of the cosmological principle. Secrest, von Hausegger, Rameez, et al. (2022) take the principle to be ``that the universe on large scales must appear to be the same to all observers, independent of their location.'' I am indebted to Subir Sarkar for pointing out that Milne (1933) presented essentially the same definition. This is of historical interest because Milne introduced the term, Einstein's cosmological principle, and pointed out that Hubble's law follows from this principle without application of any deeper theory. But consider that fluctuations in the distributions of extragalactic objects on scales we can observe within the present Hubble length were at earlier epochs not observable. Why should there not be density fluctuations beyond the present Hubble length that are observable if at all only through their indirect effects? Neyman's philosophy accommodates this. Still more aspects of the situation are discussed in Section~\ref{sec:theacausaluiverse}.
\subsubsection{Cosmological Tests}~\label{sec:tests}
Surveys of the tests that make the case that the $\Lambda$CDM cosmology is a useful approximation to reality, and that an even better theory will look much like $\Lambda$CDM, are widely available in the literature. My contribution is in Peebles (2020a). I offer here a reminder of key points in my positive assessment of the situation.
The measured statistical patterns in the space distribution of the galaxies and in the angular distribution of the thermal cosmic microwave background radiation, the CMB, agree with what is expected from the remnants of acoustic oscillations of the plasma and radiation that acted as a fluid up to recombination at redshift $z\sim 1000$. The precision of the CMB anisotropy measurements and the tight consistency with the $\Lambda$CDM predictions is deeply impressive. But even more impressive is the consistency with the theory and observation of the pattern in the space distribution of the galaxies that in theory also is a remnant of the acoustic oscillations. This cosmological test of consistency is based on two quite different phenomena, the space distribution of the galaxies observed at redshifts less than unity and the pattern in the angular distribution of the CMB that in theory was formed at redshift $\sim 1000$. The patterns in the two distributions are measured by different methods of observation and data reduction, yet they agree with the same $\Lambda$CDM universe. This consistency is a demonstration that we have a good approximation to reality that is as close to convincing as we can get in natural science.
Other tests of consistency, in examples based on what happened over a broad range of redshifts, add weight to the case for the $\Lambda$CDM theory. The values of the cosmic mean mass densities in baryons and dark matter, the helium mass fraction, and the cosmological constant that are needed to fit the CMB anisotropy measurements that probe the state of the universe at $z\sim 1000$ agree with the baryon density that fits the formation of the isotopes of hydrogen and helium at $z\sim 10^9$, the abundances of helium in the Sun, interstellar plasma, and planetary nebulae, the dynamical mass density derived from relative motions of the galaxies at redshifts less than unity, the mass density and cosmological constant derived from the supernova redshift-magnitude relation at $z\sim 1$, the stellar evolution ages of the oldest known stars, and the angular size distance as a function of redshift at $z\lap 1$.
There are discrepancies, real or apparent. The value of Hubble's constant, $H_{\rm o}$, needed to fit the CMB anisotropy measurements differs by about 10\% from the value derived from the relation between distances and recession speeds of relatively nearby galaxies. This well-discussed Hubble tension, if real, is a 10\% error arising from tracing the expansion of the universe to the present by a factor of a thousand from the epoch of formation of the patterns in the distributions of the baryons and CMB. I count this as an impressive success to be added to the rest of the evidence that the $\Lambda$CDM theory is a useful approximation to reality, though of course not exact. If the anomaly in the two measures of $H_{\rm o}$ is real then surely other anomalies are to be found. Another tension is the evidence that the normalization of the mass fluctuation power spectrum required to fit the CMB anisotropy measurements differs from the normalization required to fit measurements at low redshifts: gravitational lensing, the galaxy two-point correlation function in redshift space, and counts of clusters of galaxies. The normalizations, measures of $\delta M/M$, again differ between high and low redshifts by about 10\% (in the compilation by Perivolaropoulos and Skara 2022).
Einstein's general theory of relativity passes tight tests on the scale of the Solar System down to the laboratory, on scales $\lap 10^{13}$~cm. Cosmology applies this theory at distances on the order of the Hubble length, $10^{28}$~cm. As a general policy, would you trust an extrapolation of a theory by fifteen orders of magnitude in length scale? But we have a prior example, the enormous range of scales of the successful applications of quantum physics, from superconductors to detection of the Higgs boson. The success of the cosmological tests we have so far gives considerable weight to this great extrapolation of general relativity to its application in the $\Lambda$CDM theory.
The key point from these considerations is that we have a broadly consistent story from a considerable variety of ways to observe the nature of the universe by different methods of observation and analysis, by groups that are operating independently and, it is reasonable to expect, interested in finding anomalies that may prove to be interesting. It would be ridiculous to suppose that the network of tests is wrong or misleading but has converged to apparent consistency by some combination of accidental and unintended errors, let alone conspiracies. We have instead excellent reason to expect that a better theory to be discovered will look a lot like $\Lambda$CDM, because the $\Lambda$CDM universe has been shown to look a lot like what is observed. This is the basis for the conclusion that we have a useful approximation to reality, and the hope that empirical anomalies will offer hints to improvements.
The search for improvements includes exploration of the consequences of modified gravity physics. It would be a curious coincidence if the theory required modification just on reaching the scales of cosmology, however. A better immediate prospect for improvement is the physics of the dark sector of $\Lambda$ and dark matter, which seems artificially simple.
We cannot prove as a theorem that no other physical theory could predict the reasonable degree of consistency of theory and observation of these many different ways to probe the universe; we do not do theorems in natural science. But we can conclude from these tests that we have a compelling case that the $\Lambda$CDM theory is a useful approximation to reality, and we can hope to see improvements of the theory as the observations improve.
\subsection{The Anthropic Principle}\label{sec:Anthropic}
If society continues to be willing and able to support curiosity-driven research in the natural sciences there will come a time when physicists have found a final theory of everything that is internally consistent and agrees with all available tests, but the assessment by tests of predictions will be impossible. That would be impossible in principle, if something like a multiverse is involved, or impossible in practice, because the world economy cannot afford tests of the predictions of this final theory. It will be reasonable and sensible to consider this theory to be a persuasive nonempirical establishment of reality (Dawid 2013). But it will be sensible also not to be quite sure of what must be a social construction (Peebles 2022).
We have a precursor to this dilemma, the anthropic principle. It offers one way to deal with an anomaly: postulate an ensemble of all possible universes and observe that we could flourish only in one suited to our needs expressed so as to account for the anomaly. Reactions to this line of thought differ. Some dismiss it as a ``just so'' story. Steven Weinberg (1989) pointed out that it is one way to account for the quantum vacuum energy density, which looks likely to be quite unacceptably large. This section is meant to explain my feeling that the anthropic principle is not an appropriate guide to considerations of anomalies in physical cosmology.
Robert Henry Dicke (1961) introduced a weak form of this argument. Dicke pointed out that the universe has to have been expanding for at least a few gigayears. The time is needed to allow for the evolution of several generations of stars that produced the heavy elements we need, then the formation and cooling of the solar system, and then the evolution of the species up to observers who take an interest in the expanding universe. This is a consistency condition. Better put, it is the assumption that Nature abhors logical inconsistencies.
An argument based on a more adventurous form of the anthropic principle starts from the evidence that there are enormous numbers of planets around stars in our galaxy. This allows room for many planets capable of hosting beings similar to us. The frequency distribution in cosmic times when these beings flourish on different planets might be expected to peak at about $10^{10}$~yr, because this allows time for natural evolution while avoiding the serious slowing of star formation at much greater cosmic times. This time is about what is observed on our planet; we flourish about when might be expected. An empirical test from a modest sampling of what is on nearby planetary systems might be possible, eventually.
Weinberg (1989) discussed a stronger form that postulates a statistical ensemble of universes, a multiverse. If, for example, universes in the ensemble that have shorter expansion times are more numerous, then the odds are that we live in one of the universes with the minimum expansion time consistent with what is required to allow our existence, which seems about right. Weinberg applied this thinking to the curiously small value of the cosmological constant compared to what is expected from quantum physics. Weinberg postulated that the laws of physics in each universe in the ensemble would be different. We could only flourish in a universe with physics similar to ours on the level we require, but that degree of similarity could allow a broad spread of values of the quantum vacuum energy density, $\Lambda$, provided that it depends on deeper physics that does not affect our well-being. There would be universes in the multiverse that satisfy this condition, with a value of $\Lambda$ that is not so negative that the universe stops expanding and collapses too soon for the span of time we required, and not so large and positive that the rapid expansion driven by $\Lambda$ would have prevented the gravitational assembly of galaxies. If universes in the ensemble that have larger absolute values of $\Lambda$ are more common, as might be expected from the large value expected of the quantum vacuum, then we would expect to find ourselves in a universe with a value of $\Lambda$ that is about as large as is consistent with our existence. This is about what is observed.
Martin Rees (2020) rightly celebrates the concept of the multiverse as the next layer in the sequence of revolutions in our understanding of the nature of the world around us. Ideas about this have passed through many layers: the Ptolemy universe with the earth centered in the crystal spheres that hold the astronomical objects; the Copernican universe with the sun at the center; the Kapteyn universe centered on the Milky Way galaxy; Hubble's realm of the nebulae with no center; and the multiverse of which our universe is but a speck. The layers of discovery go down in scale too. Henri Poincar\'e (1902) remarked that the Mariotte/Boyle law is wonderfully simple and accurate for many gases, but these gases examined in sufficiently fine detail break up into the complex motions of enormous numbers of particles. Poincar\'e asked whether gravity examined in sufficiently fine detail might also depart from the simplicity of Newton's law into complex behavior. Poincar\'e suggested we consider that ``then again [there may be] the simple under the complex, and so on, without our being able to foresee what will be the last term.'' Maybe underlying the particle physicists' concept of a theory of everything are yet more layers of Poincar\'e's successive approximations. And why should the layers of structure on large scales not continue to multiverses and beyond?
Weinberg (1989) cautioned that the anthropic upper bound on the absolute value of $\Lambda$ is well above the bound that could be set by astronomical observations we had then. There are other examples of what seem to be excessive satisfaction of the anthropic condition. We need at least one gravitational potential well similar to that of a galaxy to contain and recycle the debris from a few generations of stars to have produced the chemical elements we require. But did we need the observed enormous number of galaxies? Would the already large number of planetary systems among the $\sim 10^{11}$ stars in the Milky Way have been adequate? If the Milky Way would serve, and given that it is present, does physics require all those other galaxies? In implementations of cosmological inflation the numbers of galaxies, and their sizes and densities, depend on the amplitude of the primeval fluctuations in spacetime curvature associated with the primeval departures from an exactly homogeneous mass distribution. A universe identical to ours except that the fluctuations in spacetime curvature are an order of magnitude smaller would develop far fewer galaxies that are less dense, but would that be a problem for our existence? We have so many galaxies to spare. If in the multiverse the universes with smaller primeval curvature fluctuations were more common then application of the anthropic consideration would lead us to expect far fewer galaxies than observed; we don't need so many. If universes with larger primeval curvature fluctuations were more common we would expect to find ourselves in an accidental island of tranquility among the chaos of violent mergers and relativistic collapses. Again, neither situation is observed.
Weinberg's argument is good science; it explores a possible aspect of reality that accounts for the great difference between the value of the $\Lambda$ of cosmology and the value expected from quantum physics. But there are troubling aspects of this approach. The excess of baryons over antibaryons in the Local Group could be attributed to the anthropic principle: we live in a universe drawn from the multiverse that has the excess baryon density acceptable for our existence. But theories of particle physics and physical conditions in the early universe might predict the excess. Given the choice of seeking this better physics or resorting to the anthropic principle, I expect most would choose the former. It can be awkward to base arguments on the Panglossian principle that we inhabit the best of all possible universes suited to our existence.
\section{Anomalies in Physical Science}\label{sec:anomalies}
Some anomalies that have long resisted interpretation are so familiar that they tend to pass without mention. An example is Wigner's (1960) ``Unreasonable Effectiveness of Mathematics in the Natural Sciences.''
\subsection{Physics is Successful} \label{PhysicsisSuccessful}
Eugene Paul Wigner (1960) wrote about the ``two miracles of the existence of laws of nature and of the human mind's capacity to divine them.'' Natural scientists are conditioned to accept these two miracles, or phenomena, as self-evident; they are essential for the discoveries of well-tested science that makes possible the vast range of technology we all experience. But we should pause on occasion to consider that Wigner's two miracles are assumptions. Experience supports them, but of course never offers a proof.
The starting miracle to add to these two is that the world around us exists, and has existed in some form or another for a very long time, with the properties one would expect of physical reality. In natural science we must take this as given. We assume then Wigner's two miracles and notice that they satisfy two consistency conditions. First, if macroscopic physics were not reproducible and lawful it would be difficult to imagine the natural evolution of the species: what is the use of adapting to physical properties of matter that can change on time scales less than cosmic? Second, if it were not possible to discover useful approximations to the laws of nature then I suppose we would not be marveling about it. These thoughts could be taken to suggest that the assumption of lawful behavior, which is such a good approximation, might be found to fail as we probe ever more deeply into the nature of the world, at levels where erratic behavior need not have had a deleterious effect on living matter. It calls to mind quantum physics.
\subsection{The Relativistic and Quantum Principles are Inconsistent}\label{sec:QM&GR}
In the empiricist philosophy the use of quantum physics to account for the spectra of galaxies at redshift $z=10$ and the formation of the light isotopes at $z\sim1000$, applied in the strongly curved spacetime of relativity, need not be problematic even though the quantum and relativity principles are not consistent. Maybe this is a signal of an intrinsic inconsistency; maybe physics is not exactly lawful. The improbably large estimate of the quantum vacuum mass density discussed in Section~\ref{sec:Lambda} is a real problem. More abstract, but a worry, is whether the quantum physics of observables operating on state vectors is a useful approximation when extrapolated to describe the whole universe. Would a quantum universe in a pure state really decohere into the classical world of general relativity? Would a viable theory of our universe taken to be a mixed state be any different from our present theory? A reasoned assessment of such issues is beyond my ability and the scope of this essay.
\subsection{The Symmetry of Matter and Antimatter is Broken}\label{sec:baryonnumber}
A familiar anomaly that is essential to our existence is the pronounced local excess of matter over antimatter. The sensitive tests for anti-helium by the Alpha Magnetic Spectrometer indicate the Milky Way contains little antimatter (Poulin, Salati, Cholis, Kamionkowski, and Silk 2019; Aguilar, Ali Cavasonza, Ambrosi, et al., 2021). We can add the absence of detectable gamma-ray annihilation radiation from dwarf galaxies that have plunged into the Milky Way. And since satellites of the Milky Way and our neighbor M~31 surely have intermingled before falling into one or the other of the large galaxies, without producing detectable gamma rays, it seems clear that the Local Group is made of baryons, with a tiny fraction of antibaryons.
Should we revisit the question of whether some galaxies are made of anti-baryons? A sharp division of regions of matter and antimatter could invite an unacceptably large surface mass density in domain walls, but an imaginative physicist might find a way around that. The literature on possible extensions of the standard model for particle physics and the physical conditions that would account for baryogenesis continues to grow. I am not competent to review the state of the art.
\subsection{Should Local Physics be Evolving?}\label{sec:EvolutionPhysics}
We have been given leave by string theory to imagine that the dimensionless parameters of physics are evolving, because in the varieties of string theory it is difficult to see what fixes their values. Thus Uzan (2003, Sec.~VI.B) concludes that ``as yet no complete and satisfactory mechanism for the stabilization of the extra dimension and dilaton is known," and with it stabilization of the dimensionless parameters of fundamental physics. This complements an older thought based on a measure of the strength of the gravitational interaction,
\beq
{\cal G} = {Gm^2\over \hbar c}\sim 10^{-38}, \label{gravitystrength}
\eeq
where $m$ is the mass of a nucleon. This number is remarkably small, one might say anomalously so. An anthropic explanation is that if ${\cal G}$ were much larger it would make stellar evolution times too short to allow for evolution of the species by natural selection, even on a suitably small planet near a suitably low mass long-lived star.
Dirac (1937) argued that it is difficult to imagine how the tiny value of ${\cal G}$ could follow from a fundamental theory that might be expected to produce numbers such as $\pi$ and $e$, and integers of modest size and their fractional powers. Dirac suggested that a hint might be drawn from the comparison of ${\cal G}$ to the ratio of the atomic time $e^2/mc^3$ to the Hubble expansion time, then estimated to be $t \sim 2\times 10^9$~yr, giving the dimensionless number
\beq
{\cal T} = {e^2\over t~mc^3} \sim 10^{-38}. \label{atomictime}
\eeq
It is curious that the two exceedingly small numbers, ${\cal G}$ and ${\cal T}$, are similar. The value of ${\cal T}$ is decreasing, assuming the universe is evolving and local physics is not. Maybe ${\cal G}$ is small, and comparable to ${\cal T}$, because ${\cal G}$ is evolving to its natural value, zero, along with ${\cal T}$.
The measure of the electromagnetic interaction,
\beq
\alpha = {e^2\over\hbar c}\sim {1\over 137}, \label{eq:alpha}
\eeq
is not such a small number. But if ${\cal G}$ is not constant then $\alpha$ surely need not be fixed either.
Einstein's cosmological constant, $\Lambda$, represents the effective vacuum energy density,
\beq
\rho_\Lambda = {3 H_{\rm o}^2(1 - \Omega_m)\over 8 \pi G}
\sim 10^{-29}\,{\rm g\,cm^{-3}}.
\eeq
The ratio of this quantity to the Planck mass density, $\rho_{\rm Planck}=c^5/\hbar G^2$, is
\beq
{\cal L} = {\rho_{\Lambda}\over\rho_{\rm Planck}} \sim 10^{-123}.
\eeq
The empirical evidence is that $\cal L$ is not zero (as reviewed in Secs.~\ref{sec:tests} and \ref{sec:Lambda}), but rather this dauntingly small number. Maybe it calls for application of the anthropic principle, as Weinberg (1989) argued. Or, following Dirac's thinking, maybe this number is so small because local physics has been evolving for a long time, and with it the value of $\cal L$.
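The orders of magnitude of ${\cal G}$, $\alpha$, $\rho_\Lambda$, and $\cal L$ quoted in equations~(\ref{gravitystrength}), (\ref{atomictime}), (\ref{eq:alpha}), and the ratio to the Planck density can be checked with a few lines of Python. This is a sketch with rounded CGS constants; the electron mass is assumed in Dirac's atomic time $e^2/mc^3$, as in his original comparison, which puts $\cal T$ at $\sim 10^{-40}$, comparable to $\cal G$ at the order-of-magnitude level of this discussion.

```python
import math

# CGS constants (CODATA, rounded)
G    = 6.674e-8     # cm^3 g^-1 s^-2
hbar = 1.0546e-27   # erg s
c    = 2.9979e10    # cm s^-1
e    = 4.8032e-10   # esu
m_p  = 1.6726e-24   # g, nucleon mass
m_e  = 9.1094e-28   # g, electron mass (assumed for the atomic time)

# Strength of gravity, eq. (gravitystrength): ~ 6e-39
G_script = G * m_p**2 / (hbar * c)

# Fine-structure constant, eq. (eq:alpha): ~ 1/137
alpha = e**2 / (hbar * c)

# Dirac's ratio of the atomic time to a 2 Gyr expansion time
t = 2e9 * 3.156e7                      # s
T_script = e**2 / (t * m_e * c**3)     # ~ 1e-40

# Vacuum energy density and its ratio to the Planck density,
# assuming H0 = 70 km/s/Mpc and Omega_m = 0.3
H0 = 70.0 * 1e5 / 3.086e24             # s^-1
Omega_m = 0.3
rho_L = 3 * H0**2 * (1 - Omega_m) / (8 * math.pi * G)  # ~ 1e-29 g cm^-3
rho_Planck = c**5 / (hbar * G**2)                      # ~ 5e93 g cm^-3
L_script = rho_L / rho_Planck                          # ~ 1e-123

for name, val in [("G", G_script), ("alpha", alpha), ("T", T_script),
                  ("rho_Lambda", rho_L), ("L", L_script)]:
    print(f"{name:10s} = {val:.2e}")
```

The point of the exercise is how far these pure numbers sit from anything a theory of everything might naturally produce.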
Dicke was fascinated by the thought that the laws of physics might change as the universe evolves. My impression is that this was at least in part because checking on the idea led to fascinating explorations of the many possible sources of empirical evidence for or against evolution, drawn from the laboratory, geology, astrophysics, and on to cosmology, always with close attention to the empirical content (e.g. Dicke 1964). My PhD dissertation under his direction was on the theory and empirical constraints on the evolution of the strength $\alpha$ of the electromagnetic interaction. Dicke played an important role in setting up the Lunar Laser Ranging experiment that has yielded tight tests of physics, including the evolution of ${\cal G}$. The experiment has established that at the present epoch the strength of the gravitational interaction is not evolving faster than about one percent of the rate of expansion of the universe (Williams, Turyshev, and Boggs 2004). Dicke's program led Bharat Ratra and me to explore the thought that the value of the cosmological constant $\Lambda$ is so small compared to what is expected from quantum physics because $\Lambda$ is not a constant: it has been slowly evolving to its only reasonable value, zero. Ratra and Peebles (1988) present this thought and a model in which it happens.
The search for evidence of evolution of the effective value of the vacuum energy density $\Lambda$ under its new name, dark energy, is now widely discussed and well supported; an example is the Dark Energy Survey, DES (DES Collaboration 2022). It is equally good science to test whether the strength of the electromagnetic interaction, $\alpha = e^2/\hbar c$, might evolve. Advances in technology to test this are discussed in recent papers by Murphy, Molaro, Leite, et al. (2022) and Webb, Lee, and Milakovi{\'c} (2022). So why isn't there a well-supported Fine-Structure Survey, FSS?
The challenge posed by Dirac (1937) and Dicke (1964) remains: discover what accounts for the measure of the strength ${\cal G}$ of the gravitational interaction in equation~(\ref{gravitystrength}) that is so far from what a theory of everything might be expected to predict. A challenge of the same sort is to account for the great difference between the values of the effective vacuum energy density $\Lambda$ of cosmology and the Planck mass density suggested by quantum physics (Sec.~\ref{sec:Lambda}). Maybe these pronounced anomalies will be explained by subtle aspects of a deeper theory that predicts the tiny values of ${\cal G}$ and $\cal L$. Maybe we must resort to the anthropic philosophy. Or maybe both ${\cal G}$ and $\cal L$ have been decreasing, ${\cal G}$ following a course of evolution that happens to have escaped the tight constraint from the Lunar Laser Ranging experiment.
\subsection{The Standard Cosmology is Singular}
The $\Lambda$CDM theory defined in Section~\ref{sec:definition} predicts that the expansion of the universe traces back to a singular state of arbitrarily large density. It is usually supposed that Nature abhors singularities, and that this one necessarily points to incompleteness to be remedied by a better theory. The most widely discussed possible extension of $\Lambda$CDM is cosmological inflation (Guth 1981). It removes the singularity, but Borde, Guth, and Vilenkin (2003) argue that even eternal inflation cannot have been eternal back to the arbitrarily remote past. This is not an argument against inflation, eternal or otherwise, only that the term, eternal, cannot be the whole picture. We make progress by successive approximations.
When the concept of cosmological inflation first gained general attention an implication was taken to be that space sections are flat with close to Gaussian adiabatic departures from homogeneity and a power law power spectrum slightly tilted from scale-invariance. This was before each of these conditions was observationally established, and it no longer matters whether inflation really predicts all this. In the empiricist philosophy of this essay this history makes inflation particularly interesting, though of course the empirical case is not yet persuasive because tests of predictions are still scant.
\subsection{The Standard Cosmology is Acausal}\label{sec:theacausaluiverse}
Hubble's (1936) ``Realm of the Nebulae,'' the observed galaxies, does not have a noticeable edge. This is consistent with the $\Lambda$CDM theory defined in Section~\ref{sec:definition}, but it is an anomaly because in this theory distant galaxies observed in well-separated parts of the sky have not been in causal contact no matter how far back in time the expansion is traced, to the singularity. So how did the galaxies ``know'' how to resemble each other?
The resolution offered by the cosmological inflation picture is that there was a time in the early universe when a near exponential rate of expansion produced a far larger horizon, resulting in causal connection across all we can see. The successful empirical tests of cosmology outlined in Section~\ref{sec:tests}, which depend on this acausality, are in turn evidence of the effect of inflation in some form. At the time of writing, however, we have only a modest empirical basis for assessments of specific theories of this aspect of what happened in the remote past.
In the standard theory the gravitational growth of primeval departures from an exactly homogeneous mass distribution, which is discussed in Section~\ref{sec:localgravity}, is acausal, as follows. Let the universe be cosmologically flat, ignore the mass in radiation and the pressure of matter, and assign time-orthogonal coordinates that eliminate the decaying mode of the departure from a homogeneous mass distribution. Then in linear perturbation theory the motion of the matter relative to the general expansion of the universe is
\beq
v^\alpha(\vec x,t) = {Ha(t)f(\Omega)\over 4\pi}
{\partial\over\partial x^\alpha} \int d^3x'{\delta(\vec x',t)\over |\vec x' - \vec x|}.
\label{eq:peculiaracceleration}
\eeq
Here $H$ is Hubble's constant, $a(t)$ is the expansion parameter, $f\simeq \Omega^{0.6}$ where $\Omega$ is the density parameter, and $\delta(\vec x,t)$ is the fractional departure from a homogeneous mass distribution. We see that in the gravity physics of the standard cosmology a mass concentration $\delta(\vec x',t)$ that is in principle too far away to be observed can produce a flow $v^\alpha(\vec x,t)$ that in principle can be observed. Grishchuk and Zel'dovich (1978) may have been the first to recognize this.
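The acausal character of equation~(\ref{eq:peculiaracceleration}) can be illustrated with a toy finite-difference check: a single overdense cell placed arbitrarily far from the observer still contributes a nonzero gradient, hence a nonzero flow directed toward the concentration. This is a sketch in arbitrary units, with the prefactor $Ha f(\Omega)/4\pi$ dropped and the integral reduced to one source cell; it is not the full convolution.

```python
import numpy as np

def integral(x, src, dV=1.0):
    # Contribution of one unit-overdensity cell at src to the
    # integral d^3x' delta(x') / |x' - x| in eq. (peculiaracceleration)
    return dV / np.linalg.norm(np.asarray(src) - np.asarray(x))

src = np.array([1.0e4, 0.0, 0.0])  # concentration far beyond any survey volume
x0 = np.zeros(3)                   # the observer's location
eps = 1.0e-3

# v^alpha is proportional to the gradient of the integral at x0,
# estimated here by central differences along each axis
grad = np.array([(integral(x0 + eps * e, src) - integral(x0 - eps * e, src))
                 / (2.0 * eps) for e in np.eye(3)])
print(grad)  # small but strictly positive x-component: flow toward src
```

However distant the source cell, the induced gradient never vanishes; only the initial condition of a stationary random process tames this in the standard theory.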
The situation illustrated in equation~(\ref{eq:peculiaracceleration}) is acausal, but we have learned to live with it by placing it in the initial condition that the universe is a stationary random process. This means that in the early universe the cosmic structure we observe would have been waves in the mass distribution with tiny amplitude and spread across far broader lengths than the particle horizon. Put another way, in standard $\Lambda$CDM the statistical homogeneity of the universe and its primeval power spectrum of departures from homogeneity have acausal origins.
Another aspect of the acausality of our present cosmology arises in the theory and observation of the angular distribution of the thermal cosmic background radiation temperature $T(\theta, \phi)$ as a function of angular position across the sky. It is conveniently expressed as the spherical harmonic expansion
\beq
T(\theta, \phi) = \sum_{\ell, m} a_\ell^m Y_\ell^m(\theta, \phi). \label{eq:harmonicexp}
\eeq
In the standard theory the real and imaginary parts of the expansion coefficient $a_\ell^m$, if measured in many different realizations of the universe, would have Gaussian distributions around zero mean. On small scales, large degree $\ell$, there are in effect many observations of realizations of the random process across the sky, so the measured values of $|a_\ell^m|^2$, averaged over $m$, have little scatter, and so does the prediction. This allows the impressively close checks of the $\Lambda$CDM theory over a large range of values of $\ell$. The situation is different at small $\ell$, large angular scale, because the $|a_\ell^m|^2$ are averaged over only a few values of $m$. The result is an uncertain measurement to be compared to an uncertain prediction that depends on what is in parts of the universe we cannot observe in principle.
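This scatter is the familiar cosmic variance: for Gaussian $a_\ell^m$ the estimate $\hat C_\ell$, the average of $|a_\ell^m|^2$ over the $2\ell+1$ values of $m$, is distributed as $\chi^2_{2\ell+1}/(2\ell+1)$ times the true $C_\ell$, so its fractional scatter is $\sqrt{2/(2\ell+1)}$, large at small $\ell$. A Monte Carlo sketch, assuming Gaussian coefficients with unit $C_\ell$:

```python
import numpy as np

rng = np.random.default_rng(1)

def chat_fractional_scatter(ell, nreal=20000):
    # For Gaussian a_lm with unit C_l, \hat C_l is a mean of 2l+1
    # squared standard normals (a chi^2 with 2l+1 degrees of freedom,
    # divided by 2l+1); return its scatter across many realizations.
    z = rng.standard_normal((nreal, 2 * ell + 1))
    chat = (z**2).mean(axis=1)
    return chat.std()

for ell in (2, 20, 200):
    print(ell, chat_fractional_scatter(ell), np.sqrt(2.0 / (2 * ell + 1)))
```

At $\ell = 2$ the scatter is about 63\% of $C_\ell$ itself, while at $\ell = 200$ it has fallen to about 7\%, which is why the quadrupole is an uncertain measure of an uncertain prediction while the high-$\ell$ spectrum tests the theory tightly.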
\subsection{Why Dark Matter?}\label{DM}
The starting idea for nonbaryonic dark matter was that it might be a new family of neutrinos with the standard V-A coupling to their leptons and neutrino rest mass $\sim 3$~GeV. The mass was chosen so the remnant abundance from the thermal production and annihilation of these hypothetical neutrinos in the hot early universe would be interesting for cosmology. It was introduced, independently as far as I know, in five papers: Hut (1977); Lee and Weinberg (1977); Sato and Kobayashi (1977); Dicus, Kolb, and Teplitz (1977); and Vysotskij, Dolgov, and Zel'dovich (1977). This candidate for dark matter has become known as weakly interacting massive particles, or WIMPs. Discussions of possible detection of WIMPs began with Steigman, Sarazin, Quintana, and Faulkner (1978), who pointed out that interactions of a sea of massive neutrinos with themselves and the baryons might allow accumulation of these neutrinos in stars and planets, with observable effects, as in what became known as dark stars. Cold dark matter was more formally added to the relativistic cosmological model by Peebles (1982), for the purpose of showing how the smooth distribution of the CMB can be reconciled with the clumpy distribution of the galaxies in the picture of gravitational formation of cosmic structure.
The first laboratory attempts to detect interactions of WIMPs with ordinary matter began in the 1980s. Schumann (2019) reviews the considerable advances in sensitivity since then in a considerable variety of experimental designs, mass scales, and effective isolation from cosmic rays and other local noise. There are other candidates for dark matter, with active research aimed at their detection. Examples are the lightest supersymmetric partner, axions, fuzzy dark matter, supersymmetric dark matter, and black holes. The searches for signs of these objects and more, in the laboratory and astronomical observations, have been energetically pursued for four decades, and at the time of writing they have not yielded a generally accepted detection of any form of dark matter (apart from the small contributions by the neutrinos in the three known lepton families). Several aspects of this situation are to be considered.
First, the absence of detection despite many years of great effort does not falsify the assumption of nonbaryonic dark matter in the $\Lambda$CDM theory. This dark matter is an essential postulate, but the theory places no requirement on its place in an extended standard model for particle physics. More broadly put, we must bear in mind that we have no guarantee that we can discover how to fit well-established phenomena into satisfactory theories. The success so far in turning anomalies into physical theories that pass challenging tests has been so productive that the failure to detect dark matter despite considerable effort might be considered anomalous on historical grounds. But to repeat: the failure of detection would conflict with experience in physics, but it would not conflict with the $\Lambda$CDM theory.
Second, we now have in physics two types of matter, baryonic and dark. Properties of the former are known in great detail; properties of the latter are only roughly constrained. Ideas about how baryonic matter formed look promising but have not yet converged on a generally accepted theory (Sec.~\ref{sec:baryonnumber}). Ideas about how dark matter formed must be fragmented because they depend on how dark matter fits the rest of particle physics, which is not known. Since dark matter does not seem to be necessary for our existence the Panglossian philosophy of the anthropic principle discussed in Section~\ref{sec:Anthropic} would have it that the creation of dark matter must necessarily have accompanied the creation of the baryonic matter we certainly need. It could account for the otherwise curious coincidence of the comparable amounts of baryonic and dark matter, as in asymmetric dark matter (e.g., Zurek 2014).
Third, the elegance and predictive power of the rich physics of known matter inspires the thought, or hope, that the dark sector surely is more interesting than a gas of freely moving particles with initially small velocity dispersion along with a constant mass density that has a distinctly odd value. It makes sense to explore the idea that the physics of the dark sector resembles elements of our established particle physics. Examples include the Sommerfeld/Coulomb enhancement of scattering by a Yukawa potential, in the influential paper by Arkani-Hamed, Finkbeiner, Slatyer, and Weiner (2009), and the scalar field equation~(2) for Fuzzy Dark Matter in the influential discussion by Hui, Ostriker, Tremaine, and Witten (2017). But there are other possibilities. Maybe the dark matter is in whole or part black holes that were present before the earliest stars. The thought has a long history (e.g., Zel'dovich and Novikov, 1967; Carr and Hawking, 1974) and continues to look interesting (e.g., Carr, K{\"u}hnel, and Sandstad, 2016; Cappelluti, Hasinger, and Natarajan, 2022). Or maybe dark matter is something new.
Do galaxies always contain dark matter? Since most of the dark matter is in the outskirts of a galaxy, possible exceptions would be galaxies that have been tidally stripped, and the dwarfs that might have formed by dissipative settling from tidal streams. Apart from such effects, dark halos are expected to be universal in the standard $\Lambda$CDM cosmology. There are possibly interesting challenges to this prediction. The large S0 galaxy NGC~3115 is in the field, which is unusual because most S0s at this luminosity are in clusters. If it has a dark matter halo with mass typical of its stellar mass then the halo of NGC~3115 must be much less dense than usual, the halo core radius much broader (Cappellari, Romanowsky, Brodie, et al. 2015). The low surface brightness satellites NGC1052-DF2 and NGC1052-DF4 of the elliptical NGC~1052 also look like exceptions. Keim, van Dokkum, Danieli, et al. (2022) argue that ``the dark matter halo masses of these galaxies cannot be much greater than their stellar masses.'' It is too soon to declare a challenge to the $\Lambda$CDM theory from the evidence of galaxies with little dark matter, but the development of the empirical evidence will be worth following.
What use is dark matter anyway? We could live on a planet in a solar system in a universe that is identical to ours except that the matter is all baryonic in standard forms, with no dark matter. The larger baryon density would result in a lower residual ionization and a larger molecular hydrogen abundance at decoupling, and the onset of galaxy formation would be delayed by the coupling of all matter to the CMB up to redshift $z\sim 1000$. I am not aware of analyses of the effects on the formation of stars and planets in young galaxies, but there are so many stars in our universe that I expect this alternative universe without dark matter would have ample homes for observers. We could live in a universe similar to ours except that the baryon mass fraction is much smaller, though not zero. Gravity would gather dark matter in halos similar to those of our galaxies, but with far fewer baryons. The dissipation of energy by these baryons would be slowed by the smaller baryon density, though aided by the larger residual ionization allowed by the lower baryon density. If this universe continued expanding into the sufficiently remote future then the baryons in a massive dark halo such as the one around the Milky Way would eventually lose enough energy to become dense enough to collapse to stars and planets and observers. These observers would see far fewer galaxies forming stars, but it is difficult to see how that would adversely affect their well-being. In short, our presence does not seem to require an anthropic explanation of the dark matter. Maybe its presence is purely accidental. Maybe it is an anomaly to be resolved.
\subsection{Why Dark Energy?}\label{sec:Lambda}
We have evidence of detection of Einstein's Cosmological Constant, $\Lambda$, from the BAO signature in the CMB angular distribution; the consistent BAO signature in the galaxy spatial distribution; the supernova redshift-magnitude relation; the comparison of stellar evolution ages and the cosmic expansion time; and the dynamical measurements of the cosmic mean mass density. If these pieces of evidence were seriously wrong the consistency of the $\Lambda$CDM cosmology with these very different ways to probe the universe would be far more improbable than most of us would be willing to consider. The community conclusion is instead that we have a compelling empirical case for the presence of something that acts as Einstein's $\Lambda$. This is an argument of reasonableness, of course, not a theorem.
In the standard cosmology we flourish not long after the cosmological constant $\Lambda$ and the mean mass density in matter made equal contributions to the expansion rate. This curious, one might say unlikely, coincidence used to be considered a good argument against the presence of the $\Lambda$ term and for the scale-invariant Einstein-de~Sitter model in which the universe is expanding at escape speed whenever we happen to measure it. The argument has been falsified; we must learn to live with $\Lambda$. The anthropic principle accounts for the coincidence, at least broadly, by the argument that we are in a universe in the multiverse in which the absolute value of $\Lambda$ is about as large as is consistent with our existence (Weinberg 1989). Must we leave it at that? The issue is pressing because it proves to be difficult to see another way out of the expectation that the quantum vacuum mass density is quite unacceptably large.
It is worth reviewing the case for reality of the quantum zero-point energy. Consistency of the theory and measurements of binding energies of molecules requires taking account of zero-point energies. The well-tested consistency of energy and active and passive gravitational masses requires that this real zero-point energy of matter gravitates. The same quantum and gravity physics applies to the electromagnetic field. But the sum of the electromagnetic zero-point energies over laboratory wavelengths amounts to a gravitating mass density far greater than allowed in a relativistic cosmology. Jordan and Pauli (1928) recognized the problem and proposed that the zero-point energy of the electromagnetic field is not real. This was despite the empirical evidence they had of the reality of the zero-point energy of matter fields. How could they, and we, justify distinguishing between zero-point energies based on the same physical theory?
There also are the positive and negative zero-point energies of all the other fields of particle physics, with prescriptions for ultraviolet truncation, to be added to the contributions to the stress-energy tensor by field binding energies. This looks challenging to get right, but the sum surely is vastly different from an acceptable mass density in the relativistic cosmology.
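The scale of the problem is easy to exhibit. For a single massless bosonic degree of freedom the sum of zero-point energies $\hbar ck/2$ over field modes up to a wavenumber cutoff $k_c$ amounts to the vacuum energy density
\beq
\rho_{\rm vac}c^2 = \int_0^{k_c} {4\pi k^2\,dk\over (2\pi)^3}\,{\hbar ck\over 2} = {\hbar c\, k_c^4\over 16\pi^2},
\eeq
which grows as the fourth power of the cutoff. A cutoff at the Planck scale exceeds the mass density associated with the empirical $\Lambda$ by some 120 orders of magnitude, and even a cutoff at laboratory wavelengths is far too large, which is the problem Jordan and Pauli faced.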
Standard physics is independent of the velocity of the observer. If this is true of the quantum vacuum energy then in general relativity its stress-energy tensor must be of the form $g_{\mu\nu}\Lambda_{\rm qm}$, where $\Lambda_{\rm qm}$ is a constant. This is the form of Einstein's cosmological constant. It would be an elegant result except for the woeful difference between $\Lambda_{\rm qm}$ and the empirical $\Lambda$.
What are we to make of this? Maybe a symmetry principle to be discovered forces the value of the quantum vacuum energy density to vanish, the only natural and reasonable value. Then the cosmological $\Lambda$ would have to be a new parameter of nature. One could instead turn to the anthropic argument discussed in Section~\ref{sec:Anthropic}. Or maybe the value of $\Lambda$ is decreasing from the large value postulated in the inflation picture, and is approaching its natural value, zero, but slowly enough that we flourish while $\Lambda$ still is slightly different from zero. This is discussed further in Section~\ref{sec:EvolutionPhysics}. These thoughts are awkward. Maybe nature agrees and has something better for us to find.
\subsection{Why Magnetic Fields; Why not Cosmic Strings?}
The existence of cosmic magnetic fields is clear; they rank as an anomaly because of the difficulty of accounting for their presence (e.g., Wielebinski and Beck 2005). Cosmic strings and other topological defects are anomalous because they are natural extensions of standard particle physics, yet they are not part of the standard $\Lambda$CDM theory. As has been said of particle theory, ``what is not forbidden is required.'' This hint from a proven theory is not to be lightly disregarded.
The magnetic field threaded through the Milky Way is made visible by the tendency of interstellar dust particles to be aligned with long axes either perpendicular or parallel to the local magnetic field. The dust absorbs starlight and reradiates the energy at longer wavelengths, preferentially with the electric field of the radiation parallel to the alignment of the dust. The effect is observed in the polarization of starlight that passes through dust clouds and is partially absorbed, and it is observed in the polarization of the radiation reemitted by the dust at longer wavelengths. I recommend looking at the wonderful map of the magnetic field threading the Milky Way from observations of this polarized radiation obtained by the ESA Planck Satellite.\footnote{Click on \url{https://www.esa.int/ESA_Multimedia/Missions/Planck/(result_type)/images} and scroll down.} At greater distances and on larger scales the evidence of magnetic fields is less direct and an active line of research.
The formation of cosmic magnetic fields might be understood within standard physics applied to what is known of the astrophysics (e.g., Daly and Loeb 1990; Kulsrud and Zweibel 2008; Durrer and Neronov 2013; Garaldi, Pakmor, and Springel 2021); or maybe we need new physics (e.g., Ratra 1992; Widrow, Ryu, Schleicher, et al. 2012). Or maybe magnetic fields grew out of fossils from the universe before the big bang, whatever that means.
The thought that cosmic strings and other field defects are natural extensions of the standard model for particle physics was persuasive enough to motivate considerable research on how cosmic strings might be observable and might play a role in the formation of cosmic structure (e.g. Kibble 1980; Vilenkin and Shellard 2000). Although cosmic strings are absent in the present standard cosmology it remains important to look for their effects in cosmic structure, the angular distribution of the CMB, and the gravitational waves produced by cosmic strings. Vachaspati (2021) reviews the present state of this art. Ostriker, Thompson, and Witten (1986) introduced the fascinating idea of magnetized superconducting cosmic strings; maybe they hold the secret to where cosmic magnetic fields came from. And maybe cosmic strings have something to do with the curiosities in the large-scale distributions of AGNs and rich clusters of galaxies that are discussed in Section~\ref{sec:distributions}.
\section{Large-Scale Distributions of Radio Galaxies, Quasars, and Clusters of Galaxies}\label{sec:distributions}
Analytic estimates and numerical simulations of cosmic structure formation in the $\Lambda$CDM cosmology show the growth of mass concentrations that are good approximations to observed rich clusters of galaxies. There are discrepancies in the cosmological parameters that best fit cluster counts and best fit the other constraints, but they are small (e.g., Perivolaropoulos and Skara 2022, Table~2). I would count this as a success for the standard cosmology if there were not the curious distributions of clusters and powerful radio galaxies at redshifts $z\lap 0.02$, and the distributions of radio galaxies and quasars at distances approaching the Hubble length. A standard interpretation is that these unusual objects form where the ambient mass density is unusually large. The evidence that there is more to it is reviewed in Section~\ref{sec:LSC}, on the situation in the region around us some 170~Mpc across, and in Section~\ref{CosPrin}, on scales closer to the Hubble length. A summary assessment is presented in Section~\ref{sec:remarks}.
\subsection{The Extended Local Supercluster}\label{sec:LSC}
G\'erard de Vaucouleurs' (1953) Local Supercluster is observed as concentrations of relatively nearby galaxies near the great circle across the sky that defines the plane of the Local Supercluster. It includes the concentrations of galaxies in our Local Group and in and around the Virgo Cluster of galaxies at about 18 Mpc distance. The pronounced presence of galaxies near this plane extends to perhaps 30 Mpc. To be considered here are the distributions of objects beyond that distance, in the region at redshifts between $z=0.01$ and $z=0.02$, or distances of about 45 to 85~Mpc at Hubble's constant
\beq
H_{\rm o} = 70\hbox{ km s}^{-1}\hbox{ Mpc}^{-1}.
\eeq
The lower bound on distance removes our special situation close to the plane of the Local Supercluster. The upper bound defines a region about $170$~Mpc across, with the central one eighth of the volume removed. The galaxies are distributed in clumps that look fairly close to uniformly scattered across this region. But the great clusters of galaxies, and the galaxies that are powerful radio sources, tend to be near the extension of the plane of the Local Supercluster. Tully (1986) and Tully, Scaramella, Vettolani, and Zamorani (1992) pointed out this effect for clusters, and Shaver and Pierre (1989) found the same effect for radio galaxies. Shaver (1991) remarked on the key point, the distinct difference from the far weaker concentration of the general population of galaxies to this plane. This interesting difference is not widely advertised, but it is well established and illustrated in Figure~\ref{fig:LSCf}.
The supergalactic latitude SGB of an object is the angular distance from the great circle defined by the plane of the Local Supercluster. The two panels in Figure~\ref{fig:LSCf} show distributions of the counts of angular positions of objects in equal intervals in sin~SGB, which are equal intervals of solid angle. The data are truncated at galactic latitude $|b|=10^\circ$ to take account of obscuration and confusion near the plane of the Milky Way Galaxy. This reduces the solid angles of the samples, largely at high supergalactic latitudes, SGB close to $\pm 90^\circ$. The effect is seen in the red histograms that show the mean of a random isotropic distribution of points at $|b|>10^\circ$. These red histograms are nearly flat, but suppressed at high supergalactic latitudes by the absence of objects at low galactic latitudes.
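The construction of the red reference histograms can be sketched numerically: draw isotropic random directions, remove those at $|b|<10^\circ$, and count the rest in equal bins of sin~SGB. A minimal sketch in Python, assuming the de~Vaucouleurs convention that the supergalactic north pole lies at galactic $(l, b) = (47.37^\circ, +6.32^\circ)$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Unit vector toward the supergalactic north pole, at galactic
# (l, b) = (47.37, +6.32) degrees (the de Vaucouleurs convention).
l0, b0 = np.radians(47.37), np.radians(6.32)
pole = np.array([np.cos(b0) * np.cos(l0),
                 np.cos(b0) * np.sin(l0),
                 np.sin(b0)])

# Isotropic random directions in galactic Cartesian coordinates.
v = rng.normal(size=(200_000, 3))
v /= np.linalg.norm(v, axis=1)[:, None]

sin_b = v[:, 2]        # sine of galactic latitude b
sin_sgb = v @ pole     # sine of supergalactic latitude SGB

# Cut |b| > 10 degrees, then count in equal intervals of sin SGB,
# which are equal intervals of solid angle.
keep = np.abs(sin_b) > np.sin(np.radians(10.0))
counts, _ = np.histogram(sin_sgb[keep], bins=10, range=(-1.0, 1.0))
print(counts)  # nearly flat, suppressed in the end bins near |SGB| = 90 deg
```

Because the supergalactic poles happen to lie near the galactic plane, the $|b|>10^\circ$ cut preferentially removes the bins at high $|{\rm SGB}|$, which is the shape of the red histograms.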
The black histogram in Panel (a) in Figure~\ref{fig:LSCf} is the distribution in sin~SGB of the 30 clusters of galaxies at $0.01 < z < 0.02$ detected as X-ray sources (from the NASA HEASARC compilation of clusters detected by the X-ray luminosity of the hot intracluster plasma). The clusters are close to the plane of the Local Supercluster as indicated by the peak at low angle SGB. This is to be compared to the red curve expected of an isotropic distribution truncated at low galactic latitude. At lower redshifts the Virgo and Ursa Major clusters, and the rich groups that contain the radio galaxy Centaurus~A and the giant elliptical galaxy IC 3370, are at supergalactic latitudes $-2.4^\circ$, $2.8^\circ$, $-5.2^\circ$ and $-15.1^\circ$, respectively. Again, they are close to the plane.
The blue histogram shifted slightly to the right in Panel (a) is the distribution of the 32 galaxies most luminous at radio frequency $\sim 1.1$~GHz (compiled by van Velzen, Falcke, Schellart, Nierstenh{\"o}fer, and Kampert 2012; the data were downloaded from the VizieR Online Data Catalog). This region contains some $10^4$ galaxies with stellar masses comparable to or larger than that of the Milky Way, luminosities $L\gap L_\ast$, meaning the 32 most powerful radio sources are exceptional galaxies. They tend to be in clusters, but since clusters and radio sources are found and cataloged in very different ways the consistency of their distributions offers a meaningful test of reproducibility of the concentration to the plane indicated by the peaks at low angle from the plane.
Panel~(b) in Figure~\ref{fig:LSCf} shows the distribution of the galaxies at $0.01<z<0.02$ that are most luminous at $60\mu$ wavelength. This is based on the redshift catalog Saunders, Sutherland, Maddox, et al. (2000) drew from the infrared astronomical satellite sky survey (Neugebauer, Habing, van Duinen, et al., 1984) at wavelengths from 12 to $100\mu$. The data were downloaded from the NASA HEASARC IRASPSCZ catalog, class GALAXY. I refer to these objects as LIRGs, for Luminous Infrared Galaxies, which seems appropriate though they do not necessarily fit the standard definition. The evidence is that these LIRGs are extraordinarily luminous at $60\mu$ because they are passing through phases of rapid formation of stars that generate the dust that absorbs the starlight and reradiates it as infrared radiation (P{\'e}rez-Torres, Mattila, Alonso-Herrero, Aalto, and Efstathiou 2021 and references therein). The black histogram in Panel~(b) in Figure~\ref{fig:LSCf} is the distribution in sin~SGB of the 29 most luminous LIRGs. For a check of reproducibility the blue histogram shifted slightly to the right shows the distribution of the 32 next most luminous. Both show no obvious tendency for these galaxies to be close to the plane of the Local Supercluster, or to avoid it. This is a pronounced difference from the distributions of the most powerful radio galaxies in Panel (a), and an illustration of the different distributions of different kinds of galaxies on scales $\sim 100$~Mpc.
Figure~\ref{fig:commongalaxies} shows the distribution relative to the plane of the Local Supercluster of a more numerous sample of galaxies (drawn from the Huchra, Macri, Masters, et al. 2012 catalog based on the Skrutskie, Cutri, Stiening, et al. 2006 identifications of galaxies detected in the Two Micron All Sky Survey, 2MASS). These galaxies are luminous enough to be in the Huchra et al. catalog out to redshift $z\leq 0.02$. Panel (a) shows the distribution of the 2708 early-type galaxies, ellipticals plus S0s (with Huchra morphological types $T\leq 0$), and Panel (b) shows the distribution of the 3276 spiral galaxies (with $1\leq T\leq 9$).\footnote{S0s are mentioned here for completeness, but since S0s are not common among the nearby $L\sim L_\ast$ galaxies they do not figure much in this essay, as noted in footnote~\ref{fn:S0s}. It might be interesting to compare distributions of separate spiral types, but this is not considered here.}
Since the early-type elliptical and S0 galaxies tend to be in clusters it is not surprising that the distribution in Panel~(a) is peaked at low SGB, with the clusters. But the peak is not as pronounced as in Panel~(a) in Figure~\ref{fig:LSCf}, meaning a greater fraction of these less luminous early-type galaxies are well away from the plane compared to the most powerful radio galaxies.
If considered alone the peak in the distribution of the 3276 spiral galaxies in Panel~(b) in Figure~\ref{fig:commongalaxies} might not seem significant; perhaps it only shows the large fluctuations to be expected in the correlated positions of galaxies, in this case a deficit at positive SGB balanced by an excess at SGB close to zero. But the peaks in the other distributions argue for a real tendency of spiral galaxies to be near this special plane. We also see that the tendency of common spirals to be near this plane is weaker than for common ellipticals plus S0s, which in turn is weaker than for powerful radio galaxies.
Figure~\ref{fig:morphologies} shows another aspect, a comparison of the distributions in sin~SGB of the elliptical and spiral galaxies with the greatest stellar masses (as indicated by the luminosity at $\sim 2\mu$ wavelength, which is considered a useful indicator of the stellar mass). These data also were drawn from the Huchra et al. (2012) catalog. The 180 most luminous galaxies in the catalog at $0.01<z<0.02$ and galactic latitudes $|b|>10^\circ$ have absolute magnitudes bounded by the apparent magnitude limit $K_s < 9.5$ at $z=0.02$. Of them, 53 are classified as ellipticals (with Huchra classification $T\leq -5$), 54 are classified spirals ($1\leq T\leq 9$), and the rest are classified S0s and irregulars of various kinds. Figure~\ref{fig:morphologies} shows the distributions in sin~SGB of the ellipticals and spirals.
It is not surprising that the angular distribution of the 53 most luminous ellipticals is similar to that of the most powerful radio galaxies and the clusters of galaxies shown in Figure~\ref{fig:LSCf}, because radio galaxies tend to be in giant ellipticals that tend to be in clusters. But again the data were obtained in different ways, and the samples are different. The pronounced concentration of the most luminous ellipticals to the extended plane of the Local Supercluster adds to the evidence of this interesting alignment.
The 54 most luminous spirals, with stellar masses comparable to those of the 53 most luminous ellipticals, are not noticeably more or less common at low SGB. Their distribution in the right-hand panel in Figure~\ref{fig:morphologies} resembles that of the most luminous galaxies at $60\mu$ in Panel (b) in Figure~\ref{fig:LSCf}. Within the noise there could be a peak in the distribution similar to that of the more common spirals in Panel~(b) in Figure~\ref{fig:commongalaxies}.
The arrangements of clusters of galaxies and galaxies of various types relative to the plane of the Local Supercluster are readily studied because we happen to be close to the plane. Other arrangements of objects similar to what is seen within 85~Mpc distance from our special position might include the Pisces-Perseus supercluster (Giovanelli, Haynes, and Chincarini, 1986), the CfA Great Wall (Geller and Huchra 1989), and the Sloan Great Wall (Gott, Juri{\'c}, Schlegel, et al., 2005). They are spread over hundreds of megaparsecs. It would be interesting to know whether radio galaxies and massive ellipticals in the neighborhoods of these more distant configurations are largely confined to a ridge, in the manner of the Local Supercluster, while massive spiral galaxies, and galaxies that are exceptionally luminous in the infrared, are not so particularly concentrated.
I have not discussed the distribution of quasars at $0.01<z<0.02$ because I am not aware of a quasar catalog with measured redshifts that is suitably close to complete across the sky at this relatively short distance. But we might take it that quasars and radio galaxies are related, and that the concentration of radio galaxies to a plane in a region around us $\sim 170$~Mpc across is an analog of a Large Quasar Group of the kind discussed by Clowes, Harris, Raghunathan, et al. (2013).
In the sample at $0.01 < z < 0.02$ the similar distributions of giant ellipticals, radio galaxies, and clusters of galaxies on the one hand, and the different but again similar distributions of giant spirals and luminous infrared galaxies on the other, present us with interesting issues.
\begin{enumerate}[label*=\arabic*.]
\item Why are the clusters of galaxies largely near a plane at $z<0.02$? If clusters formed where primeval mass density fluctuations were particularly large it would require that upward density fluctuations large enough to grow into clusters are confined to a plane in a region some 170~Mpc across. These large upward density fluctuations cannot be at all common at similar distances from us but not near the plane, because clusters are not at all common there. On the face of it this arrangement looks unlikely in the standard $\Lambda$CDM theory, but it could be checked in pure dark matter simulations.
\item Why are primeval conditions capable of producing galaxies with exceptionally large stellar masses far from the plane as well as near it, but only capable of producing clusters near the plane?
\item Why are the most massive elliptical and spiral galaxies, with comparable stellar masses to judge by the $2\mu$ luminosities, so differently distributed? If the clusters formed where the primeval mass density is exceptionally large, close to this preferred plane, it could account for the concentration of giant ellipticals to this plane. But then why are the giant spirals not more noticeably abundant than average near the plane of the Local Supercluster? Maybe the conditions that favor cluster formation were hostile to the formation of spirals near this plane, perhaps more likely to destroy the spiral arms? But if so why do the spirals not show evidence of avoiding the plane? The more common spirals instead show a modest tendency to be near the plane.
\item What accounts for the concentration of powerful radio galaxies to this plane? Radio galaxies seem to require the presence of a massive compact central object, very likely a black hole. Massive black holes are present in many if not all $L\sim L_\ast$ galaxies, spirals as well as ellipticals, including the many that are closer than 85~Mpc and not close to the plane of the Local Supercluster. These more common black holes seem to cause some galaxies to be radio sources, though seldom at the level of power of the radio galaxies that tend to be near the plane of the extended Local Supercluster. What is special about the massive black holes that are associated with the powerful radio sources that tend to be close to this plane?
\item What are we to make of the contrast between the distributions of galaxies that are exceptionally luminous at radio wavelengths and those that are exceptionally luminous at $60\mu$? The former tend to be near the plane, the latter not.
\end{enumerate}
These questions are not widely advertised. A measure of this is the 26 citations to Shaver (1991) in the Astrophysics Data System. One is a self-citation, four are mine, all on Shaver's point, five are on the possible implication of the alignment of radio galaxies for the angular distribution of energetic cosmic rays, and sixteen are on the nature of the space distribution of radio galaxies. Three of these space distribution papers take note of the planar distribution of the relatively nearby radio galaxies and clusters of galaxies. For example, Strauss (1993) remarks on ``the very large planar structures seen in the cluster and radio galaxy distribution by Tully et al. (1992) and Shaver (1991); this discrepancy remains to be explained.'' It certainly is interesting. But I find no discussion of what is to me most interesting, the considerable difference between the space distributions of radio galaxies and clusters of galaxies on the one hand, and the distribution of ordinary large galaxies on the other. The literature on this point might be scant to nonexistent because it is difficult to know what to make of it. That is no excuse, though: we are missing something, which surely is worth investigating.
Figures~\ref{fig:LSCf} to \ref{fig:morphologies} are added illustrations of Shaver's (1991) key point, that different kinds of extragalactic objects can have quite different spatial distributions relative to the plane of the Local Supercluster. Hints to understanding this might be found in the situation on a larger scale that seems analogous, as discussed next.
\subsection{Anomalies on Large Scales}\label{CosPrin}
To be considered here are apparent anomalies in the distributions and motions of objects on scales approaching the Hubble length. This begins in Section~\ref{sec:CMBDipole} with our motion relative to the reference frame set by the near homogeneous sea of thermal cosmic microwave background radiation, the CMB. The standard interpretation of the CMB dipole anisotropy is that it is the effect of our motion through the radiation. This is tested by computing the local peculiar velocity expected from the gravitational acceleration computed from the observed departures from a homogeneous distribution of objects that seem likely to be useful mass tracers. The results discussed in Section~\ref{sec:localgravity} do not disagree with the idea, but they are not very tight. The test considered in Section~\ref{sec:bulkflows} uses estimates of the mean motion relative to the CMB of objects within a given distance from us, computed from the departures of redshifts from the homogeneous Hubble flow. By some measures this mean motion, the bulk flow, is found to approach zero as the distance is increased, about as expected from the standard $\Lambda$CDM theory. But other measures of the bulk flow that look equally reliable are anomalous. A possibly related problem reviewed in Section~\ref{sec:KinematicDipole} is the predicted dipole anisotropy in the angular distributions of objects that are so far away that, in the standard cosmology, the space distribution likely averages out to homogeneity. The Doppler shifts and aberration of the observed angular positions of these objects caused by our motion relative to the CMB are predicted to produce a dipole anisotropy in the angular distributions of these objects, the kinematic dipole. The measured dipoles in the distribution of quasars and in the distributions of radio galaxies cataloged at several radio frequencies are in about the predicted direction, but the dipole amplitudes are too large, an anomaly. 
The situation from these considerations is reviewed in Section~\ref{sec:remarks}.
\subsubsection{The CMB Dipole Anisotropy}\label{sec:CMBDipole}
Departures from an exactly homogeneous sea of thermal microwave radiation, the CMB, are usefully represented by the spherical harmonic expansions of the CMB temperature and polarization as functions of position across the sky (eq.~\ref{eq:harmonicexp}). The amplitudes $a_\ell^m$ of the spherical harmonic expansion of the temperature at degree $\ell > 1$ are convincingly demonstrated to be remnants of the decoupling of acoustic oscillations in the plasma-radiation fluid. The dipole amplitude, $\ell = 1$, is much larger than predicted from this effect. The standard idea attributes it to our motion relative to the rest frame defined by the CMB, at velocity
\beq
{\vec v}_{\rm helio} - {\vec v}_{\rm CMB} = 370\hbox{ km s}^{-1}\hbox{ to } l = 264^\circ,\ b=48^\circ,\label{eq:heliocen_wrt_cmb}
\eeq
in galactic coordinates and the solar system rest frame (Planck Collaboration et al. 2020a). Adjusting for the solar motion relative to the Milky Way Galaxy, and for the motion of the Galaxy relative to an estimate of the center of mass of the Local Group dominated by the Milky Way and M~31, indicates that the Local Group of galaxies is moving relative to the sea of radiation at
\beq
{\vec v}_{\rm Local\ Group} - {\vec v}_{\rm CMB}= 620\hbox{ km s}^{-1}\hbox{ to } l = 272^\circ,\ b=30^\circ. \label{eq:LG_wrt_cmb}
\eeq
The speed, $\sim 600$~km~s$^{-1}$, is much larger than the relative motions of the galaxies closer than 10~Mpc. A natural interpretation is that we and the nearby galaxies are moving at a near common velocity relative to the CMB, and that this motion is to be associated with the growing departures from an exactly homogeneous mass distribution. A test is discussed next.
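The vector arithmetic behind equation~(\ref{eq:LG_wrt_cmb}) can be checked with a short script. The apex of the solar motion relative to the Local Group centroid is not quoted in the text; the sketch below assumes the conventional value, about $306$~km~s$^{-1}$ toward $(l, b) = (99^\circ, -4^\circ)$:

```python
import numpy as np

def vec(speed, l_deg, b_deg):
    """Velocity vector in galactic Cartesian coordinates, km/s."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return speed * np.array([np.cos(b) * np.cos(l),
                             np.cos(b) * np.sin(l),
                             np.sin(b)])

# Heliocentric velocity relative to the CMB (the Planck value quoted above).
v_helio_cmb = vec(370.0, 264.0, 48.0)

# Solar motion relative to the Local Group centroid; the apex used here,
# 306 km/s toward (l, b) = (99, -4), is a conventional value and an
# assumption of this sketch, not a number quoted in the text.
v_helio_lg = vec(306.0, 99.0, -4.0)

# v_LG - v_CMB = (v_helio - v_CMB) - (v_helio - v_LG)
v_lg_cmb = v_helio_cmb - v_helio_lg

speed = np.linalg.norm(v_lg_cmb)
l = np.degrees(np.arctan2(v_lg_cmb[1], v_lg_cmb[0])) % 360.0
b = np.degrees(np.arcsin(v_lg_cmb[2] / speed))
print(round(speed), round(l), round(b))  # close to 620 km/s toward (272, 30)
```

The result reproduces the speed and direction of equation~(\ref{eq:LG_wrt_cmb}) to within a few degrees.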
\subsubsection{The Peculiar Gravitational Acceleration} \label{sec:localgravity}
In linear perturbation theory and time-orthogonal coordinates chosen to eliminate the decaying mode in a cosmologically flat universe our peculiar motion at coordinate position $\vec r=0$ is predicted to be usefully approximated (in a version of eq.~[\ref{eq:peculiaracceleration}]) as
\begin{equation}
\vec v = {2\beta G \rho_b\over 3 H_{\rm o}\Omega}\int d^3r~\delta_g(\vec r)~{\vec r\over r^3},\label{eq:dynamic_v}
\end{equation}
where
\begin{equation}
\delta_g(\vec r) = {\delta n(\vec r)\over\langle n\rangle},\quad
\delta_\rho={\delta\rho(\vec r)\over\langle\rho\rangle}\simeq{\delta_g\over b},\quad \beta \approx {\Omega^{0.55}\over b}\sim 0.4.\label{eq:dynamicparameters}
\end{equation}
The fractional departure from a homogeneous galaxy distribution, $\delta_g(\vec r)$, might be represented as a sum of Dirac delta functions minus the mean, or the result of smoothing of galaxy counts through a window. The mass density contrast $\delta_\rho$ is written in the simple linear bias model for the relative distributions of galaxies and mass. We have a measure of the bias parameter, $b\sim 1.2$, and the mass density parameter, $\Omega=0.31$, from the fit of theory to observations of the patterns in the distributions of galaxies and the CMB (as discussed in Planck Collaboration et al. 2020b). The evidence from these data is that galaxies are reasonably good mass tracers.
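With the delta-function representation of $\delta_g$, equation~(\ref{eq:dynamic_v}) (whose prefactor equals $\beta H_{\rm o}/4\pi$ on writing $\rho_b = 3H_{\rm o}^2\Omega/8\pi G$) reduces to a sum over catalog positions, $\vec v = (\beta H_{\rm o}/4\pi\bar n)\sum_i \hat r_i/r_i^2$, because the mean-density term integrates to zero over a full sphere. A minimal sketch with a mock catalog, a uniform background plus a hypothetical clump, all numbers illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
H0, beta = 70.0, 0.4   # km/s/Mpc, and beta from eq. (dynamicparameters)

def uniform_sphere(n, radius):
    """n points distributed uniformly in a sphere of the given radius (Mpc)."""
    r = radius * rng.random(n) ** (1.0 / 3.0)
    d = rng.normal(size=(n, 3))
    return r[:, None] * d / np.linalg.norm(d, axis=1)[:, None]

R = 100.0
background = uniform_sphere(20_000, R)
# A hypothetical overdense clump 40 Mpc away along +x (illustrative only).
clump = np.array([40.0, 0.0, 0.0]) + rng.normal(scale=3.0, size=(2_000, 3))
positions = np.vstack([background, clump])

# Exclude the nearest points, whose shot noise dominates the 1/r^2 sum.
r = np.linalg.norm(positions, axis=1)
positions, r = positions[r > 5.0], r[r > 5.0]

nbar = len(background) / (4.0 / 3.0 * np.pi * R**3)  # background mean density

# v = (beta H0 / 4 pi nbar) * sum of r_hat / r^2 over the catalog;
# the "minus the mean" term averages to zero for the isotropic background.
v = beta * H0 / (4.0 * np.pi * nbar) * np.sum(positions / r[:, None]**3, axis=0)
print(v)  # km/s; points roughly toward the clump at +x
```

The surviving signal points toward the clump, the analog of the infall toward observed mass concentrations that the catalog integrations estimate.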
Having adopted general relativity in the standard cosmology we must accept that the mass outside the region we can observe in principle (that is, outside the particle horizon subsequent to inflation, or whatever saved us from the singularity of the standard model) can affect predicted observable peculiar motions (as discussed in Sec.~\ref{sec:theacausaluiverse}). There is evidence that the mass distribution that can be observed is a useful approximation, however. Erdo{\v{g}}du, Huchra, Lahav, et al. (2006) numerically evaluated the integral in equation~(\ref{eq:dynamic_v}) using a version of the Huchra, Macri, Masters, et al. (2012) galaxy redshift catalog. (This valuable catalog was used in the studies of the extended Local Supercluster in Sec.~\ref{sec:LSC}.) Erdo{\v{g}}du et al. found that the predicted velocity of the Local Group relative to the CMB seems to converge at about 100~Mpc distance from the Local Group. The integral computed to this distance indicates the motion of the Local Group is toward $l\sim 265^\circ$, $b\sim 38^\circ$, some $10^\circ$ from the CMB dipole. This is tolerably consistent considering the uncertainties in the use of galaxies as mass tracers. The integral computed to this distance agrees with the observed speed if the combination $\beta$ of bias parameter and mass density parameter in equation~(\ref{eq:dynamicparameters}) is $\beta = 0.40\pm 0.09$. This is consistent with the value derived from the CMB anisotropy spectrum (Planck Collaboration et al. 2020). At greater cutoff distances the computed velocity moves away from the CMB direction, a likely consequence of large effects of small systematic errors in the mass distribution on larger scales.
This test is important but not yet very precise. A better application requires a catalog of positions of useful mass tracers to greater distances with tighter controls on completeness and systematic errors in distances and positions across the sky, a challenging task. And we must live with the unknowable situation outside the Hubble length.
\subsubsection{Cosmic Bulk Flows}\label{sec:bulkflows}
This probe requires measurements of the radial components $v_p$ of peculiar velocities of objects relative to the general expansion of the universe,
\beq
v_p = cz - H_{\rm o}r. \label{eq:peculiarvelocity}
\eeq
To order $v/c$ the radial velocity derived from the measured redshift $z$ of an object is $v = cz$, $H_{\rm o}r$ is the speed of cosmological recession at the physical distance $r$ of the object, and the difference is the radial peculiar velocity $v_p$. In a suitably large sample of objects that are moving relative to us at mean velocity $\vec v_{\rm obs}$ the measured $v_p$ of the objects are expected to vary across the sky as $v_{\rm obs}\cos\alpha$, where $\alpha$ is the angle between the direction to the object and the direction of the mean velocity $\vec v_{\rm obs}$. It is conventional to define the bulk velocity $\vec v_{\rm sample}$ of the sample to be the mean velocity referred to the rest frame in which the CMB has no dipole anisotropy. (This ignores the small intrinsic dipole remnant from the decoupling of baryonic matter and the CMB.) In the standard model $\vec v_{\rm sample}$ is predicted to converge to zero in a catalog that reaches distances large enough to be a fair sample of the statistically homogeneous universe.
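Because $v_p = \hat n\cdot\vec v_{\rm obs}$ is linear in the three components of the mean velocity, the $v_{\rm obs}\cos\alpha$ pattern across the sky can be extracted by linear least squares. A minimal sketch with synthetic data (all numbers here are illustrative assumptions, not values from any survey):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bulk flow to recover (km/s), roughly the amplitude
# quoted in the text for galaxies within ~60 Mpc.
v_bulk_true = np.array([150.0, -180.0, 60.0])

# Random sky directions (unit vectors) for N tracer galaxies.
n = 2000
xyz = rng.normal(size=(n, 3))
nhat = xyz / np.linalg.norm(xyz, axis=1, keepdims=True)

# Each measured radial peculiar velocity is nhat . v_bulk plus
# measurement noise (illustrative 300 km/s per-object scatter).
v_p = nhat @ v_bulk_true + rng.normal(scale=300.0, size=n)

# Least-squares fit of the three bulk-flow components: this is the
# v_obs * cos(alpha) pattern of the text written as a linear model.
v_bulk_fit, *_ = np.linalg.lstsq(nhat, v_p, rcond=None)
print(v_bulk_fit)
```

With a few thousand objects the per-component uncertainty is roughly the per-object scatter divided by $\sqrt{n/3}$, of order 10~km~s$^{-1}$ in this sketch, which is comparable to the uncertainties quoted in the measurements below.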
Boruah, Hudson, and Lavaux (2020) measured peculiar velocities of galaxies closer than $\sim 60$~Mpc from distances to supernovae of type Ia and distances to spiral galaxies based on the Tully-Fisher relation. The Boruah et al. mean of the peculiar velocities relative to the CMB is
\beq
{\vec v}_{\rm galaxies} - {\vec v}_{\rm CMB}= 252 \pm 11\hbox{ km s}^{-1}\hbox{ to } l = 293^\circ,\ b=14^\circ (\pm 5^\circ).\label{eq:Hudson}
\eeq
This can be compared to the earlier Ma and Scott (2013) measurement on a similar scale,
\beq
{\vec v}_{\rm galaxies} - {\vec v}_{\rm CMB}\sim 290\pm 10 \hbox{ km s}^{-1}\hbox{ to } l = 280\pm 8^\circ,\ b=5^\circ \pm 6^\circ.\label{eq:Scott}
\eeq
from fundamental plane distances to early-type galaxies, Type Ia supernovae, and the Tully-Fisher relation for late-type galaxies. (The speed is my estimate from the four results in Ma and Scott Table 2.) Similar results are reported in quite a few analyses. Watkins, Feldman, and Hudson (2009) found a bulk flow direction consistent with equations~(\ref{eq:Hudson}) and~(\ref{eq:Scott}) and a speed 150~km~s$^{-1}$ larger, but that is just twice the estimated uncertainty. Tighter and consistent results are reported by Turnbull, Hudson, Feldman, et al. (2012), who used Type~Ia supernova distances; Hong, Springob, Staveley-Smith, et al. (2014), who used Tully-Fisher distances to spiral galaxies; and Scrimgeour, Davis, Blake, et al. (2016), who used the fundamental plane relation to get distances to early-type galaxies. This bulk velocity seems to be securely established.
The bulk velocity of the sample of objects closer than about 60~Mpc need not be in the same direction as the peculiar velocity of the Local Group, but one might expect it to be fairly close, as it is in these measurements. The speed of a sample relative to the CMB, the bulk velocity, is expected to be smaller when averaged over larger scales, because the mass distribution is assumed to average to homogeneity in sufficiently large volumes. The speeds found from these analyses are roughly half that of the Local Group, and Figure 10 in Boruah et al. indicates the speed is consistent with the probability distribution in the average over a region of this size computed in linear perturbation theory in the $\Lambda$CDM cosmology.
In a remarkable advance the Planck Collaboration et al. (2014) reported a measurement of the mean motion of clusters of galaxies relative to the CMB based on the kinematic Sunyaev-Zel'dovich (kSZ) effect (Sunyaev and Zel'dovich 1980). This is the Doppler shift of the CMB scattered by electrons in the intracluster plasma of a cluster of galaxies that is moving relative to the CMB. The conclusion from detections of this kSZ effect is that the average speed of the clusters relative to the CMB is compatible with zero, and the speed is less than about $260\hbox{ km s}^{-1}$ at the 95\% confidence level. This is in a sample of clusters with redshifts ranging around $z\sim 0.2$.
The detection of the kSZ effect in the plasma in a cluster of galaxies offers a direct measure of the effect of the motion of the cluster relative to the CMB. The Planck Collaboration observations agree with the expectation that the motion of the clusters averaged over scales approaching the Hubble length is small, continuing the trend from the speed of the Local Group, $\sim 600$~km~s$^{-1}$, to the mean for the galaxies out to 60~Mpc at half that speed, to the still smaller mean speed expected of the still more extended sample of clusters of galaxies.
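The signal being measured here is tiny. For a single cluster the kSZ temperature shift is $\Delta T/T = -\tau\, v_r/c$, with $\tau$ the Thomson optical depth through the intracluster plasma and $v_r$ the radial velocity relative to the CMB. A rough order-of-magnitude sketch (the value of $\tau$ is my illustrative assumption; $v_r$ is set to the 95\% limit on the mean cluster speed quoted above):

```python
# Order-of-magnitude kSZ temperature shift for a single cluster,
# Delta T / T = -tau * v_r / c. The tau value is an illustrative
# assumption; v_r is the 95% upper limit on the mean cluster speed
# quoted in the text.
T_cmb = 2.7255          # CMB temperature, K
tau = 0.005             # Thomson optical depth through the cluster (assumed)
v_r = 260.0             # radial velocity relative to the CMB, km/s
c = 299792.458          # speed of light, km/s

dT_microK = -T_cmb * tau * v_r / c * 1e6
print(f"Delta T ~ {dT_microK:.1f} microK")
```

Per-cluster shifts of order ten microkelvin, against a much larger primary anisotropy, are why this measurement requires averaging over many clusters.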
The motions of clusters of galaxies based on measurements of distance rather than the kSZ effect are more complicated. Lauer and Postman (1994) used distances to Abell clusters (Abell 1958; Abell, Corwin, and Olowin 1989) based on apparent magnitudes of the brightest cluster members, measured in an aperture of fixed metric size and corrected for the redshift of the spectrum and extinction in the Milky Way. They found that the mean peculiar velocity of the Abell clusters relative to the CMB is
\beq
{\vec v}_{\rm Abell} - {\vec v}_{\rm CMB} \sim 689\hbox{ km s}^{-1}\hbox{ to } l = 343^\circ ,\ b = 52^\circ.\label{eq:LP}
\eeq
The direction might be taken to be roughly similar to that of the galaxies (eq.~[\ref{eq:Hudson}]), but the large speed is anomalous. This result remains unexplained. The Migkas, Pacaud, Schellenberger et al. (2021) measurement of the cluster bulk flow used scaling relations among properties of the plasma in clusters rather than properties of the cluster galaxies. The smallest scatter among their measurements is found in the relations between the distance-independent intracluster plasma temperature and the distance-dependent cluster X-ray luminosity, and between the plasma temperature and the distance-dependent integrated Comptonization parameter. The clusters in their sample are at redshifts $z\sim 0.01$ to 0.55, with 50\%\ within $z=0.05$ to 0.18. They found that in a region of the sky centered around $l\sim 276^\circ$, $b \sim -16^\circ$ their cluster peculiar velocities referred to the CMB rest frame are systematically negative, with a suggestion of positive peculiar motions in the opposite part of the sky. It can be interpreted as the effect of a mean bulk flow of the cluster sample at velocity
\beq
{\vec v}_{\rm cluster} - {\vec v}_{\rm CMB} \sim 900\hbox{ km s}^{-1}\hbox{ to } l \sim 88^\circ,\ b \sim 16^\circ.\label{eq:Migkasetal}
\eeq
Again, the speed is anomalously large. The direction disagrees with the Lauer and Postman result (eq.~\ref{eq:LP}), and it is close to opposite to the motions of the Local Group and the bulk flow of the galaxies within distance $\sim 60$~Mpc (eqs.~\ref{eq:LG_wrt_cmb}, \ref{eq:Hudson}).
The situation is interesting. The Abell cluster catalog was compiled by hand, so there may be sampling inhomogeneity, but that need not seriously affect the mean motion of the observed clusters. The Lauer and Postman (1994) and Migkas et al. (2021) bulk flows are seriously different, maybe because their distances are based on different cluster scaling relations. It also might have something to do with the odd distributions of clusters and radio galaxies closer than 85~Mpc (Sec.~\ref{sec:LSC}), and the odd larger-scale distributions of radio galaxies and quasars to be considered next.
\subsubsection{The Kinematic Dipole}\label{sec:KinematicDipole}
This cosmological test requires a catalog of objects that are far enough away that we can assume their mean spatial distribution is adequately close to the homogeneity and isotropy of the cosmological principle. It of course requires that the efficiency of detection of objects is adequately close to uniform across the sky. In these conditions our motion relative to the mean of this sample is expected to produce a dipole anisotropy in the counts of objects, the result of the Doppler shifts of apparent magnitudes and the aberration of angular positions.
Ellis and Baldwin (1984) pointed out that this consideration provides us with a cosmological test: the dipole anisotropy of counts of objects across the sky is predicted to be, to order $v/c$,
\beq
\delta N/N = [2+x(1+\alpha)](v/c)\cos\theta, \label{eq:EllisBaldwin}
\eeq
where
\beq
S\propto \nu^{-\alpha}, \ N(>S) \propto S^{-x}. \label{eq:EBparameters}
\eeq
The parameter $\alpha$ is a measure of the typical spectrum of an object, $x$ is a measure of the variation of counts of objects with limiting flux density $S$, and $v$ is the heliocentric velocity relative to the mean of the sample of distant objects.
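To fix the expected magnitude: with illustrative parameter values ($\alpha\sim 0.75$ and $x\sim 1$, typical assumptions for radio source populations, and $v$ set to roughly the heliocentric speed relative to the CMB; these particular numbers are my assumptions, not values from the surveys discussed below), equation~(\ref{eq:EllisBaldwin}) gives a dipole amplitude of about half a percent:

```python
# Expected kinematic dipole amplitude from the Ellis & Baldwin formula,
# delta N / N = [2 + x(1 + alpha)] (v/c) cos(theta).
# Parameter values below are illustrative assumptions, not survey fits.
c = 299792.458          # speed of light, km/s
v = 370.0               # heliocentric speed relative to the CMB, km/s
alpha = 0.75            # spectral index, S ~ nu^-alpha (assumed)
x = 1.0                 # counts slope, N(>S) ~ S^-x (assumed)

amplitude = (2.0 + x * (1.0 + alpha)) * (v / c)
print(f"predicted dipole amplitude d = {amplitude:.4f}")
```

Doubling $v$ to 740~km~s$^{-1}$ doubles the amplitude to $d\approx 0.009$, in line with the Siewert et al. (2021) estimate quoted later in this section.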
The physics of the Ellis and Baldwin kinematic effect is well established in other contexts. The Compton-Getting effect is the dipole anisotropy of energetic cosmic rays seen by an observer moving through an isotropic sea of cosmic rays. The same effect accounts for the thermal spectrum of the CMB detected in a given direction by an observer moving relative to the CMB rest frame (Peebles and Wilkinson 1968). Kaiser (1987) remarked that this apparent dipole in the mass distribution produces an apparent contribution to the computation of our cosmic gravitational acceleration. Bahr-Kalus, Bertacca, Verde, and Heavens (2021) review current studies of this ``rocket effect."
The evidence to be reviewed here is that the dipole anisotropy in the distribution of objects at distances comparable to the Hubble length is about in the direction expected from the kinematic effect if the dipole anisotropy in the CMB is due to our motion relative to the rest frame defined by the mean mass distribution, but the dipole amplitude is at least twice the prediction. This anomaly is about as well established as the Hubble Tension, yet the literature on the kinematic effect is much smaller than the 344 papers with the phrase ``Hubble Tension'' in the abstract in the SAO/NASA Astrophysics Data System. (I expect the difference is an inevitable consequence of the way we behave.) To illustrate this difference I offer my attempt at a close to complete literature on the kinematic effect (with apology for overlooked publications).
Baleisis, Lahav, Loan, and Wall (1998) considered the possible detection of the kinematic effect in the dipole anisotropy of radio galaxies. Since these objects are detectable past redshift $z=1$ their distribution probes scales large enough that one might hope the clustering of these objects averages out to the wanted uniformity, to be seen broken by the kinematic dipole. Scharf, Jahoda, Treyer, et al. (2000) considered detections of the kinematic effect in the angular distributions of X-ray AGNs and clusters of galaxies. These are the earliest empirical studies of the effect I have found. Blake and Wall (2002) had the NRAO VLA Sky Survey of radio sources (NVSS; Condon, Cotton, Greisen, et al., 1998). Their dipole estimate is consistent with the direction and amplitude expected from the dipole anisotropy of the CMB. Later studies of the NVSS catalog confirmed the direction, but found larger than expected dipole amplitudes in this catalog at various flux density cuts (Singal, 2011; Gibelyou and Huterer, 2012; Tiwari, Kothari, Naskar, Nadkarni-Ghosh, and Jain 2015; Secrest, von Hausegger, Rameez, Mohayaee, and Sarkar 2022). Rubart and Schwarz (2013) had a partial check by combining NVSS with the Westerbork Northern Sky Survey (WENSS; de Bruyn, Miley, Rengelink, et al., 2000). Colin, Mohayaee, Rameez and Sarkar (2017) combined NVSS with the Sydney University Molonglo Sky Survey (SUMSS; Mauch, Murphy, Buttery, et al., 2003). Both papers reported the anomaly. Bengaly, Maartens and Santos (2018) reported consistency of separate measurements of the dipole anisotropies derived from the NVSS (USA) and TGSS-ADR1 (India; Intema, Jagannathan, Mooley, and Frail, 2017) catalogs, and Siewert, Schmidt-Rubart, and Schwarz (2021) reported separate measurements of the NVSS, TGSS-ADR1, WENSS (The Netherlands) and SUMSS (Australia) catalogs.
Since each of these four catalogs might be affected by its own systematic errors the reasonable consistency of the separately analyzed catalogs is a valuable check of reliability. The dipole directions from these four independently obtained samples average to about
\beq
l\sim 240^\circ,\ b\sim 30^\circ\hbox{ for radio galaxies.}\label{eq:radiodipole}
\eeq
This is not far from the direction of the heliocentric CMB dipole (eq.~[\ref{eq:heliocen_wrt_cmb}]), as would be expected from the kinematic effect. The radio source dipole amplitudes from the different samples are considerably different, maybe in part because the amplitude depends on frequency, but all are roughly a factor of 4 times the amplitude expected from our motion relative to the CMB rest frame. Siewert et al. conclude that the dipoles ``exceed the expectations derived from the CMB dipole, which cannot strictly be explained by a kinematic dipole alone.''
Darling (2022) reports a different situation from the analysis of the dipole anisotropy of radio sources in the more recent VLASS (USA; Lacy, Baum, Chandler, et al., 2020) and RACS (Australia; McConnell, Hale, Lenc, et al., 2020) surveys. The dipole interpreted as the kinematic effect indicates heliocentric velocity
\beq
{\vec v}_{\rm helio} - {\vec v}_{\rm radio} \sim 330\pm 130\hbox{ km s}^{-1}\hbox{ to } l \sim 270\pm 55^\circ,\ b \sim 56\pm 25^\circ.\label{eq:Darling}
\eeq
This agrees with the heliocentric velocity, direction and speed, relative to the CMB (eq.~[\ref{eq:heliocen_wrt_cmb}]).
Darling estimates that the ``most permissive'' analysis allows effective velocity 740~km~s$^{-1}$ at three standard deviations. Siewert et al. (2021) estimate that at that speed the kinematic effect would correspond to a dipole amplitude $d\sim 0.01$. But this is well below most of the Siewert et al. radio source dipoles. It is a cautionary example of the difficulty of establishing this important measurement.
We have a check on this situation from the angular distribution of the roughly one million objects selected by their mid-infrared colors to be quasars detected by the Wide-field Infrared Survey Explorer (WISE; Wright, Eisenhardt, Mainzer, et al. 2010). In independently selected catalogs, Secrest, von Hausegger, Rameez, et al. (2021, 2022) and Singal (2021) found dipole anisotropies pointing to (in my estimate of Singal's central value)
\begin{align}
& l\sim 210^\circ,\ b\sim 45^\circ,\ v\sim 1700\hbox{ km s}^{-1},\hbox{ Singal (2021)},\nonumber\\
&l= 238^\circ,\ b=31^\circ, v \sim 750\hbox{ km s}^{-1},\hbox{ Secrest et al. (2022)}. \label{eq:kinematicdipole}
\end{align}
The directions of these two quasar dipoles are reasonably similar to the directions of the radio dipole (eq.~[\ref{eq:radiodipole}]), and perhaps not unreasonably far from the CMB dipole, $l = 264^\circ,\ b=48^\circ$ (eq.~[\ref{eq:heliocen_wrt_cmb}]). But with the authors' estimates of the parameters for spectrum and counts in equation~(\ref{eq:EBparameters}) the dipole amplitudes are at least twice that expected from the kinematic dipole (eq.~\ref{eq:EllisBaldwin}) at the velocity expected from the CMB dipole. Darling's (2022) new radio galaxy dipole amplitude puts the three standard deviation upper limit of 750~km~s$^{-1}$ on the effective velocity from the kinematic amplitude, consistent with Secrest et al. but still well below Singal.
Quasars and radio galaxies are related, but the data on the two were obtained and reduced by quite different methods, and each has been analyzed by two or more independent groups, with results that seem to make a consistent case for an anomaly. But the result in equation~(\ref{eq:Darling}) from independent radio data and analysis is an important reminder of the hazards of systematic errors in these measurements. I conclude that the present weight of the evidence from the other measures of the radio dipole and the WISE quasar dipole (eqs.~\ref{eq:radiodipole} and \ref{eq:kinematicdipole}) is that there is an anomalously large dipole common to distant radio galaxies and quasars, but the case is not yet persuasive.
Several other points are to be noted. First, the general conclusion has been that, within the standard cosmology and a reasonable degree of biasing in the positions of quasars and radio galaxies relative to the mass, intrinsic inhomogeneity is not a likely explanation for the anomalous dipole anisotropies of these distant objects (e.g., Rubart, Bacon, and Schwarz 2014; Tiwari and Nusser 2016; Colin et al. 2017; Dom{\`e}nech, Mohayaee, Patil, and Sarkar 2022; and Murray 2022, who considered the effect of gravitational lensing). This might be checked by the cross-correlation between the surface mass density indicated by CMB lensing with the positions of distant radio galaxies (Robertson, Alonso, Harnois-D{\'e}raps, et al., 2021), but I have not found a discussion of the test. A simple order-of-magnitude argument is that if quasars and radio galaxies were useful mass tracers on scales approaching the Hubble length then the observed dipole anisotropies $\delta N/N\sim 0.02$ would have produced a bulk flow on the order of 2\% of the speed of light, which is absurd.
Second, we have a separate reason to question whether these objects are useful mass tracers, from the curious distributions of objects that contain AGNs closer than 85~Mpc (Sec.~\ref{sec:LSC}). Following this line of thought we might expect that the angular positions of clusters of galaxies at redshifts close to unity have an anomalous dipole anisotropy, perhaps similar to that of quasars and radio sources, though of course not similar to the mass distribution. Perhaps this can be checked with available data. If ordinary $L\sim L_\ast$ galaxies are useful mass tracers, as evidenced by the cosmological tests, then a really deep galaxy catalog would be expected to have dipole direction and amplitude consistent with the kinematic dipole (eq.~\ref{eq:EllisBaldwin}) indicated by our motion defined by CMB dipole. It may be possible to check this.
Third, one is tempted to ask whether the dipole anisotropy in the distributions of distant radio sources and quasars, or the Migkas et al. (2021) cluster bulk flow, are somehow related to the plane of the Local Supercluster. There is no indication of that in the directions they define, supergalactic latitudes $SGB\sim -50^\circ$ and $SGB\sim +50^\circ$. But we do see common evidence of anomalous distributions of objects that contain AGNs.
Fourth, Migkas (private communication 2022) points out that our heliocentric velocity relative to the cluster rest frame found by Migkas et al. (2021, eq.~[\ref{eq:Migkasetal}]) is
\beq
{\vec v}_{\rm helio} - {\vec v}_{\rm clusters} \sim 1100\hbox{ km s}^{-1}\hbox{ to } l \sim 280^\circ,\ b \sim 5^\circ.\label{eq:Kostas}
\eeq
This is not far from our effective heliocentric velocities relative to the radio sources in equation~(\ref{eq:radiodipole}) and the quasars in equation~(\ref{eq:kinematicdipole}). Maybe it is only a coincidence, but it is not to be ignored in this confusing situation.
\subsection{Summary Remarks} \label{sec:remarks}
The association of the CMB dipole anisotropy with the growing mode of the departure from an exactly homogeneous universe is tested by the check of consistency with the prediction from the peculiar gravitational acceleration. There is room for an anomaly because the integral in equation~(\ref{eq:dynamic_v}) seems to converge at $\sim 100$~Mpc distance while the evidence is that the bulk flow of the galaxies on about the same scale is some 250~km~s$^{-1}$ (eq. [\ref{eq:Hudson}]). But assessing the significance of the discrepancy is difficult because the computation of the integral in equation~(\ref{eq:dynamic_v}) is sensitive to small errors in the large-scale galaxy distribution.
The evidence is that the bulk flow of the galaxies relative to the CMB decreases with increasing sample size about as expected from standard ideas, from about 600~km~s$^{-1}$ in the average over distances of a few megaparsecs, to about 250~km~s$^{-1}$ averaged out to distances $\sim 60$~Mpc. Detection of the kinematic SZ effect on the intracluster plasma suggests an even smaller bulk flow of clusters of galaxies at distances distributed around $\sim 600$~Mpc. Better checks of the convergence of the galaxy bulk flow and the predicted local peculiar gravitational acceleration from the integral over the mass distribution require improved measures of the space distributions of galaxies and mass. Perhaps they will come from the Euclid mission ``to capture signatures of the expansion rate of the Universe and the growth of cosmic structures'' (Percival, Balogh, Bond, et al. 2019). Nadolny, Durrer, Kunz, and Padmanabhan (2021) present a worked example of how this might go.
There are interesting anomalies in the space distributions of radio galaxies, quasars, and clusters of galaxies. The evidence reviewed in Section~\ref{sec:LSC} (and shown in Figs. \ref{fig:LSCf} and \ref{fig:morphologies}) is that we are in a region some 170~Mpc across in which the most luminous early-type galaxies, powerful radio galaxies, and clusters of galaxies tend to be near the extended plane of the Local Supercluster. In contrast to this the spirals with stellar masses as great as the most massive ellipticals, to judge by the luminosities at $2\mu$, are not noticeably correlated with this plane. The same is true of the most luminous galaxies at $60\mu$, and the much more abundant $L\sim L_\ast$ galaxies that are useful mass tracers.
A possibly related anomaly is found in measurements of the bulk velocities of clusters of galaxies (Lauer and Postman 1994; Migkas et al. 2021). Both are based on cluster properties: first-ranked galaxy luminosity or cluster plasma scaling relations. If the anomaly from cluster distance measures persists, and the mean motion of clusters of galaxies relative to the CMB measured by the kSZ effect continues to show a small bulk velocity, then we will be forced to the conclusion that scaling relations of cluster properties are not universal. This is not as extreme as it might at first seem, for recall the anomalous distribution of the nearer clusters, in the region 170~Mpc across.
Yet another possibly related anomaly is the dipole anisotropy in the angular distributions of presumably distant and on average uniformly distributed radio galaxies and quasars. All recent analyses agree that the dipole is about in the direction expected from the kinematic effect of our motion relative to the CMB. Though not all recent analyses agree on the amplitude, the weight of the evidence is that the radio galaxy and quasar dipole amplitudes are anomalously large.
We are led to the thought that the properties of large groups and clusters of galaxies, radio galaxies, and quasars defined by colors, were enabled by something that has had a subdominant correlation with the mass distribution everywhere except in rare situations such as the neighborhood of the extended Local Supercluster. This would be a departure from the Gaussian adiabatic initial conditions of the standard cosmology. Thoughts turn to cosmic strings, or primeval isocurvature fluctuations, or black holes left from some earlier epoch. Or, as I have remarked elsewhere in this essay, something completely different.
Also to be considered is the indication that positions of physically related quasars are spread over greater lengths than could have grown out of the Gaussian near scale-invariant adiabatic initial conditions assumed in the standard $\Lambda$CDM cosmology (e.g., Clowes, et al. 2014). If these Large Quasar Groups are physically real associations then, under the assumptions of the standard theory, they are associations among objects that have never been causally connected. This is a familiar situation, of course. The Gaussian initial conditions of the standard cosmology also are acausal in the standard model (Sec.~\ref{sec:theacausaluiverse}).
Some authors conclude that the anomalously large dipole anisotropy of distant quasars and radio galaxies, and the Large Quasar Groups, if physically real, violate the cosmological principle (e.g., Secrest, von Hausegger, Rameez, et al. 2022). This depends on the definition of this principle, of course. It does not violate the definition explained in Section~(\ref{sec:definition}), but it likely violates the assumption of Gaussian near scale-invariant and adiabatic initial conditions that has served so well for many other cosmological tests.
\section{The Local Void}\label{sec:localvoid}
Figure~\ref{fig:LocalVol} shows the distribution of galaxies closer than $D=9$~Mpc, plotted in the supergalactic coordinates discussed in Section~(\ref{sec:LSC}). (The data are from Karachentsev, Karachentseva, Huchtmeier, and Makarov, 2004, updated in the NASA HEASARC Updated Nearby Galaxy Catalog. A distance cutoff at $D=8$~Mpc eliminates the interesting dwarf galaxy at the top of the figure; a cutoff at 10~Mpc adds several dwarfs in the low density area to the upper right that are not very close to the two interesting dwarfs that are well within the Local Void.)
The Local Supercluster is the concentration of galaxies running through the center of the figure. The distributions of radio galaxies and clusters of galaxies relative to the plane of the Local Supercluster, at ten times the distance sampled in Figure~\ref{fig:LocalVol}, are illustrated in Figures~\ref{fig:LSCf} and~\ref{fig:morphologies}.
The open red circles in Figure~\ref{fig:LocalVol} show positions of the 14 most luminous galaxies, absolute magnitudes $M_B<-20$. Galaxies that are more luminous than any but the brightest in this region are exceedingly rare. The 65 smaller filled red circles mark positions of galaxies with absolute magnitudes in the range $-20 < M_B < -17$, which spans a factor of 16 in luminosity. The positions of the 718 less luminous of the galaxies with useful distance estimates are marked as the still smaller filled black circles. It is expected that many more dwarfs will be added to this sample.
The upper left region in Figure~\ref{fig:LocalVol} is part of the Local Void, a strikingly empty region. The part of the Local Void shown here occupies about a quarter of the volume within 9~Mpc, yet it contains only two of the known $\sim 800$ galaxies. This amounts to a space density of galaxies in the nearly empty region at about one percent of the mean within the full $R<9$~Mpc volume. A common estimate from numerical simulations of structure formation in the $\Lambda$CDM theory is that in low density regions the mean mass density bottoms out at roughly 10\% of the cosmic mean. A recent example is presented in Cautun, Cai, and Frenk (2016). Peebles (2001) argued that this low density of detected galaxies seems distinctly odd. Tikhonov and Klypin (2009) concluded from their numerical simulations that ``The emptiness of voids [is] yet another overabundance problem for the cold dark matter model.'' But Tinker and Conroy (2009) pointed out that the low density of galaxies in the Local Void is consistent with the predicted mass distribution if the most numerous lowest mass dark matter halos contain very few luminous stars. The Tinker and Conroy application of this idea, using the halo occupation distribution model applied to high resolution pure dark matter simulations, produces acceptably empty voids. This is progress, but it is a prescription, not a prediction. It might be tested by other information.
Let us begin with the two known dwarf galaxies in the nearest part of the Local Void. The dwarf galaxy at the top of the figure, ZOA~J1952+1428, was discovered in a blind survey for HI emission by the Arecibo Zone of Avoidance Survey (McIntyre, Minchin, Momjian, et al. 2011). The other well isolated dwarf galaxy lower down and to the left in Figure~\ref{fig:LocalVol} is KK~246, also known as ESO 461-036.
Karachentsev, Dolphin, Tully, Sharina, et al. (2006) present HST images of KK~246 among other nearby galaxies. The optical image of KK~246 looks similar to other dwarfs (to my untrained eye) that are not so extremely isolated. Kreckel, Peebles, van Gorkom, van de Weygaert, and van der Hulst (2011) present maps of the extended atomic hydrogen envelope around KK~246. The long axis of the stellar distribution is tilted relative to this hydrogen envelope. This is curious because the tilt does not seem likely to be a long-lasting feature in a galaxy that looks so well isolated. Perhaps KK~246 was disturbed by a relatively recent merger with another dwarf galaxy, though that would seem odd given the isolation. Perhaps, as Tinker and Conroy (2009) argued, the Local Void contains numerous dark matter halos that have too few stars and too little atomic hydrogen to be observable. Maybe KK~246 was disturbed by one of them.
Rizzi, Tully, Shaya, et al. (2017) present an HST image of ZOA~J1952+1428; it too has the appearance of other low mass early-type galaxies. McIntyre et al. (2011) found that the mass of the atomic hydrogen envelope is $M_{\rm HI}=10^{7.0}M_\odot$. The optical luminosity, $L_B=10^{7.5}L_\odot$, suggests the mass in HI is less than the mass in stars. This is unusual; Bradford, Geha, and Blanton (2015) find that isolated low mass galaxies typically have considerably more mass in atomic hydrogen than in stars. One might wonder whether ZOA~J1952+1428 has been disturbed by an event that dissipated much of its HI, maybe supernovae, though that has not affected other dwarfs, or something external, though it appears to be isolated.
Another interesting object in the low density region toward the top of Figure~\ref{fig:LocalVol} is the spiral galaxy NGC~6946. It is marked by the open red circle at the largest positive value of SGZ. The ambient density is low there, but the image of this galaxy (to be seen on the web) looks much like the large spirals in the far more crowded region near the plane of the Local Supercluster that runs across the middle of the figure. A quantitative measure of that is the tight relation between the spiral galaxy circular velocity and baryonic mass (McGaugh 2020), which does not offer much room for sensitivity to environment. The atomic hydrogen surrounding NGC 6946 extends well beyond the starlight (Boomsma, Oosterloo, Fraternali, van der Hulst, and Sancisi, 2008), and it has the customary retinue of dwarf satellite galaxies (Karachentsev, Sharina, and Huchtmeier 2000).
The galaxy NGC~6946 is a counterexample to one of those arguments that seem to make intuitive sense. In the standard cosmology the primeval departures from homogeneity are a random Gaussian process. For simplicity reduce this to two waves, a long wavelength one that represents ambient conditions, and a short wavelength one that represents the seeds of galaxy formation. Suppose a seed that happened to be near a maximum density in the long wavelength component, the ambient density, could produce a large galaxy like NGC~6946. That same seed that happened to be near a minimum of the ambient density would have a smaller total density; it would end up as a dwarf. The picture looks reasonable. It works for the most massive galaxies, which are found in particularly dense regions such as clusters of galaxies. It makes sense put another way: a galaxy might be expected to grow larger where the ambient density is larger and better able to supply matter to the growing galaxy. But this intuition does not account for the presence of NGC~6946 in the low density region above the Local Supercluster in Figure~\ref{fig:LocalVol}. And it does not account for the general evidence of similar space distributions of $L\sim L_\ast$ galaxies and the far more numerous dwarf galaxies (Davis, Huchra, Latham, and Tonry 1982; Zehavi, Zheng, Weinberg, et al. 2011). This failure of intuition is an anomaly.
Conditions in the Local Void are different from our neighborhood, to judge by the scarcity of galaxies. What might be new and interesting there? Maybe dark matter halos with HI but no stars, or dark matter halos without baryons, or even HI clouds without dark matter?
Arrays of telescopes such as MeerKAT are sensitive to the 21-cm line from atomic hydrogen, and will be surveying the Local Void as part of scans of all the sky at all the radio frequencies accessible to the telescopes. But the Local Void is interesting enough to justify a Grand Project: a far deeper search for 21-cm sources confined to the part of the sky and the range of redshifts of the Local Void. This restricted use of an important facility would limit its overall output, but consider the compelling scientific interest in this unique opportunity to probe a void as deeply as possible.
The image of the dwarf KK~246 in the Local Void is easy to see on the digitized ESO sky survey plates (when I have been shown where to look), and I suppose there are not likely to be more dwarfs this luminous in the Local Void at this distance and not obscured by dust at low galactic latitude. The fainter Local Void dwarf ZOA~J1952+1428 was discovered as a 21-cm source, but the HST image in Rizzi et al. (2017) certainly looks like an unambiguous detection of the stars. This means an optical to infrared search for more of these faint dwarfs in the Local Void is technically feasible, if given expensive resources.
The Local Void is particularly interesting because it can be examined in particular detail. Why are galaxies of stars so scarce in this void? Why does the spiral NGC~6946 with its retinue of dwarfs show so little indication of having been affected by its isolation? Why do the two dwarfs in the part of the Local Void pictured in Figure~\ref{fig:LocalVol} seem unusual despite their apparent isolation? What else is in this void?
\section{Galaxies}\label{sec:galaxies}
A century of research on the nature of galaxies\footnote{A century ago \"Opik (1922) turned earlier thoughts that the spiral nebulae might be other galaxies of stars into a quantitative demonstration. \"Opik started with the assumption that the Andromeda Nebula M~31 ``consists of stellar matter similar to the matter of our Galaxy,'' with the same ratio of mass to luminosity as the estimate for the Milky Way. That combined with the angular size of M~31, its apparent magnitude, and the measured rotation velocity, from the Doppler shift, yields a useful estimate of the distance and mass of this galaxy. (Showing how this follows is a good exercise for the student.) \"Opik's distance is half the correct value, an impressive advance at the time, and clear evidence that M~31 is a massive galaxy of stars, comparable to the size of the Milky Way.} has yielded a rich phenomenology and the challenge of understanding how or whether the phenomenology agrees with the cosmology. The complex nature of galaxies limits this test, but there are regularities that are useful hints to how the galaxies formed, which in turn offer guidance to whether the properties of galaxies fit what is expected in the standard $\Lambda$CDM cosmology. An example from the late 1990s is the prediction by Neta Bahcall and colleagues that in the Einstein-de Sitter model the masses of rich clusters of galaxies grow more rapidly than observed (Bahcall, Fan, and Cen 1997). This was credible early evidence that the mass density is less than predicted by the Einstein-de Sitter model that was popular then. The evidence remains credible and an example of how galaxies serve to test cosmology. Deciding which galaxy regularities, or curiosities, seem worthy of closer attention must be a matter of taste, of course. I offer the following potentially informative lines of thought.
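The exercise for the student suggested in the footnote can be sketched as follows (my notation; \"Opik worked with apparent magnitudes rather than the flux $f$ used here). Assume circular motion, $v^2 = GM/R$, at angular radius $\theta$, so $R=\theta d$ at distance $d$, and write the mass as $M = \Upsilon L$ with the adopted mass-to-light ratio $\Upsilon$ and luminosity $L = 4\pi d^2 f$ at apparent flux $f$. Then
\beq
v^2 = \frac{4\pi G\,\Upsilon f\, d^2}{\theta d}, \qquad\hbox{so}\qquad d = \frac{v^2\,\theta}{4\pi G\,\Upsilon f},
\eeq
and every quantity on the right-hand side is measured or adopted. The Doppler shift, angular size, and apparent magnitude thus yield the distance, and with the distance the luminosity and mass.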
\subsection{Early and Late Type Family Resemblances} \label{sec:earlyandlate}
Galaxies, like snowflakes, are all different if examined closely enough. You can see this by looking at the images of nearby $L\sim L_\ast$ galaxies to be found on the web. Among them are distinctly odd objects, but they look odd because they do not resemble the great majority of nearby galaxies that are readily classified as either elliptical or spiral, or in common usage early or late.\footnote{A century ago the two morphological classes, spiral and elliptical, were well known, as seen in Wolf's (1908) sketches showing examples that include the elliptical NGC 4494 and the spiral M~101. Jeans, Hubble, and others thought a galaxy might evolve from one type to the other, hence the names early and late. This now seems unlikely, but use of the names remains common.} In broad terms, another way to put it is that, with occasional exceptions, late type $L\sim L_\ast$ galaxies are gas-rich and early types are gas-poor. Stars are forming frequently enough in a gas-rich galaxy that the short-lived massive luminous blue stars tilt the spectrum of the galaxy to the blue; it is said to be in the blue cloud in a scatter plot of galaxy color and luminosity. A gas-poor galaxy has relatively few massive young blue stars; it fits in the red sequence in this color-luminosity plot (e.g., Salim, Rich, Charlot, et al. 2007, Fig. 1). S0 galaxies are a complication, but they are not common nearby, and they do not figure much in this essay (which might be a serious omission, as noted in footnote~\ref{fn:S0s}).
The galaxies in each of the two distinct types have their own family resemblances,\footnote{My use of the term, family resemblance, follows the Wikipedia interpretation of Ludwig Wittgenstein's thinking: family members share resemblances, or features, though no member need have all features. I take the term to be equivalent to family traits.} as in the red sequence and blue cloud. Tully and Fisher (1977) pointed out that the luminosity of a spiral galaxy is correlated with the circular velocity of the stars and gas in the disk. McGaugh (2020) presents the extension to include the mass in atomic hydrogen, which gives a tight correlation between the observed baryonic mass of a spiral and the circulation speed in its disk. This is a family resemblance, a characteristic of spiral galaxies. The analog for the early-type family began as the Faber and Jackson (1976) correlation between the luminosity and velocity dispersion of the stars in an elliptical galaxy. It was sharpened to the fundamental plane relating the elliptical galaxy luminosity, velocity dispersion, and radius (Dressler, Lynden-Bell, Burstein, et al. 1987; Djorgovski and Davis 1987). Bernardi, Nichol, Sheth, Miller, and Brinkmann (2006) show an example of this family trait, or regularity, and demonstrate that the regularity is not sensitive to ambient density.
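The tightness of the baryonic relation can be illustrated with a toy numerical sketch. The baryonic Tully--Fisher relation is close to a power law $M_b \simeq A v^4$; the round-number normalization $A \approx 50\,M_\odot\,{\rm km^{-4}\,s^{4}}$ below is my assumption for illustration, close to published calibrations but not a value quoted in the text.

```python
# Sketch of the baryonic Tully-Fisher relation, M_b ~ A * v^4.  The
# normalization A ~ 50 Msun km^-4 s^4 is an assumed round number chosen
# for illustration, close to published calibrations but not quoted here.
A = 50.0  # Msun km^-4 s^4 (assumption)

def baryonic_mass(v_kms):
    """Baryonic mass in solar masses for disk circular speed v in km/s."""
    return A * v_kms**4

# A spiral with v_c ~ 200 km/s carries ~8e10 Msun of baryons; the steep
# fourth power means the circular speed pins down the mass quite tightly.
for v in (100.0, 200.0, 300.0):
    print(f"v = {v:5.0f} km/s  ->  M_b ~ {baryonic_mass(v):.1e} Msun")
```

The steepness of the power law is the point: a galaxy's disk circular speed fixes its baryonic mass to within the small observed scatter, whatever the environment.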
A notable family trait among ellipticals is the correlation of the spectrum of the galaxy with its stellar velocity dispersion: the greater the velocity dispersion the redder the mean spectrum (Zhu, Blanton, Moustakas 2010). But Zhu et al. show that at given velocity dispersion the mean spectra are very similar for ellipticals in more crowded and less crowded environments (apart from more prominent H-$\alpha$ emission in field ellipticals). If ellipticals grew by dry mergers of star clusters that had a range of values of velocity dispersions, and hence a range of different spectra, then one might have predicted a sensitivity of the assembled elliptical to the present local situation, which might be expected to be correlated with the degree of merging. But the effect on the spectra is difficult to see.
The largest galaxies, with luminosities $L\sim 10L_\ast$, prefer dense regions. But the properties of the early and late-type galaxies with $L\sim L_\ast$, the ones that contribute most of the cosmic mean luminosity density, are insensitive to environment. This is notable evidence.
The early family type prefers denser regions. Early and late types have different life histories. Ellipticals have larger abundances of the alpha-process elements---carbon, oxygen, and so on---that are produced in early generations of massive stars, and lower abundances of the iron group elements that are more slowly produced in explosions of type~I supernovae. The abundance pattern in the spiral family brings to mind slower build-up of the elements, which agrees with the different distributions of stellar ages in the two families.
Figure~\ref{fig:BtoT} shows measured values of bulge to total luminosity $B/T$ for 32 galaxies that are within 10 Mpc distance and have luminosities $L_K > 10^{10}L_\odot$ (from Kormendy, Drory, Bender, and Cornell 2010; and Fisher and Drory 2011). The sample is small, but beautiful images of the galaxies mentioned in the next paragraph are to be seen and admired on the web.
The three ellipticals --- Centaurus~A, Maffei~1, and M~105 --- are in the modest peak at the right-hand side of the figure. The stars in these ellipticals are supported by near isotropic motions; we may say these stars are a hot component. The stars in the disk of a spiral galaxy are supported by rotation with a relatively small scatter around the mean: a cool component. The Sombrero Galaxy NGC~4594 at the center of the figure has $B/T=0.5$. It looks like a large spiral centered on an elliptical of similar size. Other disk galaxies further to the left in the figure, including the spirals M~31 and M~81, have a classical bulge, a hot component that rises above the disk. In these galaxies the bulge is more compact than in the Sombrero Galaxy. The pure disk spirals near the peak at the left-hand side of Figure~\ref{fig:BtoT} do not have an appreciable classical bulge, and their images look strikingly flat. Examples are M~101, NGC~253, and the edge-on galaxy NGC~4945. These pure disk spirals might have a pseudobulge, an unusually large surface brightness in the disk near the center. Authorities warn that observations of more distant galaxies at poorer spatial resolution may mistake a pseudobulge for a classical bulge; deciding which it is can be difficult. Some pure disk galaxies have a bar of stars that runs across the center of the galaxy; NGC~1300 is a pronounced example (at greater distance than the other galaxies mentioned here).
There are exceptions to the two families. I mentioned the Sombrero Galaxy. The S0 galaxies have near featureless disk-like distributions of stars with bulges,
giving the impression of spirals that have lost the spiral arms but kept remnants of the disk stars and the dust. Examples of nearby S0s are NGC~404, about 3~Mpc away, and NGC~2784, at about 10~Mpc. The S0 NGC~1460, at about 20 Mpc distance, is an elegant example of a barred galaxy without the spiral arms. These are rare exceptions among the population of galaxies outside clusters of galaxies, though common in clusters. There are irregular galaxies; NGC~4490 looks like it is merging or falling apart, and the NASA/IPAC Extragalactic Database seems to be uncertain about the classification of the Circinus Galaxy. These exceptions are real but not common among nearby large galaxies. If the galaxies closer than 10~Mpc are a fair sample of the situation outside clusters of galaxies then they present us with clear and persuasive evidence that galaxies exhibit a distinct bimodality in their family traits.
It is said that elliptical galaxies formed by dry mergers, spirals by wet. Maybe an example is the situation in the group that contains the radio galaxy Centaurus~A. The two largest members are the elliptical NGC~5128, which is the radio source, and the spiral M~83. Figure 1 of Karachentsev, Sharina, Dolphin, et al. (2002) shows that most of the smaller galaxies around the late type M~83 also are late types, and most of the smaller galaxies around the early type Centaurus~A are early types. This agrees with the thought that early type galaxies grew by mergers of dry subhalos while late types grew by wet mergers. It is a description, of course, not an explanation.
People have been wondering about the origin of the early-late bimodality, and more broadly the Hubble sequence of galaxies, for the last century. Modern numerical simulations based on the $\Lambda$CDM cosmology capture aspects of the early and late morphologies (e.g., Vogelsberger, Genel, Springel, et al. 2014). Nelson, Pillepich, Springel, et al. (2018) show in their Figures~1 and~3 distributions of model galaxy color and stellar mass from their simulations of the formation of the central galaxies in dark matter halos. The distributions are quite similar to the Kauffmann, Heckman, White, et al. (2003) results from their analyses of the SDSS observations, which is encouraging. And their models at stellar masses $\sim 10^{10}M_\odot$ show bimodal morphologies. The empirical situation is richer, of course. There are comparable numbers of the most massive galaxies with elliptical and spiral morphologies in the Huchra et al. 2012 catalog (as in Fig.~\ref{fig:morphologies} in Sec.~\ref{sec:LSC}, and in Ogle, Lanz, Appleton, Helou, and Mazzarella 2019), and there are clear examples of spirals and ellipticals at luminosities $L\lap L_\ast$. Understanding the distinct nature of galaxy bimodality remains an interesting challenge.
\subsection{What is the Separatrix for Bistable Galaxy Formation?}\label{separatrix}
Current thinking, which is well motivated by the success of the standard cosmology, is that galaxies grew by gravity out of tiny primeval departures from an exactly homogeneous mass distribution, a stationary random Gaussian adiabatic process. The baryonic and dark matter gathered into mass concentrations, or halos, which grew more massive by merging with other halos and accretion of diffuse matter. Baryons settled, stars formed, and a protogalaxy grew into one or the other of the distinct galaxy families discussed in Section~\ref{sec:earlyandlate}.
Sidney van den Bergh's (1976) thinking about galaxy morphologies a half-century ago was that
\begin{quotation}\noindent
Canonical views on galaxy evolution suggest that the present morphology of galaxies is predestined by the genetic heritage provided by initial mass and angular momentum. The results discussed above suggest that the evolution of galaxies is also substantially affected by environmental factors.
\end{quotation}
Both thoughts remain empirically well supported. We can add that, if galaxies grew by gravity out of small primeval Gaussian departures from homogeneity, then galaxy formation had to have been a bistable process. The point can be made a little more explicit by recalling the idea of bistable evolution and its separatrix in classical mechanics.
Suppose the state of a system is completely described by the values of $N$ components of particle positions and their $N$ canonical momenta. Let these $2N$ parameters be the coordinates in a $2N$-dimensional phase space. The initial condition of the system is represented by its position in this space at a chosen starting time. Imagine an ensemble of initial conditions spread across phase space at this starting time. The equation of motion determines the evolution of the system, its path through phase space, from each initial position. In a bistable situation paths in phase space from the distribution of initial conditions arrive at one or the other of two (or more) basins of attraction. The separatrix is the boundary that separates initial positions in phase space that end up in one of the basins of attraction from the initial positions that end up in the other(s). The orbits of stars may have separatrices (e.g., Yavetz, Johnston, Pearson, Price-Whelan, and Weinberg 2021). The evolution of protogalaxies from their initial conditions is much more complicated; we must take account of dissipation, for example, and consider the complexities of stellar formation and its effects on the evolution of the galaxy. But the example from classical mechanics illustrates the concept of evolution of protogalaxies from initial conditions without manifest bimodality to a bimodal final state. It is what seems to have happened.
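The idea of basins of attraction and a separatrix can be made concrete with a toy bistable system of my own construction, not anything in the galaxy formation problem: a damped particle in the double-well potential $V(x) = (x^2-1)^2/4$. Every initial position in phase space flows to one of the two minima at $x=\pm 1$, and the separatrix is the boundary between the two sets of initial conditions.

```python
# Toy bistable system: damped motion in the double-well potential
# V(x) = (x^2 - 1)^2 / 4, so the force is -dV/dx = -x (x^2 - 1).
# Every orbit settles into one of two basins of attraction, x = +1 or
# x = -1; the separatrix divides the plane of initial conditions (x0, v0).
def basin(x0, v0, gamma=0.5, dt=1e-3, steps=100_000):
    """Return +1 or -1 according to which minimum attracts (x0, v0)."""
    x, v = x0, v0
    for _ in range(steps):
        a = -gamma * v - x * (x * x - 1.0)  # damping + double-well force
        v += a * dt                          # semi-implicit Euler step
        x += v * dt
    return +1 if x > 0.0 else -1

# Nearby initial conditions can end in different basins:
print(basin(0.5, 0.0))    # at rest right of center: settles at x = +1
print(basin(-0.5, 0.0))   # at rest left of center: settles at x = -1
print(basin(-0.5, 1.5))   # enough rightward momentum to cross the barrier: +1
```

The third case shows the point of the analogy: position alone does not decide the outcome; the full initial condition in phase space does, and the separatrix is the dividing surface.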
So in this way of thinking, what is the separatrix in galaxy formation that determines the evolution of a protogalaxy to a spiral or elliptical morphology? It cannot simply be the mass. There are spirals among the most luminous of galaxies, at $L \sim 10L_\ast$. (The distributions of the most massive spiral and elliptical galaxies relative to the plane of the Local Supercluster are shown in Fig.~\ref{fig:morphologies}.) At least some of these supermassive late types have the familiar two arms elegantly spiraling out from the center. An early example is UGC 2885 (Rubin, Ford, and Thonnard 1980); Ogle et al. (2019) catalogue others. At lower stellar masses there are more spirals than ellipticals, but both types are observed. Van den Bergh (1976) had good reason to mention mass as part of the genetic heritage, but the story must be more complicated.
The disk of stars in a spiral galaxy is supported largely by rotation, while the stars in an elliptical are supported by a closer to isotropic distribution of orbits. Thus van den Bergh had good reason to consider that the separatrix is related to angular momentum. A dimensionless measure of the angular momentum $L$ of a galaxy is the combination $\Lambda = L\,E^{1/2}G^{-1}M^{-5/2}$, where $E$ is the magnitude of the binding energy and $M$ is the mass (Peebles 1971). In analytic estimates and numerical simulations the distribution of $\Lambda$ is not bimodal (e.g., Efstathiou and Jones 1979). If, despite this, $\Lambda$ is the separatrix, the division between early and late types would require a sharp sensitivity to the values of $\Lambda$ and mass. A more likely picture along this line is that morphology is determined by ``the {\it coherent alignment} of the angular momentum of baryons that accrete over time to form a galaxy'' (Sales, Navarro, Theuns, et al. 2012). The investigation of galaxy morphology and halo spin in numerical simulations by Rodriguez-Gomez, Genel, Fall, et al. (2022) reveals a systematic difference of angular momentum of models classified as spirals and as ellipticals plus S0s. It is not yet bimodality in the spin-stellar mass plane (in their Fig.~1), but perhaps a step in this direction.
Environment matters. The giant $L\sim 10L_\ast$ galaxies in rich clusters likely formed by mergers of cluster members, meaning environment likely is the separatrix between these giants and ordinary $L\sim L_\ast$ galaxies. Maybe another example follows from the larger ratio of early to late types in denser regions. For example, one might imagine that all protogalaxies began evolving toward the spiral morphology, but that violent mergers turned some proto-spirals into proto-early types. It would have happened more frequently in more crowded environments. But recall the separate family resemblances of spirals and ellipticals, which do not seem to be sensitive to environment. And consider the curious separation of early and late types in the Centaurus group (Sec.~\ref{sec:earlyandlate}).
The evidence is that the formation of galaxy morphologies was determined more by nature than nurture, and it is an interesting challenge to identify the character of the separatrix. It might be some combination of mass, angular momentum, and environment, or maybe something completely different. The issue might be resolved by what is learned from numerical simulations of galaxy formation, or maybe by semi-analytic considerations of what appears to be happening. It is a fascinating opportunity for research, provided you bear in mind that people have been trying to solve the puzzle of the early-late bimodality for a long time. It means the resolution must be subtle, but surely it exists.
\subsection{Bulges and Disks of Spiral Galaxies}\label{sec:spirals}
Numerical simulations of galaxy formation produce spiral galaxies that are impressively good approximations to what is observed, but there are three (or more; I invite suggestions) issues that are persistent enough to merit attention. One is that the central concentrations of starlight in simulated galaxies are overly luminous. Another is that the velocity dispersions of the stars moving in the planes of the disks of model spiral galaxies are unrealistically large. And a third is that we do not know the separatrix responsible for the distinct galaxy bimodality. I review these issues in Peebles (2020b); they are outlined and considered further here.
Early attempts to simulate galaxy formation encountered the problem that a cloud of baryonic matter --- gas and plasma --- with the mass and radius typical of an $L\sim L_\ast$ galaxy readily dissipates energy and collapses almost freely. Observed spiral galaxies must have avoided this overcooling problem, and it must be avoided in models for otherwise they would have overly prominent classical bulges or stellar halos. The problem has been tamed by adjustments of the prescriptions for star formation and models for the effects of the stars on the distributions of baryonic and dark matter, but the evidence is that the problem in simulations of galaxy formation persists.
Figure~\ref{fig:BtoT} shows the distribution of the ratio of bulge to total luminosities of the nearby large galaxies. For some galaxies we can add to the hot component in the bulge the hot stellar halo that spreads out to greater distances away from the disk. Estimates of the median value of the luminosity fraction of the stars in the two hot components, bulge plus halo, in spiral galaxies are (from Peebles 2020b)
\beq
\hbox{simulations: }{B+H\over T}\sim 0.45,\quad \hbox{observations: }{B+H\over T}\sim 0.18. \label{eq:BHoverT}
\eeq
The rest of the total luminosity, $T$, is assigned to the disk. The median of the observed fraction is from examinations of ten nearby galaxies by Merritt, van Dokkum, Abraham, and Zhang (2016) and Harmsen, Monachesi, Bell, et al. (2017). The hot fraction in simulations is from reports by Grand, G{\'o}mez, Marinacci, et al. (2017) and Garrison-Kimmel, Hopkins, Wetzel, et al. (2018) of the results of two large research programs. The greater hot fraction in simulations agrees with my visual impression of images of real and model spirals. You are invited to check your impression.
A second anomaly is the large dispersion of model disk stars in the direction of the plane of a simulated spiral galaxy. An illustration uses a simplified model for a spiral galaxy in which the stars move in the plane of a disk with a flat rotation curve, constant circular speed $v_c$. This is a reasonable approximation to many observed and model spiral galaxies. I refer to Peebles (2020b) for the details of the results of computation of stellar orbits. The orbit of a model star is characterized by a parameter, $\epsilon$, that is a measure of the orbital angular momentum, with $\epsilon = 1$ for a star moving in a circular orbit and $\epsilon = -1$ for a star in a circular orbit but moving in the opposite direction from the mean motion of the disk stars.
Numerical solutions give the rms radial velocities at two choices of this circularity parameter:
\begin{align}
\langle (dr/dt)^2\rangle^{1/2} &= 0.32 v_c\hbox{ for } \epsilon = 0.9; \nonumber\\
&= 0.45 v_c \hbox{ for } \epsilon = 0.8. \label{eq:radialveldispn}
\end{align}
I have not found discussions of disk star velocity dispersions in model disk galaxies. My estimate is that in recent suites of numerical simulations (Grand et al. 2017; Garrison-Kimmel et al. 2018) the most promising of the distributions in $\epsilon$ for a spiral galaxy have at least a quarter of the stars at $\epsilon<0.9$, which means that the radial velocity dispersions are greater than about a third of the circular velocity $v_c$ in a quarter of the stars. A majority of the stars in a promising model have $\epsilon<0.8$, with radial velocity dispersion greater than about half the circular velocity. Observations of the distribution and motions of the stars in our neighborhood of the Milky Way Galaxy (Anguiano, Majewski, Hayes, et al. 2020) indicate the radial velocity dispersion in the thin plus thick disk stars is about $\sigma_r=43$~km~s$^{-1}$ with $v_c\sim 240$~km~s$^{-1}$, a ratio of about $0.18$. The models look much hotter.
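The dispersions quoted above can be reproduced in outline by a direct orbit integration (my reconstruction under simple assumptions, not necessarily the computation in Peebles 2020b): place a star in the potential of a flat rotation curve, $\Phi(r) = v_c^2\ln r$, give it the energy of a circular orbit and a fraction $\epsilon$ of that orbit's angular momentum, and time-average $(dr/dt)^2$.

```python
# Time-averaged radial velocity of a star in the potential of a flat
# rotation curve, Phi(r) = v_c^2 ln r, with v_c = 1.  The star has the
# energy of the circular orbit at r = 1 and angular momentum L = eps,
# a simple stand-in for the circularity parameter epsilon in the text.
def rms_radial_velocity(eps, dt=1e-3, t_max=500.0):
    # Energy conservation gives vr^2 = 1 - 2 ln r - eps^2 / r^2, so the
    # star can be started at r = 1 moving radially outward:
    r = 1.0
    vr = (1.0 - eps * eps) ** 0.5
    n = int(t_max / dt)
    sum_vr2 = 0.0
    for _ in range(n):
        a = -1.0 / r + eps * eps / r**3  # radial acceleration: -dPhi/dr + L^2/r^3
        vr += a * dt                      # semi-implicit Euler step
        r += vr * dt
        sum_vr2 += vr * vr
    return (sum_vr2 / n) ** 0.5

for eps in (0.9, 0.8):
    print(f"eps = {eps}:  <(dr/dt)^2>^(1/2) ~ {rms_radial_velocity(eps):.2f} v_c")
```

Run as written, the two cases should come out near a third and near half of $v_c$ respectively, in line with the dispersions displayed above; the exact values depend on the averaging conventions assumed here.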
The evidence reviewed here is that the model galaxies that emerge from modern simulations of cosmic evolution have unacceptably large populations of stars with large velocity dispersions present in stellar halos, classical bulges, and disks. This looks quite different from the impression of cool populations of stars and gas in the common nearby $L\sim L_\ast$ pure disk galaxies (with the small fraction of the stellar mass in hot stars in the halo). I do not know whether this anomaly has resisted persistent attempts at remediation, or perhaps it has been put aside pending explorations of how to deal with the many other complexities in modeling galaxy formation. But since the art of simulating galaxy formation has a large literature, and the problem with hot star populations in simulated spiral galaxies remains, it ranks as a serious anomaly. Perhaps the situation will be resolved by further advances in numerical simulations based on the $\Lambda$CDM theory. Or again, maybe something is missing.
\subsection{Merger Trees and the Cosmic Web}
The phrases, ``merger tree'' and variants, and ``cosmic web,'' often figure in discussions of how the galaxies formed. Aspects of both are worth considering here.
Merging certainly happens; a clear example of a violent merger is the nearby Antennae Galaxies. Ostriker (1980) introduced considerations of what the remnant of the merger of two $L\sim L_\ast$ galaxies, as in this example, would look like when it had relaxed to a close to steady state. Let us only note that the remnant would have a luminous stellar bulge and halo made of pre-existing stars, which certainly is not seen in the nearby pure disk galaxies. And if the remnant of merging spirals looked like an elliptical it would have an unusual mix of chemical elements. If the local galaxies are close to a fair sample of the situation outside clusters then relaxed remnants of mergers of $L\sim L_\ast$ galaxies are not common, because pure disk galaxies are common, and I expect we would have heard of it if ellipticals with odd chemical abundances were common.
A more modest example of merging is the pure disk edge-on galaxy NGC 5907 with its stellar stream that we may expect eventually will add to the stellar halo of this galaxy (e.g., van Dokkum, Lokhorst, Danieli, et al. 2020). The ample evidence of tails and streams of stars around ellipticals and spirals is suggestive of close passages and mergers (e.g., van Dokkum 2005). And the stellar halo of the Milky Way is growing by tidal disruptions of dwarf galaxies (e.g., Belokurov, Zucker, Evans, et al. 2006).
The concept of galaxy formation as a hierarchical merging process grew out of several considerations. As just noted, galaxies do merge. The distribution of galaxies on scales from $\sim 0.1$~Mpc to $\sim 10$~Mpc is well approximated as a scale-invariant clustering hierarchy. It seems natural that the hierarchy also formed at smaller scales and was erased by merging to form galaxies (Davis, Groth, and Peebles 1977). And hierarchical growth of clustering is seen in numerical simulations of cosmic structure formation. But although the merger tree concept is well motivated by theory and observation it is in the spirit of empiricism to ask whether, absent simulations but given our knowledge of the phenomenology, people would have been led to the hierarchical assembly picture.
Absent the guidance of merger trees people might have settled on the Eggen, Lynden-Bell, and Sandage (1962) account of the formation of the stellar halo and disk of the Milky Way spiral galaxy by a closer to monolithic collapse. The theory would include occasional mergers of galaxies, as observed. It would include formation of substructure in protogalaxies to account for satellites and the spreading of baryons across the disks of the Milky Way and other spiral galaxies, but substructure need not be to the extent of formation of identifiable halos that merge to form identifiable halos in a merger tree. The Eggen et al. picture is more in line with the observation that the distributions of heavy elements in halo stars do not resemble the distributions in stars in dwarf satellites (e.g., Tolstoy, Hill, and Tosi 2009, Fig. 10). Our stellar halo instead would have formed in the closer to monolithic collapse Eggen et al. envisioned, and would have been salted by stars from accreted dwarfs that produced the ``field of streams'' (Belokurov et al. 2006).
How are we to interpret the description of the formation of early type galaxies by dry mergers and the late type by wet mergers, those rich in diffuse hydrogen? It calls to mind formation of morphology by nature rather than nurture: protogalaxies that have dry or wet natures. That could happen in a merger tree, but more simply in a monolithic collapse.
Cowie, Songaila, Hu, and Cohen (1996) pointed out that lower mass galaxies on average formed the bulk of their stars later. This downsizing effect does not naturally follow from a hierarchical merger tree in which less massive halos formed earlier. The discrepancy need not be serious because galaxy formation is complicated, but it does call to mind a picture similar to Eggen et al. (1962).
Madau and Dickinson (2014) concluded that ``galaxies formed the bulk (75\%) of their stellar mass at $z < 2$.'' The growth of the stellar mass of a pure disk galaxy in the manner described by Madau and Dickinson cannot have been by the merging of subhalos, or galaxies, that contained many stars, because the stars would have ended up in stellar bulges or halos, which are not prominent in these galaxies. The stars in pure disk galaxies had to have formed out of gas or plasma that had settled to the disk, as in cool streams (e.g. Kretschmer, Dekel, and Teyssier, 2022). This has the flavor of the Eggen, Lynden-Bell, and Sandage picture.
The phrase, ``cosmic web,'' also often figures in discussions of galaxy formation. Bond, Kofman, and Pogosyan (1996) introduced the concept as an evocative description of the distribution of dark matter in numerical simulations: large and small concentrations of dark matter are connected by filaments of dark matter in a pattern that calls to mind a web. It also resembles the observed galaxy distribution. Bond et al. pointed out that the filaments in simulations might be observable by detection of atomic hydrogen along the filaments. One might imagine that the filaments also are threaded by a magnetic field, and maybe even cosmic strings.
Detection of HI streams would be particularly interesting because the filaments are expected to be present, connecting collapsing concentrations of dark matter, if the dark matter is adequately well approximated as a continuous fluid with no pressure or viscosity and the initial conditions are continuous. Dark matter consisting of black holes would be expected to form streams if the masses were small enough, and separate dark matter halos if the masses were large enough.
Instrument arrays capable of detecting 21-cm radiation from the hydrogen in primeval filaments of matter connecting dark matter halos may be becoming available (e.g., Tudorache, Jarvis, Heywood, et al., 2022; Greene, Bezanson, Ouchi, et al., 2022). It will be interesting to see the nature of the H{\small I} distribution between mass concentrations, and the constraint that places on the black hole mass in a black hole model of the dark matter.
\subsection{Massive Black Holes}\label{massiveblackholes}
The large luminosities and compact natures of quasars led to the thought that these objects are powered by the energy released by the collapse of matter onto black holes with masses of perhaps a million solar masses (Salpeter 1964; Lynden-Bell 1969). The clear evidence now is that large galaxies contain central compact objects with masses in the range of $10^5$ to $10^{10} M_\odot$. The objects in the centers of the elliptical M~87 and our Milky Way spiral galaxy certainly are compact (Event Horizon Telescope Collaboration et al., 2019; 2022). It makes a good case that these two objects, and the ones in other galaxies, are supermassive black holes of the kind predicted by Einstein's general theory of relativity.
It was natural to suppose that matter would settle to the center of a galaxy and perhaps accumulate to the point of relativistic gravitational collapse. But that picture is complicated by the extreme difference between the density characteristic of an $L\sim L_\ast$ galaxy, perhaps $\sim 10^{-24}$ g~cm$^{-3}$, and the density characteristic of a $10^{9} M_\odot$ black hole, $\sim c^6G^{-3}M^{-2}$, roughly $1$~g~cm$^{-3}$. Feeding the growth of a supermassive black hole by dissipative settling of diffuse baryonic matter certainly is conceivable, but one might instead expect that the settling would result in multiple fragmentation and the formation of star clusters, as in the formation of the first stars (e.g., Abel, Bryan, and Norman, 2002). Statistical relaxation of the star cluster would in time produce core collapse to a central black hole surrounded by a nuclear star cluster. But this is only one of the several current lines of thought that are reviewed by Greene, Strader, and Ho (2020), along with a discussion of how these black holes are detected. Issues that seem particularly relevant are discussed here.
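The density contrast quoted above can be checked with a short numerical estimate. A minimal sketch, assuming SI values for the constants: the dimensional form $\rho \sim c^6 G^{-3} M^{-2}$ follows from dividing the black hole mass by the volume of a sphere of Schwarzschild radius $r_s = 2GM/c^2$, up to a numerical prefactor.

```python
# Order-of-magnitude check of the black hole "density" scaling rho ~ c^6 G^-3 M^-2.
# Constants are rounded SI values; this is an illustrative estimate, not precision physics.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def bh_mean_density(M):
    """Mass divided by the volume of a sphere of Schwarzschild radius 2GM/c^2 (kg/m^3)."""
    r_s = 2 * G * M / c**2
    return M / (4 / 3 * math.pi * r_s**3)

M = 1e9 * M_sun                          # a 10^9 solar mass black hole
rho_dimensional = c**6 / (G**3 * M**2)   # pure dimensional estimate, no prefactor
print(rho_dimensional / 1e3)             # ~0.6 g/cm^3, i.e. roughly 1 g/cm^3
print(bh_mean_density(M) / 1e3)          # ~0.02 g/cm^3 once the 3/(32*pi) prefactor is kept
```

Either way, the characteristic density is some 24 orders of magnitude above the $\sim 10^{-24}$ g~cm$^{-3}$ quoted for an $L\sim L_\ast$ galaxy, which is the point of the comparison.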
A clue to the formation of these central black holes is the relation between the black hole mass and properties of the galaxy such as the bulge or spheroid mass or stellar velocity dispersion (e.g., Magorrian, Tremaine, Richstone, et al., 1998). We need more than one relation, because some of the pure disk $L\sim L_\ast$ galaxies that are common nearby have massive central black holes with at most modest classical bulges. The familiar example is the Milky Way Galaxy with its central bar, little starlight in a classical bulge, and clear evidence of a central black hole with mass $4\times 10^6 M_\odot$. Another is the galaxy NGC~4945 (Gaspar, D{\'\i}az, Mast, et al. 2022 and references therein). This galaxy is seen nearly edge-on, it looks wonderfully flat, and there is little indication of a concentration of starlight in a classical stellar bulge rising out of the disk. (You can see the image of this galaxy at \url{https://apod.nasa.gov/apod/ap220226.html}.) The evidence is that this galaxy contains an active galactic nucleus operating around a black hole with mass comparable to the one in the center of our Milky Way galaxy.
If evidence accumulates that every $L\sim L_\ast$ galaxy has a central supermassive black hole it will invite the thought that galaxies formed around black holes, whether supermassive or collections of less massive black holes that seeded their formation (e.g., Silk and Rees 1998; Carr and Silk 2018). The thought is encouraged by the observations of quasars at redshifts $z\sim 7$. These quasars presumably depended on the presence of massive black holes, at a cosmic time when the stellar masses of galaxies were much smaller than now.
In the Local Void (discussed in Sec.~\ref{sec:localvoid}) galaxy formation seems to have been suppressed. The thought that galaxies formed around black holes suggests that the Local Void contains black holes, seed or supermassive, that are centered on galaxies that are unusually small for their black hole masses. Maybe closer examinations of the space distributions and redshifts of the stars in the two void dwarfs at the top and to the left in Figure~\ref{fig:LocalVol} could determine whether they contain unusually massive black holes for such small galaxies. There are other void galaxies to examine, and there is a considerable area of sky and range of redshifts in the Local Void for deeper surveys for HI sources, star clusters, and maybe even gravitational lensing by primeval black holes without stars, any of which could be a valuable clue to the origin of massive black holes.
Seth, van den Bosch, Mieske, et al. (2014) present evidence of an ultra-compact dwarf galaxy with an exceptionally massive central compact object, maybe a black hole. It may be near the large elliptical galaxy M~60. Maybe this dwarf is the remnant of a galaxy whose growth around a particularly massive primeval massive black hole was interrupted by tidal stripping. It is easy to invent such scenarios, sometimes difficult to test them, but perhaps an essential part of the search for the explanation of these objects.
Are there normal-looking $L\sim L_\ast$ galaxies that do not have a central massive black hole? The relatively nearby and face-on spiral galaxy M~101 has lanes of dust that spiral in toward the center, ending at a star cluster with mass $\sim 4\times 10^6M_\odot$ (Kormendy et al 2010). It is not yet known whether there is a massive central black hole inside the star cluster. A detection, maybe from the shapes of integrated Doppler-broadened stellar lines, would be interesting, and a seriously tight upper bound on the mass of a central black hole would be even more interesting.
The normal-looking spiral galaxy M~33, the third largest galaxy in the Local Group, does not have a central black hole more massive than about $2\times 10^3M_\odot$ (Gebhardt, Lauer, Kormendy, et al., 2001; Merritt, Ferrarese, and Joseph, 2001). It is difficult to imagine how a merger of two galaxies could have driven both black holes out of the remnant while leaving the pure disk morphology of this galaxy. The more likely interpretation is that the formation of M~33 did not require a primeval black hole. It would mean that the massive black holes in other galaxies need not have served as seeds for galaxy formation, but instead grew together with the galaxies.
In the coevolution picture the accumulation of mass in a growing central black hole would be by dissipative settling and merging at rates that might be expected to differ from galaxy to galaxy. This could be compared to the formation of stellar bars that run across the centers of spiral galaxies: some dominate the shape of the spiral, some are less conspicuous, and others are not noticeable. So it might be with the formation of supermassive black holes. The correlation of mass with host galaxy properties argues against a broad scatter of massive black hole masses at given galaxy mass, but it will be helpful to see the scatter of central black hole masses as a function of galaxy properties in larger samples. It might serve as a test of the coevolution picture. And it is to be noted that the coevolution picture does not seem promising for M~33, because there is no evidence of a central massive black hole.
The LIGO detection of gravitational waves from the merging of black holes with masses $\sim 66$ and $85 M_\odot$ (the event GW190521 reported by Abbott, Abbott, Abraham, et al. 2020) was unexpected because the masses are intermediate between the black holes produced by the relativistic collapse of stars and the supermassive black holes in the centers of galaxies. Maybe they are in line with the idea that black holes in this intermediate mass range were seeds for the formation of supermassive black holes. There is a long span of logarithmic time from the end of inflation, or whatever saved us from the spacetime singularity of the standard cosmology, to the formation of the isotopes of hydrogen and helium. During this time cataclysmic events of some sort might have produced massive black holes or their seeds. The favorite thought is directed to the disturbances to the mass distribution as the universe expanded and cooled through first-order cosmic phase transitions. These transitions might have been violent enough to have produced seed black holes, maybe with a broad range of masses set by the variety of cosmic first-order transitions in standard particle physics (e.g., Cappelluti, Hasinger, and Natarajan, 2022). Another thought is that supermassive black holes or their precursors formed during cosmological inflation (e.g., Kallosh and Linde 2022). Yet another is that seed mass black holes formed by collapse of the first generation of gravitationally bound clouds of hydrogen and helium with mass $\sim 10^5M_\odot$ set by the baryon Jeans length (Silk and Rees 1998).
Considerable research on the theory and observations of supermassive black holes in the centers of galaxies has not yet produced convergence on a theory of the origin of these objects. Their presence remains an anomaly.
\subsection{Why the Characteristic Galaxy Luminosity?}\label{sec:Lstar}
The frequency distribution of optical luminosities of galaxies has a characteristic value, $L_\ast$. There are far more galaxies with luminosities less than $L_\ast$, but the $L\sim L_\ast$ galaxies produce most of the cosmic mean optical luminosity density. The largest known galaxies have luminosities $L\sim 10 L_\ast$. That factor of ten is a curiously abrupt cutoff compared to the broad range of luminosities of galaxies that are less luminous than $L_\ast$. What accounts for the value of $L_\ast$ and the cutoff at greater luminosities?
The abrupt cutoff was anticipated by Schechter's (1976) functional form for the galaxy luminosity function with its exponential cutoff. It grew out of a Press and Schechter (1974) argument that ``contains no ad hoc information about an initial spectrum of long-wavelength density perturbations.'' The counterargument in Peebles (1974) agrees with the more recent demonstrations that the formation of cosmic structure is sensitive to the form of the spectrum of primeval departures from homogeneity. The Press and Schechter argument nevertheless produced an analytic form for the luminosity function that remains broadly useful.
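The abruptness of the exponential cutoff in the Schechter form can be illustrated numerically. In the sketch below the faint-end slope $\alpha$ and normalization $\phi_\ast$ are placeholder values chosen for illustration, not parameters taken from the text.

```python
# Schechter (1976) luminosity function:
#   phi(L) dL = phi_star * (L/L_star)**alpha * exp(-L/L_star) d(L/L_star)
# alpha and phi_star here are illustrative placeholders.
import math

def schechter(L_over_Lstar, alpha=-1.25, phi_star=1.0):
    x = L_over_Lstar
    return phi_star * x**alpha * math.exp(-x)

# The exponential makes the bright end fall off abruptly: one decade above
# L_star the number density drops by roughly five orders of magnitude, while
# one decade below L_star it *rises*, matching the broad faint-end range.
bright = schechter(10.0) / schechter(1.0)   # ~7e-6
faint = schechter(0.1) / schechter(1.0)     # ~4e1
print(bright, faint)
```

This asymmetry, a power law toward faint luminosities against an exponential wall above $L_\ast$, is the feature whose physical origin the section asks about.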
In pure dark matter numerical simulations of the growth of cosmic structure in the standard cosmology the halo mass function is not as abruptly truncated at the high mass end as the galaxy luminosity function (e.g., Garrison, Eisenstein, Ferrer, et al., 2018, Fig. 7). That need not be an anomaly; we must bear in mind the complexities of how mass was apportioned to dark matter halos and the baryons to stars. But is the existence of the characteristic luminosity $L_\ast$ and the sharp upper cutoff in galaxy luminosities an accidental result of these complexities, or might both be more readily understandable in an improved cosmology?
\subsection{Why MOND is Successful but Unpopular}\label{MOND}
The rotation speed $v_c$ of the stars and gas in the outer parts of the disk of a spiral galaxy typically is close to independent of radius. The standard interpretation is that the spherically averaged mass density in the outer parts of the galaxy varies with the distance $r$ from the galaxy as $\rho\propto r^{-2}$, which translates to gravitational acceleration $g\propto r^{-1}\propto v_c^2/r$, satisfying the condition that the speed $v_c$ is independent of distance from the galaxy. This is the flat rotation curve observed in many spiral galaxies. The starlight density in the outer parts of a typical spiral galaxy falls off more rapidly than $r^{-2}$. The standard remedy is the postulate that the mass in the outer parts of a galaxy is the nonbaryonic dark matter of the $\Lambda$CDM theory, the dark matter halo.
Milgrom (1983) introduced an influential alternative: modified Newtonian gravity, or MOND. Instead of the hypothetical dark matter Milgrom proposed that the rotation curve is flat in the outer parts of a galaxy because at gravitational acceleration less than a characteristic value, $a_0$, Newton's law is modified to gravitational acceleration $g = \sqrt{GMa_0}/r$. In this limit, and assuming most of the mass is within the radius $r$, then the speed in a circular orbit is $v_c=(GMa_0)^{1/4}$. If the luminosity $L$ of the stars in the galaxy is proportional to the mass $M$ in baryons, then MOND predicts that the value of the circular velocity $v_c$ in the outer flat part of the galaxy rotation curve scales with the luminosity of the galaxy as the universal form $v_c\propto L^{1/4}$. This is close to the empirical Tully and Fisher (1977) relation. It is even closer to the power law relation $v\propto M^{1/4}$ observed when $M$ is the mass in interstellar atomic hydrogen added to the mass in stars (McGaugh 2020).
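The deep-MOND scaling can be made concrete with a few lines of arithmetic. The acceleration scale $a_0 \approx 1.2\times 10^{-10}$ m s$^{-2}$ is the commonly quoted Milgrom value; the galaxy mass below is an illustrative choice, not a fit from the text.

```python
# Deep-MOND circular speed: setting v**2/r equal to g = sqrt(G*M*a0)/r gives
#   v_c = (G*M*a0)**(1/4),
# independent of radius -- a flat rotation curve with no dark matter halo.
G = 6.674e-11       # m^3 kg^-1 s^-2
a0 = 1.2e-10        # Milgrom's acceleration scale, m/s^2
M_sun = 1.989e30    # kg

def v_mond(M):
    """Asymptotic circular speed in the deep-MOND regime, in m/s."""
    return (G * M * a0) ** 0.25

v1 = v_mond(1e11 * M_sun)   # ~200 km/s for a large spiral's baryonic mass
# Doubling the baryonic mass raises v_c by only 2**0.25 -- the v ~ M**(1/4)
# scaling behind the baryonic Tully-Fisher relation.
ratio = v_mond(2e11 * M_sun) / v1
print(v1 / 1e3, ratio)
```

That a plausible baryonic mass yields a realistic rotation speed with no free parameters beyond $a_0$ is the quantitative content of the Tully-Fisher success discussed next.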
MOND was proposed after the discovery of the Tully-Fisher relation, but it is reasonable to count the observed $v_c\propto M^{1/4}$ relation as a MOND prediction that passes a tight test. One may ask why this successful prediction receives so little community attention.
Let us note first that in the standard $\Lambda$CDM cosmology the observed $v_c\propto M^{1/4}$ relation does not challenge the theory. It is instead a property of galaxies, along with the other family traits. The challenge is to explore in this theory how these family resemblances among galaxies grew as the universe expanded. It will be a crisis for $\Lambda$CDM if an explanation for the observed traits cannot be found within the theory. We know far too little about how galaxies formed to hope for a judgement on this point any time soon.
An argument for MOND is the considerable literature on how the properties of galaxies, and groups and clusters of galaxies, can be understood in this picture. It is reviewed by Diaferio and Angus (2016) and Banik and Zhao (2022). If in an alternative world all our present phenomenology of cosmic structure were known but nothing was known about the evidence of a relativistic evolving universe, MOND likely would be a community favorite. But in our world there are two serious reasons for limited interest in MOND.
First, the well-tested $\Lambda$CDM theory with its cold dark matter offers a ready and promising framework for development of a theory of the formation of cosmic structure, galaxies and all. It has attracted the attention of active and productive research groups. I have argued for problems with the results, but they are details that have not discouraged the research groups and I am hoping might guide us to adjustments that improve $\Lambda$CDM.
Second, Milgrom's (1983) MOND does not offer a ready framework for development of a viable cosmology. The approach explored by Angus (2009) and Diaferio and Angus (2016) follows thoughts about alternative gravity physics by Bekenstein and Milgrom (1984) and Milgrom (2010). A starting postulate is that there is dark matter, in the form of thermal sterile neutrinos with rest mass 11~eV. The dark matter density parameter is close to that of the standard model. Gravity physics in this model is enough like general relativity at redshifts $z \gap 1000$, and the warm dark matter is enough like the cold dark matter of the standard cosmology, that the acoustic oscillation pattern imprinted on the CMB is close to the standard prediction and the measurements. This is an important result. At low redshift the neutrino hot dark matter would have largely escaped galaxy potential wells, and gravity physics would have become enough like Milgrom's original MOND that galaxy rotation curves fit measurements without dark matter in halos around galaxies and with the weaker gravity of MOND. This is viable, within the notions of the gravity physics in this picture, though contrived. It is not demonstrated in a theory that allows computations of predictions. Diaferio and Angus (2016) conclude that
\begin{quotation}
It remains to be seen whether [in the adopted gravity physics] gravitational instability in a universe filled with baryonic matter and one species of sterile neutrino with 11 eV mass can form the observed cosmic structure at the correct pace.
\end{quotation}
The neutrino dark matter might be trapped in growing clusters of galaxies, which could be a helpful feature for the application of MOND to clusters. But Diaferio and Angus caution that
\begin{quotation}
the ability to explain the cluster mass discrepancy does not directly imply that MOND combined with 11-eV sterile neutrinos can form clusters in a cosmological context.
\end{quotation}
To my mind it is exceedingly unlikely that an alternative gravity physics and cosmology without cold dark matter, or something that acts much like it, can fit the array of tests passed by the standard general theory of relativity applied to the $\Lambda$CDM cosmology. But good science seeks to replace intuition with worked predictions of adequately specified theories. As Diaferio and Angus conclude, it remains to be seen whether this can be done for a generalization of Milgrom's MOND.
\section{Summary Remarks}\label{SummaryRemarks}
To prevent misunderstandings I repeat the conclusion in Section~\ref{sec:tests}, that the empirical tests give excellent reason to expect that a more advanced physical cosmology will look much like the theoretical $\Lambda$CDM universe, because many well-checked tests show that the $\Lambda$CDM universe looks much like our universe. But the great progress in cosmology and the other physical sciences has left anomalies, some of which have been troubling for a long time.
A century ago Pauli and Jordan understood the vast difference between the vacuum mass density allowed by the relativistic theory and the density suggested by quantum physics (Sec.~\ref{sec:Lambda}). Applications of quantum physics have grown far broader, giving compelling evidence that this physics is a broadly useful approximation to reality, but I am not aware of a significant advance in resolving the quantum energy density problem, apart from the anthropic argument (in Sec.~\ref{sec:Anthropic}).
A century ago \"Opik (1922) turned earlier thoughts that the spiral nebulae might be other galaxies of stars into a quantitative demonstration. The progress from there to a promising physical basis for analyses of how the galaxies formed was slower than the development of quantum physics, but we have now a well-tested cosmology that might be expected to be an adequate basis for a secure understanding of these objects. The starting fundamental goals for a theory of the galaxies are to understand galaxy stellar masses, the spatial distributions of the stars in galaxies, and the stellar motions. Modern numerical simulations have made encouraging progress to this end, but there are anomalies. The evidence I have seen is that, despite careful attention to the many details required for the numerical simulations, disk star velocity dispersions in the planes of model spiral galaxies are unrealistically large (Sec.~\ref{sec:spirals}). The overcooling problem remains, resulting in unrealistically large classical bulges and stellar halos (Sec.~\ref{sec:spirals}). Many of the nearby $L\sim L_\ast$ galaxies are strikingly flat, unlike model spirals.
A century ago Wolf (1908) reviewed evidence of what proves to be the bimodal natures of galaxies. The far richer evidence we have now (Sec.~\ref{sec:earlyandlate}) presents us with an interesting opportunity: identify the separatrix that determines whether a protogalaxy evolves into a spiral or an elliptical (Sec.~\ref{separatrix}). How are we to understand the cool motions of stars in the wonderfully thin galaxies seen nearby, so very different from the motions of stars in elliptical galaxies? The problem has been known for a century, aspects of the situation are the subjects of many papers, but identification of the separatrix remains an open challenge.
Less familiar anomalies tend to be less secure because less thought has been given to the theory and observation, and we can add that it is natural to give less thought to what is contrary to accepted thinking. We have an example in Section~\ref{sec:distributions}, on the large-scale distributions of astronomical objects. In the standard cosmology clusters of galaxies formed where the primeval upward mass density fluctuations were unusually large. Within available accuracy this agrees with numerical simulations of cosmic structure formation. But why would the primeval mass distribution in the $\Lambda$CDM universe be so arranged that upward mass fluctuations capable of evolving into the clusters that, at distances less than about 85~Mpc, are present only near the extended plane of the Local Supercluster (Fig.~\ref{fig:LSCf} in Sec.~\ref{sec:LSC})? Why are the most massive elliptical galaxies within 85~Mpc close to this plane, while comparably massive spirals are not noticeably correlated with the plane (Fig.~\ref{fig:morphologies})? The powerful radio sources present in some galaxies are thought to be associated with massive central black holes. Many large galaxies contain these black holes. So why are the powerful radio sources within 85~Mpc present in a select few of the large galaxies, those near this plane? Is there something special about these black holes? Tully and Shaver knew aspects of this situation thirty years ago (Sec.~\ref{sec:LSC}). To judge by the sparse citations to Shaver's key point these phenomena have not captured the general attention of the community. But the phenomena surely are real, and interesting, and not likely to have grown out of the Gaussian, adiabatic, and near scale-invariant initial conditions of the present standard cosmology.
On larger scales, the applications of scaling relations to convert cluster luminosities to distances for estimates of peculiar velocities yield indications of unreasonable bulk flows relative to the rest frame defined by the CMB (Sec.~\ref{sec:bulkflows}). This disagrees with observations of the effect of the motion of intracluster plasma on the CMB. This kinetic SZ effect indicates a reasonably small cluster bulk flow. Maybe the situation is confused by subtle systematic errors, though the peculiar cluster velocity measurements have been carefully checked. If the anomalous peculiar velocities are confirmed it means the luminosities of first-ranked cluster members, and the properties of the intracluster plasma, are not well constrained by other cluster properties that are not sensitive to distance, maybe even that cluster properties depend on parameters that are not in the standard cosmology. The thought is speculative, but recall that there are cosmic magnetic fields, and maybe cosmic strings, which might connect clusters and perhaps set hidden parameters.
On scales comparable to the Hubble length the heliocentric dipole anisotropies of quasars and radio galaxies are about in the direction expected from the kinematic effect of our motion relative to the CMB, but the dipole amplitude is unacceptably large. If this were because of a real unexpectedly large dipole anisotropy in the mass distribution on the scale of the Hubble length then the standard cosmology would predict an unacceptably large local peculiar velocity. But recall that at distances $\lap 85$~Mpc the curious distributions of radio galaxies, massive elliptical and spiral galaxies, and clusters of galaxies encourage the thought that the positions of radio galaxies, and so likely quasars, are only weakly related to the mass distribution traced by $L\sim L_\ast$ galaxies. So we have a mystery: what would have caused the anomalous large-scale distributions of massive ellipticals, quasars, radio galaxies and clusters of galaxies? The evidence is that all these objects contain massive black holes. Establishment of the theory of how these black holes formed might help.
The standard $\Lambda$CDM theory passes demanding well-checked tests that reinforce the expectation that some if not all of the curious phenomena discussed here will prove to be results of systematic errors and/or statistical fluctuations. But consider one example, the distributions of the clusters of galaxies and the most luminous galaxies at distances less than about 85~Mpc. The $\Lambda$CDM theory offers a good account of the number density of mass concentrations similar to rich clusters. This adds to the argument for this theory and against the proposed anomalies. But does the distribution of these mass concentrations that grow into the clusters shown in Figure~\ref{fig:LSCf} in Section~\ref{sec:LSC} look reasonable? It certainly looks real. Luminous radio galaxies and the most massive elliptical galaxies also tend to be close to this plane. They are related, but identified and cataloged in different ways. The consistent case for alignment of these objects with the extended plane of the Local Supercluster is convincing. The odd thing is that the most massive spirals, and the galaxies that are most luminous at $60\mu$, are not noticeably concentrated to the plane. This curious situation is difficult to reject and maybe suggestive of something interesting to be discovered.
Let us not blame the messengers for the problems with the properties of galaxies reviewed in Section~\ref{sec:galaxies} and the distributions of galaxies discussed in Sections~\ref{sec:distributions} and \ref{sec:localvoid}; research groups are doing the best they can with the theory they have. It has not escaped community attention that the extreme simplicity of the dark sector of the standard $\Lambda$CDM cosmology seems unlikely to be better than a crude approximation to reality, maybe crude enough to be an impediment to progress in understanding cosmic structure.
\section{Acknowledgements}
I am grateful to colleagues for guidance to issues arising. They include Jean Brodie and Elaina Tolstoy for advice about dwarf galaxies; Antonaldo Diaferio and Garry Angus for advice on the generalization of MOND; Simon Driver and Samir Salim for education about galaxy morphologies; Mike Hudson, Tod Lauer, and Kostas Migkas for explanations of cluster bulk flow measurements; Manoj Kaplinghat for discussions of supermassive black holes; Roya Mohayaee and Subir Sarkar for discussions of the kinematic dipole; Dylan Nelson for discussions of simulations of galaxy formation; Patrick Ogle for instruction on supermassive spiral galaxies; and Xavier Prochaska for comments on observations of extragalactic magnetic fields. I am particularly indebted to Michael Strauss for guidance to the phenomena and Neil Turok for guidance to the theory. Turok encouraged me to write this considerable revision of the draft in Peebles (2021).
\label{lastpage}
Title: Covariant Predictions for Planck-Scale Features in Primordial Power Spectra
Abstract: In this companion to our letter (arXiv:2208.10514), we study the predicted corrections to the primordial scalar and tensor power spectra that arise from quantum gravity-motivated, natural, covariant ultraviolet cutoffs. We implement these cutoffs by covariantly restricting the fields which are summed over in the path integrals for the primordial correlators, and we discuss in detail the functional analytic techniques necessary for evaluating such path integrals. Our prediction, which is given in terms of measured cosmological parameters and without assuming any particular inflationary potential, is that the corrections take the form of small oscillations which are superimposed on the conventional power spectra. The frequency of these oscillations only depends on the location of the cutoff scale, while the amplitude and phase are moderately sensitive to how smoothly the cutoff turns on. The specificity of the new predictions offers an opportunity to significantly enhance experimental sensitivity through template search in observations of the cosmic microwave background and large-scale structure. This may be used to place ever higher bounds on the scale at which quantum gravity effects become important in quantum field theory or may even provide positive evidence for quantum gravity effects.
PDF: https://export.arxiv.org/pdf/2208.11711
\interfootnotelinepenalty=10000
\baselineskip=18pt
\hfill
\vspace{2cm}
\thispagestyle{empty}
\begin{center}
{\LARGE \bf
Covariant Predictions for Planck-Scale Features in\\Primordial Power Spectra}\\
\bigskip\vspace{1cm}{
{\large Aidan Chatwin-Davies${}^{a,b}$, Achim Kempf$\,{}^{c}$, and Petar Simidzija${}^{a}$}
} \\[7mm]
{\it ${}^a$Department of Physics and Astronomy, University of British Columbia\\[-1mm]
6224 Agricultural Road, Vancouver, BC, V6T 1Z1, Canada\\[1.5mm]
${}^b$Institute for Theoretical Physics, KU Leuven\\[-1mm]
Celestijnenlaan 200D B-3001 Leuven, Belgium \\[1.5 mm]
${}^c$Department of Applied Mathematics, University of Waterloo\\[-1mm]
Waterloo, ON, N2L 3G1, Canada}
\let\thefootnote\relax\footnote{\noindent e-mail: \email{[email protected]}, \email{[email protected]}, \email{[email protected]}} \\
\bigskip\vspace{0.5cm}{\today}
\end{center}
\bigskip
\centerline{\large\bf Abstract}
\begin{quote} \small
In this companion to our letter (arXiv:2208.10514), we study the predicted corrections to the primordial scalar and tensor power spectra that arise from quantum gravity-motivated, natural, covariant ultraviolet cutoffs. We implement these cutoffs by covariantly restricting the fields which are summed over in the path integrals for the primordial correlators, and we discuss in detail the functional analytic techniques necessary for evaluating such path integrals. Our prediction, which is given in terms of measured cosmological parameters and without assuming any particular inflationary potential, is that the corrections take the form of small oscillations which are superimposed on the conventional power spectra. The frequency of these oscillations only depends on the location of the cutoff scale, while the amplitude and phase are moderately sensitive to how smoothly the cutoff turns on. The specificity of the new predictions offers an opportunity to significantly enhance experimental sensitivity through template search in observations of the cosmic microwave background and large-scale structure. This may be used to place ever higher bounds on the scale at which quantum gravity effects become important in quantum field theory or may even provide positive evidence for quantum gravity effects.
\end{quote}
\setcounter{footnote}{0}
\newpage
\tableofcontents
\newpage
\section{Introduction}
The development of quantum gravity has been impeded by the lack of experimental access to the Planck scale. For example, the peak energy of the Large Hadron Collider is still about 15 orders of magnitude below the Planck energy, and so the quantum gravitational regime remains well out of the reach of accelerator experiments.
The numbers are more favorable, however, in cosmology.
This is because, according to the standard model of cosmology, the inhomogeneities in the Cosmic Microwave Background (CMB) originated in quantum fluctuations of modes which froze when they exceeded the Hubble length during inflation. Since the Hubble length at that time was only about 5 to 6 orders of magnitude larger than the Planck length, Planck-scale effects in the CMB should be correspondingly less suppressed and could perhaps even become observable.
The question arises, therefore, as to what exact signature of potential Planck-scale effects in the CMB to predict, so that experimental efforts can be guided towards probing these predictions \cite{Chluba:2015bqa,Slosar:2019gvt}. The candidate theories for quantum gravity differ strongly in their description of physics at the Planck scale and therefore, in principle, each theory could yield its own predictions \cite{Rovelli:1997qj,WittenStrings,Carlip:2015asa,Loll:2022ibq}. However, just below the Planck energy scale, each candidate theory for quantum gravity must quite quickly reduce to quantum field theory on curved spacetime in order to be consistent with the current standard model of cosmology. This indicates that, at the Hubble scale during inflation, some 5 or 6 orders of magnitude from the Planck scale, only the most dominant features of Planck-scale physics should have been able to leave an imprint in the quantum fluctuations of the inflaton and metric modes that froze at that time.
This leads to the question as to what is the most dominant impact of Planck-scale physics on quantum field theory in curved spacetime at those scales where quantum field theory is still a good enough model to describe inflation. Since candidate theories of quantum gravity tend to predict the presence of some form of natural ultraviolet (UV) cutoff \cite{Garay:1994en,Hossenfelder:2012jw}, it is natural to conjecture that
quantum field theory close to the Planck scale is modified predominantly by the presence of an ultraviolet cutoff, which may be hard or soft. The challenge is then to predict the impact that such a cutoff in quantum field theory in curved spacetime would have on the predictions for the CMB.
The literature on this question has been mostly working with noncovariant ultraviolet cutoffs and associated modified dispersion relations, see, e.g., \cite{Padmanabhan:1988jp,Padmanabhan:1988se,Jacobson:1999zk,Kempf:2000ac,Martin:2000xs,Brandenberger:2000wr,Niemeyer:2000eh,Brandenberger:2001zqv,Easther:2001fi,Kempf:2001fa,Easther:2001fz,Brandenberger:2002hs,Easther:2002xe,Danielsson:2002kx,Brandenberger:2004kx,Sriramkumar:2004pj,Greene:2004np,Shiu_2005,Easther:2005yr,Tawfik:2015rva,Ali:2015ola,Skara:2019uzz,Frob:2012ui,Frob:2014cza}.
For cutoff-free models, see, e.g., \cite{Calcagni:2016ofu,Calcagni:2017via,Modesto:2022asj,Calcagni:2022tuz}.
In previous work, we studied the case of a hard natural ultraviolet cutoff that is covariant \cite{Kempf:2012sg,Chatwin-Davies:2016byj}.
That the cutoff is covariant is important to ensure that the predictions arise only from the presence of the UV cutoff and are uncontaminated by the breaking of symmetries.
The cutoff is enacted on the spectrum of the scalar field's spacetime d'Alembertian, and it has an innate information theoretic interpretation as a cutoff on the field's density of degrees of freedom in spacetime.
We then presented a proof-of-principle calculation to illustrate how the apparatus could be used to compute the signature that a covariant UV cutoff would leave in the spectrum of inflationary perturbations in the CMB \cite{Chatwin-Davies:2016byj}.
In the present paper, we build on this ansatz and explicitly calculate the correction that a natural covariant UV cutoff at (or near) the Planck scale produces for inflationary primordial power spectra, assuming slow-roll parameters that arise from observation.
We focus on the scalar power spectrum, i.e., the dimensionless power spectrum of the comoving curvature perturbation, $\Delta_\mathcal{R}^2$, since it has already been characterized with ample amounts of observational data \cite{Planck:2013oqw,Planck:2015mrs,Planck:2018nkj}, with more data from the CMB and large scale structure surveys on the way \cite{Slosar:2019gvt}.
A natural covariant UV cutoff also produces a correction to the as-yet unobserved primordial spectrum of tensor perturbations, $\Delta_\mathcal{T}^2$, which we also compute.
We assume that inflation is driven by a single inflaton field, but beyond this assumption our calculation is model-independent, in the sense that we make no assumptions about the inflaton's potential.
The only input required for the calculation is the background Friedmann-Lema{\^i}tre-Robertson-Walker (FLRW) Hubble parameter which describes the inflationary phase, which we fix using measured values of slow-roll parameters and the parameters that describe $\Delta_\calR^2$ (see \Eq{eq:Hubble-eff}).
We find that a natural covariant UV cutoff produces small oscillations in the comoving momentum $k$, superimposed on the uncorrected power spectrum, as illustrated in \Fig{fig:prediction}a.
The predicted effect depends only on the dimensionless ratio $\sigma(k)$ of the cutoff length, $\ell_c$, to the Hubble length, which varies with the comoving wave number $k$ during the slow roll.
In terms of observational parameters, $\sigma(k)$ is given by
\begin{equation}
\sigma(k) = \frac{\Mpl}{\Omega} \sqrt{\pi \epsilon A_s} \left(\frac{k}{k_\star}\right)^{(n_s-1)/2},
\end{equation}
where $\Omega \equiv 1/\ell_c$ is the energy scale associated with the cutoff, $\epsilon$ is the first slow-roll parameter, $A_s$ is the scalar perturbation amplitude, $n_s$ is the scalar spectral index, and $k_\star$ is a pivot scale.
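As a rough numerical sketch, $\sigma(k)$ can be evaluated directly from this expression. The Planck values of $A_s$, $n_s$, and $k_\star$ are those quoted below; the values of $\epsilon$ and $\Mpl/\Omega$ used here are purely illustrative assumptions, not determined by the text.

```python
import math

# Observed parameters (central values quoted later in this section).
A_s = 2.10e-9        # scalar amplitude
n_s = 0.966          # scalar spectral index
k_star = 0.05        # pivot scale in Mpc^-1

# Illustrative assumptions (not fixed by the text):
epsilon = 0.003      # first slow-roll parameter, hypothetical value
Mpl_over_Omega = 100 # Planck mass over cutoff scale, i.e. l_c = 100 l_Pl

def sigma(k):
    """Dimensionless ratio of cutoff length to Hubble length at mode k."""
    return (Mpl_over_Omega * math.sqrt(math.pi * epsilon * A_s)
            * (k / k_star) ** ((n_s - 1) / 2))

# sigma varies only weakly with k because |n_s - 1| << 1.
print(sigma(0.05), sigma(0.005))
```

Note that because $n_s - 1 < 0$, $\sigma(k)$ decreases slowly with increasing $k$.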
Our finding is that, for a sharp cutoff, the relative change in the power spectrum as a function of the comoving mode $k$ is given by
\begin{equation}
\frac{\delta \Delta^2_\calR}{\Delta^2_\calR}=
\mathcal{C}
\frac{\sigma(k)^{3/2}}{\ln(\sigma(k)/2)} \sin\left(\omega(k)\, \sigma(k)\right),
\end{equation}
where $\mathcal{C}=0.8796...$ is a numerical constant and where we have defined
\begin{align}
\omega(k) &\equiv \frac{1}{\sigma(k)^2}\left(1-\ln\frac{2}{\sigma(k)}\right).
\end{align}
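The predicted relative correction is straightforward to evaluate numerically for a given $\sigma$. In the sketch below, the value $\sigma \sim 10^{-3}$ is an illustrative assumption (a cutoff roughly three orders of magnitude above the inflationary Hubble scale), not a value taken from the text.

```python
import math

C = 0.8796  # numerical constant from the text (truncated)

def relative_correction(sigma):
    """Relative change in the scalar power spectrum for a sharp cutoff."""
    omega = (1.0 / sigma**2) * (1.0 - math.log(2.0 / sigma))
    return C * sigma**1.5 / math.log(sigma / 2.0) * math.sin(omega * sigma)

# The amplitude envelope scales like sigma^{3/2}/|ln(sigma/2)|, so the
# imprint shrinks rapidly as the cutoff scale Omega is raised.
s = 1e-3  # hypothetical value of sigma(k)
print(relative_correction(s))
```

Since $\ln(\sigma/2) < 0$ for $\sigma < 2$, the prefactor carries an overall sign; the rapid $k$-dependence enters through the phase $\omega(k)\,\sigma(k)$.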
While the oscillations' amplitude and phase depend mildly on fine-grained details of the cutoff, such as how gradually it turns on, the oscillations' frequency is a robust prediction which is essentially independent of the hardness or softness of the cutoff.
The only free parameter in this prediction is $\Omega$, the precise energy scale of the UV cutoff.
In particular, the smaller the value of $\Omega$, the smaller is the peak oscillation frequency and the larger is the amplitude of the oscillations, resulting in a larger imprint on the primordial power spectrum; see Figs.~\ref{fig:prediction} and \ref{fig:scaling} for illustration.
Consequently, existing and future measurements should be able to place bounds on $\Omega$, the scale at which quantum gravity effects become important in inflation.
Since our calculations involve a variety of techniques, we have opted to be generous in our coverage of background material and in our expositions.
In \Sec{sec:cosmological-perturbations}, we therefore begin by briefly reviewing relevant aspects of the theory of cosmological perturbations and the computation of primordial power spectra while establishing notation and definitions in preparation for our subsequent calculations.
Next, in \Sec{sec:covariant-cutoff}, we review the definition and implementation of the covariant natural UV cutoff which we consider.
We demonstrate how such a cutoff produces a correction to the power spectrum of a scalar field in de Sitter spacetime and we explain how to extend the result to slowly rolling FLRW spacetimes.
In \Sec{sec:prediction}, we take the scalar field to be the comoving curvature perturbation, and we compute the correction to its primordial power spectrum assuming single field inflation and realistic slow-roll parameters.
We also discuss the correction to the tensor spectrum here.
Finally, we end with a summary and discussion in \Sec{sec:discussion}.
The most involved calculational details are deferred to the appendices.
\section{Cosmological perturbations}
\label{sec:cosmological-perturbations}
The most remarkable success of the theory of inflation is its ability to predict a primordial power spectrum which is in quantitative agreement with observed large scale fluctuations in the universe, as seen, for example, in the cosmic microwave background. Let us briefly review how this primordial power spectrum is computed.
In the simplest model of inflation, which we consider here, the spacetime metric $g_{\mu\nu}$ is coupled to a scalar field $\phi$, called the inflaton, via the action
\begin{align}\label{eq:action}
S = \frac{\Mpl^2}{16\pi} \int \dee^4 x \sqrt{-g} R
- \int \dee^4 x \sqrt{-g} \left[\frac{1}{2}\partial_\mu \phi\partial^\mu \phi +V(\phi)\right],
\end{align}
where $\Mpl \equiv 1/\sqrt{G}$ is the Planck mass, and we work in units $c=\hbar=1$. On the largest scales, the universe is nearly spatially homogeneous and isotropic, and hence the metric and inflaton field can be written as
\begin{align}
g_{\mu\nu}(\eta,\bm x) &= a^2(\eta)\eta_{\mu\nu}+ h_{\mu\nu}(\eta, \bm x),\label{eq:metric}\\
\phi(\eta, \bm x) &= \bar\phi(\eta)+ \delta \phi (\eta,\bm x)\label{eq:inflaton}.
\end{align}
The first terms describe a spatially flat FLRW cosmology with a scale factor $a(\eta)$ and a spatially constant background field $\bar\phi(\eta)$, which both depend only on the conformal time $\eta$, while the second terms allow for deviations from spatial homogeneity and isotropy. We will assume that these deviations are small and we will treat them quantum mechanically, while the dominant background pieces will be treated classically.
\subsubsection*{Background}
Substituting \eqref{eq:metric} and \eqref{eq:inflaton} into the action \eqref{eq:action}, and keeping only leading order terms, one obtains the equations of motion for the background fields
\begin{align}
H^2 &= \frac{8\pi}{3\Mpl^2}\left(\frac{1}{2}\bar\phi'^2 + V(\bar\phi)\right),\label{eq:background_Friedmann}\\
\dot H &= -\frac{4\pi\bar\phi'^2}{\Mpl^2a^2},\label{eq:second_Friedmann}\\
\bar\phi'' &= -2a H\bar\phi'-a^2 V_\phi(\bar\phi),\label{eq:background_scalar}
\end{align}
where $V_\phi\equiv dV/d\phi$, $H\equiv \dot a/a = a'/a^2$ is the Hubble parameter, primes denote derivatives with respect to conformal time, $\eta$, and dots are derivatives with respect to cosmic time, $t$.
The theory of inflation postulates that in the very early universe, perhaps by some abnormally large quantum fluctuation, the inflaton field found itself at a value where the potential is large (close to the Planck scale) but the gradient of the potential is given by a much lower scale. The equations of motion \eqref{eq:background_Friedmann} and \eqref{eq:background_scalar} give the resulting background dynamics: $\bar\phi(\eta)$ slowly rolls down the potential from its large value, while $a(\eta)$ experiences a period of highly accelerated expansion, characterized by a Hubble parameter $H(\eta)$ which is slowly decreasing in time. More precisely, one can quantify the rate of change of $H$ via the \textit{slow-roll parameters}
\begin{align}
\epsilon \equiv -\frac{\dot H}{H^2}, \quad \delta \equiv \frac{\ddot H}{2H\dot H}.
\end{align}
Slow-roll inflation is characterized by the conditions $\epsilon \ll 1 $ and $\delta \ll 1$.
\subsubsection*{Perturbations}
Now let us consider the fluctuations $\delta\phi$ and $h_{\mu\nu}$ on top of this spatially homogeneous and isotropic classical background. To obtain the dynamics of these fields, one again substitutes \eqref{eq:metric} and \eqref{eq:inflaton} into the action \eqref{eq:action}, this time keeping terms up to second order in the perturbations; hence, the perturbations are free fields. However, even in the absence of interactions, there is a challenge in quantizing these fields due to the fact that the theory is diffeomorphism-invariant, i.e. there is gauge freedom associated with the choice of coordinates. The quantization of gauge theories is complicated by the fact that for a gauge invariant action such as \eqref{eq:action}, it is not manifest which fields constitute the physical degrees of freedom of the theory and which fields can simply be gauged away. A careful analysis shows that after gauge fixing, the only remaining degrees of freedom in the perturbations are a scalar, $\calR(x)$---the Mukhanov-Sasaki variable---and two degrees of freedom associated with a transverse, traceless, symmetric tensor, $h_{ij}(x)$. The gauge-fixed action is given by $S = S_\calR + S_h$, where \cite{Mukhanov:1990me}
\begin{align}
S_\calR &= -\frac{1}{2}\int \dee^4 x \, z^2 \eta^{\mu\nu} \partial_\mu \calR\partial_\nu \calR,\\
S_h &= -\frac{\Mpl^2}{64\pi} \int \dee^4 x\, a^2\eta^{\mu\nu}\partial_\mu h_{ij}\partial_\nu h_{ij}.\label{eq:S_h}
\end{align}
If we ignore the tensor structure, we see that the field $h_{ij}$---which describes primordial gravitational waves with two polarizations---simply has the dynamics of a massless scalar field in a FLRW spacetime with scale factor $a(\eta)$.
On the other hand, the Mukhanov-Sasaki variable experiences a modified scale factor, $z(\eta)$, defined in terms of the background fields as
\begin{align} \label{eq:z}
z \equiv \frac{a\dot{\bar\phi}}{H} = \frac{a^2\dot{\bar\phi}}{a'} =\frac{a\bar\phi'}{a'}.
\end{align}
Notice that the second Friedmann equation \eqref{eq:second_Friedmann} implies $z = (\Mpl^2\epsilon/4\pi)^{1/2}a$. Hence, if the first slow-roll parameter $\epsilon$ is constant, $z$ is simply proportional to $a$, and thus the Hubble parameter $\dot z/z$ associated with the modified scale factor is equal to the Hubble parameter $H=\dot a/a$ associated with the ``true'' scale factor. More generally, $\epsilon$ is not constant, but rather varies as \cite{weinberg2008cosmology}
\begin{align}
\dot \epsilon = 2H\epsilon(\epsilon+\delta).
\end{align}
This implies
\begin{align} \label{eq:z-Hubble}
\frac{\dot z}{z} = H(1+\epsilon+\delta),
\end{align}
and so $\dot z/z$ is very close to $H$ in the slow-roll regime.
\subsubsection*{Canonical quantization}
Let us now canonically quantize the scalar and tensor perturbations by expanding them in terms of spatial Fourier modes. We obtain
\begin{align}
\calR(x) & = \frac{1}{z(\eta)}\int \frac{\dee^3 \bm k}{(2\pi)^{3/2}}e^{i\bm k\cdot\bm x}u_{\bm k}(\eta)a_{\bm k}^\dagger+h.c.
\\
h_{ij}(x) & = \frac{\sqrt{16\pi}}{a(\eta)\Mpl}\sum_{\lambda=1}^2\int \frac{\dee^3 \bm k}{(2\pi)^{3/2}}e^{i\bm k\cdot\bm x}\epsilon_{ij}(\hat{\bm k},\lambda)v_{\bm k}(\eta)b_{\bm k,\lambda}^\dagger+h.c.
\end{align}
Here, $a^\dagger, b^\dagger$, and their adjoints are canonically commuting creation and annihilation operators, and $\epsilon_{ij}(\hat{\bm k},1)$ and $\epsilon_{ij}(\hat{\bm k},2)$ are two linearly independent, symmetric, traceless, and transverse polarization tensors. The mode functions $u_{\bm k}(\eta)$ and $v_{\bm k}(\eta)$ are harmonic oscillators with time-dependent frequencies
\begin{align}
u_{\bm k}''+\left(k^2 - \frac{z''}{z}\right)u_{\bm k}&=0, \label{eq:mode-fcn-eom-scalar}\\
v_{\bm k}''+\left(k^2 - \frac{a''}{a}\right)v_{\bm k}&=0. \label{eq:mode-fcn-eom-tensor}
\end{align}
In the slow-roll regime, both $a(\eta)$ and $z(\eta)$ are proportional to $1/(-\eta)$ for large negative values of $\eta$, and hence in this limit both equations reduce to that of a harmonic oscillator with frequency $k = |\bm k|$. A natural choice of vacuum state is the Bunch-Davies vacuum, which corresponds to setting the usual positive frequency initial conditions
\begin{align}
u_{\bm k}(\eta)
\rightarrow
\frac{1}{\sqrt{2k}}e^{-i k \eta},
\quad\text{and}\quad
v_{\bm k}(\eta)
\rightarrow
\frac{1}{\sqrt{2k}}e^{-i k \eta},
\label{eq:u_v_initial_condition}
\end{align}
at $\eta\rightarrow -\infty$. We denote this vacuum by $\ket 0$ and assume that the field starts out in this state.
The quantum fluctuations of the scalar and tensor perturbations can be quantified in terms of their two-point correlators. The equal time correlators are
\begin{align}
G_\calR(\eta, \bm x) &\equiv \bra 0 \calR(\eta, \bm x)\calR(\eta, 0)\ket 0
=
\frac{1}{z(\eta)^2}
\int \frac{\dee^3 \bm k}{(2\pi)^3} e^{-i\bm k \cdot \bm x} |u_{\bm k}(\eta)|^2,
\\
G_h^{ij,kl}(\eta, \bm x) &\equiv
\bra 0 h_{ij}(\eta, \bm x)h_{kl}(\eta, 0)\ket 0
=
\frac{16\pi}{a(\eta)^2\Mpl^2}
\int \frac{\dee^3 \bm k}{(2\pi)^3} e^{-i\bm k \cdot \bm x}
|v_{\bm k}(\eta)|^2
\Pi_{ij,kl}(\hat{\bm k}),
\end{align}
where we define the quantity $\Pi_{ij,kl}(\hat{\bm k}) \equiv \sum_\lambda \epsilon_{ij}(\hat{\bm k},\lambda)\epsilon_{kl}^*(\hat{\bm k},\lambda)$. The Fourier transforms of the two-point correlators are
\begin{align}
G_\calR(\eta,\bm k)
&\equiv \frac{1}{(2\pi)^3}\int \dee^3\bm x e^{i\bm k\cdot\bm x}G_\calR(\eta, \bm x)
=
\frac{|u_{\bm k}(\eta)|^2}{(2\pi)^3z(\eta)^2},\label{eq:Greens_function_R}
\\
G_h^{ij,kl}(\eta,\bm k)
&\equiv \frac{1}{(2\pi)^3}\int \dee^3\bm x e^{i\bm k\cdot\bm x}G_h^{ij,kl}(\eta, \bm x)
=
\frac{16\pi|v_{\bm k}(\eta)|^2}{(2\pi)^3a(\eta)^2\Mpl^2}\Pi_{ij,kl}(\hat{\bm k}). \label{eq:Greens_function_h_ij}
\end{align}
Notice that the tensor structure of the quantity $G_h^{ij,kl}$ is purely kinematic and that all of the dynamics is contained in the single function $v_{\bm k}(\eta)$. This is simply the statement that the two linearly independent tensor helicities have the same vacuum fluctuation amplitudes and each fluctuate as a free scalar field.
As shorthand, let $G_h$ denote $G_h^{ij,kl}$ modulo its kinematic tensor structure, i.e.
\begin{equation} \label{eq:Greens_function_h}
G_h(\eta,\bm k) \equiv \frac{16\pi|v_{\bm k}(\eta)|^2}{(2\pi)^3a(\eta)^2\Mpl^2}.
\end{equation}
\subsubsection*{Primordial power spectra}
The scalar and tensor primordial power spectra are respectively defined as
\begin{align}
\Delta_\calR^2(k)&\equiv \frac{k^3}{2\pi^2z(\eta_k)^2}|u_{\bm k}(\eta_k)|^2, \label{eq:scalar-fluc-spec}
\\
\Delta_\calT^2(k)&\equiv \frac{k^3}{2\pi^2a(\eta_k)^2}\frac{64\pi}{\Mpl^2}|v_{\bm k}(\eta_k)|^2,
\end{align}
where $\eta_k$ is the conformal time at which modes with comoving momentum of magnitude $k = |{\bm k}|$ cross the Hubble horizon, i.e. it is defined as the solution to the equation
\begin{align}
k = a(\eta_k)H(\eta_k).
\end{align}
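For a pure de Sitter background, $a(\eta) = 1/(-H\eta)$ gives $aH = -1/\eta$ (independent of $H$), so $\eta_k = -1/k$ exactly. The following sketch solves the horizon-crossing condition by bisection, as one would for a general slow-roll background; all numerical values are illustrative.

```python
# In de Sitter, a(eta) = 1/(-H eta), so the comoving Hubble rate is
# aH = -1/eta, independent of H, and eta_k = -1/k exactly.
def aH(eta):
    return -1.0 / eta

def eta_crossing(k, lo=-1e6, hi=-1e-9):
    """Solve k = aH(eta) by bisection; aH is monotone increasing on (-inf, 0)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if aH(mid) < k:
            lo = mid   # crossing lies at later conformal time
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(eta_crossing(2.5))  # de Sitter check: should approach -1/k = -0.4
```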
We will make use of a few alternate expressions for the primordial power spectra. In a slowly rolling spacetime, it is possible to obtain approximations for the mode functions $u_{\bm k}$ and $v_{\bm k}$ (or equivalently the two-point functions) at horizon crossing, i.e. at $\eta=\eta_k$, in terms of the value of the Hubble parameter. This results in the following expressions for the power spectra (see e.g. \cite{dodelson2003modern}):
\begin{align}
\Delta_{\calR}^2(k) &= \frac{H^2}{\pi\epsilon \Mpl^2}\Big|_{\eta=\eta_k}, \label{eq:scalar-spectrum} \\
\Delta_{\calT}^2(k) &= \frac{16H^2}{\pi \Mpl^2}\Big|_{\eta=\eta_k}. \label{eq:tensor-spectrum}
\end{align}
Notice that since the Hubble parameter and the slow-roll parameter $\epsilon$ vary slowly during inflation, we expect $\Delta_\calR$ and $\Delta_\calT$ to vary only mildly with $k$---the primordial power spectra should be roughly \textit{scale invariant}. Indeed, observations of cosmic perturbations indicate the scalar spectrum to be of the form
\begin{align} \label{eq:scalar-spectrum-pheno}
\Delta_{\mathcal R}^2(k) = A_s\left(\frac{k}{k_\star}\right)^{n_s-1},
\end{align}
where, at the pivot scale $k_\star = 0.05~\mrm{Mpc}^{-1}$, the amplitude $A_s$ and spectral tilt $n_s$ are observed at the values \cite{Planck:2018vyg}
\begin{align}
A_s &= (2.10\pm0.03)\times 10^{-9},\\
n_s &= 0.966 \pm 0.004.
\end{align}
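As a quick numerical illustration of the near scale invariance, using the central values above:

```python
# Nearly scale-invariant scalar power spectrum with the quoted Planck values.
A_s, n_s, k_star = 2.10e-9, 0.966, 0.05  # k_star in Mpc^-1

def Delta2_R(k):
    """Phenomenological scalar spectrum A_s (k/k_star)^(n_s - 1)."""
    return A_s * (k / k_star) ** (n_s - 1)

# Mild red tilt (n_s < 1): power decreases slowly with increasing k.
print(Delta2_R(0.05), Delta2_R(0.5))
```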
Similarly, the tensor spectrum can be empirically fit to the curve
\begin{align}
\Delta_\calT^2(k)=A_t\left(\frac{k}{k_\star}\right)^{n_t}.
\end{align}
Since tensor fluctuations have not yet been observed, the values of $A_t$ and $n_t$ are unknown, but the tensor-to-scalar ratio $r\equiv A_t/A_s$ is constrained to $r<0.06$ \cite{Planck:2018jri}.
Finally, note that using equations \eqref{eq:Greens_function_R} and \eqref{eq:Greens_function_h}, we can write the power spectra $\Delta_\calR$ and $\Delta_\calT$ in terms of two-point correlators as
\begin{align}
\Delta_{\calR}^2(k) &= 4\pi k^3 G_\calR(\eta_k,k), \label{eq:scalar-spec-from-2pt}\\
\Delta_{\calT}^2(k) &= 8\pi k^3 G_h(\eta_k,k).
\end{align}
(The extra factor of 2 in the tensor power spectrum comes from there being two linearly independent gravitational wave polarizations \cite{dodelson2003modern}.)
The advantage of this rewriting is that the two-point functions can be expressed in a manifestly covariant manner using a path integral. The goal of this paper will be to impose a covariant high energy cutoff on this path integral and to study this as a simple model of the way in which Planck-scale physics might affect inflationary power spectra.
\section{Covariant ultraviolet cutoffs}
\label{sec:covariant-cutoff}
In this section, we explain the machinery that we use to calculate predictions for Planckian corrections to CMB power spectra.
We begin by reviewing the class of covariant natural UV cutoffs that we work with.
We then give a detailed description of how to impose such a cutoff on the fluctuation spectrum of a quantized scalar field in de Sitter spacetime.
Finally, we use the de Sitter result to obtain the cutoff fluctuation spectrum for near-de Sitter FLRW spacetimes.
The kinematics of covariant natural ultraviolet cutoffs are discussed in detail in Refs.~\cite{Kempf:1999xt,Kempf:2003qu,Kempf:2009us,Kempf:2010rx,Kempf:2012sg}.
An outline of the de Sitter calculation was given in \Ref{Chatwin-Davies:2016byj}, and here we elaborate the calculation in full while making important refinements along the way.
\subsubsection*{Definition and implementation}
Starting with a quantum field theory on a curved spacetime, our aim is to model gravitational corrections to the effective theory via a natural UV cutoff, and we wish to do so covariantly.
Conceptually, our approach is to suppress field configurations in the quantum field theoretic path integral that lie beyond the Planck scale, which we parameterize in terms of eigenfunctions and eigenvalues of the field's d'Alembertian.
Concretely, let $(\mathcal{M},g)$ be a Lorentzian manifold $\mathcal{M}$ with metric $g$ that supports a real scalar quantum field $\hat \phi$.
Let $\Box$ denote the field's d'Alembertian,
\begin{equation}
\Box = \frac{1}{\sqrt{-g}}\partial_\mu \left( \sqrt{-g} g^{\mu \nu} \partial_\nu \,\cdot \right),
\end{equation}
and let us suppose that any boundary conditions have been appropriately chosen so that $\Box$ is self-adjoint.
Given a non-negative function $f$, we define a cutoff on a field configuration $\phi(x)$ via a linear combination of projectors onto the eigenspaces of $\Box$:
\begin{equation} \label{eq:cutoff}
f(\Box) ~ : ~ \phi(x) ~ \mapsto ~ \sum_{\lambda \in \mrm{spec}\,\Box} f(\lambda) \langle \psi_\lambda, \, \phi \rangle \psi_\lambda(x)
\end{equation}
Here, $\psi_\lambda$ denotes the eigenfunction of $\Box$ with eigenvalue $\lambda$, and $\langle \, \cdot \; , \, \cdot \, \rangle$ is the $L^2(\mathcal{M})$ inner product.
For example, if
\begin{equation} \label{eq:sharp-f}
f(\lambda) = \theta(\Omega^2 - |\lambda|),
\end{equation}
where $\theta$ is the Heaviside step function, then $f(\Box)$ is a sharp cutoff that projects fields onto the subspace spanned by eigenfunctions whose eigenvalues' magnitudes are less than $\Omega^2$.
Cutoffs of the form \eqref{eq:cutoff} are manifestly covariant; they do not depend on a choice of coordinates for $\mathcal{M}$, as they are specified entirely in terms of the spectrum of $\Box$, which is itself just a set of real numbers that depends on $\mathcal{M}$ alone.
Flat spacetime is an illustrative example.
Choosing $\mathcal{M} = \mathbb{R}^{1,d}$ and usual Cartesian coordinates $(t,{\bm x})$, the d'Alembertian is $\Box = -\partial_t^2 + \partial_j \partial^j$ and its eigenfunctions are plane waves, $\psi_{k}(x) = e^{-i k^0 t + i {\bm k}\cdot{\bm x}}$.
The corresponding eigenvalues are $(k^0)^2 - {\bm k}^2$, and so a sharp cutoff like \eqref{eq:sharp-f} removes Fourier contributions to a field configuration from plane waves whose spacetime momentum-squared is greater than $\Omega^2$ or less than $-\Omega^2$.
For a given field configuration $\phi(x)$ with Fourier transform $\tilde \phi(k)$, the action of
\begin{equation}
P_\Omega \equiv \theta(\Omega^2 - |\Box|)
\end{equation}
is therefore
\begin{equation} \label{eq:bandlimited-field-flat}
P_\Omega \phi(x) = \int_{|k_\mu k^\mu| \leq \Omega^2} \frac{\dee^{d+1}k}{(2\pi)^{d+1}} ~ \tilde \phi(k)~ e^{i k_\nu x^\nu}.
\end{equation}
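As an illustration (not part of the formal development), the projection \eqref{eq:bandlimited-field-flat} can be mimicked on a discretized, periodic 1+1-dimensional field with a fast Fourier transform; the grid size and cutoff value below are arbitrary choices.

```python
import numpy as np

# Remove Fourier modes with |(k0)^2 - (k1)^2| > Omega^2 from a random field.
N, Omega = 64, 3.0
rng = np.random.default_rng(0)
phi = rng.standard_normal((N, N))

k = 2 * np.pi * np.fft.fftfreq(N, d=1.0)   # grid frequencies
k0, k1 = np.meshgrid(k, k, indexing="ij")
mask = np.abs(k0**2 - k1**2) <= Omega**2   # covariant bandlimit region

phi_k = np.fft.fft2(phi)
phi_cut = np.real(np.fft.ifft2(phi_k * mask))

# P_Omega is a projector, so applying the cutoff twice changes nothing.
phi_cut2 = np.real(np.fft.ifft2(np.fft.fft2(phi_cut) * mask))
assert np.allclose(phi_cut, phi_cut2)
```

Note that the retained region is hyperbolic: arbitrarily large $|{\bm k}|$ survive along the light cone, unlike for a Euclidean bandlimit.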
The quantity $\Omega$ plays the role of a short-distance cutoff.
Moreover, $\Omega$ admits a natural information theoretic interpretation as a covariant bandlimit on the density of degrees of freedom in spacetime, in the sense of Nyquist-Shannon sampling theory \cite{shannon1998mathematical,NyquistReprint}.
Given a Riemannian manifold, one can view a \emph{conventional} bandlimit as a cutoff on the spectrum of the Laplacian, $\bigtriangleup$.
For example, for functions on $\mathbb{R}$, with $\bigtriangleup = -\partial_x^2$ and eigenfunctions $e^{ikx}$, a cutoff $\Lambda$ restricts $k^2 \leq \Lambda^2$.
\emph{Bandlimited functions} are then functions whose Fourier transforms are compactly supported in a finite interval $[-\Lambda, \Lambda]$, and the maximum Fourier frequency $\Lambda$ is the \emph{bandlimit}, or equivalently here, the functions' \emph{bandwidth}.
A sampling theorem then applies: a bandlimited function can be perfectly reconstructed everywhere on the real line knowing only its values at a discrete set of sample points whose average density is greater than or equal to $\Lambda/\pi$ \cite{LandauSampling}.
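The one-dimensional reconstruction can be sketched numerically via the Shannon interpolation formula $f(t) = \sum_n f(nT)\,\mrm{sinc}\!\left((t-nT)/T\right)$ with $T = \pi/\Lambda$; the bandlimit and test signal below are illustrative choices, not taken from the text.

```python
import numpy as np

Lam = 4.0               # bandlimit (rad/s), hypothetical
T = np.pi / Lam         # Nyquist sample spacing

def f(t):
    # Strictly bandlimited test signal: frequencies 1.0 and 3.0 < Lam.
    return np.cos(1.0 * t) + 0.5 * np.sin(3.0 * t)

n = np.arange(-2000, 2001)     # sample points n*T (truncated series)
samples = f(n * T)

def reconstruct(t):
    # np.sinc is the normalized sinc, sin(pi x)/(pi x), as Shannon requires.
    return np.sum(samples * np.sinc((t - n * T) / T))

for t in (0.3, 1.7, -2.2):
    assert abs(reconstruct(t) - f(t)) < 1e-2   # small truncation error
print("reconstruction matches")
```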
With appropriate modifications, versions of the sampling theorem above generalize to $\mathbb{R}^d$ and to Riemannian manifolds \cite{LandauSampling,PesensonSampling}.
The covariant cutoff $\Omega$ on $\mathbb{R}^{1,d}$ is somewhat different, mainly because the set of allowed eigenvalues $|(k^0)^2 - {\bm k}^2| \leq \Omega^2$ is not compact.
Nevertheless, each partial Fourier transform of a covariantly bandlimited field such as \eqref{eq:bandlimited-field-flat} enjoys a sampling theorem.
That is, consider taking a partial Fourier transform of $P_\Omega \phi$ with respect to $\bm x$ and holding $\bm k$ fixed:\footnote{We could equally have elected to take a partial Fourier transform with respect to $t$ and hold $k^0$ fixed, but the transform with respect to $\bm x$ is the one we will employ in the calculations to come.}
\begin{equation}
P_\Omega \phi(t;{\bm k}) = \int_{|(k^0)^2 - {\bm k}^2| \leq \Omega^2} \frac{\dee k^0}{2\pi} \tilde \phi(k^0, {\bm k}) e^{-i k^0 t}
\end{equation}
A conventional sampling theorem then applies to $P_\Omega\phi(t; {\bm k})$ because the allowed frequencies, $k^0$, form a compact set.
In particular, notice that arbitrarily large magnitudes of ${\bm k}$ are still allowed, but the bandwidth in time for $P_\Omega\phi(t;{\bm k})$ falls to zero as $|{\bm k}| \rightarrow \infty$; the spatial modes ``freeze out'' and their density of degrees of freedom in time falls to zero.
Furthermore, these notions transform covariantly: a spatial mode that contracts under a Lorentz transformation acquires a smaller bandwidth in time, which is consistent with the dilation of its degrees of freedom in time.
See Ref.~\cite{Kempf:2012sg} for a more extensive exposition.
Let us now return to a general Lorentzian manifold $\mathcal{M}$ and focus on the sharp covariant cutoff $P_\Omega$.
As a convenient piece of terminology, we will say that $P_\Omega \phi(x)$ is a \emph{covariantly bandlimited} field.
Conceptually, we implement the covariant cutoff at the level of the quantum field theoretic path integral by only integrating over covariantly bandlimited fields.
For example, consider the Feynman propagator, $G_F$, which can be written in terms of a path integral as
\begin{equation} \label{eq:GF}
i G_F(x,x') = \frac{\int \mathcal{D}\phi ~ \phi(x) \phi(x') e^{iS[\phi]}}{\int \mathcal{D}\phi ~ e^{iS[\phi]}}.
\end{equation}
The covariantly bandlimited propagator, which we denote $G_F^\Omega$, is then given by
\begin{equation} \label{eq:cutoff-GF-PI}
i G_F^\Omega(x,x') = \frac{\int_{B_\mathcal{M}(\Omega)} \mathcal{D}\phi ~ \phi(x) \phi(x') e^{iS[\phi]}}{\int_{B_\mathcal{M}(\Omega)} \mathcal{D}\phi ~ e^{iS[\phi]}},
\end{equation}
where $B_\mathcal{M}(\Omega) \equiv \mrm{span}\{\psi_\lambda \, | \, \Box \psi_\lambda = \lambda \psi_\lambda, |\lambda| \leq \Omega^2\}$ denotes the space of covariantly bandlimited fields on $\mathcal{M}$.
In practice, we implement the covariant cutoff using the projectors $P_\Omega$.
One can view the propagator $G_F(x,x')$ as the integral kernel of an operator on $L^2(\mathcal{M})$ that is the right inverse of the d'Alembert operator.
The bandlimited propagator is then obtained by projecting onto $B_\mathcal{M}(\Omega)$:
\begin{equation} \label{eq:GFc}
G_F^\Omega = P_\Omega G_F P_\Omega
\end{equation}
This prescription is exactly equivalent to the path integral prescription \eqref{eq:cutoff-GF-PI} when the scalar field's action is of the form
\begin{equation} \label{eq:scalar-action}
S[\phi] = \int \dee^{d+1}x \sqrt{-g} ~ \phi F(\Box) \phi,
\end{equation}
as shown in \App{app:PI-projector-equivalence}.
In particular, this includes the case of a free scalar field, with $F(\Box) = \Box - m^2$.
\subsubsection*{Cutoff fluctuation spectrum}
Our next goal is to impose a covariant cutoff on the fluctuation spectrum of a scalar field in a FLRW spacetime.
We consider FLRW spacetimes with no spatial curvature, for which we may write the line element
\begin{equation} \label{eq:FLRW-line-element}
\dee s^2 = a^2(\eta) (-\dee \eta^2 + \dee x_i \dee x^i).
\end{equation}
We choose Cartesian spatial coordinates $x^i$, and for an inflating spacetime, the conformal time takes values $\eta \in (-\infty, 0)$.
The fluctuation spectrum of a scalar field on such a spacetime was defined in \Eq{eq:scalar-fluc-spec} in terms of its mode functions and in \Eq{eq:scalar-spec-from-2pt} in terms of its two-point function.
The two-point function coincides with the Feynman propagator at equal times, so let us write
\begin{equation}
\Delta^2_\phi(\eta,k) = \left. 4\pi k^3 |G_F(\eta,\eta';k)| \, \right|_{\eta = \eta'} .
\end{equation}
Here, $G_F(\eta,\eta';k)$ denotes the spatial Fourier transform of $G_F$ with respect to ${\bm x}$, and it depends only on the magnitude $k \equiv |{\bm k}|$ because of spherical symmetry.
This motivates us to define the (covariantly) cutoff fluctuation spectrum by
\begin{equation}
(\Delta_\phi^\Omega)^2(\eta,k) = \left. 4\pi k^3 |G_F^\Omega(\eta,\eta';k)| \, \right|_{\eta = \eta'},
\end{equation}
and so we must compute the cutoff propagator $G_F^\Omega$.
In fact, what we are more interested in is the correction to the fluctuation spectrum due to imposing a covariant cutoff.
When we impose the covariant cutoff, the Feynman propagator of course changes:
\begin{equation}
G_F ~ \rightarrow ~ G_F^\Omega \equiv G_F + \delta G_F
\end{equation}
The change in the fluctuation spectrum is then
\begin{equation} \label{eq:change-in-fspec-exact}
\delta \Delta_\phi^2 = 4\pi k^3 \left( |G_F + \delta G_F| - |G_F| \right).
\end{equation}
In practice, we will always have that $|\delta G_F| \ll |G_F|$, and so we can compute the difference \eqref{eq:change-in-fspec-exact} via a Taylor series.
One finds, to lowest order in $\Re~\delta G_F$ and $\Im~\delta G_F$, that the relative change in $\Delta_\phi^2$ is given by
\begin{equation}
\label{eq:reldiffscalar}
\frac{\delta \Delta_\phi^2}{\Delta_\phi^2} = \mrm{Re} \left( \frac{\delta G_F}{G_F} \right) + O(\delta G_F^2).
\end{equation}
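This first-order formula is easily checked numerically with arbitrary complex numbers standing in for $G_F$ and $\delta G_F$ (the values below are placeholders, chosen only so that $|\delta G_F| \ll |G_F|$):

```python
# Check |G + dG| - |G| ~ |G| Re(dG/G) to first order in dG.
G = 1.3 - 0.7j           # placeholder propagator value
dG = 2e-4 + 1e-4j        # small placeholder correction

exact = (abs(G + dG) - abs(G)) / abs(G)
linear = (dG / G).real

# Agreement up to terms of order |dG/G|^2.
print(exact, linear)
```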
In a similar vein, while the covariantly bandlimited propagator is formally given by \Eq{eq:GFc}, we can access the correction to it directly by rewriting \eqref{eq:GFc} in terms of the complementary projector $P_\Omega^\perp = I - P_\Omega$:
\begin{equation}
G_F^\Omega = G_F + (P_\Omega^\perp G_F P_\Omega^\perp - P_\Omega^\perp G_F - G_F P_\Omega^\perp).
\end{equation}
The term in parentheses is therefore the correction $\delta G_F$.
The calculation of $\delta G_F$ begins with writing down the Sturm-Liouville eigenvalue problem: $\Box u(\eta, {\bm x}) = \lambda u(\eta, {\bm x})$.
Notice, however, that a spatial Fourier transform with respect to ${\bm x}$ preserves the spectrum of the d'Alembertian, i.e. if $\Box u(\eta, {\bm x}) = \lambda u(\eta, {\bm x})$, then $\Box_{\bm k} u(\eta, {\bm k}) = \lambda u(\eta, {\bm k})$.
We may therefore impose the covariant cutoff on each spatial mode individually, which will be practical since we are ultimately interested in computing the correction to the spatial Fourier transform of the propagator on a mode-by-mode basis.
Next, we fix boundary conditions so that $\Box_{\bm k}$ is self-adjoint, and we identify its spectrum.
Then, with eigenfunctions and eigenvalues in hand, for each $k$ we construct the projector $P_\Omega^\perp$ and use it to obtain $\delta G_F$, and hence $\delta \Delta^2_\phi / \Delta^2_\phi$.
\subsubsection*{Example: de Sitter inflation}
\label{sec:dS-fluctuation-spectrum}
As a both productive and necessary example, let us explicitly compute $\delta G_F$ for a massless scalar field in de Sitter spacetime.
This amounts to the choice of scale factor $a(\eta) = (-H\eta)^{-1}$, and we also specialize to four spacetime dimensions from now on.
We will only describe the calculation in broad strokes here, leaving a complete account of all the details to \App{app:details}.
In summary, the calculation essentially proceeds in six steps:
\begin{enumerate}
\item Starting with the de Sitter $k$-d'Alembertian $\Box_k$, we write down the two linearly independent solutions of the eigenvalue equation $\Box_k u = \lambda u$ (Eqs.~\eqref{eq:SL-J} and \eqref{eq:SL-Y}). One solution is normalizable in $L^2((-\infty,0), a^4(\eta)\, \dee \eta)$ when $\lambda < 9H^2/4$, and so self-adjoint realizations of $\Box_k$ have point spectrum in this range. In contrast, both solutions are non-normalizable for $\lambda \geq 9H^2/4$, and so self-adjoint realizations of $\Box_k$ have continuous spectrum in this range.
\item We use an orthonormality relation (\Eq{eq:pt_spec_eigenf}) among point spectrum eigenfunctions, $\psi_n$, to determine the different possible point spectra corresponding to different self-adjoint realizations of $\Box_k$ as an operator on $L^2((-\infty,0), a^4(\eta)\, \dee \eta)$.
\item We fix a particular choice of self-adjoint extension by requiring that $\Box_k (G_F^h \psi_n) = \psi_n$, where $G_F^h$ is the Hermitian part of the Feynman propagator (\Eq{eq:GF-herm}). The latter is calculated according to canonical quantization with the Bunch-Davies vacuum chosen as the field's vacuum state (\Eq{eq:GF-dS}).
\item We then determine the continuous spectrum eigenfunctions, $\varphi_q$ (\Eq{eq:cts_spec_eigenf}), by requiring that they be orthogonal to the $\psi_n$, as well as mutually continuum-normalized.
\item Next, we construct the projector $P_\Omega^\perp$ (\Eq{eq:P-Omega-perp}), which we use to write down an expression for the change in the Feynman propagator, $\delta G_F$, due to the covariant cutoff.
\item Finally, we argue that we can neglect the point spectrum contribution to $\delta G_F$, and we make several useful approximations for the continuous spectrum contribution to arrive at a compact final expression for $\delta \Delta^2_\phi/ \Delta^2_\phi$.
\end{enumerate}
We ultimately find that
\begin{equation}\label{eq:final_answer}
\frac{\delta \Delta^2_\phi}{\Delta^2_\phi} \approx \frac{ 2 I(Q,x) \left[ \left(\tfrac{2}{\pi}\right)^{3/2} Y_{3/2}(x) + \tfrac{4}{\pi^3} I(Q,x) \right] }{J_{3/2}(x)^2 +Y_{3/2}(x)^2},
\end{equation}
where $I$ is the integral
\begin{equation} \label{eq:I-integral-app-mainbody}
I(Q,x) = - \int_0^\infty \dee b ~ e^{w(Q,b;2/(x e))} \sin\left( W(Q,b;2/(x e)) \right)
\end{equation}
and where we have defined $x = -k\eta$, $Q = \sqrt{\sigma^{-2}-9/4}$, and $\sigma = H/\Omega$.
The functions $w$ and $W$ are given by
\begin{align}
w(Q,b;2/(xe)) &= - \frac{1}{2} \left(b + \frac{3}{2}\right) \ln\left(Q^2+b^2\right) - Q \arctan\left(\frac{b}{Q}\right) - b \ln\left(\frac{2}{xe}\right), \\[2mm]
W(Q,b;2/(xe)) &= \frac{Q}{2} \ln\left(Q^2+b^2\right) - \left(b + \frac{3}{2}\right) \arctan\left(\frac{b}{Q}\right) + Q \ln\left(\frac{2}{xe}\right) .
\end{align}
\Eq{eq:final_answer} makes manifest that $\delta \Delta^2_\phi / \Delta^2_\phi$ is only a function of two independent parameters: $x$, which characterizes when we evaluate the fluctuation spectrum relative to horizon crossing, and $\sigma$, the ratio of the cutoff and Hubble scales.
Although the integral \eqref{eq:I-integral-app-mainbody} cannot be evaluated in closed form, note that for $\sigma\ll 1$, i.e. $Q\gg1$, the exponential function in the integrand decays much more rapidly than the sine function oscillates. Hence, in this limit, the integral can be well-approximated by expanding the functions $w$ and $W$ up to second order in $b$ and analytically evaluating the resulting Gaussian integral. This results in the asymptotic expansion\footnote{The next term in the expansion for $I$ is
\begin{align}
\frac{\left(3 \ln (2Q/x)+2\right) \cos \left(Q-Q \ln(2Q/x)\right)}{2 Q^{5/2} \ln ^3(2 Q/x)},
\end{align}
which is smaller than the leading term by a factor of $1/Q$. Hence for $Q\sim 1/\sigma\sim 10^5$, the leading approximation is quite accurate.
}
\begin{align}\label{eq:I-approx}
I \sim \frac{\sin \left(Q-Q \ln(2Q/x)\right)}{Q^{3/2} \ln(2Q/x)}.
\end{align}
We see that $I\ll 1$ for $Q\gg 1$, and thus to a good approximation we can further neglect the $I^2$ term in the expression \eqref{eq:final_answer} for the relative change in $\Delta_\phi$, which gives
\begin{equation} \label{eq:final_approx}
\frac{\delta \Delta^2_\phi}{\Delta^2_\phi}
\approx
-\frac{4(\cos x + x \sin x)}{\pi(1+x^2)}
\frac{\sin (Q-Q\ln[2Q/x])}{Q^{3/2} \ln[2Q/x]}.
\end{equation}
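As a numerical sanity check (not needed for the derivation), one can evaluate the integral \eqref{eq:I-integral-app-mainbody} directly and compare it with the leading asymptotic term \eqref{eq:I-approx}; already for $Q$ of a few hundred the two agree to better than a few percent. A minimal Python sketch:

```python
import numpy as np
from scipy.integrate import quad

def I_numeric(Q, x=1.0):
    """Direct numerical evaluation of the integral I(Q, x)."""
    c = np.log(2.0 / (x * np.e))
    def integrand(b):
        w = -0.5 * (b + 1.5) * np.log(Q**2 + b**2) - Q * np.arctan(b / Q) - b * c
        W = 0.5 * Q * np.log(Q**2 + b**2) - (b + 1.5) * np.arctan(b / Q) + Q * c
        return -np.exp(w) * np.sin(W)
    val, _ = quad(integrand, 0.0, np.inf, limit=500)
    return val

def I_leading(Q, x=1.0):
    """Leading term of the asymptotic expansion for Q >> 1."""
    L = np.log(2.0 * Q / x)
    return np.sin(Q - Q * L) / (Q**1.5 * L)

# already at moderate Q the leading term is accurate to better than a few percent
assert abs(I_numeric(300.0) / I_leading(300.0) - 1.0) < 0.05
```

For the realistic regime $Q \sim 10^5$ the agreement is correspondingly better, consistent with the $1/Q$ suppression of the next term in the expansion.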
It is expected that inflationary fluctuations freeze out (i.e. stop fluctuating) as they cross the horizon, so from now on we will set $x=1$ to reflect this expectation.
Then, if we also approximate $Q \approx \sigma^{-1}$, we arrive at the following compact expression for the relative change in $\Delta_\phi$:
\begin{equation}\label{eq:final_answer-approximation}
\left. \frac{\delta \Delta^2_\phi}{\Delta^2_\phi} \right|_{x=1}
\approx \frac{2(\cos1+\sin1)}{\pi}
\frac{\sigma^{3/2}}{\ln (\sigma/2)}
\sin (\omega(\sigma)\sigma).
\end{equation}
In the equation above, we defined the $\sigma$-dependent frequency
\begin{align}
\omega(\sigma)\equiv
\frac{1}{\sigma^2}\left(1-\ln\frac{2}{\sigma}\right).
\end{align}
These oscillations are a key feature of our predicted correction to the power spectrum.
A plot of the correction to $\Delta_\phi^2$ as a function of $\sigma$, in which these characteristic oscillations can be seen, is shown in \Fig{fig:dS-correction}.
\subsubsection*{Predictions for slow-roll FLRW inflation}
For the purpose of comparing with cosmological data, given a slowly-rolling inflationary spacetime, the goal is to produce a prediction for $\delta \Delta^2_\phi/\Delta^2_\phi$ evaluated at horizon crossing as a function of the comoving mode $k$, namely
\begin{equation} \label{eq:prediction}
\left. \frac{\delta \Delta^2_\phi}{\Delta^2_\phi} \right|_{aH = k}.
\end{equation}
For a de Sitter scale factor, horizon crossing happens at $x = - k \eta = 1$ for all modes, and so \eqref{eq:prediction} is a constant correction for all $k$.
This is just a reflection of the fact that the proper size of the de Sitter horizon is constant in time, and so all modes are the same proper size when they cross the horizon.
Consequently, the magnitude of their fluctuations is the same, as well as the magnitude of the correction due to a covariant cutoff.\footnote{Explicitly, here both $x = -k\eta$ and $\sigma = H/\Omega$ are constant in using \Eq{eq:final_answer} to compute \eqref{eq:prediction}.}
On the other hand, in a slowly-rolling inflationary spacetime, the proper size of the cosmological horizon is slowly changing, and so different modes will have different proper wavelengths when they cross the horizon.
The prediction \eqref{eq:prediction} will therefore be nontrivial as a function of $k$.
The calculation that we carried out for de Sitter in \Sec{sec:dS-fluctuation-spectrum} is intractable for a generic FLRW spacetime, even for an example as simple as a power-law scale factor.
Our strategy will therefore be to approximate the prediction \eqref{eq:prediction} for a given slowly-rolling FLRW spacetime by a succession of instantaneously de Sitter calculations.
That is, we will use the de Sitter result \eqref{eq:final_answer} to compute \eqref{eq:prediction}, but with a Hubble parameter $H$ that depends on $k$.
Intuitively, such an adiabatic approximation will be accurate provided that the true time-dependent Hubble parameter of the FLRW spacetime, $H(\eta)$, evolves sufficiently slowly, which is what we expect during slow-roll inflation.
We discuss this adiabatic, or ``slow-roll'' approximation and additional supporting evidence for its validity in \App{sec:approximation}.
In practice, for each given comoving mode $k$, we must fix the values of $\eta$ and $H$, or equivalently the values of $x$ and $\sigma$, in \Eq{eq:final_answer-approximation}.
Since we are interested in the correction to a mode's fluctuations when it crosses the cosmological horizon, and since \Eq{eq:final_answer-approximation} was derived for a de Sitter background, we will set $\eta$ to be the de Sitter horizon-crossing time; that is, we fix $x = -k\eta = 1$ in \Eq{eq:final_answer-approximation}.
Then, to fix the value of $H$, we use the horizon-crossing condition $a(\eta) H(\eta) = k$ for the slowly-rolling $a(\eta)$ to determine the value of $H(\eta)$ at horizon crossing.
We show this calculation explicitly in the next section.
\section{Corrections to primordial power spectra}
\label{sec:prediction}
In this section, we compute the correction to the primordial power spectrum due to a covariant natural UV cutoff using realistic cosmological parameters.
We focus on the scalar spectrum, $\Delta_\mathcal{R}^2$, but the calculation for the tensor spectrum, $\Delta_\mathcal{T}^2$, is completely analogous.
As input to the prediction \eqref{eq:prediction}, we need the Hubble parameter seen by a mode $k$ when it crosses the cosmological horizon.
We compute this by comparing the theoretical form of $\Delta_\mathcal{R}^2$ (without cutoff) given by \Eq{eq:scalar-spectrum} with its observational parameterization \eqref{eq:scalar-spectrum-pheno}.\footnote{Note that we are justified in using the uncorrected value of $\Delta_\mathcal{R}^2$ to compute $H$ because the error incurred in doing so is of second order, whereas we are computing a first order correction.}
Equating the two, we arrive at an expression for $H$ evaluated at horizon crossing in terms of observed parameters
\begin{equation} \label{eq:Hubble-eff}
\left. H^2 \right|_{aH = k} = \Mpl^2 \pi \epsilon A_s \left( \frac{k}{k_\star} \right)^{n_s-1}.
\end{equation}
Now let us define $\mu$ as the ratio of the Planck scale $\Mpl$ to the cutoff scale $\Omega$,
\begin{equation}
\mu \equiv \frac{\Mpl}{\Omega}.
\end{equation}
Setting $\mu = 1$ corresponds to a UV cutoff at the Planck scale; however, it is possible that a quantum gravity-motivated UV cutoff could lie at a lower energy scale (e.g. the string scale), in which case we would have $\mu > 1$.
We therefore find that the ratio of the Hubble and cutoff scales at the time when the mode $k$ crosses the horizon is given by\footnote{If we set $\mu = 1$ and take $k_\star = 0.05~\mrm{Mpc}^{-1}$, $A_s = 2.1 \times 10^{-9}$, $n_s = 0.97$, and $\epsilon = 0.003$, then $\sigma$ ranges from $4.1 \times 10^{-6}$ to $4.9 \times 10^{-6}$ over the range of $k$ from $10~\mrm{Mpc}^{-1}$ down to $10^{-4}~\mrm{Mpc}^{-1}$. So, indeed, the Planck and Hubble scales were separated by about 5 to 6 orders of magnitude when the modes measured in the CMB crossed the cosmological horizon.}
\begin{equation} \label{eq:Heff}
\sigma(k) \equiv \left. \frac{H}{\Omega} \right|_{aH = k} = \mu \sqrt{\pi \epsilon A_s} \left( \frac{k}{k_\star} \right)^{(n_s-1)/2}.
\end{equation}
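For orientation, the numbers quoted in the footnote are easy to reproduce. The following sketch (Python, with the representative parameter values used in the text) evaluates $\sigma(k)$ at the edges of the observable window:

```python
import numpy as np

A_s, n_s = 2.1e-9, 0.97        # Planck 2018 scalar amplitude and tilt
eps = 0.003                    # representative slow-roll parameter
k_star = 0.05                  # pivot scale in Mpc^-1
mu = 1.0                       # cutoff at the Planck scale, Omega = Mpl

def sigma(k):
    """H/Omega at horizon crossing of the comoving mode k."""
    return mu * np.sqrt(np.pi * eps * A_s) * (k / k_star) ** ((n_s - 1) / 2)

# sigma varies only mildly across the observable window
print(sigma(10.0))   # ~4.1e-6 at the smallest observable scales
print(sigma(1e-4))   # ~4.9e-6 at the largest observable scales
```

The weak $k$-dependence reflects the near scale invariance of the spectrum, $n_s - 1 \approx -0.03$.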
Finally, we can use \Eq{eq:Heff} in \Eq{eq:final_answer} or \Eq{eq:final_answer-approximation} to obtain a prediction for the corrections $\delta \Delta_\mathcal{R}^2/\Delta_\mathcal{R}^2$ to the scalar primordial power spectrum, where we take the placeholder scalar field $\phi$ to be the Mukhanov-Sasaki variable, $\mathcal{R}$.
There is one subtlety that we must address, however, which is that the Mukhanov-Sasaki variable does not see the Hubble parameter corresponding to the scale factor $a$.
Rather, it experiences a modified Hubble parameter given by $\dot z/z$ due to the modified scale factor $z$; see \Eq{eq:z}.
Therefore, strictly speaking, we should use this modified Hubble parameter in \Eq{eq:final_answer} for making a prediction for how the scalar primordial power spectrum changes, whereas \Eq{eq:Heff} gives the unmodified Hubble parameter $H \equiv \dot{a}/a$.
However, we also saw in \Eq{eq:z-Hubble} that $\dot z/z$ coincides with $H$ at leading order in the slow-roll expansion.
Therefore, to a very good approximation, we can still simply use \Eq{eq:Heff} for computing $\delta \Delta_\mathcal{R}^2/\Delta_\mathcal{R}^2$.
We have thus arrived at our main prediction, which can be summarized as follows: a sharp covariant natural UV cutoff causes small $k$-dependent oscillations which are superimposed on the non-cutoff primordial power spectrum. These corrections are given by
\begin{align}\label{eq:main_prediction}
\left. \frac{\delta \Delta^2_\calR}{\Delta^2_\calR}\right|_{aH=k}
=
\mathcal{C}
\frac{\sigma(k)^{3/2}}{\ln(\sigma(k)/2)} \sin\left(\omega(k)\, \sigma(k)\right),
\end{align}
where $\mathcal{C} = 2(\cos 1 + \sin 1)/\pi$,
\begin{align}
\omega(k) \equiv \omega(\sigma(k))
= \frac{1}{\sigma(k)^2}\left(1-\ln\frac{2}{\sigma(k)}\right),
\label{eq:oscillation-frequency}
\end{align}
and where $\sigma(k)$ is given by \Eq{eq:Heff}.
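Putting the pieces together, the prediction can be evaluated directly. The sketch below (Python, again with the representative parameter values used in the text) computes the relative correction for a given mode; note that $\omega(\sigma)$ as defined is negative for $\sigma \ll 1$, so quoted frequencies refer to its magnitude.

```python
import numpy as np

A_s, n_s, eps, k_star, mu = 2.1e-9, 0.97, 0.003, 0.05, 1.0

def sigma(k):
    # ratio of Hubble and cutoff scales at horizon crossing of the mode k
    return mu * np.sqrt(np.pi * eps * A_s) * (k / k_star) ** ((n_s - 1) / 2)

def omega(s):
    # dimensionless oscillation frequency; negative for s << 1 (|omega| is quoted)
    return (1.0 - np.log(2.0 / s)) / s**2

def correction(k):
    """Relative correction to the scalar power spectrum at horizon crossing."""
    s = sigma(k)
    C = 2.0 * (np.cos(1.0) + np.sin(1.0)) / np.pi
    return C * s**1.5 / np.log(s / 2.0) * np.sin(omega(s) * s)

# The oscillation envelope is below ~1e-9 for a Planck-scale cutoff
print(abs(correction(k_star)))
```

The envelope $|\mathcal{C}\,\sigma^{3/2}/\ln(\sigma/2)|$ is of order $10^{-10}$ for $\mu = 1$, which is where the "9 to 10 orders of magnitude" figure quoted later comes from.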
Both the amplitude and frequency of the oscillations track the ratio $\sigma(k)$ of the Hubble and cutoff scales at horizon crossing, and as a result both are functions of $k$. In particular, up to a logarithmic correction, the amplitude scales as $\sigma^{3/2}$.
Interestingly, this scaling almost exactly interpolates between previous claims that the scaling should be linear \cite{Easther:2001fi} or quadratic \cite{Kempf:2001fa,Frob:2012ui} in the ratio of the Hubble and cutoff scales.
Notice that the only free parameter in this prediction is $\mu = \Mpl/\Omega$, which fixes the precise location of the cutoff scale.
The oscillation frequency $\omega(\sigma(k))$ is the most robust predicted feature of covariant natural UV cutoffs.
So far, we have focused on a sharp cutoff $P_\Omega = \theta(\Omega^2 - |\Box|)$; however, the class of cutoffs specified by \Eq{eq:cutoff} that we could consider is much larger.
For example, we could consider a cutoff $f(\Box)$ that smooths out the Heaviside step function in $P_\Omega$.
Doing so effectively introduces a free functional parameter (which characterizes the degree of smoothness), yet even so, the oscillation frequency is largely independent of such precise details of the cutoff.
This is in contrast to the oscillations' amplitude and phase, which depend more strongly on such details.
These points are discussed more thoroughly in \App{sec:detailed-features}.
In a nutshell, smoothing out the cutoff shifts the oscillations' phase and tends to reduce their amplitude.
The former has little practical consequences, but the latter makes the oscillations harder to observe.
With these findings in mind, here we continue to focus on the case of a sharp cutoff, with the understanding that this corresponds to the strongest possible signal.
Let us now examine the oscillation frequency more carefully for realistic inflationary parameters.
As discussed in \Sec{sec:cosmological-perturbations}, for a pivot scale $k_\star = 0.05~\mrm{Mpc}^{-1}$, the primordial scalar amplitude and spectral tilt are observed to be $A_s = 2.1 \times 10^{-9}$ and $n_s = 0.97$ \cite{Planck:2018jri}.
The slow-roll parameter $\epsilon$ is only constrained to be less than 0.0039, so let us take $\epsilon = 0.003$ as a representative value. Finally, the modes whose fluctuations we are currently able to observe in the CMB have $k$ in the range $k_\mrm{min} = 10^{-4}~\mrm{Mpc}^{-1}$ to $k_\mrm{max} = 10~\mrm{Mpc}^{-1}$. We show a plot of $\Delta^2_\mathcal{R}$ and we illustrate the predicted correction for $\mu = 1$ in \Fig{fig:prediction}a.
For these cosmological parameters, the dimensionless oscillation frequency $\omega(\sigma)$ is high over the relevant range of $k$.
We can get a sense of this by plugging in some numbers: for a realistic value $\sigma\sim10^{-5}$, the prefactor in Eq.~\eqref{eq:oscillation-frequency} is $1/\sigma^2 = 10^{10}$, and the logarithmic factor raises $|\omega|$ to roughly $10^{11}$.
Over the five orders of magnitude that $k$ varies in \Fig{fig:prediction}a, the corresponding values of $\sigma$ for $\mu = 1$ and the cosmological parameter values given above are in the relatively narrow range of $4.1 \times 10^{-6}$ to $4.9 \times 10^{-6}$.
The dimensionless frequency $\omega(\sigma)$ hence remains the same order of magnitude throughout the entire visible $k$ window (ranging from $7.2 \times 10^{11}$ to $5.0 \times 10^{11}$).
When viewed on a logarithmic plot, the oscillation frequency is approximately constant as a function of $\log k$; whenever $k$ increases by an order of magnitude, there are approximately 16,000 full oscillations.
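The quoted oscillation count can be checked numerically: the total phase of the sine in the prediction is $\Phi(k) = \omega(\sigma(k))\,\sigma(k)$, and its advance over one decade of $k$ gives the number of cycles. An illustrative Python sketch (same representative parameters as above):

```python
import numpy as np

A_s, n_s, eps, k_star, mu = 2.1e-9, 0.97, 0.003, 0.05, 1.0

def sigma(k):
    # ratio of Hubble and cutoff scales at horizon crossing
    return mu * np.sqrt(np.pi * eps * A_s) * (k / k_star) ** ((n_s - 1) / 2)

def phase(k):
    # total phase of the oscillation: omega(sigma) * sigma = (1 - ln(2/sigma))/sigma
    s = sigma(k)
    return (1.0 - np.log(2.0 / s)) / s

cycles_per_decade = abs(phase(10 * k_star) - phase(k_star)) / (2 * np.pi)
print(cycles_per_decade)   # roughly 16,000 oscillations per decade of k
```

The count is nearly the same for any decade in the observable window, which is why the oscillations look approximately uniform in $\log k$.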
Perhaps more interesting is the scaling of the oscillation frequency and amplitude with $\mu$, which is the sole free parameter for a sharp cutoff.
The dimensionless frequency scales like $\omega \propto \mu^{-2}$, while the product $\omega(k) \sigma(k)$ (and therefore the frequency of oscillations in the $k$ or $\log k$ domains) scales like $\mu^{-1}$.
Either way, the oscillation frequency is lower for lower cutoff energies.
Meanwhile, up to a logarithmic factor, the amplitude of the oscillations scales like $\mu^{3/2}$, and so the predicted signal gets stronger for lower cutoff energies.
Altogether, as the cutoff scale is brought close to the Hubble scale, the correction becomes so prominent that it should be possible to bound the value of $\mu$, or equivalently $\Omega$, for a sharp cutoff even with existing observational data.
For illustration, in \Fig{fig:prediction}b we show a plot of $\Delta^2_\mathcal{R}$ and its correction for the extreme value $\mu = 20,000$, which locates the cutoff scale $\Omega$ about one order of magnitude above the Hubble scale over the range of $k$ shown.
\Fig{fig:scaling} illustrates the scaling of the oscillations' amplitude and frequency as $\Omega$ varies between such extreme cutoff scales up to the Planck scale.
The prediction for the primordial tensor power spectrum is completely analogous to the scalar case.
The only difference is that we write $\sigma$ in terms of the tensor parameters as
\begin{equation}
\left. \sigma \right|_{aH=k} = \frac{\mu \sqrt{\pi A_t}}{4} \left(\frac{k}{k_\star}\right)^{n_t/2}.
\end{equation}
In principle, the absolute correction $\delta \Delta^2_\mathcal{T}$ will differ from $\delta \Delta^2_\mathcal{R}$ due to the fact that $\Delta^2_\mathcal{T}$ and $\Delta^2_\mathcal{R}$ differ by multiplicative prefactors and because the scalar perturbations see a modified scale factor $z$, whereas the tensor perturbations see the unmodified scale factor $a$.
However, the multiplicative prefactors cancel in $\delta \Delta^2_\mathcal{T}/\Delta^2_\mathcal{T}$, and we previously argued that $\dot z/z = \dot a/a + O(\epsilon)$.
Therefore, we can obtain a tensor prediction from any scalar prediction by making the replacements
\begin{equation}
\epsilon A_s \rightarrow \frac{A_t}{16} \qquad \mrm{and} \qquad n_s - 1 \rightarrow n_t.
\end{equation}
\section{Discussion}
\label{sec:discussion}
\subsubsection*{Summary}
We calculated the signature that a generic, quantum gravity-motivated, natural UV cutoff would leave in primordial inflationary power spectra.
The UV cutoff that we considered takes the form of a cutoff on the spectrum of a scalar field's d'Alembertian, i.e., it is a large eigenvalue cutoff for scalar fields.
As such, it covariantly generalizes the notion of a maximum Fourier frequency to arbitrary Lorentzian manifolds.
It also admits a natural information theoretic interpretation as a cutoff on the density of field degrees of freedom in spacetime, in the sense of Shannon sampling theory; see \cite{Kempf:2012sg,Chatwin-Davies:2016byj}.
We implemented the natural UV cutoff in the language of path integrals by restricting the space of fields integrated over in a path integral to only those fields that are spanned by eigenfunctions of the d'Alembertian whose eigenvalues are less than the cutoff scale.
Conceptually, this can be thought of as discarding, in a covariant way, the contributions to the path integral of field configurations which fluctuate too far off shell.
In practice, this is equivalent to constructing projectors and using them to restrict operators to the subspace defined by the cutoff.
We illustrated this process by calculating the covariantly cut-off Feynman propagator for a massless scalar field in de Sitter spacetime and we explained how to generalize the result to slowly-rolling FLRW spacetimes in \Sec{sec:covariant-cutoff}.
Furthermore, we used the fact that the primordial scalar perturbation and each polarization of the tensor perturbation are massless scalar fields that propagate on a slowly-rolling FLRW background, and that the power spectra of their fluctuations are straightforward to calculate in terms of the Feynman propagator.
This allowed us to use our results for the covariantly bandlimited Feynman propagator to compute the correction that a covariant natural ultraviolet cutoff produces on the primordial power spectra.
While our calculations for the tensor and scalar spectra are analogous, we focused on the scalar spectrum, $\Delta^2_\mathcal{R}(k)$, due to its better experimental prospects.
We found that the correction induced by the cutoff, $\delta \Delta^2_\mathcal{R}(k)$, takes the form of small $k$-dependent oscillations;\footnote{Log-oscillations, i.e., corrections to the primordial power spectrum that schematically oscillate like $\sin(\omega \ln k)$, are not unique to our prediction; see, e.g., \Ref{Calcagni:2016ofu}, as well as Refs.~[27-43] therein. To our knowledge, the chirping $k$-dependence of $\omega(k)$ in \Eq{eq:oscillation-frequency}, as well as the overall amplitude in \Eq{eq:main_prediction} for the case of a sharp cutoff, are specific to our prediction.} see Eqs.~\eqref{eq:main_prediction} and \eqref{eq:oscillation-frequency}.
The amplitude and phase of the oscillations depend moderately on the precise details of the cutoff, namely, how smoothly and over how many Planck lengths it turns on.
However, the oscillation frequency is a particularly robust characteristic of the prediction.
The frequency, as a function of $k$, tracks the ratio of the cutoff and Hubble scales when the mode $k$ crossed the Hubble length during inflation, and its functional form is completely fixed according to \Eq{eq:oscillation-frequency}.
The only free parameter of this prediction is the location of the cutoff; parameterizing the cutoff $\Omega$ in terms of the Planck mass via $\Omega = \Mpl/\mu$, the oscillation frequency decreases with increasing $\mu$.
Therefore, if the cutoff scale is located below the Planck scale, the predicted signature is stronger, with a larger amplitude and a lower frequency.
This scenario could correspond to quantum gravitational effects becoming important at, e.g., the string scale.
Furthermore, it should be possible to place a bound on $\mu$ based on the highest resolvable frequency and smallest resolvable amplitude in experimental data.
\subsubsection*{Effective field theory of inflation}
Effective field theory has been used to systematically study the possible high energy corrections to inflation \cite{Cheung:2007st}. As always in effective field theory, one considers a Lagrangian with all possible local interaction terms that satisfy the appropriate symmetries, together with a Wilsonian cutoff, below which the coupling coefficients of these terms can be deduced from experiments. By integrating out higher energy contributions to the path integral, the Wilsonian cutoff is lowered and the coupling coefficients flow to different values. Of course, in the absence of experiments, effective field theory does not pick out a particular theory (i.e. a particular set of coupling coefficients) as the correct one.
In this paper, by contrast, we study the free theory of the inflaton and of linearized gravitational perturbations, with a covariant cutoff imposed on the path integral. Unlike in the effective field theory framework, we are therefore studying one particular theory; there are no undetermined coefficients to fix using experiments. In particular, we emphasize that we did \textit{not} obtain our field theory by integrating out trans-Planckian contributions to the path integral.
Instead, we discard these contributions from the quantum field theoretic path integral and study the resulting modified theory.
Nevertheless, the formalism of effective field theory may well be broad enough to also encompass the UV modification to quantum field theory that we are considering here.
To this end, a possible starting point is as follows.
A path integration over the space $B_\mathcal{M}(\Omega)$ of covariantly bandlimited fields can be re-written as a path integral over the space of all fields with an indicator function weighting the integration.
Explicitly,
\begin{equation}
\int_{B_\mathcal{M}(\Omega)} \mathcal{D}\phi~e^{iS[\phi]} = \int \mathcal{D}\phi ~ \Pi_\Omega[\phi] e^{iS[\phi]},
\end{equation}
where the indicator function $\Pi_\Omega[\phi] = 1$ if $\phi \in B_{\mathcal{M}}(\Omega)$ and vanishes otherwise, or is a more complicated functional in the case of a soft cutoff.
We may then exponentiate the indicator to formally obtain
\begin{equation}
\int \mathcal{D}\phi~e^{i(S[\phi] - i \ln \Pi_\Omega[\phi])}.
\end{equation}
One could then attempt to understand $i \ln \Pi_\Omega[\phi]$ as higher-order corrections to the free field action $S[\phi]$.
We leave these considerations to future work.
\subsubsection*{Observational prospects}
A natural covariant UV cutoff, $\Omega$, is widely expected to exist somewhere between the Hubble scale during inflation and the Planck scale.
$\Omega$ should be no larger than the Planck scale, since that is the scale at which we expect quantum gravitational effects to dominate and hence the low-energy path integral description to break down. Conversely, $\Omega$ should not lie too close to the Hubble scale during inflation, so as to remain consistent with the standard quantum field theoretic description of cosmological perturbations, which is supported by observations.
Here, we calculated concrete predictions for the corrections to the primordial scalar and tensor power spectra that arise from such a cutoff, as a function of $\Omega$. The main prediction is that the presence of this UV cutoff results in oscillations on top of the familiar, nearly-scale-invariant, primordial power spectrum curve; see Fig. \ref{fig:prediction}.
If the natural UV cutoff is located at what is presumably the upper limit of its possible range, i.e., at the Planck scale with $\Omega = \Mpl$, then, as Fig.~\ref{fig:prediction}a shows, the predicted oscillations have an amplitude which is very small (about 9 to 10 orders of magnitude smaller than the mean value of the power spectrum) and their frequency is very high (there are roughly $10^5$ oscillations in the observable window $10^{-4}~\mrm{Mpc}^{-1}<k<10~\mrm{Mpc}^{-1}$). These numbers would seem to suggest that in order to measure this signal, measurements which are 9 to 10 orders of magnitude more sensitive than what is currently possible would be needed.
Moreover, it would seem necessary to measure at $\sim10^5$ different $k$ values in order to fully resolve the oscillations.
Fortunately, however, not quite as much accuracy should be needed to test the predictions, even if $\Omega$ is at the Planck scale. This is because the prediction for the signal in the primordial power spectrum is very robust: it effectively depends on only one free parameter, the value of the cutoff scale $\Omega$. In particular, the frequency of the predicted oscillations in the primordial power spectra depends essentially only on $\Omega$, even if the cutoff is softened. If the cutoff is sharp, then even the phase and amplitude of the predicted oscillations depend only on $\Omega$.
This means that the prediction can be thought of as a one-parameter family of template waveforms, parameterized by $\Omega$ (or the ratio of the natural UV cutoff scale to the Hubble scale).
Experimentally, then, the search for these template waveforms should offer a significantly improved signal-to-noise ratio: in effect, a template search filters out the noise in the part of function space orthogonal to the span of the template functions, much as a low-pass filter removes noise at frequencies above a signal's band.
Template search methods have of course recently been used to great effect in the detection of gravitational waves, e.g., from black hole mergers. There, a three-parameter family of template waveforms was successfully used to detect the passage of gravitational waves with a strain of only $10^{-24}$ \cite{LIGOScientific:2016aoc, Privitera:2013xza}.
Alternatively, if the natural UV cutoff scale $\Omega$ is located below the Planck scale, then the lower the value of $\Omega$, the larger is the predicted effect, since the oscillation frequency is then predicted to be lower while the amplitude is predicted to be larger.
It will therefore be very interesting to compare the predictions made here with present and upcoming observation-based precision reconstructions of the scalar primordial power spectrum. At a minimum, this will yield ever-higher lower bounds on the location of a natural UV cutoff scale $\Omega$, and it may eventually yield positive evidence for the existence of a natural UV cutoff, a possible experimental signal of quantum gravitational origin.
\begin{center}
{\bf Acknowledgments}
\end{center}
\noindent
We thank Panos Betzios, Fran\c cois Bouchet, Richard Easther, Simon Foreman, Lukas Hergt, Arjun Kar, Jorma Louko, Rob Martin, and Mark Van Raamsdonk for helpful discussions during the preparation of this manuscript, as well as Gianluca Calcagni and Albert Roura for comments on the first version.
A.C.D. acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number PDF-545750-2020].
A.C.D. was supported for a portion of this work as a postdoctoral fellow (Fundamental Research) of the National Research Foundation -- Flanders (FWO), Belgium. AK acknowledges support through a Discovery Grant of the Natural Sciences and Engineering Research Council of Canada (NSERC) and a Discovery Project grant of the Australian Research Council (ARC). PS acknowledges support from the NSERC CGS-D award.
\appendix
\section{Equivalence between path integrals and projectors}
\label{app:PI-projector-equivalence}
Here, we show that the two expressions for $G_F^\Omega$ given in Eqs.~\eqref{eq:cutoff-GF-PI} and \eqref{eq:GFc} are exactly equivalent for scalar field actions of the form \eqref{eq:scalar-action}.
Starting with \Eq{eq:GF}, act on its left and right with projectors $P_\Omega$:
\begin{align*}
i (P_\Omega G_F P_\Omega)(x,x') &= i \iint \dee y \, \dee z ~ P_\Omega(x,y) G_F(y,z) P_\Omega(z,x') \\
&= \frac{1}{\mathcal{N}} \int \mathcal{D}\phi \left( \int \dee y~P_\Omega(x,y) \phi(y) \right) \left( \int \dee z~\phi(z)P_\Omega(z,x') \right) e^{iS[\phi]}
\end{align*}
In the above, we denote by $\mathcal{N}$ the normalization
\begin{equation}
\mathcal{N} = \int \mathcal{D}\phi ~ e^{iS[\phi]},
\end{equation}
and we abbreviate the spacetime integral measures, for example $\dee^{d+1}y \sqrt{-g(y)}$ by simply $\dee y$.
Next, because $P_\Omega$ is symmetric, it follows that $P_\Omega(z,x') = P_\Omega(x',z)$, and so we may write
\begin{equation} \label{eq:PI-step1}
i (P_\Omega G_F P_\Omega)(x,x') = \frac{1}{\mathcal{N}} \int \mathcal{D}\phi ~ P_\Omega \phi(x) P_\Omega \phi(x') e^{iS[\phi]} .
\end{equation}
Suppose now that $S[\phi]$ is of the form \eqref{eq:scalar-action}, and insert a resolution of the identity $I = P_\Omega + P_\Omega^\perp$:
\begin{equation} \label{eq:S-res-ID}
\begin{aligned}
S[(P_\Omega + P_\Omega^\perp)\phi] &= \int \dee x ~ (P_\Omega \phi) F(\Box) (P_\Omega \phi) + \int \dee x ~ (P_\Omega^\perp \phi) F(\Box) (P_\Omega^\perp \phi) \\
& \qquad + \int \dee x ~ (P_\Omega \phi) F(\Box) (P_\Omega^\perp \phi) + \int \dee x ~ (P_\Omega^\perp \phi) F(\Box) (P_\Omega \phi)
\end{aligned}
\end{equation}
Consider one of the cross terms in the second line of \Eq{eq:S-res-ID}, and expand the integrand in terms of eigenfunctions of the d'Alembertian, $\psi_\lambda$.
Explicitly, writing
\begin{equation}
\phi(x) = \sum_{\lambda \, \in \, \mrm{spec} \, \Box} \phi_\lambda \psi_\lambda(x),
\end{equation}
we find the following:
\begin{align*}
\int \dee x ~ (P_\Omega \phi) F(\Box) (P_\Omega^\perp \phi) &= \int \dee x ~ \left( \sum_{|\lambda|\leq \Omega^2} \phi_\lambda \psi_\lambda(x) \right) F(\Box) \left( \sum_{|\lambda'| > \Omega^2} \phi_{\lambda'} \psi_{\lambda'}(x) \right) \\[2mm]
&= \sum_{|\lambda|\leq \Omega^2} \sum_{|\lambda'| > \Omega^2} \phi_\lambda \phi_{\lambda'} F(\lambda') \int \dee x ~ \psi_\lambda(x) \psi_{\lambda'}(x) \\[2mm]
&= \sum_{|\lambda|\leq \Omega^2} \sum_{|\lambda'| > \Omega^2} \phi_\lambda \phi_{\lambda'} F(\lambda') \delta_{\lambda \lambda'}
\end{align*}
Of course, the sums above are shorthand for a sum over the spectrum and should be replaced with integrals wherever $\Box$ has continuous spectrum.
To go to the final line, we used the fact that the d'Alembertian's eigenfunctions are orthonormal.
However, notice that the values of $\lambda$ and $\lambda'$ that are summed over do not overlap, and so the cross-terms in \Eq{eq:S-res-ID} vanish.
We therefore have that $S[\phi] = S[P_\Omega \phi] + S[P_\Omega^\perp \phi]$.
Since $P_\Omega \phi$ and $P_\Omega^\perp \phi$ are independent degrees of freedom, we can path-integrate over them separately.
Therefore, \Eq{eq:PI-step1} altogether reads
\begin{equation}
i (P_\Omega G_F P_\Omega)(x,x') = \frac{1}{\mathcal{N}} \int \mathcal{D}(P_\Omega \phi) ~ P_\Omega \phi(x) P_\Omega \phi(x') e^{iS[P_\Omega \phi]} \int \mathcal{D}(P_\Omega^\perp \phi) e^{iS[P_\Omega^\perp \phi]}.
\end{equation}
Similarly, it follows that
\begin{equation}
\mathcal{N} = \int \mathcal{D}(P_\Omega\phi) ~ e^{iS[P_\Omega \phi]} \int \mathcal{D}(P_\Omega^\perp\phi) ~ e^{iS[P_\Omega^\perp \phi]},
\end{equation}
and so we recover the path integral expression \eqref{eq:cutoff-GF-PI} for the covariantly bandlimited Feynman propagator.
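A finite-dimensional Gaussian analogue of this factorization is easy to check numerically: for a quadratic form that is block diagonal with respect to a subspace and its orthogonal complement, projecting the inverse (the analogue of the propagator) onto the subspace reproduces the inverse of the subspace block alone, just as $P_\Omega G_F P_\Omega$ is computed entirely from the bandlimited action. A minimal numpy sketch (matrix sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic "action" phi^T A phi with A block diagonal: the analogue of
# S[phi] = S[P_Omega phi] + S[P_Omega^perp phi].
A1 = rng.normal(size=(3, 3)); A1 = A1 @ A1.T + 3*np.eye(3)   # "low" modes
A2 = rng.normal(size=(2, 2)); A2 = A2 @ A2.T + 3*np.eye(2)   # "high" modes
A = np.block([[A1, np.zeros((3, 2))], [np.zeros((2, 3)), A2]])

# Projector P onto the low-mode subspace.
P = np.diag([1.0, 1.0, 1.0, 0.0, 0.0])

# "Propagator" = A^{-1}; its projection agrees with inverting A1 alone,
# i.e. the high modes decouple from the projected two-point function.
G = np.linalg.inv(A)
PGP = P @ G @ P
assert np.allclose(PGP[:3, :3], np.linalg.inv(A1))
```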
\section{Bandlimited scalar fluctuations in de Sitter}
\label{app:details}
An important part of \Sec{sec:dS-fluctuation-spectrum} was determining the relative change in the fluctuation power spectrum of a massless scalar field, $\hat \phi$, in de Sitter spacetime:
\begin{equation}
\frac{\delta \Delta_\phi^2}{\Delta_\phi^2} = \mrm{Re} \left( \frac{\delta G_F}{G_F} \right).
\end{equation}
$G_F$ denotes the field's Feynman propagator and $\delta G_F = P_\Omega^\perp G_F P_\Omega^\perp - P_\Omega^\perp G_F - G_F P_\Omega^\perp$ is the correction induced by a covariant cutoff, $\Omega$.
We only outlined the six major steps of this calculation in the main text; here we go through each of these steps in full detail.
These steps are:
\begin{enumerate}
\item Determining the spectrum of the $k$-d'Alembertian, $\Box_k$.
\item Determining the self-adjoint realizations of $\Box_k$.
\item Fixing the choice of self-adjoint realization.
\item Determining the continuous spectrum eigenfunctions.
\item Using the projector $P_\Omega^\perp$ to compute $\delta \Delta_\phi^2/\Delta_\phi^2$.
\item Making approximations for calculating $P_\Omega^\perp \chi_0$.
\end{enumerate}
\subsubsection*{Step 1: Determining the spectrum of the $k$-d'Alembertian, $\Box_k$}
Consider the flat slicing of de Sitter spacetime in four spacetime dimensions.
Explicitly, we take the line element to be
\begin{equation}
\dee s^2 = a^2(\eta)\left[-\dee \eta^2 + \dee x_i \dee x^i \right]
\end{equation}
and the scale factor to be $a(\eta) = (-H\eta)^{-1}$, $\eta \in (-\infty, 0)$.
With these choices, the Sturm-Liouville eigenvalue problem $\Box_k u = \lambda u$ reads
\begin{equation} \label{eq:ee}
(a^2 u')' + k^2 a^2 u + \lambda a^4 u = 0,
\end{equation}
where $'$ denotes a derivative with respect to the conformal time $\eta$ and where we recall that $k = |{\bm k}|$ is the Fourier variable for $\bm x$.
The eigenfunction equation \eqref{eq:ee} has two linearly independent solutions,
\begin{align}
u_J(\eta) &= (-\eta)^{3/2} J_{p(\lambda)}(-k\eta) \label{eq:SL-J}\\
u_Y(\eta) &= (-\eta)^{3/2} Y_{p(\lambda)}(-k\eta), \label{eq:SL-Y}
\end{align}
where $J$ and $Y$ denote Bessel functions of the first and second kind, respectively, and where
\begin{equation} \label{eq:pfunc}
p(\lambda) = \sqrt{\frac{9}{4}-\frac{\lambda}{H^2}}.
\end{equation}
Because we will ultimately be concerned with a realization of $\Box_k$ as a self-adjoint operator and because the spectra of self-adjoint operators are real, we need only consider real $\lambda$.
Notice that when $\lambda < 9H^2/4$, $u_J$ is normalizable while $u_Y$ is not.
Therefore, self-adjoint realizations of $\Box_k$ will have point spectrum in this range.
Both solutions are non-normalizable for $\lambda \geq 9H^2/4$, and so this range will be the continuous spectrum.
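As a quick consistency check (not needed for the derivation), one can verify numerically that $u_J$ satisfies \eqref{eq:ee}. Dividing \eqref{eq:ee} by $a^2$ and using $a' = H a^2$ gives the equivalent form $u'' - (2/\eta)u' + k^2 u + \lambda a^2 u = 0$, whose residual can be evaluated with scipy's Bessel derivatives (sample values of $H$, $k$, $\lambda$, and $\eta$ are arbitrary):

```python
import numpy as np
from scipy.special import jv, jvp

H, k, lam = 1.0, 2.0, -10.0            # sample eigenvalue with lambda < 9H^2/4
p = np.sqrt(9/4 - lam/H**2)

def u(eta):
    return (-eta)**1.5 * jv(p, -k*eta)

def du(eta):
    # d/d(eta) of (-eta)^{3/2} J_p(-k eta)
    return -1.5*(-eta)**0.5 * jv(p, -k*eta) - k*(-eta)**1.5 * jvp(p, -k*eta)

def d2u(eta):
    return (0.75*(-eta)**-0.5 * jv(p, -k*eta)
            + 3.0*k*(-eta)**0.5 * jvp(p, -k*eta)
            + k**2*(-eta)**1.5 * jvp(p, -k*eta, n=2))

eta = -0.7
residual = d2u(eta) - (2/eta)*du(eta) + k**2*u(eta) + lam*u(eta)/(H*eta)**2
assert abs(residual) < 1e-9
```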
\subsubsection*{Step 2: Determining the self-adjoint realizations of $\Box_k$}
A subtlety that we must address, however, is that the differential expression $\Box_k$ has no unique realization as a self-adjoint operator on $L^2((-\infty,0), a^4(\eta) \, \dee \eta)$.
As a symmetric operator, its deficiency indices are $(1,1)$, meaning that $\Box_k$ has a one-parameter family of self-adjoint extensions corresponding to different (generalized) boundary conditions \cite{naimark1968linear,amrein2005sturm,zettl2012sturm}.
That the deficiency indices are $(1,1)$ can be shown directly by inspecting the solutions of $\Box_k u = \pm i u$, i.e., Eqs.~\eqref{eq:SL-J} and \eqref{eq:SL-Y} with $\lambda = \pm i$.
In both cases, $u_J$ is normalizable while $u_Y$ is not; each deficiency space is therefore one-dimensional and the deficiency indices are $(1,1)$.
A convenient way of parameterizing these self-adjoint extensions is by the location of the largest eigenvalue in the point spectrum.
In particular, a set of square-integrable orthonormal eigenfunctions is given by
\begin{equation} \label{eq:pt_spec_eigenf}
\psi_n(\eta) = H^2 \sqrt{2 p_n} (-\eta)^{3/2} J_{p_n}(-k\eta),
\end{equation}
where $p_n = p_0 + 2n$, $n \in \mathbb{Z}$, $n \geq 0$, and $p_0 \in (0,2]$.
The tower of discrete eigenvalues in the point spectrum is then
\begin{equation} \label{eq:ptspec}
\lambda_n = H^2\left(\frac{9}{4} - p_n^2\right),
\end{equation}
and so different choices of $p_0$ produce different point spectra that correspond to the different self-adjoint extensions of $\Box_k$.
Orthonormality of the point spectrum eigenfunctions \eqref{eq:pt_spec_eigenf} follows immediately from computing $\langle \psi_m, \psi_n \rangle$, where the inner product is given by
\begin{equation}
\langle u, v \rangle = \int_{-\infty}^0 \dee \eta \, a^4(\eta) ~ u^*(\eta) v(\eta)
\end{equation}
for $u, v \in L^2((-\infty,0), a^4(\eta) \dee \eta)$, and from the integral identity \cite[\href{https://dlmf.nist.gov/10.22.E57}{Eq.~10.22.57}]{NIST:DLMF}
\begin{equation}
\int_0^\infty \frac{\dee x}{x} J_m(x) J_n(x) = \frac{2 \sin(\tfrac{\pi}{2}(m-n))}{\pi(m^2-n^2)} .
\end{equation}
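These relations are straightforward to spot-check numerically. The sketch below (sample orders chosen arbitrarily) compares truncations of the integral against the closed form, including the $m \to n$ limit $\int_0^\infty J_p(x)^2 \, \dee x/x = 1/(2p)$, which reproduces the unit normalization of the $\psi_n$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def lhs(m, n, xmax=800.0):
    # Truncated integral; the neglected tail is small and partly oscillatory.
    val, _ = quad(lambda x: jv(m, x)*jv(n, x)/x, 0, xmax, limit=2000)
    return val

def rhs(m, n):
    if m == n:
        return 1.0/(2.0*m)       # the m -> n limit of the identity
    return 2.0*np.sin(0.5*np.pi*(m - n))/(np.pi*(m*m - n*n))

# p_0 = 3/2 and p_1 = 7/2 are orthogonal; p = 3/2 against itself gives 1/(2p).
assert abs(lhs(1.5, 3.5) - rhs(1.5, 3.5)) < 2e-3   # rhs = 0
assert abs(lhs(1.5, 1.5) - rhs(1.5, 1.5)) < 2e-3   # rhs = 1/3
```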
\subsubsection*{Step 3: Fixing the choice of self-adjoint realization}
Having identified the different self-adjoint extensions of $\Box_k$, the obvious question is: which one should we use?
We will choose the self-adjoint extension that corresponds to choosing the Bunch-Davies vacuum state for the scalar field, and we will determine the value of $p_0$ to which this corresponds by examining the action of the full (unmodified) Feynman propagator on a test eigenfunction.
Before getting into the above procedure and its meaning, we first need to write down the Feynman propagator.
According to canonical quantization, one can express the Feynman propagator as a time-ordered expectation value,
\begin{equation}
G_F(x,x') = \bra{0} \mathcal{T} \hat\phi(x) \hat\phi(x') \ket{0}.
\end{equation}
In the conformal coordinates of \Eq{eq:FLRW-line-element}, making a Fourier transform with respect to ${\bm x} - {\bm x}'$, and choosing the Bunch-Davies vacuum state for the mode functions of $\hat \phi$, one arrives at \cite{Birrell:1982ix}
\begin{equation} \label{eq:GF-dS}
\begin{aligned}
G_F(\eta,\eta';k) = -\frac{i \pi}{4} \frac{\sqrt{\eta\eta'}}{a(\eta)a(\eta')}&\left[\theta(\eta-\eta') H_{3/2}^{(1)}(-k\eta) H_{3/2}^{(2)}(-k\eta') \right. \\ & \left. \qquad + \theta(\eta'-\eta) H_{3/2}^{(2)}(-k\eta) H_{3/2}^{(1)}(-k\eta') \right],
\end{aligned}
\end{equation}
where $H^{(1)}$ and $H^{(2)}$ denote Hankel functions of the first and second kind, respectively.
For reasons that will soon become apparent, note that the propagator above has both Hermitian and anti-Hermitian parts, given by
\begin{equation} \label{eq:GF-herm}
G_F^h(\eta,\eta';k) = \frac{1}{2} \left[ G_F(\eta,\eta';k) + G_F(\eta',\eta;k)^* \right]
\end{equation}
and
\begin{equation} \label{eq:GF-antiherm}
G_F^{ah}(\eta,\eta';k) = \frac{1}{2} \left[ G_F(\eta,\eta';k) - G_F(\eta',\eta;k)^* \right],
\end{equation}
respectively.
The Feynman propagator is a right inverse of the d'Alembertian, and so an eigenfunction of the d'Alembertian with eigenvalue $\lambda \neq 0$ is also an eigenfunction of the propagator, but with eigenvalue $\lambda^{-1}$.
Given the expression for $G_F$ in \Eq{eq:GF-dS}, we select a particular self-adjoint extension by dialing the value of $p_0 \in (0,2]$ (which parameterizes the choice of self-adjoint extension) so that the action of $G_F$ on a test eigenfunction of the form \eqref{eq:pt_spec_eigenf} is equal to the same eigenfunction, but multiplied by $\lambda^{-1}$.
In other words, the choice of the Bunch-Davies vacuum in canonical quantization implies a particular choice of self-adjoint extension in the functional analytic language that we are using here, and we are deducing what that choice is.
An important subtlety is that \eqref{eq:GF-dS} is not the integral kernel of an operator on the Hilbert space $L^2((-\infty,0),a^4(\eta) \, \dee \eta)$.
As operators, the d'Alembertian and Feynman propagator satisfy
\begin{equation}
\hat \Box_k \hat G_{F(k)} = \hat{I},
\end{equation}
where we use a hat to explicitly indicate that this is an operator relation on $L^2((-\infty,0),a^4(\eta) \, \dee \eta)$.
However, the integral kernel $G_F(\eta,\eta';k)$ in \eqref{eq:GF-dS} does not define a good operator on this Hilbert space.
It is straightforward to check that its anti-Hermitian part, \Eq{eq:GF-antiherm}, maps elements of $L^2((-\infty,0),a^4(\eta) \, \dee \eta)$ outside of the Hilbert space.
Rather, it is the Hermitian kernel \eqref{eq:GF-herm} that defines an integral operator whose domain and range are both contained in $L^2((-\infty,0),a^4(\eta) \, \dee \eta)$.
Once a self-adjoint extension has been fixed, each $k$-d'Alembertian can be expressed in terms of its spectrum and eigenfunctions as
\begin{equation}
\hat \Box_k = \sum_{\mrm{spec}~\Box_k} \lambda~ \ketbra{\lambda}{\lambda},
\end{equation}
where we adopt bra-ket notation to denote vectors and their dual linear functionals.
As its right inverse, we may therefore write
\begin{equation} \label{eq:GFoperator}
\hat G_{F(k)} = \sum_{\mrm{spec}~\Box_k \setminus \{0\}} \frac{1}{\lambda} ~ \ketbra{\lambda}{\lambda} + (\ketbra{0}{f} + \ketbra{f}{0}).
\end{equation}
If zero is in the spectrum of the chosen self-adjoint extension of the d'Alembertian, then the Feynman propagator's range can have support on this eigenspace as an operator since it is only a right-inverse of the d'Alembertian (i.e. in this case the vector $\ket{f}$ is nonzero).
This is the role played by the bracketed term in \Eq{eq:GFoperator} above.
(Note that the term is written out in a symmetric way, to make manifest the fact that $\hat G_{F(k)}$ is Hermitian.)
We will find that this is indeed the case.
Let $\psi_n(\eta)$ be a test eigenfunction as given by \Eq{eq:pt_spec_eigenf} with $p_0$ not yet fixed.
According to the spectral expansion \eqref{eq:GFoperator}, the action of the (Hermitian part of the) Feynman propagator on $\psi_n(\eta)$ must give
\begin{equation} \label{eq:propagator-condition}
(G_F^h \psi_n)(\eta) \overset{!}{=} \frac{1}{\lambda_n} \psi_n(\eta) + \alpha (-\eta)^{3/2} J_{3/2}(-k\eta)
\end{equation}
where the constant $\alpha$ need not vanish if $\lambda = 0$ is in the point spectrum.
Let us evaluate the left-hand side:
\begin{align*}
(G_F^h \psi_n)(\eta) &\equiv \int_{-\infty}^0 a^4(\xi)\, \dee \xi~ G_F^h(\eta,\xi;k) \psi_n(\xi) \\
&= -\frac{\pi}{4} \sqrt{2 p_n} (-\eta)^{3/2} \left\{ J_{3/2}(-k\eta) \int_0^\infty \frac{\dee x}{x} Y_{3/2}(x) J_{p_n}(x) \right. \\
&\qquad\qquad\qquad\qquad\qquad - Y_{3/2}(-k\eta) \int_0^\infty \frac{\dee x}{x} J_{3/2}(x) J_{p_n}(x)\\
&\qquad\qquad\qquad\qquad -2 J_{3/2}(-k\eta) \int_0^{-k\eta} \frac{\dee x}{x} Y_{3/2}(x) J_{p_n}(x) \\
&\left. \qquad\qquad\qquad\qquad\qquad - 2 Y_{3/2}(-k\eta) \int_0^{-k\eta} \frac{\dee x}{x} J_{3/2}(x) J_{p_n}(x) \right\}
\end{align*}
In the second equality, we changed the integration variable to $x = -k\xi$.
The following two antiderivatives are useful, assuming that $m^2 \neq n^2$ \cite[\href{https://dlmf.nist.gov/10.22.E6}{Eq.~10.22.6}]{NIST:DLMF}:
\begin{equation}
\int \frac{\dee x}{x} J_m(x) J_n(x) = \frac{x(J_{m-1}(x) J_n(x) - J_m(x) J_{n-1}(x)) - (m-n)J_m(x)J_n(x)}{m^2-n^2}
\end{equation}
\begin{equation}
\int \frac{\dee x}{x} Y_m(x) J_n(x) = \frac{x(Y_{m-1}(x) J_n(x) - Y_m(x) J_{n-1}(x)) - (m-n)Y_m(x)J_n(x)}{m^2-n^2}
\end{equation}
In particular, both antiderivatives vanish as $x \rightarrow 0^+$ if $m < n$.
Therefore, let us choose $p_n$ with $n \geq 1$ so that the $x=0$ endpoint of the integrals above does not contribute.
Making this choice, we arrive at
\begin{equation}
\begin{aligned}
(G_F^h \psi_n)(\eta) &= \frac{1}{\lambda_n} \psi_n(\eta) + \frac{1}{2(\tfrac{9}{4}-p_n^2)} \sqrt{2 p_n} (-\eta)^{3/2} \\
& \qquad \cdot \left\{ -J_{3/2}(-k\eta) \cos(\tfrac{\pi}{2}(\tfrac{3}{2}-p_n)) + Y_{3/2}(-k\eta) \sin(\tfrac{\pi}{2}(\tfrac{3}{2}-p_n)) \right\}.
\end{aligned}
\end{equation}
The first term in the equation above is what we expect from inverse action, and the second term is in the kernel of the d'Alembertian \emph{as a differential expression}.
However, the $Y_{3/2}$ contribution is not normalizable, and so it must vanish if we require the Hermitian part of the Feynman propagator \emph{as an operator} to map into the Hilbert space.
This happens when $p_0 = 3/2$, giving $\sin(\tfrac{\pi}{2}(\tfrac{3}{2}-p_n)) = 0$.
Altogether, we then arrive at
\begin{equation}
(G_F^h \psi_n)(\eta) = \frac{1}{\lambda_n} \psi_n(\eta) - \frac{(-1)^n}{\lambda_n} \sqrt{\frac{p_n}{6}} \psi_0(\eta),
\end{equation}
and we have concluded that the self-adjoint extension implicit in the choice of Bunch-Davies vacuum for the Feynman propagator is the one for which $p_0 = 3/2$.
Notice that $\lambda_0 = 0$ is therefore in the point spectrum.
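For concreteness, with $p_0 = 3/2$ the eigenvalues \eqref{eq:ptspec} take the simple closed form
\begin{equation}
\lambda_n = H^2\left(\frac{9}{4} - \left(\frac{3}{2}+2n\right)^2\right) = -2n(2n+3)H^2, \qquad n \geq 0,
\end{equation}
so that $\lambda_0 = 0$, $\lambda_1 = -10H^2$, $\lambda_2 = -28H^2$, and so on, with the eigenvalues growing quadratically in magnitude with $n$.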
\subsubsection*{Step 4: Determining the continuous spectrum eigenfunctions}
Once we have fixed the self-adjoint extension of $\Box_k$ as well as the orthonormal eigenvectors in the point spectrum, we can determine the eigenfunctions for $\lambda > 9H^2/4$ in the continuous spectrum.\footnote{We will not need the edge case $\lambda = 9H^2/4$, for which $\varphi_0(\eta)$ can be expressed as a linear combination of $J_0(-k\eta)$ and $Y_0(-k\eta)$.}
Let us take as an ansatz
\begin{equation}
\varphi_q(\eta) = H^2 (-\eta)^{3/2} N_q (a_q \, \Re \, J_{iq}(-k\eta) + b_q \, \Im \, J_{iq}(-k\eta)),
\end{equation}
where
\begin{equation} \label{eq:qfunc}
q \equiv q(\lambda) = \sqrt{\frac{\lambda}{H^2}-\frac{9}{4}},
\end{equation}
and the weights $a_q$, $b_q$ as well as the overall normalization $N_q$ are all real.
$J_{iq}$ is a Bessel function of the first kind of purely imaginary order, and we use its real and imaginary parts to form linearly independent solutions of the eigenfunction equation.
First, we can determine $a_q$ and $b_q$ by demanding that the $\psi_n$ and $\varphi_q$ must be orthogonal:
\begin{align*}
\langle \psi_n, \varphi_q \rangle &= \int_{-\infty}^0 a^4(\eta) \, \dee \eta ~ H^4 (-\eta)^3 \sqrt{2 p_n} N_q J_{p_n}(-k\eta) (a_q \, \Re \, J_{iq}(-k\eta) + b_q \, \Im \, J_{iq}(-k\eta)) \\[2mm]
&\propto \int_0^\infty \frac{\dee x}{x} J_{p_n}(x) (a_q \, \Re \, J_{iq}(x) + b_q \, \Im \, J_{iq}(x))
\end{align*}
In the second line we again defined $x = -k\eta$.
According to \cite[\href{https://dlmf.nist.gov/10.22.E57}{Eq.~10.22.57}]{NIST:DLMF}, we have that
\begin{equation}
\begin{aligned}
\int_0^\infty \frac{\dee x}{x} J_{p_n}(x) J_{iq}(x) &= \frac{\Gamma(\tfrac{p_n}{2}+\tfrac{iq}{2})}{2\Gamma(1+\tfrac{p_n}{2}+\tfrac{iq}{2})\Gamma(1+\tfrac{p_n}{2}-\tfrac{iq}{2})\Gamma(1-\tfrac{p_n}{2}+\tfrac{iq}{2})} \\[2mm]
&= \frac{\sin\left(\tfrac{\pi}{2}(p_n - iq)\right)}{\tfrac{\pi}{2}(p_n^2+q^2)} \\[2mm]
&= \frac{(-1)^n \sqrt{2}}{\pi(p_n^2+q^2)} \left( \cosh(\tfrac{\pi}{2}q) + i \sinh(\tfrac{\pi}{2}q) \right).
\end{aligned}
\end{equation}
To go to the second line, we used gamma function identities, and in the third line we used that $p_n = \tfrac{3}{2} + 2n$.
Consequently,
\begin{equation}
\langle \psi_n, \varphi_q \rangle \propto a_q \cosh(\tfrac{\pi}{2}q) + b_q \sinh(\tfrac{\pi}{2}q).
\end{equation}
Motivated by hindsight, a convenient choice is $a_q = \sech(\tfrac{\pi}{2} q)$ and $b_q = -\csch(\tfrac{\pi}{2} q)$, giving
\begin{equation}
\varphi_q(\eta) = H^2 (-\eta)^{3/2} N_q \left[ \sech(\tfrac{\pi}{2} q) \, \Re \, J_{iq}(-k\eta) - \csch(\tfrac{\pi}{2} q) \, \Im \, J_{iq}(-k\eta) \right].
\end{equation}
To fix the normalization, we will require that
\begin{equation}
\langle \varphi_q, \varphi_{q'} \rangle = \delta(q - q').
\end{equation}
This amounts to setting $N_q = \sqrt{\tfrac{1}{2} q \tanh(\pi q)}$, which can be verified by numerically integrating $\langle \varphi_q, \varphi_{q'} \rangle$ around a small neighbourhood of $q - q' = 0$ or via the following analytic argument.\footnote{We thank Jorma Louko for pointing this out to us.}
On grounds of orthonormality, we know that $\langle \varphi_q, \varphi_{q'} \rangle$ is proportional to $\delta(q-q')$.
As such, when evaluating $\langle \varphi_q, \varphi_{q'} \rangle$, we need only look for contributions to its distributional part.
By definition, we have that
\begin{equation} \label{eq:distrib1}
\langle \varphi_q, \varphi_{q'} \rangle = \int_0^\infty \frac{\dee x}{x} \, \frac{1}{2} \left[ \sqrt{q \tanh(\pi q)} \left( \sech(\tfrac{\pi}{2} q) \, \Re \, J_{iq}(x) - \csch(\tfrac{\pi}{2} q) \, \Im \, J_{iq}(x) \right) \right] \cdot \left[ q \rightarrow q' \right].
\end{equation}
For large $x$, $|J_{iq}(x)|/x \leq C/x^{3/2}$ for some constant $C$, and so the integration over any interval $(\epsilon, \infty)$ with $\epsilon > 0$ will not give a distributional contribution.
For small $x$,
\begin{equation}
J_{iq}(x) = \left(\frac{x}{2}\right)^{iq} \frac{1}{\Gamma(1+iq)} (1 + O(x^2) ).
\end{equation}
Therefore, we may replace $J_{iq}(x)$ with $(x/2)^{iq}/\Gamma(1+iq)$ to compute the distributional contribution coming from the $x=0$ endpoint of integration.
Making this replacement, we find that
\begin{equation} \label{eq:distrib2}
\begin{aligned}
\sech(\tfrac{\pi}{2} q) \, \Re \, J_{iq}(x) - \csch(\tfrac{\pi}{2} q) & \, \Im \, J_{iq}(x) \\ & \rightarrow ~ \frac{1}{2} \left[ (\sech(\tfrac{\pi}{2}q) + i \csch(\tfrac{\pi}{2}q)) \left( \frac{x}{2} \right)^{iq} \frac{1}{\Gamma(1+iq)} \right. \\
& \left. \qquad + (\sech(\tfrac{\pi}{2}q) - i \csch(\tfrac{\pi}{2}q)) \left( \frac{x}{2} \right)^{-iq} \frac{1}{\Gamma(1-iq)} \right].
\end{aligned}
\end{equation}
The distributional part will therefore come from terms of the form
\begin{equation} \label{eq:distrib3}
\int_0^\epsilon \frac{\dee x}{x} \left( \frac{x}{2} \right)^{ i \gamma} = \int_{\tilde \epsilon}^\infty \dee t ~ e^{-i \gamma t} = \pi \delta(\gamma) + (\text{non-distributional}),
\end{equation}
where we let $x = 2 e^{-t}$ in the first equality.
We can then read off the coefficient of $\delta(q-q')$ from Eqs.~\eqref{eq:distrib1}, \eqref{eq:distrib2}, and \eqref{eq:distrib3}; it is obtained by collecting the coefficients of the cross-terms $(x/2)^{\pm i(q-q')}$, setting $q = q'$, and multiplying by $\pi$, as we are instructed to do by \eqref{eq:distrib3}:
\begin{equation}
\begin{aligned}
&\pi \cdot \frac{1}{2} q \tanh(\pi q) \cdot \frac{1}{4} \left( 2 \frac{(\sech(\tfrac{\pi}{2}q) + i \csch(\tfrac{\pi}{2}q))(\sech(\tfrac{\pi}{2}q) - i \csch(\tfrac{\pi}{2}q))}{\Gamma(1+iq)\Gamma(1-iq)} \right) \\[2mm]
&= \frac{\pi}{4} q \tanh(\pi q) \left( \sech^2(\tfrac{\pi}{2}q) + \csch^2(\tfrac{\pi}{2}q) \right) \frac{\sinh(\pi q)}{\pi q} \\[2mm]
&= 1
\end{aligned}
\end{equation}
In going to the second and third lines, we used standard gamma function and hyperbolic identities, respectively.
Therefore, it follows that
\begin{equation} \label{eq:cts_spec_eigenf}
\varphi_q(\eta) = H^2 (-\eta)^{3/2} \sqrt{\tfrac{1}{2}q \tanh(\pi q)} \left[ \sech(\tfrac{\pi}{2} q) \, \Re \, J_{iq}(-k\eta) - \csch(\tfrac{\pi}{2} q) \, \Im \, J_{iq}(-k\eta) \right]
\end{equation}
are indeed continuum-orthonormalized in the index $q$.
\subsubsection*{Step 5: Using the projector $P_\Omega^\perp$ to compute $\delta \Delta_\phi^2/\Delta_\phi^2$}
With all of the ingredients in hand, we can now construct the projector $P_\Omega^\perp$.
The (integral kernel of the) projector is given by
\begin{equation} \label{eq:P-Omega-perp}
P_\Omega^\perp(\eta,\eta') = \int_Q^\infty \dee q ~ \varphi_q(\eta) \varphi_q(\eta') + \sum_{n \geq N} \psi_n(\eta) \psi_n(\eta'),
\end{equation}
where $Q = q(\Omega^2)$, as given by \Eq{eq:qfunc}, and $N = \min \{n : \lambda_n < -\Omega^2\}$ (cf. \Eq{eq:ptspec}).
We must now compute $\delta \Delta^2_\phi / \Delta^2_\phi$, as given by \Eq{eq:reldiffscalar}.
First, note that this expression simplifies if we carefully examine the structure of the Feynman propagator.
The Feynman propagator evaluated at equal times is a purely imaginary quantity,
\begin{equation}
G_F(\eta=\eta';k) = - \frac{i \pi}{4} (-\eta)^3 H^2 \left( J_{3/2}(-k\eta)^2 + Y_{3/2}(-k\eta)^2 \right).
\end{equation}
Moreover, the Hermitian (\Eq{eq:GF-herm}) and anti-Hermitian (\Eq{eq:GF-antiherm}) parts of the Feynman propagator, and hence also the two correction terms $\delta G_F^h$ and $\delta G_F^{ah}$, are purely real and purely imaginary, respectively.\footnote{In fact, they are purely real and purely imaginary not just at equal times, but for all times.}
Therefore, we have that
\begin{equation} \label{eq:reldiff}
\frac{\delta \Delta^2_\phi}{\Delta^2_\phi} \approx \left. \frac{\delta G_F^{ah}}{G_F} \right|_{\eta=\eta'},
\end{equation}
and so we only need to compute the correction to the anti-Hermitian part of the propagator.
In analogy with the eigenfunction for the eigenvalue $\lambda = 0$, let us define the following function:
\begin{equation} \label{eq:chi0}
\chi_0(\eta) = H^2 \sqrt{3} (-\eta)^{3/2} Y_{3/2}(-k\eta)
\end{equation}
Then, we can write $G_F^{ah}(\eta,\eta';k)$ as
\begin{equation}
G_F^{ah}(\eta,\eta';k) = -\frac{i \pi}{12 H^2} \left[\psi_0(\eta)\psi_0(\eta') + \chi_0(\eta) \chi_0(\eta') \right],
\end{equation}
as well as its bandlimited version as
\begin{equation}
\begin{aligned}
G_F^{ah,\Omega} &= P_\Omega G_F^{ah} P_\Omega \\
&= G_F^{ah} + (P_\Omega^\perp G_F^{ah} P_\Omega^\perp - P_\Omega^\perp G_F^{ah} - G_F^{ah} P_\Omega^\perp) \\
&\equiv G_F^{ah} + \delta G_F^{ah}.
\end{aligned}
\end{equation}
Since $\ip{\psi_\lambda}{\psi_0} = 0$ for $\lambda \neq 0$, it is straightforward to show that
\begin{equation}
\delta G_F^{ah}(\eta=\eta') = \frac{i\pi}{6 H^2} (P_\Omega^\perp \chi_0)(\eta) \left[ \chi_0(\eta) - \frac{1}{2}(P_\Omega^\perp \chi_0)(\eta) \right], \label{eq:dGFah}
\end{equation}
where we explicitly have that
\begin{equation} \label{eq:P-perp-chi}
(P_\Omega^\perp \chi_0)(\eta) = \int_Q^\infty \dee q ~ \langle \varphi_q, \chi_0 \rangle \varphi_q(\eta) + \sum_{n \geq N} \langle \psi_{n}, \chi_0 \rangle \psi_{n}(\eta).
\end{equation}
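For completeness, the short computation behind \eqref{eq:dGFah} runs as follows. Since $\lambda_0 = 0$ satisfies $|\lambda_0| \leq \Omega^2$ for any cutoff, $P_\Omega^\perp \psi_0 = 0$, and only the $\chi_0$ terms of $G_F^{ah}$ are affected:
\begin{equation}
\delta G_F^{ah}(\eta=\eta') = -\frac{i\pi}{12 H^2} \left[ (P_\Omega^\perp \chi_0)(\eta)^2 - 2 \chi_0(\eta) (P_\Omega^\perp \chi_0)(\eta) \right] = \frac{i\pi}{6 H^2} (P_\Omega^\perp \chi_0)(\eta) \left[ \chi_0(\eta) - \frac{1}{2}(P_\Omega^\perp \chi_0)(\eta) \right].
\end{equation}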
\subsubsection*{Step 6: Making approximations for calculating $P_\Omega^\perp \chi_0$}
Next we turn to computing $(P_\Omega^\perp \chi_0)(\eta)$.
Consider first the contribution from the continuous spectrum, i.e., the first term on the right side of \Eq{eq:P-perp-chi}.
The inner product appearing in the integral is given by
\begin{equation}
\langle \varphi_q, \chi_0 \rangle = \frac{-2\sqrt{3 q \tanh(\pi q)}}{\pi (\tfrac{9}{4} + q^2)},
\end{equation}
and so the integrand reads
\begin{equation} \label{eq:damnintegrand}
\langle \varphi_q, \chi_0 \rangle \varphi_q(\eta) = -\frac{\sqrt{6}}{\pi} \frac{q \tanh(\pi q)}{(\tfrac{9}{4} + q^2)} H^2 (-\eta)^{3/2} \left[ \sech (\tfrac{\pi}{2} q) \Re J_{iq}(-k\eta) - \csch(\tfrac{\pi}{2} q) \Im J_{iq}(-k\eta) \right].
\end{equation}
Unfortunately, this cannot be integrated in closed form, and furthermore, it is totally intractable numerically due to its oscillatory nature.
Fortunately, for large values of $q$, we can make several approximations to make this integrand more tractable.
We approximate
\begin{equation}
\frac{q \tanh(\pi q)}{\tfrac{9}{4} + q^2} \approx \frac{1}{q},
\end{equation}
as well as
\begin{equation}
\sech(\tfrac{\pi}{2} q) \approx \csch(\tfrac{\pi}{2} q) \approx 2 e^{-\tfrac{\pi}{2} q}.
\end{equation}
Then,
\begin{equation}
\langle \varphi_q, \chi_0 \rangle \varphi_q(\eta) \approx - \frac{2\sqrt{6}}{\pi} H^2 (-\eta)^{3/2} \frac{1}{q} e^{-\tfrac{\pi}{2} q} \left[ \Re J_{iq}(-k\eta) - \Im J_{iq}(-k\eta) \right].
\end{equation}
To make further progress, we can invoke an asymptotic expansion for Bessel functions of large order:
\begin{equation} \label{eq:large-order}
J_\mu (z) \approx \frac{1}{\sqrt{2 \pi \mu}} \left( \frac{ez}{2\mu} \right)^{\mu}
\end{equation}
According to \cite[\href{https://dlmf.nist.gov/10.19.E1}{Eq.~10.19.1}]{NIST:DLMF} this expansion holds for large positive orders $\mu$.
However, by inspection, it seems to continue to hold if we analytically continue $\mu = i \nu$.
After a bit of algebraic manipulation, we find that
\begin{equation}
\Re J_{iq}(z) - \Im J_{iq}(z) \approx \frac{1}{\sqrt{\pi q}} \Re \left[ \left( \frac{ez}{2 i q} \right)^{iq} \right].
\end{equation}
We further have that
\begin{equation}
\begin{aligned}
\left( \frac{ez}{2 i q} \right)^{iq} &= \exp \left\{ iq \ln\left( \frac{ez}{2iq} \right) \right\} \\
&= \exp\left\{ \frac{\pi}{2}q + i q \ln\left( \frac{ez}{2q} \right) \right\},
\end{aligned}
\end{equation}
where we made a branch cut for the complex logarithm.
It therefore follows that
\begin{equation} \label{eq:large-order-applied}
e^{-\tfrac{\pi}{2} q} (\Re J_{iq}(z) - \Im J_{iq}(z)) \approx \frac{1}{\sqrt{\pi q}} \cos \left( q \ln\left( \frac{ez}{2q} \right) \right).
\end{equation}
To get a sense of the goodness of the approximation, both sides of \Eq{eq:large-order-applied}, as well as their difference, are plotted in \Fig{fig:large-order}.
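The approximation can also be checked at a single sample point by summing the defining power series of $J_{iq}$ at complex order; scipy's gamma function accepts complex arguments, so the series can be evaluated directly (a rough sketch, with $q$ and $z$ chosen arbitrarily):

```python
import numpy as np
from scipy.special import gamma

def J_complex_order(nu, x, terms=60):
    """Bessel J_nu(x) via its power series; works for complex order nu."""
    m = np.arange(terms)
    return np.sum((-1.0)**m * (0.5*x)**(nu + 2*m) / (gamma(nu + m + 1)*gamma(m + 1.0)))

q, z = 20.0, 1.0
J = J_complex_order(1j*q, z)

lhs = np.exp(-0.5*np.pi*q) * (J.real - J.imag)
rhs = np.cos(q*np.log(np.e*z/(2*q))) / np.sqrt(np.pi*q)
assert abs(lhs - rhs) < 0.01
```

The agreement improves with growing $q$, consistent with the large-order character of the expansion.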
Altogether, we arrive at a nice approximate but compact expression for the integrand \eqref{eq:damnintegrand}:
\begin{equation} \label{eq:integrand-approximation}
\langle \varphi_q, \chi_0 \rangle \varphi_q(\eta) \approx - \sqrt{3} H^2 \left( - \frac{2 \eta}{\pi q} \right)^{3/2} \cos \left( q \ln\left( \frac{2q}{-k \eta e} \right) \right)
\end{equation}
This is a vast improvement, but there is still no closed form expression for the antiderivative
\begin{equation}\label{eq:oscillatory-integral}
\int \dee q ~ q^{-3/2} \cos(q \ln(A q)) .
\end{equation}
Nevertheless, it converges when integrated over the interval $[Q, \infty)$, due to the overall power of $q^{-3/2}$.
A numerically efficient way of evaluating this integral is to instead perform an equivalent integration along a contour in the complex plane.
Define the integral
\begin{equation}
\tilde I(Q,A) = \int_Q^\infty \dee q ~ e^{i q \ln(A q) - \tfrac{3}{2} \ln q} ,
\end{equation}
so that its real part is the integral that we wish to compute.
Denote the argument of the exponential by
\begin{equation}
\mathcal{I}(q,A) = i q \ln(A q) - \frac{3}{2} \ln q .
\end{equation}
Its only singularities are at $q = 0$ and $q \rightarrow \infty$, and so taking the branch cut of the complex logarithm along the negative real axis, we can safely deform the domain of integration to a contour which starts at $q = Q$ and goes up vertically into the complex plane, to wit,
\begin{equation}
\tilde I(Q,A) = \int_{q = Q}^{q = Q + i \infty} \dee q ~ e^{\mathcal{I}(q,A)} = \int_0^\infty i \, \dee b ~ e^{\mathcal{I}(Q + ib,A)} .
\end{equation}
Write $q = a + ib$ and split $\mathcal{I}$ into its real and imaginary parts by writing
\begin{equation}
\mathcal{I}(a + ib,A) = w(a,b;A) + i W(a,b;A) .
\end{equation}
One then finds that
\begin{align}
w(a,b;A) &= - \frac{1}{2} \left(b + \frac{3}{2}\right) \ln\left(a^2+b^2\right) - a \arctan\left(\frac{b}{a}\right) - b \ln A \label{eq:little-w}\\
W(a,b;A) &= \frac{a}{2} \ln\left(a^2+b^2\right) - \left(b + \frac{3}{2}\right) \arctan\left(\frac{b}{a}\right) + a \ln A . \label{eq:big-W}
\end{align}
We only need the real part of $\tilde I$; therefore, we need only compute
\begin{equation} \label{eq:I-integral-app}
\begin{aligned}
\Re \left[\tilde I(Q,A)\right] &= - \int_0^\infty \dee b ~ e^{w(Q,b;A)} \sin\left( W(Q,b;A) \right) \\[2mm]
&\equiv I(Q,A) .
\end{aligned}
\end{equation}
In particular, notice that even though there is an oscillatory component to this integral, the prefactor $e^{w(Q,b;A)}$ decays exponentially quickly with $b$, and so this integral converges extremely rapidly.
This representation of $I$ is virtually indistinguishable from $\int_Q^\infty \dee q ~ q^{-3/2} \cos(q \ln[2q/(-k\eta e)])$, as shown in \Fig{fig:IQA}, and the difference between the two is at the level of machine precision.
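Both evaluations are straightforward to implement; the following sketch (parameter values illustrative) computes $I$ from Eqs.~\eqref{eq:little-w}, \eqref{eq:big-W}, and \eqref{eq:I-integral-app}, and compares it against brute-force integration of the oscillatory form at a moderate $Q$, where the latter is still tractable:

```python
import numpy as np
from scipy.integrate import quad

def w(a, b, A):
    return -0.5*(b + 1.5)*np.log(a*a + b*b) - a*np.arctan2(b, a) - b*np.log(A)

def W(a, b, A):
    return 0.5*a*np.log(a*a + b*b) - (b + 1.5)*np.arctan2(b, a) + a*np.log(A)

def I_contour(Q, A):
    # I(Q,A) = -int_0^inf db e^{w(Q,b;A)} sin(W(Q,b;A)); the integrand
    # decays (super)exponentially in b, so quadrature converges rapidly.
    val, _ = quad(lambda b: np.exp(w(Q, b, A))*np.sin(W(Q, b, A)), 0, np.inf)
    return -val

def I_direct(Q, A, qmax=500.0):
    # Brute-force oscillatory integral, truncated at qmax; the tail is
    # suppressed by q^{-3/2} and by the growing oscillation frequency.
    val, _ = quad(lambda q: np.cos(q*np.log(A*q))/q**1.5, Q, qmax, limit=4000)
    return val

assert abs(I_contour(2.0, 1.0) - I_direct(2.0, 1.0)) < 1e-3
```

For cutoffs of interest, say $Q \approx 100$ with $A = 2/e$ (horizon crossing), the contour form evaluates quickly to a result of order $10^{-4}$ in magnitude.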
Altogether, the projection $(P_\Omega^\perp \chi_0)(\eta)$ therefore contains a contribution from the continuous spectrum given by
\begin{equation} \label{eq:Pperp}
(P_\Omega^\perp \chi_0)(\eta) \supset - \sqrt{3} H^2 \left( - \frac{2 \eta}{\pi} \right)^{3/2} I\left(Q,2/(-k\eta e)\right).
\end{equation}
Next we consider the contribution from the point spectrum, i.e., the second term on the right side of \Eq{eq:P-perp-chi}.
Here, the inner product appearing in the summand is given by
\begin{equation}
\ip{\psi_n}{\chi_0} = - \sqrt{6 p_n} \frac{(-1)^n}{\pi n (3 + 2n)},
\end{equation}
and so $(P_\Omega^\perp \chi_0)(\eta)$ contains a contribution from the point spectrum given by
\begin{equation}
(P_\Omega^\perp \chi_0)(\eta) \supset - \sqrt{3} H^2 \left( - \frac{2 \eta}{\pi} \right)^{3/2} \sum_{n \geq N} \sqrt{\frac{\pi}{2}} \frac{(-1)^n}{n} \frac{(\tfrac{3}{2}+2n)}{(3+2n)} J_{p_n}(-k\eta).
\end{equation}
Because the sum is an alternating series, its magnitude is bounded from above by
\begin{equation}
\sqrt{\frac{\pi}{2}} \frac{1}{N} \frac{(\tfrac{3}{2}+2N)}{(3+2N)} |J_{p_N}(-k\eta)| \approx \frac{1}{(2 N)^{3/2}} \left( \frac{-k\eta e}{4 N} \right)^{2 N},
\end{equation}
where the approximation holds for large $N$.
In particular, we approximated $p_N \approx 2N$ and used the large-order approximation \eqref{eq:large-order}.
This bound is laughably tiny.
For illustration, when $\Omega \gg H$, it follows that $N \approx \Omega/(2H)$, and even for, e.g., $\Omega/H = 100$, the bound evaluates to $7 \times 10^{-194}$ at horizon crossing (when $-k\eta = 1$).
The value of $I(Q,2/e)$ is roughly $1.1 \times 10^{-4}$, and so we may safely neglect the contribution from the point spectrum.
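Because the bound far underflows double precision, it is easiest to evaluate its logarithm. A sketch using the exact prefactor and only the leading term of the Bessel series (an excellent approximation here, since $-k\eta = 1 \ll p_N$):

```python
import math

def log10_bound(N, x):
    """log10 of sqrt(pi/2) * (1/N) * (3/2+2N)/(3+2N) * |J_{3/2+2N}(x)|,
    approximating J_p(x) by its leading series term (x/2)^p / Gamma(p+1)."""
    p = 1.5 + 2*N
    log_prefactor = math.log(math.sqrt(math.pi/2) * (p/(3 + 2*N))/N)
    log_bessel = p*math.log(x/2) - math.lgamma(p + 1)
    return (log_prefactor + log_bessel)/math.log(10)

# Omega/H = 100  =>  N ~ Omega/(2H) = 50; horizon crossing: -k*eta = 1.
val = log10_bound(50, 1.0)
assert -193.6 < val < -192.6   # i.e. the bound is ~10^{-193}, utterly negligible
```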
It is then a matter of straightforward algebraic manipulations to arrive at the expression \eqref{eq:final_answer} for $\delta \Delta_\phi^2/\Delta_\phi^2$.
\section{Adiabatic approximation}
\label{sec:approximation}
Here we elaborate and further justify our use of the adiabatic, or ``slow-roll'' approximation for nearly de Sitter spacetimes.
Suppose we wish to compute an inflationary observable $\mathcal O_k$ which depends on the FLRW scale factor $a(\eta)$ and the time $\eta_k$ at which the mode $k$ crosses the horizon. That is, $\eta_k$ is defined as the solution of
\begin{align}
k = a(\eta_k)H(\eta_k),
\end{align}
where $H = a'/a^2$. Hence, we can write $\mathcal O_k = \mathcal O_k[a,\eta_k]$. The adiabatic approximation is the statement that
\begin{align}
\mathcal O_k[a,\eta_k] \approx \mathcal O_k[\tilde a,\tilde \eta_k],
\end{align}
where $\tilde a(\eta)\equiv -1/\tilde H \eta$ is a de Sitter scale factor. We define the de Sitter Hubble constant $\tilde H$ and the time $\tilde \eta_k$ at which the mode $k$ crosses the de Sitter horizon via the equations
\begin{align}
k &= \tilde a (\tilde \eta_k)\tilde H,\label{eq:dS_k_crossing}\\
\tilde a(\tilde \eta_k) &= a(\eta_k).\label{eq:a=adS}
\end{align}
Equation \eqref{eq:dS_k_crossing} is the usual horizon crossing condition for the mode $k$, while \eqref{eq:a=adS} ensures that the de Sitter scale factor is equal to the ``true'' scale factor at the mode crossing time. It is a generic feature of inflationary cosmology that observables associated with a mode $k$ only depend on the FLRW evolution near the time at which the mode $k$ crosses the horizon. Our adiabatic approximation is simply making use of this fact to approximate an observable in a generic FLRW spacetime with the same observable in a de Sitter spacetime, but with the de Sitter spacetime tuned in such a way that near the $k$-mode crossing time it looks similar to the true spacetime. The utility of the approximation arises when it is difficult to compute the observable in the true spacetime, but it is relatively easy to do so in the de Sitter spacetime.
For example, in this paper we were interested in the case where the observable $\mathcal O_k$ is the covariantly bandlimited scalar or tensor power spectrum. This quantity is unfeasible to compute for a generic FLRW spacetime, but we are able to compute it in the de Sitter case. The adiabatic approximation is therefore useful.
Let us now consider a simpler observable: the scalar power spectrum \textit{without a cutoff.} This is the quantity to which we are computing Planck-scale corrections in this paper. For a FLRW spacetime with scale factor $a(\eta)$ and for a massless scalar field $\phi$, it is defined as
\begin{align}
\Delta^2_\phi(k) = \frac{k^3}{2\pi^2 a^2(\eta_k)}|v_k(\eta_k)|^2,
\end{align}
where $v_k$ is the solution to the mode equation
\begin{align}\label{eq:mode_equation}
v_k''+\left(k^2 - \frac{a''}{a}\right)v_k=0,
\end{align}
with initial condition $v_k\rightarrow \frac{1}{\sqrt{2k}}e^{-i k \eta}$ as $\eta\rightarrow -\infty$ for the Bunch-Davies vacuum. This quantity can be computed exactly for both power law and de Sitter spacetimes, allowing us to use it to explicitly test the validity of the adiabatic approximation.
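As a concrete check of this setup, the mode equation can be integrated numerically from a Bunch-Davies initial condition deep inside the horizon and compared against the closed-form Hankel-function solution quoted below for a power law spacetime. The following sketch (our own illustration, not code from this work) does this for $c=3$, $k=1$, for which $a''/a = p(p-1)/\eta^2$ with $p=c/(1-c)$ and $\nu_c = 3/2 + 1/(c-1) = 2$:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import hankel1

# Integrate v'' + (k^2 - a''/a) v = 0 for a power-law scale factor
# a ~ (-eta)^p, p = c/(1-c), starting from the Bunch-Davies plane wave,
# and compare |v_k| at horizon crossing with the Hankel-function solution.
c, k = 3.0, 1.0
p = c / (1.0 - c)                             # a proportional to (-eta)^p
nu = 1.5 + 1.0 / (c - 1.0)                    # Bessel index nu_c (= 2 here)

def rhs(eta, y):
    # y packs (Re v, Im v, Re v', Im v') as four real components
    v = y[0] + 1j * y[1]
    dv = y[2] + 1j * y[3]
    ddv = -(k**2 - p * (p - 1.0) / eta**2) * v
    return [dv.real, dv.imag, ddv.real, ddv.imag]

eta0, eta_k = -500.0, c / ((1.0 - c) * k)     # eta_k = -1.5 for c=3, k=1
v0 = np.exp(-1j * k * eta0) / np.sqrt(2.0 * k)   # Bunch-Davies initial data
dv0 = -1j * k * v0
sol = solve_ivp(rhs, (eta0, eta_k),
                [v0.real, v0.imag, dv0.real, dv0.imag],
                rtol=1e-9, atol=1e-12)
v_num = sol.y[0][-1] + 1j * sol.y[1][-1]

v_exact = -0.5 * np.sqrt(np.pi) * np.sqrt(-eta_k) * hankel1(nu, -k * eta_k)
print(abs(v_num), abs(v_exact))   # moduli agree at the sub-percent level
```

The residual disagreement comes from truncating the Bunch-Davies condition at finite $\eta_0$; pushing $\eta_0$ further into the past shrinks it further.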
To test this explicitly, suppose that the ``true'' spacetime is a power law spacetime, with a scale factor
\begin{align}
a = A t^c = A[A(c-1)(-\eta)]^{c/(1-c)},
\end{align}
where we take $c>2$ so that the spacetime is inflating.
In terms of cosmic time $t$, conformal time is given by \begin{align}
\eta = -\frac{1}{A(c-1)t^{c-1}}.
\end{align}
The Hubble parameter is
\begin{align}
H = \frac{c}{t} = c[A(c-1)(-\eta)]^{1/(c-1)}.
\end{align}
Notice that the first slow-roll parameter
\begin{align}\label{eq:epsilon_power_law}
\epsilon \equiv \frac{d}{dt}\left(\frac{1}{H}\right)=\frac{1}{c}
\end{align}
vanishes in the limit $c\rightarrow\infty$. Thus, we might expect that in this limit the power law spacetime behaves similarly to a de Sitter spacetime. However, even at finite $c$, we will find that the adiabatic approximation can accurately estimate the power spectrum $\Delta_\phi^2$ for a scalar field in the power law spacetime.
To see this, let us first compute the exact answer for $\Delta_\phi^2$. Taking the power law scale factor, the solution to the mode equation \eqref{eq:mode_equation} with the required initial conditions is
\begin{align}
v_k(\eta) = -\frac{\sqrt{\pi}}{2}\sqrt{-\eta}H^{(1)}_{\nu_c}(-k\eta),
\end{align}
where $\nu_c = \frac{3}{2}+\frac{1}{c-1}$. Hence, the exact power spectrum is
\begin{align}\label{eq:exact}
\Delta_\phi^2(k) =
\frac{c^2}{8\pi} \left(\frac{Ac}{k}\right)^{\frac{2}{c-1}}
\frac{c}{c-1}
\left|H^{(1)}_{\nu_c}\left(\frac{c}{c-1}\right)\right|^2.
\end{align}
Let us now approximate the power spectrum using the adiabatic approximation. To do so, we need the expression for $\Delta^2_\phi$ for a de Sitter spacetime. In this case, the mode function solution of \eqref{eq:mode_equation} can be written in closed form as
\begin{align}
\tilde v_k(\tilde \eta) = -\frac{\sqrt{\pi}}{2}\sqrt{-\tilde\eta}H^{(1)}_{\nu_\infty}(-k\tilde \eta),
\end{align}
where $\nu_\infty = 3/2$. The adiabatic approximation instructs us to set the conformal time to $\eta_k$ and take the de Sitter Hubble constant as $\tilde H$, where $\eta_k$ and $\tilde H$ are solutions to equations \eqref{eq:dS_k_crossing} and \eqref{eq:a=adS}. The solutions to these equations read
\begin{align}
\eta_k &= \frac{c}{(1-c)k},\\
\tilde H &= c\left(\frac{Ac}{k}\right)^{\frac{1}{c-1}},
\end{align}
where we have also made use of the fact that the de Sitter horizon crossing time is $\tilde \eta_k = -1/k$.
The adiabatic approximation for the power spectrum is thus
\begin{align}\label{eq:approx}
\tilde\Delta_\phi^2(k) =
\frac{c^2}{8\pi} \left(\frac{Ac}{k}\right)^{\frac{2}{c-1}}
\left|H^{(1)}_{\nu_\infty}(1)\right|^2.
\end{align}
Let us compare the approximate expression $\tilde \Delta_\phi^2$ in equation \eqref{eq:approx} to the exact expression $\Delta_\phi^2$ in equation \eqref{eq:exact}. We find
\begin{align}\label{eq:adiabatic_approx_ratio}
\left|\frac{\tilde \Delta_\phi(k)}{\Delta_\phi(k)}\right|
=
\sqrt{\frac{c-1}{c}}
\left|\frac{H^{(1)}_{\nu_\infty}(1)}{H^{(1)}_{\nu_c}\left(\frac{c}{c-1}\right)}\right|
=
1 - \frac{\alpha}{c} +\mathcal O\left(\frac{1}{c^2}\right),
\end{align}
where $\alpha = 0.10098...\,$. For large $c$, we can combine this with the expression \eqref{eq:epsilon_power_law} for the first slow-roll parameter $\epsilon$ to obtain the relative error due to the adiabatic approximation in this simple example:
\begin{align}\label{eq:adiabatic_approx_rel_error}
\text{relative error} \approx \alpha\epsilon.
\end{align}
There are a couple of things to note regarding the results \eqref{eq:adiabatic_approx_ratio} and \eqref{eq:adiabatic_approx_rel_error}. First, notice that \eqref{eq:adiabatic_approx_ratio} is independent of $k$.
In this simple example of a non-bandlimited power spectrum, the relative error incurred through the use of the adiabatic approximation is the same for all modes. Extrapolating this intuition to our covariantly bandlimited power spectra, this suggests that although the use of the adiabatic approximation likely resulted in small errors in the amplitude and phase of our predicted signal, it seems plausible that our most universal prediction, the frequency of the signal, is unaffected by the use of the adiabatic approximation.
Let us now look at the value of $\tilde \Delta_\phi/\Delta_\phi$ as a function of the only variable on which this quantity depends: the power $c$ of the power law expansion. The result is shown in Fig. \ref{fig:adiabatic_approx}. Although we expected that the adiabatic approximation might do well in the $c\rightarrow\infty$ limit, we see that the adiabatic approximation in fact does much better than anticipated. For example, for $c=2$, which is certainly far from a de Sitter expansion, the error due to the adiabatic approximation is only 8\%. Thus, for the slowly rolling spacetimes which we consider in this paper---which \textit{are} very close to de Sitter---we expect that the adiabatic approximation is in fact very accurate.
As a quantitative estimate of the accuracy of the adiabatic approximation, note that observations put a constraint $\epsilon \ll 0.004$ on the first slow-roll parameter \cite{Planck:2018jri}. From \eqref{eq:epsilon_power_law} we see that to obtain such a small value of $\epsilon$ via a power law expansion we require $c>250$. Equation \eqref{eq:adiabatic_approx_ratio}, or the large $c$ approximation \eqref{eq:adiabatic_approx_rel_error}, then gives a relative error due to the adiabatic approximation of only $0.04\%$. Since there seems to be no reason to expect that the adiabatic approximation should do worse when we introduce a covariant cutoff, we estimate that the relative error incurred due to our use of the adiabatic approximation in the covariantly bandlimited case is also likely under one part in one thousand.
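The closed-form ratio \eqref{eq:adiabatic_approx_ratio} is easy to evaluate numerically; the following sketch (our own, using SciPy's `hankel1`) checks the behaviour quoted in the text at $c=2$, at $c=250$, and in the large-$c$ limit:

```python
import numpy as np
from scipy.special import hankel1

# Exact ratio |Delta~_phi / Delta_phi| for a power-law spacetime:
#   sqrt((c-1)/c) * |H^(1)_{3/2}(1)| / |H^(1)_{nu_c}(c/(c-1))|,
# with nu_c = 3/2 + 1/(c-1).
def ratio(c):
    nu_c = 1.5 + 1.0 / (c - 1.0)
    return (np.sqrt((c - 1.0) / c)
            * abs(hankel1(1.5, 1.0)) / abs(hankel1(nu_c, c / (c - 1.0))))

err_2 = 1.0 - ratio(2.0)           # far from de Sitter: error of order 10%
err_250 = 1.0 - ratio(250.0)       # observationally allowed slow roll: < 0.1%
alpha = (1.0 - ratio(1e4)) * 1e4   # should approach alpha ~ 0.101 as c -> inf
print(err_2, err_250, alpha)
```

The recovered coefficient matches $\alpha = 0.10098\ldots$, and the relative error at $c=250$ lands in the few-times-$10^{-4}$ range discussed above.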
\section{Detailed features of the predicted signal}
\label{sec:detailed-features}
After establishing how a covariant natural UV cutoff, $f(\Box)$, is defined in \Eq{eq:cutoff}, we almost exclusively focused on the case of a sharp cutoff, for which $f(\Box) = \theta(\Omega^2 - |\Box|)$.
The class of possible UV cutoffs that one could consider is much larger, however, being parameterized by a functional degree of freedom, $f$.
In principle, $f$ need not even resemble a cutoff; the definition goes through for any non-negative function.
If we insist that $f$ resemble what is normally thought of as a cutoff, however, then we should have $f(\lambda) \approx 1$ for $|\lambda| \ll \Omega^2$, $f(\lambda) \approx 0$ for $|\lambda| \gg \Omega^2$, and a monotonic transition for $|\lambda| \sim \Omega^2$.
Here we investigate how the smoothness and the size of the interval over which $f(\lambda)$ drops from 1 to 0 impacts $\delta \Delta_\phi^2/\Delta_\phi^2$.
While we of course cannot characterize the full functional freedom in $f$, a convenient tool for our investigations is the \emph{smooth step function}:
\begin{equation}
S_n(x) = \left\{ \begin{array}{ll}
0 & x \leq 0 \\
x^{n+1} \sum_{k=0}^n {\binom{n+k}{k}} (1-x)^k & 0 < x < 1 \\
1 & x \geq 1
\end{array} \right.
\end{equation}
It has the property that its first $n$ derivatives are continuous at $x = 0$ and $x = 1$, where $n \in \mathbb{Z}$ and $n \geq 0$, and its value increases from 0 to 1 over the interval $0 < x < 1$.
Let us use it to define
\begin{equation} \label{eq:cutoff-smooth}
f(\lambda) = 1 - S_n\left( \frac{|\lambda| - \Omega^2}{\nu} + 1 \right).
\end{equation}
This profile is a $C^n$ function and it drops from 1 to 0 over the interval $|\lambda| \in (\Omega^2 - \nu, \Omega^2)$.
The parameter $n$ therefore characterizes the smoothness of the cutoff, and the parameter $\nu$ characterizes the size of the ramp over which the cutoff turns on.
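For concreteness, a minimal implementation of $S_n$ and the resulting cutoff profile \eqref{eq:cutoff-smooth} might look as follows (an illustrative sketch, not code used in this work):

```python
import math

# Smooth step S_n(x): C^n at x = 0 and x = 1, rising from 0 to 1 on (0, 1).
def smooth_step(n, x):
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    return x**(n + 1) * sum(math.comb(n + k, k) * (1.0 - x)**k
                            for k in range(n + 1))

# Smoothed cutoff profile f(lambda) = 1 - S_n((|lambda| - Omega^2)/nu + 1),
# ramping down over |lambda| in (Omega^2 - nu, Omega^2).
def cutoff(n, lam, omega2, nu_width):
    return 1.0 - smooth_step(n, (abs(lam) - omega2) / nu_width + 1.0)

# n = 1 reproduces the classic smoothstep 3x^2 - 2x^3
assert abs(smooth_step(1, 0.25) - (3 * 0.25**2 - 2 * 0.25**3)) < 1e-12
# f = 1 well below the cutoff, f = 0 above it
print(cutoff(2, 0.5, 1.0, 0.2), cutoff(2, 1.1, 1.0, 0.2))  # -> 1.0 0.0
```
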
If we replace the sharp profile $f(\lambda) = \theta(\Omega^2 - |\lambda|)$ with $f(\lambda)$ as defined in \Eq{eq:cutoff-smooth}, after percolating through the steps of the calculation, the final consequence is that the integral $I(Q,x)$ changes in the final expression \eqref{eq:final_answer} for $\delta \Delta_\phi^2 / \Delta_\phi^2$.
This is because this integral came from $P_\Omega^\perp \chi_0$; see Eqs.~\eqref{eq:reldiff} and \eqref{eq:dGFah}.
It is convenient to reparameterize the width $\nu$ with a new parameter, $B$, in terms of which we replace $I(Q,x)$ with a new integral, $I_n(Q,x,B)$, given by
\begin{equation}
\begin{aligned}
I_n(Q,x,B) &\equiv \int_{0}^\infty \dee q~ S_n\left( \frac{q-Q}{B}+1 \right) q^{-3/2} \cos(q \ln (2q/xe) ) \\
&= I(Q,x) + \int_{Q-B}^Q \dee q~ S_n\left( \frac{q-Q}{B}+1 \right) q^{-3/2} \cos(q \ln (2q/xe) ).
\end{aligned}
\end{equation}
In terms of the integration variable $q$, which is related to the eigenvalue $\lambda$ via \Eq{eq:qfunc}, the cutoff smoothly ramps up on an interval $[Q-B,Q]$.
Since $Q \approx \Omega/H = \sigma^{-1}$ when $\Omega \gg H$, it is perhaps most meaningful to think of the ramp-up width $B$ as the number of Planck lengths\footnote{Or more generally cutoff lengths, if the cutoff is at some scale other than the Planck scale.} over which the cutoff turns on if we view the cutoff scale $\Omega$ as being fixed.
\Fig{fig:variable-smoothing-width-fluctuations} shows what happens to $\delta \Delta_\phi^2 / \Delta_\phi^2$ as we increase the width, $B$.
The general trend is that increasing this width decreases the amplitude of the oscillations and shifts their phase, but it does not seem to change the frequency.
Although increasing $B$ suppresses the signal amplitude, it does not appear to be sending it to zero; see \Fig{fig:smoothing-limit}.
Note that we cannot let $B$ grow too large; otherwise, the approximations made in arriving at the expression \eqref{eq:integrand-approximation} for the integrand of $I_n(Q,x,B)$ break down.
\Fig{fig:variable-smoothing-smoothness-fluctuations} shows what happens when we adjust the smoothness by changing the parameter $n$.
Again, adjusting the smoothness changes the phase and amplitude, but not the frequency of oscillations.
Furthermore, the smoother the ramp-up is made, the less it suppresses the amplitude.
Of course, the precise details of how $\delta \Delta_\phi^2 / \Delta_\phi^2$ changes depend on the choice of smoothing function; however, we expect that the basic qualitative take-home lessons discussed here remain applicable in general.
\clearpage
\bibliographystyle{utphys-modified}
\bibliography{refs.bib}
Title:
Cluster environment quenches the star formation of low-mass satellite galaxies from the inside-out
Abstract: Environment plays a critical role in the star formation history of galaxies.
Tidal and hydrodynamical stripping, prominent in cluster environment, can
remove the peripheral gas of galaxies and star formation may thus be
environmentally suppressed from the outside-in. We revisit the environmental
dependence of the radial gradient of specific star formation rate (sSFR)
profile. We probe the radial gradient by using the archival spectral indices
D4000n and HdA measured from SDSS fiber spectra, to indicate central sSFR, and
the total sSFR from fitting the spectral energy distribution. Despite the low
spatial resolution, the wealth of SDSS data allows us to disentangle the
dependences on stellar mass, sSFR, and environment. We find that low-mass
satellite galaxies in the mass range 9 < log M/M_solar < 9.8 on average quench
in a more inside-out pattern compared to isolated galaxies matched in mass, sSFR,
and fiber coverage. This environmental effect is particularly strong for
galaxies below the star formation main sequence, and peaks for those in the
core of massive clusters where the phase-space diagram reveals clear links
between the inside-out quenching and orbital properties. Our results suggest
that both tidal and hydrodynamical interactions in cluster environment suppress
the star formation of satellites mainly from the inside-out. As accreted gas of
low angular momentum from hot gas halos is an important source for replenishing
central gas reservoir, we discuss how gas stripping in clusters may lead to
starvation and cause inside-out quenching when the outer star-forming discs are
not significantly affected.
https://export.arxiv.org/pdf/2208.14004
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
Galaxy: general - Galaxy: formation - galaxies: groups: general - galaxies: star formation
\end{keywords}
\section{Introduction}
\label{sec:intro}
In the local universe, the density of galaxies spans several orders of magnitude, from $\sim0.2\,\rho_{0}$ (where $\rho_{0} \sim 10^{-29.7}\,\mathrm{g\,cm^{-3}}$ is the mean field density) in sparse void regions all the way up to $\sim100\,\rho_{0}$ in the cores of massive clusters and $\sim1000\,\rho_{0}$ in the most compact groups \citep{1989Sci...246..897G}.
A large variety of galaxy properties are observed to correlate with galaxy environments such as star formation or quenched galaxy fraction \citep{2006MNRAS.373..469B, 2008MNRAS.385.1903L, 2010ApJ...721..193P, 2013MNRAS.430.1447K, 2013MNRAS.428.3306W, 2017ApJ...838...87C}, morphology \citep{1978ApJ...226..559B, 1980ApJ...236..351D, 1999ApJ...518..576P, 2009MNRAS.393.1324B, 2011MNRAS.416.1680C}, kinematics \citep{2011MNRAS.416.1680C, 2020MNRAS.495.1958W}, interstellar medium \citep{2011MNRAS.415.1797C, 2012MNRAS.425..273P, 2015MNRAS.453.2399W, 2017MNRAS.466.1275B, 2019MNRAS.483.5409D} and nuclear activity \citep{2004MNRAS.353..713K, 2011MNRAS.418.2043E, 2013MNRAS.430..638S, 2015MNRAS.448L..72S}.
Generally, red galaxies with early-type morphology and little cold gas content tend to populate the inner part of the group\footnote{Hereafter ``group'' refers to a structure whose galaxies are bound within one large dark matter halo, without implying a particular mass or richness; ``cluster'' refers to a massive group.} environment, while blue, late-type and gas-rich galaxies are mainly found away from crowded regions.
All these apparent links encourage the idea that environment-related processes are an important driver of the galaxy evolution.
Indeed there is abundant evidence, from both observational and theoretical points of view, for the existence of multiple environmental effects (see the review by \citealt{2006PASP..118..517B}).
Sources of these effects can be broadly classified into two types.
The first type acts through gravitational interactions, both with neighbouring galaxies and with the entire group potential well.
Gravitational tides from neighbours may supply angular momentum to galaxies \citep{1969ApJ...155..393P,1984ApJ...286...38W} and can condition their overall shape \citep{1979MNRAS.188..273B}.
Depending on velocity dispersion within the group, galaxy-galaxy interactions can either have long duration in small groups, such as during preprocessing \citep{2004ogci.conf..341D,2004PASJ...56...29F}, or have higher frequency but short duration in massive clusters, the so-called galaxy harassment \citep{1996Natur.379..613M,1998ApJ...495..139M}.
When the group mass is large, the tidal force exerted by the entire group potential well becomes effective for perturbing group galaxies \citep{1984ApJ...276...26M,1996ApJ...459...82H}.
The second type is through various kinds of hydrodynamic interactions occurring between gaseous components of galaxies and the hot intergalactic medium (hereafter IGM).
Its importance has been suggested ever since when it became clear that hot IGM is ubiquitous among clusters \citep{1977ApJ...215..401M,1977egsp.conf..369O}.
Such type of interaction can happen in various forms, including ram-pressure stripping \citep{1972ApJ...176....1G,2017ApJ...844...48P}, viscous stripping \citep{1982MNRAS.198.1007N,2015ApJ...806..104R} and thermal evaporation \citep{1977Natur.266..501C,2007MNRAS.382.1481N} all of which are able to remove cold gas of galaxies, particularly for the low-mass ones \citep[e.g.,][]{2013AJ....146..124H, 2020MNRAS.494.2090J}.
Several prototypical galaxies under gas stripping in the Virgo cluster are highlighted in a series of works based on radio interferometry \citep{2004AJ....127.3361K, 2007ApJ...659L.115C, 2009AJ....138.1741C, 2012A&A...537A.143V}.
Though originating from different processes, several mechanisms can in some cases have similar effects on galaxies.
One example is galaxy starvation \citep{1980ApJ...237..692L}, in which the loosely bound outer gaseous halos of galaxies are removed by both tidal interactions and ram-pressure stripping preventing further gas accretion \citep{2002ApJ...577..651B}.
It is difficult to discern the relative importance of all these mechanisms in certain environments.
But one consensus reached by the majority of previous studies is that they are more effective on satellite galaxies, i.e. the less massive galaxies that are gravitationally bound by more massive galaxies.
Their high-speed motion through the hot IGM and their shallow potential wells both make satellites more vulnerable to these effects.
Early studies of M31/M32 system \citep[e.g.,][]{1962AJ.....67..471K,1973ApJ...179..423F} and Milky Way/Magellanic clouds system \citep[e.g.,][]{1976ApJ...203...72T,1982MNRAS.198..707L} have been classic paradigm showing such vulnerability of satellites.
The most massive galaxy in the gravitationally bounded system is often called a "central" galaxy.
Analyses of environmental effects are thus commonly undertaken with the satellite and central galaxy dichotomy \citep[e.g.,][]{2009MNRAS.394.1213W,2012ApJ...757....4P,2013MNRAS.428.3306W}, which is also adopted in this work.
Despite the fact that these environment-related mechanisms are able to partly explain the various correlations with galaxy environment, it is still under debate to what extent they have played a role.
Is there a strong causal link between environment and various galaxy properties, as these correlations at face value suggest?
Or is this apparent link with environment merely a by-product of other more fundamental processes?
This question lies at the heart of the "nature or nurture" problem.
One embodiment of this problem is the controversy over morphology-density relation \citep{1980ApJ...236..351D, 2003MNRAS.346..601G} which was originally thought to be caused by environmental effects.
Subsequent studies argued for the existence of other more important drivers \citep[e.g.,][]{2009MNRAS.393.1324B,2016ApJ...818..180C,2017ApJ...851L..33G,2019MNRAS.485..666B} such as stellar mass, colour and sSFR.
It thus remains unclear how important these environmental effects are.
Useful information comes from studying the environmental dependence of specific star formation rate (sSFR) radial gradient ($\nabla\,\mathrm{sSFR}$), because various mechanisms at work in group environments can affect different parts of the galactic star-forming discs.
For example, ram-pressure stripping is thought to be more efficient at removing loose peripheral atomic hydrogen gas (HI) than affecting inner dense molecular gas disks \citep{2017MNRAS.467.4282M, 2022arXiv220505698Z}, thus probably tending to suppress outer star formation.
By contrast, the tidal force exerted by the cluster potential well can induce gas inflows and boost star formation in galactic central regions \citep[e.g.,][]{1990ApJ...350...89B}.
Studying the environmental dependence of $\nabla\,\mathrm{sSFR}$ therefore helps to identify which processes in the group environment are important in shaping galactic star formation histories.
Conversely, if only a weak dependence on environment is eventually found, the effectiveness of those proposed mechanisms should be doubted.
Previous studies along this thread have been carried out using narrow-band $\mathrm{H}\alpha$ imaging \citep[e.g.,][]{2004ApJ...613..851K,2004ApJ...613..866K,2013A&A...553A..91F}, resolved photometry \citep[e.g.,][]{2007ApJ...658.1006M,2008ApJ...677..970W} and more recently integral field spectroscopy \citep[IFS; e.g.,][]{2013MNRAS.435.2903B,2017MNRAS.464..121S,2018MNRAS.476..580S,2019A&A...621A..98C,2019ApJ...872...50L}.
However, these studies have acquired very different and sometimes discrepant knowledge about how star formation distributions of galaxies are affected in group environment.
The conclusions include 1) outside-in truncation of star formation \citep[e.g.,][]{2004ApJ...613..851K,2013A&A...553A..91F,2017MNRAS.464..121S,2019A&A...621A..98C}, 2) preferential suppression of star formation in inner regions \citep[e.g.,][]{2008ApJ...677..970W,2019A&A...621A..98C} and 3) weak or no effect \citep[e.g.,][]{2007ApJ...658.1006M,2013MNRAS.435.2903B,2018MNRAS.476..580S}.
Even when the general conclusions are similar, the signals they found can still be in tension.
For instance, both using IFS data, \citealt{2017MNRAS.464..121S} found outside-in truncation for massive galaxies with stellar mass in the range $10<\mathrm{log}\,\mathcal{M}_{\star}/\mathcal{M}_{\odot}<11$ while the outside-in signal in \citealt{2019A&A...621A..98C} is for less-massive galaxies only ($9<\mathrm{log}\,\mathcal{M}_{\star}/\mathcal{M}_{\odot}<10$), and they found preferential central suppression for massive galaxies.
In this work, we revisit the environmental dependence of the spatial distribution of star formation by combining SDSS fiber spectral indices (for the galaxy central region) and global sSFR measurements to indicate the (relative) shape of sSFR\footnote{We use profiles of sSFR instead of SFR because characterizing the stellar population by the fraction of newborn stars is more representative of the star formation status of galaxies.} profiles.
This brings sufficient statistics to the investigation, which is crucial, because unambiguous environmental dependence can only be extracted when other important factors, such as stellar mass and total star formation level, are properly controlled.
Current IFS samples can still lack such statistics, especially for low-mass galaxies among which the environmental effects are usually the strongest.
Even with currently the largest IFS survey MaNGA \citep{2015ApJ...798....7B}, the sample size is at least an order of magnitude smaller than the sample studied in this work, and would limit the parameter control when we aim to explore in more detail how the sSFR profiles correlate with galaxy environment (see section \ref{subsec:env}).
Throughout this paper we adopt cosmological parameters from WMAP-9 \citep{2013ApJS..208...20B} in which $\mathrm{H}_0=69.3\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1}$, $\Omega_\mathrm{m}=0.286$ and $\Omega_{\Lambda}=0.714$ and a Chabrier IMF.
\section{Sample}
\label{sec:data}
\subsection{MPA-JHU and GSWLC catalogues}
\label{subsec:cat}
Our galaxy sample is assembled out of the MPA-JHU catalogue and the version 2 of GALEX-SDSS-WISE Legacy Catalogue \citep[GSWLC-2,][]{2016ApJS..227....2S,2018ApJ...859...11S}.
The MPA-JHU catalogue is based on the Sloan Digital Sky Survey Data Release 7 \citep[SDSS DR7,][]{2000AJ....120.1579Y,2009ApJS..182..543A}, providing both spectral and photometric measurements from SDSS as well as value-added derived quantities for more than 800,000 unique galaxies. We heavily use the spectral indices (more details in section \ref{subsec:less}) measured from SDSS spectra which were extracted from fibers of 3 arcsec diameter centered on galaxies. We also take the radius enclosing 50\% of the total r-band Petrosian flux $\mathrm{R_{50}}$ as the apparent angular size of galaxies.
Despite the fact that MPA-JHU catalogue does provide SFR, we use the values from GSWLC-2 instead. GSWLC-2 is a value-added catalogue for SDSS galaxies within the GALEX \citep[Galaxy Evolution Explorer,][]{2005ApJ...619L...1M} footprint.
It provides overall better SFR measurements by incorporating ultra-violet (UV) data in the multi-band spectral energy distribution (SED) fitting. The UV data are from GALEX, a space telescope mapping the sky in two UV bands, FUV (1350-1750 {\rm \AA}) and NUV (1750-2800 {\rm \AA}). Compared with the optical SDSS bands, these UV bands are more sensitive to short-lived massive stars, and thus to recent star formation.
GSWLC-2 also uses the 22 \mum\ mid-infrared (MIR) band taken by WISE \citep[Wide-field Infrared Survey Explorer,][]{2010AJ....140.1868W}, which is another space telescope providing all sky images in MIR bands. The 22 \mum\ band can trace the absorbed UV light re-emitted by the dust, improving the estimation of recent SFR.
For consistency, we also use the stellar mass from GSWLC-2 which is derived by the same SED fitting procedure.
We use the medium UV depth version of the GSWLC catalogue, which balances the depth of the GALEX images against the sky coverage. Our sample thus has a sSFR detection limit of $\mathrm{sSFR} > 10^{-11.7}\,\mathrm{yr^{-1}}$, satisfying the main goal of studying galaxies at low star formation levels. The matching between MPA-JHU and GSWLC-M2 is done with a 3 arcsec search radius, giving a sample of 343,791 galaxies. Changing the matching radius has a negligible effect on our sample (less than 0.03\% difference when the matching radius ranges from 1 arcsec to 5 arcsec).
We further constrain our sample with the following criteria:
\begin{equation}
\label{equ:cut}
\begin{aligned}
\qquad \qquad \qquad \qquad 0.01&<z<0.085; \\
14.5&<\mathrm{m}_\mathrm{r}<17.77; \\
9&<\mathrm{log}\,\mathcal{M}_{\star}<11.5; \\
0.259&<\mathrm{b/a} \, ,
\end{aligned}
\end{equation}
where $z$ is redshift, $\mathrm{m}_\mathrm{r}$ is apparent Petrosian magnitude \citep{1976ApJ...209L...1P} in SDSS r-band, $\mathcal{M}_{\star}$ is stellar mass, and $\mathrm{b/a}$ is the ratio between minor and major axis of the 25 $\mathrm{mag}/\mathrm{arcsec}^2$ isophote in SDSS r-band.
The axial ratio cut (equivalent to inclination angle smaller than $75^{\circ}$ for a razor-thin disc) removes edge-on galaxies to avoid large uncertainty in correcting for strong dust extinction.
We limit the redshift below 0.085 as a compromise between sample size and completeness (as also in e.g., \citealt{2006MNRAS.373..469B}).
At $z=0.085$, the SDSS spectroscopic survey is complete for galaxies with absolute r-band magnitude $\mathrm{M}_\mathrm{r}<-19.5$ or stellar mass about $\mathcal{M}_{\star}>10^{10}\,\mathcal{M}_{\odot}$ \citep{2006ApJS..167....1B}.
This magnitude limit is the same as the one adopted for group galaxies defined as halo proxy in the group catalogue used in this work (see section \ref{subsec:yang}), making the halo mass more reliable below $z=0.085$.
Even though the sample is not complete for galaxies with $\mathcal{M}_{\star}<10^{10}\,\mathcal{M}_{\odot}$ out to $z=0.085$, the analyses throughout this work properly control for stellar mass and sSFR so that low-mass galaxies in different environments are compared in the same subvolume where they are complete.
The lower redshift limit and the brighter apparent r-band Petrosian magnitude limit are applied to exclude nearby galaxies with too large an angular size, as their photometry is not properly handled by the SDSS pipeline \citep{2011AJ....142...31B}.
After this cut, our sample size reduces to 119,820.
Our analysis is applied only to galaxies with $\ssfr > 10^{-11.7}\,\mathrm{yr^{-1}}$, the nominal detection limit of the GSWLC-M2 catalogue.
Below this limit, the error in the total SFR surges to 0.7 dex and probing the sSFR radial profile by central spectral indices and total sSFR thus becomes highly uncertain.
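As an illustration, the selection criteria \eqref{equ:cut} together with the sSFR limit can be expressed as a simple boolean mask; the column names below are hypothetical placeholders, not those of the actual catalogues:

```python
import numpy as np

# Sketch of the sample selection: redshift, apparent magnitude, stellar
# mass, axial ratio, and the GSWLC-M2 sSFR detection limit.
def select(z, m_r, log_mstar, b_over_a, log_ssfr):
    return ((0.01 < z) & (z < 0.085)
            & (14.5 < m_r) & (m_r < 17.77)
            & (9.0 < log_mstar) & (log_mstar < 11.5)
            & (b_over_a > 0.259)          # excludes edge-on galaxies
            & (log_ssfr > -11.7))         # sSFR detection limit

# One passing and one failing (edge-on) hypothetical galaxy:
mask = select(np.array([0.05, 0.05]), np.array([16.0, 16.0]),
              np.array([9.5, 9.5]), np.array([0.8, 0.1]),
              np.array([-10.5, -10.5]))
print(mask)   # -> [ True False]
```
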
\subsection{Galaxy environment}
\label{subsec:yang}
We use the group catalogue constructed by \citet{2012ApJ...752...41Y} to classify the environment of each galaxy. It was built by applying an iterative group finder algorithm to SDSS galaxies. In each iteration the halo properties of the tentative galaxy groups (identified via friends-of-friends algorithm) are computed and then used to update the group membership for next iteration \citep{2007ApJ...671..153Y}. The catalogue associates each galaxy to one galaxy group, hence one dark matter halo as well. Based on this, we classify the galaxies into three categories: central, satellite and isolated galaxies. Centrals and satellites are the members of multi-member groups, with the former to be the most massive one. The isolated galaxies belong to the groups with only one member.
The catalogue also provides a dark matter halo mass estimate, based on the total stellar mass or luminosity of bright group members (absolute r-band magnitude $\mathrm{M}_\mathrm{r}<-19.5$) via abundance matching. A mock test suggests its typical uncertainty is about 0.3 dex \citep{2012ApJ...752...41Y}. The halo mass corresponds to a virial radius of the halo, $\mathrm{R}_{200}$:
\begin{equation}\label{r200}
\qquad \qquad \qquad \mathrm{R}_{200}=\Bigg[\frac{\mathcal{M}_{200}}{\frac{4\pi}{3}200\Omega _\mathrm{m} \frac{3\mathrm{H}_0^2}{8\pi \mathrm{G}}}\Bigg]^{\frac{1}{3}}\,\,(1+z)^{-1}.
\end{equation}
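As a numerical illustration of equation \eqref{r200} (a sketch with our own constants, not the paper's code), a halo of $\mathcal{M}_{200}=10^{14}\,\mathcal{M}_{\odot}$ at $z=0$ has $\mathrm{R}_{200}\approx1.5$ Mpc in the adopted WMAP-9 cosmology:

```python
import math

# Evaluate R_200 in SI units for the adopted H0 = 69.3 km/s/Mpc and
# Omega_m = 0.286; here 200 Omega_m rho_crit is 200 times the mean
# matter density, as in the equation above.
G = 6.674e-11            # m^3 kg^-1 s^-2
MPC = 3.0857e22          # m
MSUN = 1.989e30          # kg
H0 = 69.3 * 1e3 / MPC    # s^-1
OMEGA_M = 0.286

def r200_mpc(m200_msun, z):
    rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)
    denom = (4.0 * math.pi / 3.0) * 200.0 * OMEGA_M * rho_crit
    return (m200_msun * MSUN / denom)**(1.0 / 3.0) / (1.0 + z) / MPC

print(round(r200_mpc(1e14, 0.0), 2))   # ~1.46 Mpc for a 1e14 Msun halo
```
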
Among the several catalogues with slightly different redshift completeness, we take the group catalogue constructed with SDSS redshifts only, which contains 599,301 galaxies.
Using the other versions makes negligible difference.
After matching with the group catalogue, we get a sample of 112,028 galaxies.
\section{Results}
\label{sec:result}
\subsection{Suppressed star formation in the center of satellite galaxies}
\label{subsec:less}
The SDSS single-fiber spectra are extracted from the central part of the galaxies, within a physical radius of 0.3, 1.5, 2.4 kpc respectively at $z=0.01,0.05,0.085$, where 0.05 is about the mean redshift of our sample.
We use the \dfourk\ and the Balmer absorption feature \hda\ to indicate the central sSFR (see also \citealt{2004MNRAS.353..713K}).
\dfourk\ is a break feature at around $4000$ {\rm \AA} mainly due to a series of metal absorption lines on the blueward side of $4000$ {\rm \AA}.
These lines are most prominent for stars with spectral types later than K \citep{1985ApJ...297..371H}, i.e. old stellar populations, while the opacity at the Balmer line \hda\ peaks for young massive stars with spectral types around A.
Therefore, if galaxies are more dominated by young stars (i.e. have high sSFR), \hda\ is higher and \dfourk\ is lower.
These two indices are insensitive to dust extinction as they are flux ratios in adjacent and narrow spectral windows.
This is particularly important because the central regions of galaxies are usually highly dust obscured which may introduce large uncertainty in the measured sSFR \citep{2017MNRAS.469.4063W}.
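The fiber coverage quoted above is straightforward to reproduce; the following sketch (not the paper's code) computes the physical radius subtended by the 1.5 arcsec fiber radius at a given redshift in the adopted flat WMAP-9 cosmology:

```python
import math
from scipy.integrate import quad

# Physical radius covered by a 1.5-arcsec fiber radius, via the angular
# diameter distance in a flat LCDM cosmology with H0 = 69.3, Omega_m = 0.286.
C_KM_S, H0, OM = 299792.458, 69.3, 0.286

def fiber_radius_kpc(z, radius_arcsec=1.5):
    E = lambda zp: math.sqrt(OM * (1.0 + zp)**3 + (1.0 - OM))
    d_c = (C_KM_S / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]  # comoving, Mpc
    d_a = d_c / (1.0 + z)                                          # angular diameter
    return d_a * radius_arcsec * math.pi / (180.0 * 3600.0) * 1e3  # kpc

for z in (0.01, 0.05, 0.085):
    print(z, round(fiber_radius_kpc(z), 1))
```

This recovers, to the quoted precision, the 0.3, 1.5 and 2.4 kpc radii at $z=0.01$, $0.05$ and $0.085$.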
With the central sSFR indicated by SDSS spectral indices and total sSFR from SED fitting, it becomes possible to roughly probe the gradient of the sSFR radial profiles.
Though the central and total sSFR are not measured in a consistent way, we demonstrate in Appendix \ref{app:fea}, using a smaller sample of galaxies with IFS data, that this approach is feasible in a statistical sense.
We investigate the environmental dependence of the relative difference in sSFR radial gradient by comparing the central sSFR of satellite and isolated galaxies at fixed total sSFR and stellar mass.
To ensure that the fiber measurements are on similar scales, we match the apparent angular size of galaxy $\mathrm{R_{50}}$ so that fibers cover similar fractions of galaxy total light.
An alternative aperture control is to match in redshift, so that the fibers cover the same physical scale.
We have tested and found that the two ways lead to the same conclusion.
Specifically, in a certain bin of stellar mass and total sSFR, we minimally trim the satellite and isolated galaxy samples to reach the same $\mathrm{R_{50}}$ distribution in 0.2 arcsec resolution (i.e. getting the maximally overlapping distribution).
The trimming is done in every $\mathrm{R_{50}}$ bin by sampling with replacement the same number (i.e. the minimum of $\mathrm{N_{sat}}$ and $\mathrm{N_{iso}}$) of isolated and satellite galaxies.
We repeat this matching process 1000 times to estimate the statistical uncertainty in the distribution moments (see also \citealt{2008MNRAS.385.1903L} and \citealt{2015MNRAS.448L..72S}).
We compute the median \dfourk\ and \hda\ for each matched isolated and satellite sample respectively, and the mean and the standard deviation of the 1000 values are taken as the final measurement and its uncertainty.
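The bin-wise $\mathrm{R_{50}}$ matching and bootstrap described above can be sketched as follows (a simplified illustration with our own variable names; the real analysis is additionally performed within fixed bins of stellar mass and total sSFR):

```python
import numpy as np

rng = np.random.default_rng(0)

def matched_median_excess(r50_sat, d4k_sat, r50_iso, d4k_iso,
                          bin_width=0.2, n_boot=1000):
    """Bootstrap the median-D4000 difference between the satellite and
    isolated samples after matching their R50 distributions in bins of
    `bin_width` (arcsec). Returns the mean and standard deviation of
    the bootstrap differences."""
    lo_edge = min(r50_sat.min(), r50_iso.min())
    hi_edge = max(r50_sat.max(), r50_iso.max())
    edges = np.arange(lo_edge, hi_edge + bin_width, bin_width)
    diffs = []
    for _ in range(n_boot):
        sat_draw, iso_draw = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            s = d4k_sat[(r50_sat >= lo) & (r50_sat < hi)]
            i = d4k_iso[(r50_iso >= lo) & (r50_iso < hi)]
            n = min(len(s), len(i))          # match the counts in this R50 bin
            if n == 0:
                continue
            sat_draw.append(rng.choice(s, n, replace=True))
            iso_draw.append(rng.choice(i, n, replace=True))
        diffs.append(np.median(np.concatenate(sat_draw))
                     - np.median(np.concatenate(iso_draw)))
    diffs = np.asarray(diffs)
    return diffs.mean(), diffs.std()
```

The returned mean and scatter correspond to the final measurement and its uncertainty described in the text.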
In Fig. \ref{fig:100bs1}, we show the relation between the central sSFR, indicated by \dfourk\ (left panel) and \hda (right panel), and the total sSFR for satellite and isolated galaxies matched in $\mathrm{R_{50}}$.
At given total sSFR, more massive galaxies have lower central sSFR (i.e. higher \dfourk\ and lower \hda).
It is consistent with the well established observation that massive galaxies generally show more positive sSFR profiles \citep[e.g.,][]{2016ApJ...819...91P,2018MNRAS.474.2039E,2018ApJ...856..137W}.
Notably, in the lowest mass bin and at given total sSFR, satellite galaxies show prominently higher \dfourk\ (hereafter termed ``the central \dfourk\ excess'') compared to their isolated counterparts (left panel).
This signal of environmental dependence of the sSFR radial gradient is strongest when the total sSFR of galaxies is well below the star formation main sequence (SFMS; blue shaded region), whose location is defined by the peak sSFR at given mass of the volume-corrected number density distribution of our sample galaxies (see also Appendix A of \citealt{2020MNRAS.495.1958W}).
A similar trend is seen in the \hda\ versus sSFR diagram (right panel), where the low-mass satellite galaxies have systematically lower \hda\ values.
This suggests that environmental effects preferentially suppress the central star formation of galaxies, making the sSFR profile gradient more positive in a relative sense.
The conclusion remains the same if we use total SFRs derived from a different recipe, for example measured directly from UV and MIR luminosities (see Appendix \ref{app:nuvmir}).
\subsection{The dependence on galaxy environment}
\label{subsec:env}
In this section we further explore what suppresses the central star formation in low-mass satellite galaxies by studying how the \dfourk\ excess correlates with galaxy environment.
We investigate this environmental dependence in two sSFR windows: $10^{-10.4}-10^{-9.4}\,\mathrm{yr}^{-1}$ and $10^{-11.4}-10^{-10.4}\,\mathrm{yr}^{-1}$.
These two windows respectively cover normal star-forming galaxies around the SFMS, and galaxies below the SFMS but still with detectable star formation activity.
Our galaxies in the low sSFR bin have a median NUV-r colour index of $\sim4$ which falls onto the conventional green valley on the colour-magnitude diagram (e.g., as in \citealt{2007ApJS..173..267S}).
We use three parameters to quantify environment of satellites: the halo mass of the group \mhalo, the normalized projected distance to the central galaxy \rbcg\ (which is effectively the distance to the halo center\footnote{In groups with few members the weighted-geometric center can be a better tracer of the bottom of the group potential well, as there may not be a dominating central galaxy. We have tested for small groups using this alternate definition of group center and found consistent results that leave our conclusion unchanged.}) and the group richness \nmem\ (i.e. number of galaxies within the group).
Fig. \ref{fig:Edep} shows the \dfourk\ excess as a function of these group properties in the low and high sSFR bins. The \dfourk\ excess is again calculated by comparing satellite galaxies and their matched isolated counterparts with $\Delta\log(\mstar) < 0.1$, $\Delta\log(\mathrm{sSFR}) < 0.1$ and $\Delta \mathrm{R_{50}} < 0.2\,\mathrm{arcsec}$.
For galaxies with low sSFR, the \dfourk\ excess clearly correlates with all three environment properties.
The satellite galaxies have redder cores (i.e. more suppressed central star formation) when they are: 1) in more massive halos; 2) closer to the center of galaxy groups; 3) in groups with more members.
The correlation steepens toward lower stellar mass.
For galaxies in the high sSFR bin, the environmental dependence is much weaker.
Clear \dfourk\ excess only exists in the largest \mhalo\ and \nmem\ bins, and only for low-mass galaxies.
We note that for massive galaxies with high sSFR (red lines in the bottom panels), in the most massive groups the \dfourk\ signal is a deficit rather than an excess, indicating enhanced central star formation compared with galaxies in the field environment.
To further break down the environmental dependences of the \dfourk\ excess of low-mass satellites of low sSFR, in Fig. \ref{fig:Edep1} we apply more environment control to the correlation between the \dfourk\ excess and certain environment properties.
In the first panel, we show the central \dfourk\ excess as a function of \mhalo\ in bins of high/low \rbcg\ and \nmem\ respectively (split at the median values, 0.44 and 30, of the low-mass, low-sSFR satellite sample).
The second and third panels show the other two dependences, on \rbcg\ and \nmem, with further environment control in a similar manner.
We note that 88 individual massive groups are included in the $\mathcal{M}_h>10^{13.7}\,\mathcal{M}_{\odot}$ bin, making the result in this bin statistically representative of large groups.
The relations for the low-mass and low-sSFR satellites without further environment control in Fig. \ref{fig:Edep} are shown for reference by black symbols.
We find that \mhalo\ and \nmem\ are almost interchangeable.
In the first panel, the relations of \dfourk\ excess and \mhalo\ in bins of high/low \nmem\ (light red and light blue) are just the general relation (black symbols) at higher and lower \mhalo\ end.
The same case is seen in the third panel, and in the second panel the binning by \mhalo\ or \nmem\ gives the same relations.
This results from the tight correlation between \mhalo\ and \nmem: among satellite galaxies with non-zero host halo mass catalogued in \citet{2012ApJ...752...41Y}, the Spearman rank correlation coefficient between \mhalo\ and \nmem\ is as high as 0.92.
Comparing the two left panels, the \dfourk\ excess generally depends more on \mhalo\ than on \rbcg.
The \dfourk\ excess is small in less massive halos, nearly irrespective of groupcentric radius.
This is shown by the overlapping dark red and dark blue bands at low \mhalo\ end in the first panel and also the relatively flat relation in the second panel (dark blue band).
A \dfourk\ excess is present in massive halos even at very large \rbcg.
The dependence on \rbcg\ becomes significant only in massive halos, especially at the center, where we observe a \dfourk\ excess as high as 0.2.
Together, these results suggest that halo mass is of first-order importance and that the responsible physical mechanism is strongly enhanced in the cluster center.
Taking a step further, we introduce the relative velocities of satellites into the analysis, in order to link the central \dfourk\ excess to the dynamical status of satellites in their host halos.
Fig. \ref{fig:psd} shows low-mass satellites ($10^9-10^{9.8}\,\mathcal{M}_{\odot}$) in massive halos ($\mathcal{M}_h>10^{13.7}\,\mathcal{M}_{\odot}$) on the phase-space diagram \citep[i.e. normalized relative velocity versus normalized projected distance; See also][]{2015MNRAS.448.1715J}.
We calculate the absolute difference of line-of-sight velocities between the satellite and cluster as $|\Delta v| = c|z-z_c|/(1+z_c)$ where $z_c$ is the luminosity weighted redshift of cluster member galaxies.
The velocity difference is then normalized by the cluster velocity dispersion $\sigma_{200}$ (equation 6 of \citealt{2007ApJ...671..153Y}).
We mark the boundary of the virialized region with a black straight line, below which galaxies are approximately within the part of the cluster that is in dynamical equilibrium.
The black dashed curve represents the normalized projected escape velocity $v_\mathrm{esc}/\sigma_{200}$ based on a Navarro-Frenk-White halo \citep{1996ApJ...462..563N} of concentration $c_\mathrm{NFW}=6$.
Starting from the mass profile of a halo one can calculate the potential and thus the escape velocity:
\begin{equation}\label{equa:vesc}
\qquad
v_\mathrm{esc,3D}=\sqrt{\frac{2GM_{200}}{R_{200}}\times g(c_\mathrm{NFW}) \times \frac{\ln(1+c_\mathrm{NFW}x)}{x}}
\end{equation}
where
\begin{equation}
\qquad
g(c_\mathrm{NFW})=\Big [ \ln(1+c_\mathrm{NFW})-\frac{c_\mathrm{NFW}}{1+c_\mathrm{NFW}} \Big ] ^{-1}
\end{equation}
and
\begin{equation}
\qquad
x=r_\mathrm{3D}/R_{200}.
\end{equation}
We project velocity along the line of sight and project the distance on the sky plane using the average relations $v_\mathrm{esc} = \frac{1}{\sqrt{3}}v_\mathrm{esc,3D}$ and $r = \frac{\pi}{4}r_\mathrm{3D}$.
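For reference, the escape-velocity formula and the projection factors above can be evaluated numerically as in the sketch below (the unit convention and example parameter values are our own choices):

```python
import numpy as np

G = 4.301e-9  # gravitational constant in Mpc (km/s)^2 / Msun

def vesc_over_sigma(x, m200, r200, sigma200, c_nfw=6.0):
    """Projected NFW escape velocity v_esc/sigma_200 at projected
    radius r = x * R200, using the equations above together with the
    average projections v_esc = v_esc,3D / sqrt(3) and r = (pi/4) r_3D.
    Units: m200 in Msun, r200 in Mpc, sigma200 in km/s."""
    x3d = (4.0 / np.pi) * x                      # deproject the radius
    g = 1.0 / (np.log(1.0 + c_nfw) - c_nfw / (1.0 + c_nfw))
    v3d = np.sqrt(2.0 * G * m200 / r200 * g
                  * np.log(1.0 + c_nfw * x3d) / x3d)
    return v3d / np.sqrt(3.0) / sigma200
```

Evaluating this over a grid of $x$ traces the dashed escape-velocity curve on the phase-space diagram.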
As in the previous analyses, we match each satellite with isolated galaxies whose stellar mass, sSFR and $R_{50}$ differ by less than 0.1 dex, 0.1 dex and 0.2 arcsec respectively.
The central \dfourk\ excess averaged over 100 matchings is recorded for every satellite, and we perform this analysis separately for satellites in the low ($10^{-11.4}-10^{-10.4}\,\mathrm{yr}^{-1}$; upper row of Fig. \ref{fig:psd}) and high ($10^{-10.4}-10^{-9.4}\,\mathrm{yr}^{-1}$; bottom row of Fig. \ref{fig:psd}) sSFR ranges.
The right column shows the locally averaged results using the locally weighted regression method LOESS by \citet{Cleveland1988} as implemented\footnote{We use the Python package \textsc{loess} v2.0.11 available from https://pypi.org/project/loess/} by \citet{2013MNRAS.432.1862C}, to reveal the underlying trend.
We adopt a smoothing factor \texttt{frac} = 0.3 and a linear local approximation, but the conclusion does not depend on these particular parameter choices.
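The idea behind the LOESS smoothing can be illustrated with a minimal local linear regression using tricube weights; this is a sketch of the method of \citet{Cleveland1988}, not the actual \textsc{loess} package implementation:

```python
import numpy as np

def loess2d(x, y, z, frac=0.3):
    """Locally weighted linear regression on the plane: for each point,
    fit a plane to its nearest `frac` of the sample with tricube
    weights and evaluate the fit at that point."""
    pts = np.column_stack([x, y])
    pts = (pts - pts.mean(0)) / pts.std(0)       # comparable scales on both axes
    n = len(z)
    k = max(int(frac * n), 4)
    zout = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(pts - pts[i], axis=1)
        idx = np.argsort(d)[:k]                  # nearest `frac` of the points
        w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube kernel
        sw = np.sqrt(w)
        A = np.column_stack([np.ones(k), pts[idx]])
        beta, *_ = np.linalg.lstsq(A * sw[:, None], z[idx] * sw, rcond=None)
        zout[i] = beta[0] + pts[i] @ beta[1:]
    return zout
```

By construction, an exactly planar signal is recovered unchanged, while noisy data are smoothed toward the local trend.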
In the upper right panel of Fig. \ref{fig:psd}, for satellites of low sSFR, LOESS reveals clear structure in the \dfourk\ excess at low groupcentric radii.
The largest \dfourk\ excess is not shared evenly by all galaxies near the cluster center, but is particularly linked to satellites with either small or large relative velocities.
Satellites with intermediate velocities of about $|\Delta v|/\sigma_{200} = 0.7$ show only a moderate \dfourk\ excess, comparable to that at much larger groupcentric radii.
This result indicates an apparent connection between the \dfourk\ excess and the orbit configuration of satellites.
In the lower right panel, for satellites of high sSFR, which are probably in the early stages of environmental processing, the \dfourk\ excess is low but notably shows the same pattern as for the low-sSFR satellites.
The consistency suggests that the observed pattern of locally averaged \dfourk\ excess reflects the true trend underlying the noisy data in the left column.
\section{Summary and discussion}
\label{sec:discuss}
In this paper, we have investigated the environmental dependence of the relative difference in sSFR radial gradient for 0.1 million SDSS galaxies at $z \sim 0$.
We compare the central sSFR, indicated by indices \dfourk\ and \hda\ measured from SDSS fiber spectra, between satellite and isolated galaxies at the same total sSFR, so that we extract how galaxy environment affects the sSFR radial gradient in a relative sense.
With fiber coverage properly matched for the comparison, the large sample size facilitates the study of detailed correlations with a variety of environmental properties when the mass and star formation level of galaxies are controlled.
Our findings are summarized as below:
\begin{enumerate}[(i)]
\item Low-mass satellite galaxies ($\mathcal{M}_{\star}=10^9-10^{9.8}\,\mathcal{M}_{\odot}$) below the SFMS have lower central sSFR compared to isolated counterpart galaxies at given total sSFR (Fig. \ref{fig:100bs1}).
\item The phenomenon of more suppressed central star formation (i.e. the central \dfourk\ excess at given total sSFR) among low-mass satellites becomes more noticeable in host halos of higher mass (equivalently, with more member galaxies) and closer to the group center, while more massive galaxies below the SFMS show a consistent trend with smaller amplitude (Fig. \ref{fig:Edep}).
The dependence on halo mass is of first-order importance and the dependence on groupcentric radius is secondary (Fig. \ref{fig:Edep1}).
\item In the center of massive halos, the phase-space diagram reveals that the phenomenon is strongest among satellites with either the lowest or the highest relative velocities with respect to the halo (Fig. \ref{fig:psd}), indicating a connection between the suppressed central star formation and the orbital configuration of satellite galaxies.
\end{enumerate}
\subsection{The physical mechanisms}\label{subsec:phy}
The more suppressed central star formation of satellites compared to field galaxies of the same total sSFR suggests that additional physical processes in galaxy groups make the quenching of star formation happen more inside-out.
The environmentally promoted inside-out quenching is especially shown by the sharp increase of central \dfourk\ with decreasing total sSFR among the low-mass satellites (Fig. \ref{fig:100bs1}).
The sSFR profiles of low-mass satellites may deviate even more from those of their field counterparts, because we find, as shown in Fig. \ref{fig:fiber_mu}, that the central stellar mass density within the fiber aperture of low-mass, low-sSFR satellites is smaller than that of field galaxies, consistent with \citet{2017MNRAS.464.1077W}.
The stellar mass measurements inside fiber area are taken from the MPA-JHU catalogue, with a small mean difference of $\sim0.1$ dex compared to GSWLC stellar mass \citep{2016ApJS..227....2S}.
The lower central stellar mass density of satellites seems to result from the integrated effect of their suppressed central star formation.
It remains unclear which of the various physical processes operating in the group environment is mainly responsible for the central \dfourk\ excess of low-mass satellite galaxies.
In Fig. \ref{fig:Edep1}, we see that the high \dfourk\ excess is preferentially found in massive clusters, especially in the cluster center.
The strongest effect in the cluster center is seen among satellites with either lowest or highest velocities on the phase-space diagram.
The former population, with the lowest velocities, generally has low orbital energy, combining low potential energy (i.e. sitting at the bottom of the potential well) with low kinetic energy.
Simulations \citep[e.g.,][]{2013MNRAS.431.2307O} suggest that these satellites joined the cluster during ancient infall and have thus been trapped in the center for a long time.
The latter population, with high velocities in the vicinity of the cluster center, is thought to consist of recent infallers experiencing their first or second pericenter passage.
Projection of the velocities and positions of satellites can smear out this connection between orbital properties and position on the phase-space diagram.
However, the clear consistency across satellite populations of high and low sSFR, living in a large number of different groups, argues against the result being an artefact of random projection.
From the perspective of environmental effects, the former population has long been exposed to the strong tidal force of the massive cluster, which scales inversely with the cube of the groupcentric distance and can play an important role in shaping the star formation and morphology of galaxies \citep{1984ApJ...276...26M,1990ApJ...350...89B}.
The latter population, when passing the orbital pericenter, experiences on short timescales not only the strong cluster tidal field but also large ram pressure, owing to both the high density of the intracluster medium and their high velocities.
The middle panel of Fig. \ref{fig:Edep1} shows that there is a non-negligible \dfourk\ excess even in the outskirts of massive halos, where the cluster tidal field is dramatically weaker.
Hydrodynamic gas stripping, however, can still be effective in the halo outskirts for satellites with high velocities, and some cases have indeed been caught in action \citep[e.g.,][]{2018MNRAS.476.4753J}.
This is also consistent with the upper right panel of Fig. \ref{fig:psd}, where satellites with higher velocities at large groupcentric radii show a larger central \dfourk\ excess.
These together seem to suggest that both tidal and hydrodynamic interactions are responsible for the phenomenon of suppressed central star formation of satellite galaxies.
It is known that tidal interactions can strip the loosely bound peripheral gas of galaxies in synergy with the hydrodynamic gas stripping, which together result in galaxy starvation and prevent further gas accretion \citep{2002ApJ...577..651B}.
Under starvation, galaxies tend to quench inside-out because gas depletion is an order of magnitude faster in the center than in the outer parts \citep{2008AJ....136.2782L}.
Starvation also promotes inside-out quenching because the radial gas inflows on galactic disks may be largely reduced: accretion of gas from the gaseous halo can drive radial inflows through even a small mismatch of angular momentum between the accreted gas and the disk \citep{2016MNRAS.455.2308P}.
\citet{2012MNRAS.426.2266B} report that this is one of the dominant processes inducing radial inflows, making it an important channel for fuelling central star formation.
Central star formation is therefore less well supported in a satellite whose gaseous halo has been largely stripped (i.e. one undergoing starvation).
By contrast, during the quenching of isolated galaxies, as long as the hot gaseous halo still exists, their central parts are more likely to be fed by cold gas than those of highly stripped satellites.
We illustrate this scenario in Fig. \ref{fig:illus} ($\spadesuit - \clubsuit - \diamondsuit$ for satellites and $\spadesuit - \heartsuit$ for isolated galaxies).
Starvation as an explanation for the phenomenon shown in this work seems to be in line with \citet{2015Natur.521..192P} and \citet{2019MNRAS.tmp.2878T}, which point out the major role of starvation in quenching low-mass galaxy populations and the growing importance of starvation in denser environments.
Though not reporting on the spatial distribution of star formation, \citet{2017MNRAS.464..508B} found that the same mechanism drives the enhancement of gas metallicity of satellite galaxies in the EAGLE simulations \citep{2015MNRAS.446..521S}.
They found that the central gas metallicity is enhanced effectively when starvation suppresses the radial inflow of gas, which is predominantly metal-poor.
\subsubsection{Evidence in recent star formation history}\label{subsubsec:sfhs}
The scenario above can have detectable consequences for the recent star formation history (SFH) in the central part of satellite and isolated galaxies.
We probe the recent SFH by the combination of \dfourk\ and \hda\ which trace stellar populations of different ages (see also \citealt{2003MNRAS.341...33K}).
Fig. \ref{fig:sfhs1} shows satellite and isolated galaxies of low mass and low sSFR on the \hda-\dfourk\ plane, overlaid with evolutionary tracks from the \citet{2003MNRAS.344.1000B} models (BC03).
The probability density function of galaxies (filled contours) is derived via kernel density estimation with $V_{\mathrm{max}}$ corrections.
We use Gaussian kernel of width determined by Scott's rule \citep{Scott2015Multivariate}.
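A weighted Gaussian KDE with Scott's-rule bandwidth (the default in \textsc{scipy}) can be sketched as below; the index values and $1/V_{\mathrm{max}}$ weights here are synthetic placeholders, not our measurements:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-ins for the measured indices and 1/Vmax weights.
_r = np.random.default_rng(3)
d4000 = _r.normal(1.5, 0.15, 500)
hdelta = _r.normal(2.0, 1.0, 500)
vmax_w = _r.uniform(0.5, 1.5, 500)

# Weighted 2D KDE on the HdeltaA-D4000 plane; 'scott' is the default
# bandwidth rule in scipy.stats.gaussian_kde.
kde = gaussian_kde(np.vstack([d4000, hdelta]),
                   bw_method='scott', weights=vmax_w)
grid_d, grid_h = np.meshgrid(np.linspace(1.0, 2.0, 50),
                             np.linspace(-2.0, 6.0, 50))
density = kde(np.vstack([grid_d.ravel(), grid_h.ravel()]))
```

The resulting density field is what the filled contours in Fig. \ref{fig:sfhs1} represent.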
Then we identify the ridge line (following \citealt{2016ApJ...823...18C}; shown by the hatched area) of each density distribution as the representative track for the galaxy population.
In producing model tracks of exponentially declining SFHs (black dashed lines: declining timescale $\tau=0.2,0.4,0.6\,\mathrm{Gyr}$; black solid lines: $\tau=2,4,6\,\mathrm{Gyr}$), we use MILES stellar library of solar metallicity and Padova 1994 library for stellar evolution prescription.
Using other empirical or theoretical stellar libraries and other stellar evolution prescriptions provided in BC03 generates model tracks significantly incompatible with our data.
The contours show that, compared to isolated galaxies (left panel), a significantly higher fraction of satellites (right panel) populate the lower right area indicating again the suppressed central star formation of satellite galaxies.
Moreover, the distribution of isolated galaxies is more concentrated around the ridge while that of satellites has a broader shape.
This may imply that group environment can diversify the SFH of galaxies.
Notably, while the ridge line of the satellites can be matched overall by continuously declining SFHs with timescales of several Gyr, the ridge line of the isolated galaxies deviates clearly toward models with shorter timescales.
Such deviation is due to a non-negligible fraction of isolated galaxies with high \hda\ at given \dfourk.
As \hda\ mainly traces young A-type stars, this elevated \hda\ indicates significant recent bursts of star formation (see also Fig. 6 of \citealt{2003MNRAS.341...33K}) in the central parts of isolated galaxies.
The observed difference in recent SFH between satellite and isolated galaxies fits into the scenario described before.
The surviving hot gaseous halo of low-sSFR isolated galaxies can still fuel small bursts of star formation, since the inefficient gas cooling (expected at low sSFR) can drive radial gas flows only episodically.
By contrast, the central parts of satellites undergoing starvation, lacking further gas supply, are more likely to redden quiescently and smoothly.
\subsection{Comparison with previous works}\label{subsec:comparison}
The discussion above does not invoke outside-in quenching caused by gas stripping as a major driver of the cessation of star formation in group environments.
Instead, environments are observed to render the quenching of low-mass galaxies more inside-out.
We stress, however, that this does not mean gas stripping has no influence on star formation in the outer parts.
The results only suggest that, under environmental effects, the inner parts of galaxies contribute primarily to the total decline of star formation, while the suppression of star formation in the outskirts is secondary.
The conclusion is echoed by \citet{2019ApJ...872...50L}, who found that inside-out quenching is the dominant channel even for satellites in massive halos, and that the fraction of galaxies experiencing outside-in quenching does not depend on halo mass.
Many other works in the literature did not reach the same conclusion, and indeed contradict one another.
Using 1,494 MaNGA galaxies, \citet{2018MNRAS.476..580S} compared the sSFR radial profiles of central and satellite galaxies.
Their Fig. 7 indicates that, in the intermediate and high mass bins, the sSFR of satellites are systematically lower than the central galaxies particularly outside 0.5 effective radius.
For galaxies in the low-mass bin, this pattern appears to be reversed, showing more inside-out quenching for satellites.
In spite of the general consistency for low-mass galaxies between \citet{2018MNRAS.476..580S} and our work, our data do not indicate outside-in quenching for massive satellite galaxies.
\citet{2019A&A...621A..98C} used a smaller sample of 275 late-type CALIFA galaxies and carried out similar analyses.
In direct opposition to the results of \citet{2018MNRAS.476..580S}, they found that low-mass galaxies in groups have more suppressed star formation in the outer parts compared with field galaxies, while massive galaxies are more suppressed in the inner parts.
Rather than being suppressed, the low-mass satellite galaxies studied by \citet{2019MNRAS.489.1436L} show centrally enhanced star formation in the densest environments.
Apart from these recent works based on IFS data, \citet{2009MNRAS.394.1213W} studied the g-r colour profiles of galaxies in the SDSS Data Release 4.
They found outside-in quenching pattern for the satellite galaxies in their high mass bin.
In their low-mass bin, the colour profiles of the satellites are globally redder compared to the central galaxies.
Their sample barely covers the low-mass range of our data.
The intricate discrepancies between works in the literature can arise for a variety of reasons.
Notably, the samples were selected with diverse criteria.
For example, \citet{2017MNRAS.464..121S} selected only galaxies whose central regions are classified as star-forming by emission-line diagnostics.
This may have biased their sample against centrally quenched galaxies, which would have weak emission lines in the center.
\citet{2019MNRAS.489.1436L} imposed signal-to-noise thresholds on emission lines during sample selection.
The sample of \citet{2019A&A...621A..98C} was preselected by Hubble type.
Moreover, a problem in some previous studies is that sSFR radial profiles are not compared at the same level of total sSFR for galaxies in different environments, even though many IFS studies \citep[e.g.,][]{2018MNRAS.477.3014B,2018ApJ...856..137W} have shown that sSFR radial gradients clearly depend on the level of total sSFR.
Extracting a more unambiguous dependence on environment therefore requires better control of total sSFR, as we have done in this work.
\section*{Acknowledgements}
BW acknowledges the detailed and constructive comments from the anonymous referee, which significantly helped improve this manuscript.
BW thanks Li Shao for his insightful and decisive comments on this work, and thanks Jing Wang, Min Du, and Jingjing Shi for fruitful discussions.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
\section*{Data Availability}
The data used in this work are all publicly available.
For SDSS galaxies, we take the MPA-JHU catalogue from https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/, the GSWLC catalogue from https://salims.pages.iu.edu/gswlc/, and the group catalogue from https://gax.sjtu.edu.cn/data/Group.html.
\bibliographystyle{mnras}
\newpage
\appendix
\section{Feasibility of probing sSFR radial gradient by central and total sSFR}\label{app:fea}
The basic idea of our analysis is that at a given total sSFR, the variation of central sSFR reflects the change in sSFR radial gradient across the disks.
Below we prove this in a statistical sense by using a small sample of galaxies with MaNGA IFS data.
First, at given total sSFR we divide our sample into four quarters according to their central \dfourk\ values; the quartile boundaries are shown as black dashed lines in Fig. \ref{fig:s3}.
By doing this we roughly classify our sample into four subsamples with different sSFR radial gradients.
For galaxies in each quarter we search for reduced MaNGA cubes in a value-added catalogue P{\sc ipe}3D \citep{2016RMxAA..52...21S,2016RMxAA..52..171S}, released as a part of SDSS DR14.
This allows us to look into their spatially resolved maps of sSFR.
We find 262, 187, 166 and 230 galaxies included in P{\sc ipe}3D for the first to the fourth quarters (red to blue points in Fig. \ref{fig:s3}) respectively after excluding cubes flagged as bad.
We derive the \dfourk\ radial profile of these galaxies as follows.
Their \dfourk\ maps are binned, with a step of 0.25 $\mathrm{R_{50}}$, by six ellipses of position angle and ellipticity determined for the galaxy by P{\sc ipe}3D pipeline \citep{2016RMxAA..52..171S}.
An example of the \dfourk\ map and the binning for a galaxy belonging to the fourth quarter is shown in the right panel of Fig. \ref{fig:s31}, together with the corresponding SDSS g--r--i composite image in the left panel.
The expected, significantly negative \dfourk\ radial gradient (because the galaxy is in the fourth quarter) is clearly seen on that map.
We then measure \dfourk\ radial profiles by calculating the median \dfourk\ of the spaxels in each radial bin, without any signal-to-noise cut.
The derived profiles for galaxies in different quarters, each normalized to the third radial bin, are displayed in Fig. \ref{fig:s33} as black lines.
In each quarter, the median relation is shown by the red line, with the $1\sigma$ error estimated from 1000 bootstrap samples.
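The profile extraction can be sketched as follows (a simplified version that assumes the map is already centred on the galaxy and the ellipticity is below unity; the actual measurement uses the P{\sc ipe}3D geometry):

```python
import numpy as np

def d4000_profile(d4000_map, pa_deg, ellipticity, r50_pix,
                  step=0.25, n_bins=6):
    """Median D4000 in elliptical annuli of width `step` * R50,
    given the position angle (degrees), ellipticity (< 1) and
    R50 in pixels. The map is assumed centred on the galaxy."""
    ny, nx = d4000_map.shape
    yy, xx = np.mgrid[:ny, :nx]
    dx, dy = xx - nx / 2.0, yy - ny / 2.0
    pa = np.deg2rad(pa_deg)
    # rotate into the galaxy frame and stretch the minor axis
    xm = dx * np.cos(pa) + dy * np.sin(pa)
    ym = -dx * np.sin(pa) + dy * np.cos(pa)
    r_ell = np.hypot(xm, ym / (1.0 - ellipticity)) / r50_pix
    edges = step * np.arange(n_bins + 1)
    prof = [np.nanmedian(d4000_map[(r_ell >= lo) & (r_ell < hi)])
            for lo, hi in zip(edges[:-1], edges[1:])]
    return np.asarray(prof)
```

A map with a positive radial gradient yields a monotonically increasing profile, as expected for galaxies in the first quarter.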
Fig. \ref{fig:s33} shows a systematic change of \dfourk\ radial gradient as expected from our original classification according to the central and total sSFR of galaxies, and proves the feasibility of our method.
We get the same conclusion by analyzing the maps of \hda\ and \halpha.
\section{Confirming the main result using NUV+MIR SFRs}\label{app:nuvmir}
In this appendix we reproduce the left panel of Fig. \ref{fig:100bs1} using SFRs measured directly from GALEX NUV and WISE W4 fluxes.
For galaxies in GSWLC-M2 catalogue, we retrieve their GALEX NUV magnitude from the GALEX final data release (DR6/7)
\footnote{http://galex.stsci.edu/GR6/}
via MAST Casjobs
\footnote{https://galex.stsci.edu/casjobs/}.
The nontrivial matching between SDSS and GALEX has been done in \citet{2016ApJS..227....2S}, so we can retrieve GALEX data directly using the GALEX \textit{objid} in Casjobs.
Among the 305,094 galaxies in GSWLC-M2 that have a valid GALEX \textit{objid}, 280,101 are detected in NUV after excluding several hundred duplicates.
The WISE four-band photometry (W1 to W4 at 3.6, 4.6, 12 and 22 $\mu m$ respectively) is taken from unWISE
\footnote{http://unwise.me}
\citep{2016AJ....151...36L}, where SDSS detections served as forced photometry priors in the reduction of WISE data.
Since the unWISE photometry is based on SDSS DR10 detections, every source has already been matched with SDSS.
We directly take the unWISE data for every galaxy in GSWLC-M2 through SDSS \textit{ObjID}.
Before converting GALEX NUV magnitude to luminosity
\footnote{https://asd.gsfc.nasa.gov/archive/galex/FAQ/counts\_background.html}
, Galactic reddening is corrected using the colour excess derived from the dust maps of \citet{1998ApJ...500..525S}, assuming a \citet{1989ApJ...345..245C} extinction curve.
And WISE W4 flux is corrected for ``red" sources by giving a 8\% reduction
\footnote{http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec4\_4h.html\#example}
.
After applying all the data requirements in the above procedures, we finally obtain 203,191 galaxies (56.2\% of GSWLC-M2) with corrected NUV and W4 luminosities.
We take the NUV star formation calibration of \citet{2013seg..book..419C}, which is obtained from stellar population models assuming a Kroupa IMF in the mass range $0.1-100\,\mathcal{M}_{\odot}$ and a constant star formation rate over $100\,\mathrm{Myr}$.
We further apply a factor of 95\% to adjust from a Kroupa to a Chabrier IMF \citep{2007ApJS..173..267S}, as GSWLC SFRs are based on the latter, and finally:
\begin{equation}\label{nuvsfr}
\mathrm{SFR}_{\mathrm{NUV}}/(\mathcal{M}_{\odot}\,\mathrm{yr}^{-1})=6.46\times10^{-44}\times L_{\mathrm{NUV},\,\mathrm{total}}/(\mathrm{erg}\,\mathrm{s}^{-1})
\end{equation}
The NUV flux received by GALEX is only the unobscured part, so we compensate for dust extinction following \citet{2011ApJ...741..124H}:
\begin{equation}\label{dust}
L_{\mathrm{NUV},\,\mathrm{total}}=L_{\mathrm{NUV},\,\mathrm{obscured}}+2.26\times L_{25\,\mu m}
\end{equation}
where $L_{25\,\mu m}$ stands for the luminosity in the Infrared Astronomical Satellite (IRAS) band centered at 25 $\mu m$.
The flux difference between IRAS 25 $\mu m$ and Spitzer MIPS 24 $\mu m$ is negligible \citep{2009ApJ...703.1672K}, and that between Spitzer MIPS 24 $\mu m$ and WISE W4 is around 16 per cent \citep{2014MNRAS.443.1329H}.
Thus, we adopt the relation $L_{25\,\mu m}=1.19 \times L_{22\,\mu m}$ to convert the WISE W4 luminosity to an IRAS 25 $\mu m$ luminosity.
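The conversion chain above (W4 to IRAS 25 $\mu m$, dust correction, NUV calibration) can be sketched as follows. The function name and example luminosities are illustrative, and the magnitude-to-luminosity step is omitted:

```python
def sfr_nuv_mir(l_nuv_obscured, l_w4):
    """Hybrid NUV+MIR SFR in Msun/yr; a sketch of the recipe in the text.

    l_nuv_obscured : observed (dust-attenuated) GALEX NUV luminosity [erg/s]
    l_w4           : WISE W4 (22 um) luminosity [erg/s]
    """
    l_25 = 1.19 * l_w4                          # W4 -> IRAS 25 um
    l_nuv_total = l_nuv_obscured + 2.26 * l_25  # dust compensation (Eq. above)
    return 6.46e-44 * l_nuv_total               # Chabrier-adjusted calibration

# Example: a galaxy with L_NUV = L_W4 = 1e43 erg/s gives SFR ~ 2.4 Msun/yr.
print(sfr_nuv_mir(1e43, 1e43))
```

Note that in this hybrid recipe the MIR term usually dominates for dusty galaxies, since the obscured NUV light is recovered with a coefficient of $2.26\times1.19\simeq2.7$ on the W4 luminosity.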
The comparison between GSWLC-M2 SED SFRs and NUV plus MIR SFRs derived above is presented in the left panel of Fig. \ref{fig:s1} where both SFRs are divided by stellar masses in GSWLC-M2.
Above SED sSFR $\sim10^{-11}\,\mathrm{yr}^{-1}$, the relation is very close to a 1:1 relation (black dashed line) but it deviates strongly below this threshold with NUV plus MIR sSFRs saturated and SED sSFRs extending further toward lower values.
This makes a linear fit with the orthogonal distance regression method yield a super-linear relation (green dashed line), although the overall dispersion is not large (0.25 dex).
Considering that, toward low sSFR, an increasing fraction of SED SFRs are derived without a detected FUV flux in the SED fitting, here we test whether restricting to galaxies with detected FUV (and, explicitly, detected NUV and W4), for which the SED-fitting results should be more accurate, can ease the tension between the SED SFRs and the NUV+MIR SFRs.
The result is shown in the right panel of Fig. \ref{fig:s1}.
Due to the requirement of FUV detection, galaxies on this plane mainly populate the high sSFR part.
Indeed the incorporation of FUV information reduces the scatter.
However, below sSFR $\sim10^{-11}\,\mathrm{yr}^{-1}$ galaxies deviate from a 1:1 relation in the same manner as in the left panel.
This suggests that the discrepancy is not due to data quality but is intrinsic; a reasonable explanation is that the coefficients in the NUV+MIR SFR recipe are fixed, while the contribution of young stars to the NUV and MIR fluxes may change significantly for galaxies with different star formation levels \citep{2016A&A...591A...6B}.
Despite the large systematics in the low-sSFR regime, in Fig. \ref{fig:s2} we reproduce the main signal shown in the left panel of Fig. \ref{fig:100bs1} using NUV+MIR sSFRs.
The central \dfourk\ excess at a given total sSFR of galaxies in the lowest mass bin still increases below the SFMS and reaches a level of around 0.1.
This shows that our main conclusion does not depend on the choice of SFR recipe.
\bsp %
\label{lastpage} |
Title:
Origin of Plutonium-244 in the Early Solar System |
Abstract: We investigate the origin in the early Solar System of the short-lived
radionuclide 244Pu (with a half life of 80 Myr) produced by the rapid (r)
neutron-capture process. We consider two large sets of r-process
nucleosynthesis models and analyse if the origin of 244Pu in the ESS is
consistent with that of the other r and slow (s) neutron-capture process
radioactive nuclei. Uncertainties on the r-process models come from both the
nuclear physics input and the astrophysical site. The former strongly affects
the ratios of isotopes of close mass (129I/127I, 244Pu/238U, and 247Cm/235U).
The 129I/247Cm ratio, instead, which involves isotopes of a very different
mass, is much more variable than those listed above and is more affected by the
physics of the astrophysical site. We consider possible scenarios for the
evolution of the abundances of these radioactive nuclei in the galactic
interstellar medium and verify under which scenarios and conditions solutions
can be found for the origin of 244Pu that are consistent with the origin of the
other isotopes. Solutions are generally found for all the possible different
regimes controlled by the interval ($\delta$) between additions from the source
to the parcel of interstellar medium gas that ended up in the Solar System,
relative to decay timescales. If r-process ejecta in interstellar medium are
mixed within a relatively small area (leading to a long $\delta$), we derive
that the last event that explains the 129I and 247Cm abundances in the early
Solar System can also account for the abundance of 244Pu. Due to its longer
half life, however, 244Pu may have originated from a few events instead of one
only. If r-process ejecta in interstellar medium are mixed within a relatively
large area (leading to a short $\delta$), we derive that the time elapsed from
the formation of the molecular cloud to the formation of the Sun was 9-16 Myr.
| https://export.arxiv.org/pdf/2208.02074 |
\section{Introduction}
\label{sec:intro}
There are 17 short-lived %
radioactive (SLR, with~half lives of the order of 0.1 to 100 Myr) nuclei known to have been (or potentially have been) present in the early Solar System \mbox{(ESS) ~\cite{lugaro18rev}. }Three of them, \iso{129}I, \iso{244}Pu, and~\iso{247}Cm, have the specific property to be produced in the Galaxy almost exclusively by the process of $rapid$ neutron captures (the $r$ process). Among~those three, live \iso{244}Pu from the present-time interstellar medium has also been detected in young sediments of the ocean floor~\cite{wallner15,wallner21}. Furthermore, \iso{244}Pu and \iso{247}Cm are actinides located beyond Pb and Bi at mass numbers around 208-210, the~end point of the $slow$ neutron-capture ($s$) process~\cite{ratzel04}. Therefore, they are exclusively of $r$-process origin. Being located beyond the classical third $r$-process peak at Pt and Au, actinides are typically produced if the number of neutrons per seed is relatively large.
Instead, \iso{129}I belongs to the classical second $r$-process peak. It has only a very minor (a few percent) contribution from the $s$ process because the unstable isotope that precedes it on the $s$-process path, \iso{128}I, has a half life of only 25 min and decays much faster than the typical time required to capture a neutron.
These three $r$-process isotopes most likely have the same $r$-process origin (as indicated by the elemental abundances observed in halo stars~\cite{cowan21}). They can be studied individually or together to provide evidence on the history of the material that made up the Solar System~\cite{lugaro14science} and to set constraints on the $r$-process astrophysical site and its nuclear input, which are both extremely uncertain~\cite{hotokezaka15,cote21science,wang21a,wang21b}. In~particular, C\^ot\'e~et~al.~\cite{cote21science} (hereafter Paper I) constrained the last $r$-process source to have contributed to the solar material by comparing the \iso{129}I/\iso{247}Cm ratio observed in primitive meteorites to nucleosynthesis calculations based on neutron star (NS-NS) merger, black hole--neutron star (NS-BH) merger, and~magneto-rotational supernova simulations. Here, we extend that study to \iso{244}Pu, to~investigate whether the presence of this SLR isotope in the ESS can be explained consistently with that of the other SLR isotopes heavier than iron, also well known to have been present in the ESS.
Table~\ref{tab:intro} summarises the main properties and information available on the four isotopic ratios under consideration here: \iso{129}I/\iso{127}I, \iso{244}Pu/\iso{238}U, \iso{247}Cm/\iso{235}U, and~\iso{129}I/\iso{247}Cm. We only analyse isotopic ratios because the most direct evidence that comes from the analysis of meteoritic material on ESS values is not absolute abundances, but~abundance values relative to each other. Absolute abundances suffer from many uncertainties, e.g.,~chemical separation in the nebula, in~the meteorite parent body, and/or during chemical analysis, as~well as dilution from the original stellar source. The~ratios of interest are those of each estimated SLR abundance relative to a long-lived or a stable isotope. These ratios are directly measured in primitive meteorites and their components (the first three rows of Table~\ref{tab:intro}), or derived from the ratios directly measured (last row), as~in the case of the \iso{129}I/\iso{247}Cm ratio. This last ratio provides us with a further observational constraint because \iso{129}I and \iso{247}Cm have very similar half lives~\cite{yague21PaperIII}\endnote{The mean-life $\tau_{ratio}$ of the \iso{129}I/\iso{247}Cm ratio given in the table was obtained by Monte Carlo sampling of the uncertainties on the mean lives of the two isotopes, $\tau_{129}$ and $\tau_{247}$, which are 5\% and 6\%, respectively, at~2$\sigma$ (for comparison, the uncertainty for \iso{244}Pu is 2\%) within the usual formula: $\tau_{129} \times \tau_{247} /( \tau_{129} -\tau_{247})$. Using the recommended values, $\tau_{ratio}$ would be equal to 2449 Myr, however, sampling of the uncertainties produces a lower value most of the time because the uncertainties make $\tau_{129}$ and $\tau_{247}$ move away from each other, and~therefore their difference, at~denominator in the formula above, increases.
In general, it would be extremely useful if the half lives of \iso{129}I and \iso{247}Cm could be measured with higher precision than currently available.
A more detailed statistical analysis should also be carried out considering that
the peak value reported in the table is probably not the best statistical choice due to the exponential behaviour of the decay. In~fact, although~$\tau \sim 270$ Myr is the most common value, for $\tau \gtrsim 1000$ Myr the abundances do not vary much anymore within the time scales of interest for the ESS, roughly $< 200$ Myr (discussed in Section~\ref{sec:galaxy}). Therefore, a~more statistically significant value may be higher than the peak value reported in the table, probably around 900 Myr.
For the other ratios, the~values of the mean lives at numerator and denominator in the equation above are so different that $\tau_{\rm ratio}$ is always within 2\% of the $\tau$ of the short-lived isotope. A~statistical analysis of the uncertainties would not affect those values, although~we will analyse statistically the impact of the uncertainties on all the mean lives when we derive timescales in Section~\ref{sec:galaxy}.}.
This allowed the removal of several theoretical uncertainties in Paper I, providing a direct window into the astrophysical conditions of the $r$-process site that produced the \iso{129}I and \iso{247}Cm in the ESS.
Note that, instead, it is not possible to extract any further meaningful constraints from the \iso{129}I/\iso{244}Pu and \iso{247}Cm/\iso{244}Pu ratios because their half lives are very different from each other~\cite{yague21PaperIII}.
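The Monte Carlo sampling described in the footnote to Table~\ref{tab:intro} can be sketched as follows. This is a minimal illustration assuming Gaussian uncertainties on the recommended mean lives; it does not reproduce the exact distribution of Paper I:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Recommended mean lives (Myr); the quoted 5% and 6% uncertainties are
# 2-sigma, so the Gaussian 1-sigma widths are 2.5% and 3%.
tau129 = rng.normal(22.6, 0.025 * 22.6, n)
tau247 = rng.normal(22.5, 0.030 * 22.5, n)

# Mean life of the 129I/247Cm ratio for each sample.
tau_ratio = tau129 * tau247 / (tau129 - tau247)
# Samples with tau247 > tau129 give a negative tau_ratio, i.e. a
# 129I/247Cm ratio that grows rather than decays with time; the spread
# of |tau_ratio| is huge because the denominator is a small difference
# of two nearly equal numbers.
```

This makes explicit why $\tau_{\rm ratio}$ is so poorly constrained: the sampled difference $\tau_{129}-\tau_{247}$ fluctuates around a value much smaller than its own uncertainty.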
\begin{table}[H]
\caption{Properties
of the four ratios that involve SLR nuclei of $r$-process origin that were present in the ESS: the mean lives of the isotopes at numerator, at~denominator, and~of their ratio ($\tau_{\rm num}$, $\tau_{\rm den}$, and~$\tau_{\rm ratio} =
\tau_{\rm num} \times \tau_{\rm den} /( \tau_{\rm num} -\tau_{\rm den})$, respectively, all in Myr), and~the ESS values (at 2$\sigma$, from~\cite{lugaro18rev}). We also show, in the last column, the three values of the $K$ factor that affect each of the ratios when predicted by the GCE model. This factor accounts for the star formation history and efficiency, the~star-to-gas mass ratio, and~the galactic outflows (Section~\ref{sec:galaxy}). The~uncertainties on these quantities result in a minimum ($K_{\rm min}$), a~best-fit ($K_{\rm best}$), and~a maximum ($K_{\rm max}$) value of each ratio.
\label{tab:intro}}
\newcolumntype{C}{>{\centering\arraybackslash}X}
\begin{adjustwidth}{-\extralength}{0cm}
\begin{tabularx}{\fulllength}{CCCCCC}
\toprule
\textbf{Ratio} & \boldmath{\textbf{$\tau_{\rm num}$}} & \boldmath{\textbf{$\tau_{\rm den}$}} & \boldmath{ \textbf{$\tau_{\rm ratio}$}} & \textbf{ESS Ratio} & \boldmath{\textbf{$K_{\rm min}$, $K_{\rm best}$, $K_{\rm max}$}} \\
\midrule
\iso{129}I/\iso{127}I & 22.6 & stable & 22.6 & ($1.28\pm0.03)\times 10^{-4}$ & 1.6, 2.3, 5.7 \\
\iso{244}Pu/\iso{238}U & 115 & 6447 & 117 & ($7\pm2)\times 10^{-3}$ & 1.5, 1.9, 4.1 \\
\iso{247}Cm/\iso{235}U & 22.5 & 1016 & 23.0 & ($5.6\pm0.3)\times 10^{-5}$ & 1.1, 1.2, 1.8 $^b$ \\
\midrule
\iso{129}I/\iso{247}Cm & 22.6 & 22.5 & 270 $^a$ (100--3000) & $438\pm184$ & 1, 1, 1 \\
\bottomrule
\end{tabularx}
\end{adjustwidth}
$^a$ Values taken from the asymmetric $\tau_{\rm ratio}$ distribution shown in Figure~S4 of Paper I. The~first value is roughly the peak of the distribution, and~the values in parenthesis represent most of its total range. $^b$ Values corrected relative to those reported in Paper I.
\end{table}
Out of the four ratios reported in Table~\ref{tab:intro}, \iso{244}Pu/\iso{238}U has not been considered yet within a global analysis of origin of the SLR nuclei heavier than iron in the ESS. This is for two main reasons: first, its half life of 80 Myr is very different from that of the other two isotopes of roughly 15 Myr, therefore, the~modelling of its abundance in the interstellar medium (ISM) is likely to present a different behaviour (see discussion in Section~\ref{sec:galaxy}). Second, its ESS abundance is less certain than those of the other two isotopes. The~ESS \iso{129}I/\iso{127}I ratio has an uncertainty of roughly 2\% at 2$\sigma$ and many studies agree on its value, suggesting that systematic uncertainties are not significant~\cite{gilmour06}. The~\iso{247}Cm/\iso{235}U was established with an uncertainty of roughly 6\% at 2$\sigma$ thanks to the discovery of the special meteoritic inclusion, named Curious Marie, rich in U~\cite{tissot16}. More data on different samples is still needed to completely establish this~value.
In the case of the ESS \iso{244}Pu abundance (i.e., the~\iso{244}Pu/\iso{238}U ratio), instead, not only is the uncertainty for the value reported in Table~\ref{tab:intro} roughly 30\%, but~there are also potential systematic uncertainties in the determination of the ESS value. The~ESS \iso{244}Pu abundance can be estimated by xenon isotope studies of meteorites, since \iso{129}Xe and the heavy \iso{131-136}Xe are stable isotopes produced by the spontaneous fission of \iso{244}Pu. Moreover, solids are extremely poor in noble gases, so the radiogenic and fissiogenic xenon signatures become significant over time and, hence, can be quantified at high precision. Studies have focused on gas-poor meteoritic materials: mineral separates \citep{Wasserburg1969PhRvL}; CAIs \citep{Marti1977LPI, Podosek1972E&PSL}; differentiated meteorites with simple cooling histories, such as angrites \citep{Lugmair1977E&PSL} and eucrites \citep{Shukolyokov1996GeCoA}; and high-metamorphic-grade ordinary chondrites \citep{hudson89}. Currently, there are two ``best estimates'' of the ESS value, obtained using different approaches. \citet{Lugmair1977E&PSL} normalized \iso{244}Pu to \iso{150}Nd, an~$r$-process-only isotope of Nd, because~they found an achondrite (Angra dos Reis) for which they could prove that the geochemical analogue of Pu is Nd, and~potential modification of the Pu/Nd ratio with respect to the Solar System abundances can be ruled out. They reported \iso{244}Pu/\iso{238}U ratios $\simeq$0.0043 at the adjusted time of Solar System formation \citep{connelly12}. The~value reported in Table~\ref{tab:intro} (0.007) is a different estimate by~\cite{hudson89}, who used a different approach.
As~the fissiogenic signature in meteorites is dominated by \iso{244}Pu-derived xenon, they irradiated an exceptionally gas-poor ordinary chondrite (St Severin) with thermal neutrons to induce the fission of \iso{235}U and derived the \iso{244}Pu/\iso{238}U ratio from the component analysis of xenon isotope measurements alone. This value is almost twice as high as the value provided by the Angra dos Reis study, and~it is in better agreement with the more recent analysis of Xe in ancient terrestrial zircons from Western Australia~\cite{turner07}. In~summary, the~major challenge is to find a meteorite sample that is representative of the Solar System and~for which the geochemical processes that could potentially modify the relative abundances of Pu to U or to the rare earth elements with respect to the chondritic composition are well understood, so that their effect can be corrected for.
Here, we will consider for the ESS \iso{244}Pu/\iso{238}U ratio the value reported in Table~\ref{tab:intro}. If~the ``true'' value was eventually found to be lower, for~example, by~a factor of two, all the times calculated and reported in our analysis below would have to be increased by 80~Myr.
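As a quick check of the 80 Myr figure above: halving the adopted ratio shifts all inferred decay times by $\tau\ln 2$, i.e., by exactly one half life. A sketch:

```python
import math

half_life_pu = 80.0                   # 244Pu half life in Myr
tau_pu = half_life_pu / math.log(2)   # mean life, ~115 Myr as in Table 1

# A factor-of-two lower ESS 244Pu/238U ratio requires an extra decay
# interval of tau * ln(2), which is one half life by definition.
shift = tau_pu * math.log(2)
print(round(shift))  # 80
```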
The aim of this paper is to investigate possible self-consistent solutions for the origin of the abundances of all the SLR nuclei heavier than iron observed to have been present in the ESS, including \iso{244}Pu. These observed abundances are represented by the four $r$-process ratios reported in Table~\ref{tab:intro}, as~well as the SLR isotopes produced by $slow$ neutron captures (the $s$ process, specifically \iso{107}Pd, \iso{135}Cs, and~\iso{182}Hf, as~discussed in~\cite{trueman22}). We start by discussing predictions from state-of-the-art models of the $r$ process for the three SLR nuclei of interest and their reference isotopes (Section~\ref{sec:yields}). Then, in Section~\ref{sec:galaxy}, we consider the temporal evolution of the \iso{244}Pu/\iso{238}U ratio in the ISM and discuss if there are solutions for its ESS value that are consistent with the abundances of the other SLR nuclei heavier than iron. Finally, in Section~\ref{sec:conclusions}, we present our summary, conclusions, and~suggestions for future~work.
\section{Nucleosynthesis~Calculations}
\label{sec:yields}
We consider the large set of $r$-process abundances published with Paper I and calculated with the nuclear network code WINNET%
\endnote{\url{https://zenodo.org/record/4446099\#.YgKVxWAo-mk} (accessed on 15 June 2022)} ~\cite{marius_yields,Winteler2012} and the nucleosynthesis network PRISM%
\endnote{\url{https://zenodo.org/record/4456126\#.YgKV0GAo-mk} (accessed on 15 June 2022)} ~\cite{nicole_yields,mumpower18}.
All the abundances reported and~used in this work are taken at 1 Myr after the nucleosynthetic event, i.e.,~they have not decayed completely, given that we are interested in SLR nuclei.
Table~\ref{tab:WINNET} lists all the WINNET models considered here and the relationship between the labels used in Paper I and the shorter labels used here. The~sites and the nuclear physics sets, with~all their relevant references, are described in detail in Paper I and Ref.~\cite{eichler19}. Here, we briefly recall that the nomenclature of the nuclear input is as follows: [D,J,Jm] denotes the mass model (D for Duflo Zuker, J for JINA reaclib, Jm for JINA with Marketin theoretical $\beta$ decays). The~``h'' indicates that the nuclear heating subroutine was turned on~\cite{freiburghaus99}, modifying the temperature evolution of the trajectory. Finally, [f1,f2,f4] represent three different fission fragment models.
There are, in total, 3 (top)$\times$3 (bottom) nuclear labels, i.e.,~nine sets of nuclear inputs (right side of the table), and~seven astrophysical sites (left side of the table), therefore, a~total of 63 WINNET models. The~tabulated abundances of the six isotopes of interest here (together with the Eu isotopes and \iso{232}{Th} for future reference) can be found in Supplementary Table~S1.
\begin{table}[H]
\caption{The correspondence %
of the astrophysical site and nuclear input labels are used here to indicate the WINNET models and %
those used in Paper I, where a full description of each site and nuclear model and relevant references can also be found. The~total mass ejected by each site is also indicated. \label{tab:WINNET}}
\begin{tabularx}{\textwidth}{CCC|CC}
\hline
\textbf{Site Label} & \textbf{Site Label (Paper I)} & \textbf{Mass Ejected (\msun)} & \textbf{Nuclear Label} & \textbf{Nuclear Label (Paper I)} \\
\hline
R1010 & NS-NS merger dyn. ejecta (R) & $7.64\times 10^{-3}$ & Dhf & DZ10 \\
R1450 & NS-BH merger dyn. ejecta (R) & $2.38\times 10^{-2}$ & Jhf & FRDM \\
Bs125 & NS-NS merger dyn. ejecta (B) & $5.50\times 10^{-4}$ & Jmhf & FRDM(D3C*) \\
FMdef & NS-NS merger disk ejecta 1 & $1.70\times 10^{-3}$ & 1 & Panov \\
FMs6 & NS-NS merger disk ejecta 2 & $1.27\times 10^{-3}$ & 2 & K \& T \\
FMv0.10 & NS-NS merger disk ejecta 3 & $4.06\times 10^{-3}$ & 4 & ABLA07 \\
Wmhd & MR SN & $6.72\times 10^{-3}$ & & \\
\hline
\end{tabularx}
\end{table}
In the case of the PRISM models, the~nomenclature is identical to that used in Paper I (Table S3), and~references therein. In~this case, four sites are considered (the dynamical ejecta of two NS--NS mergers and two NS--BH models), and~combined with ten different mass models, of~which four are also investigated using alternative $\beta$ decays (the ``Mkt'' label, here, corresponding to the ``D3C*'' label in Paper I). The~total is, therefore, 4 $\times$ 14 = 56 PRISM models. The~tabulated ratios of interest here can be found in Supplementary Table~S2.
The four ratios of interest from all the models are plotted in Figures~\ref{fig:ratios129247} and \ref{fig:ratios}. As~expected, ratios of isotopes of similar mass (Figure~\ref{fig:ratios}) are much less dependent on the model than the \iso{129}I/\iso{247}Cm ratio (Figure~\ref{fig:ratios129247}). Variations in those three ratios are typically of factors $\sim$2.5 to 3 in the WINNET models. Some of these models also recover the \iso{129}I/\iso{127}I production ratio of 1.35 derived from the $r$-process abundance of the stable \iso{129}Xe in the Solar System\endnote{This is calculated using the residual method, where the $r$-process abundance is the total solar abundance of \iso{129}Xe minus the predicted $s$-process abundance. This method cannot be applied to \iso{247}Cm and \iso{244}Pu as these isotopes do not have one daughter stable nucleus produced exclusively by their decay.}.
When considering the PRISM models, which explored a larger set of nuclear inputs, variations are somewhat larger, especially in the case of \iso{247}Cm/\iso{235}U (up to a factor of 10).
All the models show variations in the \iso{129}I/\iso{247}Cm ratio of up to three orders of magnitude (Figure~\ref{fig:ratios129247}, corresponding to Figure~S2 of Paper I, but~with the PRISM models also included). Of~the WINNET models, 13 (the nine FMdef models, the~three FMs6 Jmhf models, and~the FMs6 Jhf4 model) could match the observed \iso{129}I/\iso{247}Cm in the ESS. The~PRISM models all represent dynamical ejecta and, therefore, provide similar results to the corresponding WINNET models (Bs125, R1010, R1450). As~noted above, the PRISM models explore a larger set of nuclear inputs and only one out of those 14 choices (TF\_Mkt) provides a solution for the \iso{129}I/\iso{247}Cm ratio in all four~sites.
A %
quick estimate indicates that self-consistent values of the time elapsed from the last
$r$-process event (Section~\ref{sec:last}) would result from models with similar values of the
\iso{129}I/\iso{127}I and \iso{247}Cm/\iso{235}U ratios. This is because both the ESS value and
the $K$ values for \iso{129}I/\iso{127}I are roughly twice those of \iso{247}Cm/\iso{235}U.
Therefore, the~two differences cancel each other out in the calculation of the ISM ratio needed
to derive the time interval of the decay by comparison to the ESS ratio\endnote{This holds also when we consider that, while \iso{127}I is stable, \iso{235}U decays: for time intervals of the order of 100--200 Myr, this corresponds to a small effect on the \iso{247}Cm/\iso{235}U ratio of roughly 10--20\%.}. While there are no models with the same
values of the \iso{129}I/\iso{127}I and \iso{247}Cm/\iso{235}U ratio, when we consider the
uncertainties of the $\tau$ and ESS values, many solutions can be found for a much larger range
of relative ratios, as~shown in Section~\ref{sec:last}. This is because the time interval of
the decay is a function of the natural logarithm of the abundance ratio; therefore, variations in the relative ratios up to a factor of 5 (or even 10) result in a difference, by~subtraction, between the time intervals of $1.6\,\tau$ ($2.3\,\tau$), which corresponds to a variation of 30\% (50\%) only, i.e.,~well within the uncertainties. For~\iso{244}Pu/\iso{238}U, instead, it is more
difficult to make a quick estimate because of the very different $\tau$. In the next section,
we evaluate quantitatively, using the WINNET set, the values of the time elapsed between
production and incorporation into the first solids in the ESS to verify which models can match
the three constraints~simultaneously.
\unskip
\section{Galactic Evolution and Origin of the SLRs in the~ESS}
\label{sec:galaxy}
When considering the ESS data, we need to process the stellar abundances for their recycling within the ISM material from which the Sun formed. Such recycling implies a certain time delay, which is crucial to consider when analysing radioactive isotopes that decay within a given timescale. C\^ot\'e~et~al.~\cite{cote19PaperI,cote19PaperII} and Yag\"ue L\'opez~et~al.~\cite{yague21PaperIII} provided a methodology and tools to address the evolution in SLR nuclei in the ISM of the Galaxy, and we base our analysis on such~works.
First, we need to take into account the uncertainties related to galactic chemical evolution (GCE) itself over the whole lifetime of the Galaxy. These result in a factor $K$, by which any ratio predicted by nucleosynthesis calculations involving a stable or long-lived reference isotope needs to be multiplied. This factor takes into account the history of the Galaxy and how it influences the evolution, and~therefore the abundance, at the galactic time of the formation of the Sun, of~the stable or long-lived isotope that is used as reference for the ESS ratio. The~values of $K$ we calculated from the full GCE models~\cite{cote19PaperI} are reported in Table~\ref{tab:intro} for the three isotopic ratios considered here. Three values are provided: the middle value is the best-fit case and the other two reflect the GCE uncertainties, which provide a minimum and maximum value of the SLR to stable or long-lived isotope ratios.
Summarizing Table~1 of~\cite{cote19PaperI}, the~GCE parameters that mostly affect the value of $K$ are those related to the first and second infall episodes ($A_1$ and $A_2$) and the star formation efficiency ($f_{star}$). The~observational constraints whose uncertainties affect $K$ the most are the current inflow rate and mass of gas. Due to the feedback between all these quantities, there is not a simple relation with the value of $K$. For~example, the~$K_{\rm max}$ values are found for the highest values of $A_2$, $f_{star}$, and~inflow rate, together with the lowest values of $A_1$ and mass of gas.
The reasons for this behaviour are explained in detail in~\cite{cote19PaperI}.
We found that, if the reference isotope is stable, as~in the case of \iso{127}I, the~best-fit value of $K$ is 2.3. When the reference isotope is unstable and long-lived (such as \iso{235,238}U), the~value of $K$ decreases with the half life of the nucleus because the abundance is affected by a shorter time scale within the full history of the Galaxy. For~example, in~the case of \iso{235}U, with~a half life of $\sim 1$ Gyr, i.e.,~roughly ten times shorter than the age of the Galaxy, the~$K$ factor decreases by roughly a factor of two.
In the case of \iso{129}I/\iso{247}Cm, there are no values of $K$ to be applied (in other words, $K$ is always equal to 1) because these two isotopes are both short-lived and insensitive to the past history of our Galaxy. This is one of the several advantages of using such a ratio, as~discussed in detail in Paper~I.
The other potential problem is that injection of SLR nuclei into the ISM by the stellar objects that produce them is not continuous, because stellar ejection events happen in correspondence to very specific discrete events, e.g.,~supernova explosions or neutron--star %
mergers. For~stable nuclei, this effect is not significant because their ESS abundances are primarily defined by the total number of events that enriched the pre-solar nebula, rather than by the exact times at which the events occurred. However, for~SLR nuclei, this effect can completely control their abundances in the ISM since they freely decay between events. One way to account for this is to consider the average of the interval $\delta$ between additions to a given parcel of ISM gas from events of a given type, and~compare it to the mean life $\tau$ of the SLR nuclei produced by this type of event. Therefore, the~$\tau/\delta$ ratio is the crucial parameter to consider, or~equivalently $\tau/\gamma$, where $\gamma$ is the time interval between the births of the event progenitors\endnote{Since $\gamma \simeq \langle\delta\rangle$ (see detailed discussion in~\cite{cote19PaperII}), for our purposes here $\gamma$ will be considered equivalent to $\delta$.}. We do not know a priori the value of $\tau/\gamma$ for any SLRs and their sources because it depends on uncertain effects such as diffusive transport in the ISM, supernova energetics in carrying material through the ISM, the spatial distribution of the events, and~the distance of the events from the pre-solar ISM parcel of gas (see, e.g.,~\cite{banerjee22} and Wehmeyer~et~al., in~prep).
Our approach has therefore been to first develop a general framework and then test its implications and derive its predictions for different values \mbox{of $\tau/\gamma$. }
C\^ot\'e~et~al.~\cite{cote19PaperII} found that, if $\tau/\gamma>2$, then we can treat the injection of SLRs from such events as continuous (hereafter Regime I). We just need to add an uncertainty resulting from the statistical spread of the SLR abundance. If, instead, $\tau/\gamma<0.3$, the~most likely scenario is that the ESS abundance of the given SLR came from one last event only, without~any memory of the previous events (hereafter Regime III). For~values of $\tau/\gamma$ between 0.3 and 2, the~SLR abundance carries the memory of a few events (hereafter Regime II). Finally, we note that considering SLR ratios such as the \iso{129}I/\iso{247}Cm ratio in the last row of Table~\ref{tab:intro} significantly reduces the uncertainties resulting from the discrete nature of stellar ejections, especially for cases when the half lives are comparable, as~discussed in general in Ref.~\cite{yague21PaperIII}, and~in detail for the $r$-process SLRs in Paper~I.
In the following, we use the WINNET abundances to derive more information on the early Solar System and the history of presolar matter from the three $r$-process isotopes considered here in different possible scenarios.
We recall that the main difference between \iso{129}I and \iso{247}Cm, on~the one hand, and~\iso{244}Pu, on~the other, is that the half life of the latter is roughly five times longer than those of the former two. Therefore, the~$\tau/\gamma$ criterion needs to be applied differently, even if all three isotopes are exclusively produced by $r$-process~events.
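The three regimes just described can be captured in a small helper function. This is a sketch; the handling of the exact boundary values 0.3 and 2 is our own choice:

```python
def regime(tau, gamma):
    """Classify the SLR enrichment regime from the tau/gamma criterion.

    tau   : mean life of the SLR (Myr)
    gamma : time interval between the births of the event progenitors (Myr)
    """
    x = tau / gamma
    if x > 2:
        return "I"    # continuous injection, plus a statistical spread
    if x < 0.3:
        return "III"  # one last event only
    return "II"       # memory of a few events

# 244Pu (tau ~ 115 Myr) vs 129I (tau ~ 22.6 Myr) for the same gamma:
print(regime(115.0, 100.0), regime(22.6, 100.0))  # II III
```

This makes concrete why the criterion applies differently to \iso{244}Pu: for the same event spacing, its longer mean life can place it in a different regime than \iso{129}I and \iso{247}Cm.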
\subsection{One (Regime III) or Few (Regime II) Events and Time Elapsed from Last Event}
\label{sec:last}
In the case of the two $r$-process SLR \iso{129}I and \iso{247}Cm, as~discussed and presented in detail in Paper I, we can justify statistically the assumption that their abundances in the ESS originated from one last event only, {(Regime III %
)}, which occurred roughly 100-200 Myr before the formation of the first solids in the Solar System. The~criterion $\tau/\gamma<0.3$ under which {Regime III} is valid {also for \iso{244}Pu is} that $\gamma$, or~equivalently $\delta$ in the equation, is greater than 345 Myr. Therefore, possible solutions for this scenario {are} those for which $\delta$ is around or larger than this value. To~evaluate the ISM \iso{244}Pu/\iso{238}U ratio under the assumption that \iso{129}I, \iso{247}Cm, and~\iso{244}Pu in the ESS originated from one event, we then use Eq.~S2 of Paper I, as~performed in that paper for \iso{129}I/\iso{127}I and \iso{247}Cm/\iso{235}U, and~the values of $K$ reported in Table~\ref{tab:intro}. Some examples of {the calculation of the time from the last event} are shown in Figure~\ref{fig:lastevent}. There, self-consistent solutions are represented by the overlapping areas of the three different colored bands, each representing one of the three SLR isotopes and their uncertainties. {The trend with $\delta$ of the time elapsed calculated using \iso{244}Pu is steeper than those calculated using the other two isotopes. This is due to its much longer $\tau$ value and the fact that the time elapsed is a linear function of $\tau$.}
{Figure~\ref{fig:lastevent} also shows some examples of possible solutions for Regime II, which corresponds to} $0.3<\tau/\delta<2$, i.e.,~$\delta$ = 68\endnote{This lower limit is defined such that $\tau/\delta<0.3$ for \iso{129}I and \iso{247}Cm, but~it is close to the 57.5 Myr value defined by $\tau/\delta>2$ for \iso{244}Pu.}- 345 Myr. {In this case,} \iso{244}Pu originated from {a few discrete} events and the lower the value of $\delta$, the~larger the number of events. The~last event would have contributed only a fraction, {$1 - e^{(-\delta/\tau)}$ (assuming a constant production factor),} of the ESS abundance of \iso{244}Pu. {Therefore, at~the lower limit of Regime II, $\delta=68$ Myr, the~last event contributed 45\% of the ESS abundance of \iso{244}Pu.}
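The quoted 45\% contribution of the last event can be checked directly from the expression above; a minimal sketch, assuming a \iso{244}Pu half life of 80 Myr (so that $\tau = 80/\ln 2 \simeq 115$ Myr):

```python
import math

# Fraction of the ESS 244Pu abundance contributed by the last event,
# f = 1 - exp(-delta/tau), assuming a constant production factor.
half_life_pu244 = 80.0                # Myr (assumed value for this sketch)
tau = half_life_pu244 / math.log(2)   # mean-life, ~115 Myr
delta = 68.0                          # Myr, lower limit of Regime II
f_last = 1.0 - math.exp(-delta / tau)
print(f"{f_last:.0%}")                # 45%, matching the value in the text
```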
Overall, the~WINNET set comprises 63 sets of models, and~the GCE model provides three values of $K$ for a total of 189 possibilities. For~the first three ratios of Table~\ref{tab:intro}, we found {that
92\% of the models can provide overlapping solutions: 62 of those with $K_{\rm min}$, 60 of those with $K_{\rm best}$, and~52 of those with $K_{\rm max}$.}
Therefore, solutions are common, {partly thanks to the degree of freedom provided by the relatively free parameter $\delta$}. Times of the last event {are} in the range 100-200 Myr as derived in Paper I{; this is expected, given that the analysis presented here is} just an extension of that presented there, to check if the \iso{244}Pu/\iso{238}U ratio could also be explained. {A new result is that these elapsed times are lower in Regime II than in Regime III, due to the steeper trend with $\delta$ of those calculated using \iso{244}Pu.}
If we include the requirement that the \iso{129}I/\iso{247}Cm ratio should be between 254 and 622\endnote{Note that the evaluation of the ESS ratio of \iso{129}I/\iso{247}Cm depends on the time from the last event itself, given that its $\tau_{\rm ratio}$ is variable due to the uncertainties in $\tau_{\rm 129}$ and $\tau_{\rm 247}$, as~discussed in Section~\ref{sec:intro}. The~ESS values reported in Table~\ref{tab:intro} and used here were calculated assuming a time from last event in the range 100-200 Myr and composing all the uncertainties, as~discussed in detail in the supplementary material of Paper I. A~more precise analysis would instead use the range of times from the last event for each model solution to derive the range of corresponding ESS \iso{129}I/\iso{247}Cm ratios, and~find if the model matches such a specific range. However, given that, as~noted above, most solutions provide times in the 100-200 Myr range, this more accurate treatment would not change our results.}, the~number of solutions becomes much more restricted.
{In fact,} the \iso{129}I/\iso{247}Cm ratio is a much more stringent constraint because
the two isotopes are very far apart in mass and therefore located in very different regions of the nuclide chart. Such relative abundances are more sensitive to the general features of the process and its astrophysical site (such as the amount of free neutrons) as well as the uncertainties in the nuclear model, than~the ratios of isotopes that are closer to each other in mass (see Figure~\ref{fig:ratios}).
As shown in Figure~S2 of Paper I, the~simulations that produce the best matches to the observations are those dominated by moderately neutron-rich ejecta (in the specific case of the WINNET models, these correspond to the nine FMdef and the three FMs6 Jmhf models).
Out of these models, we find that six out of the nine FMdef models (the three Dhf plus the \mbox{three Jhf cases}) and one of the FMs6 models (Jmhf4) can also account for \iso{244}Pu/\iso{238}U {when using any of the three values of $K$, for~21 solutions in total. The~main difference between using $K_{\rm min}$ and $K_{\rm best}$ versus using $K_{\rm max}$ is that the former two values provide solutions for $\delta$ values typical of Regime III, while the latter corresponds to solutions within Regime II.}
(The FMs6 Jmhf1 and Jmhf2 cases produce \iso{129}I/\iso{247}Cm ratios of 236 and 242, respectively, just outside the required range). In~summary, {more than half (21)} of all the 36 possible models that match the \iso{129}I/\iso{247}Cm (12 models $\times$ 3 values of $K$ = 36) provide a global solution for all the four isotopic~ratios.
Finally, we note that, if the ESS \iso{244}Pu/\iso{238}U ratio was lower than the value used here, the~green shaded area in Figure~\ref{fig:lastevent} would shift upwards, for~example, by~80 Myr if the ESS ratio was twice as low, due to a longer decay time needed to match the lower ESS value. This would remove {most of the Regime III} solutions and shift {the Regime II} solutions to lower values of $\delta$.
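The $\sim$80 Myr shift quoted above follows from free decay alone: matching an ESS ratio twice as low requires one extra factor $e^{-\Delta t/\tau}=1/2$, i.e., $\Delta t=\tau\ln 2$, exactly one \iso{244}Pu half life, independently of the starting ISM ratio. A quick check (the ISM and ESS ratios below are placeholders for illustration, and the slow decay of \iso{238}U is neglected):

```python
import math

# Why halving the ESS 244Pu/238U ratio shifts the time axis by ~80 Myr:
# the extra decay time is tau * ln(2), i.e. one half life, regardless
# of the (placeholder) starting ISM ratio.
tau_pu = 80.0 / math.log(2)                     # Myr; 244Pu mean-life
r_ism = 0.3                                     # hypothetical ISM ratio
t_nominal = tau_pu * math.log(r_ism / 7.0e-3)   # hypothetical ESS ratio
t_halved  = tau_pu * math.log(r_ism / 3.5e-3)   # ESS ratio twice as low
print(round(t_halved - t_nominal, 3))           # 80.0 Myr, one half life
```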
\subsection{Steady-State Equilibrium (Regime I) and Isolation Time}
\label{sec:steady}
{In %
Regime I,} $\tau/\delta$ is $> 2$, {corresponding} to $\delta < 57.5$ Myr, and~\iso{244}Pu evolves in steady-state equilibrium. In~this case, \iso{129}I and \iso{247}Cm would be in {Regime II; the~time elapsed from the last event would decrease with $\delta$ (as discussed in Section~\ref{sec:last}), and~reach roughly 80-130 Myr}. %
The steady-state regime for \iso{129}I and \iso{247}Cm would require, instead, roughly $\delta < 11$ Myr {(for this value of $\delta$, at~the limit of Regime II, the~last event contributed roughly 40\% of their ESS abundances)}. This can be excluded with reasonable confidence because it is the typical value obtained for core-collapse supernovae, which are much more frequent than the currently accepted $r$-process~sources.
If \iso{244}Pu was in steady-state equilibrium, then we can use Equation~(11) of~\cite{lugaro18rev} (where $K=k+1$) and consider the production ratios from the $r$-process models as a continuous wave of enrichment. In~this case, the~time interval needed to decay the ISM ratio to its corresponding ESS ratio is
an isolation time rather than a time from the last event. This time interval can then be compared to the isolation time obtained from the $s$-process isotopes, \iso{107}Pd, \iso{135}Cs, and~\iso{182}Hf, under~the assumption of the same regime, i.e.,~$\delta < 5-6$ Myr for the $s$-process events in the Galaxy, which correspond to asymptotic giant branch (AGB) stars of initial mass $\simeq2-4$ \msun\ ~\cite{trueman22}. For~the three values of $K$ to be used when studying SLR/stable isotope ratios (i.e., 1.6, 2.3, 5.7, as~reported in Table~\ref{tab:intro} for \iso{129}I/\iso{127}I), the~isolation times reported by~\cite{trueman22} for the $s$-process SLR isotopes are 9-12, 10-16, and~18-26 Myr, respectively. We also need to consider the statistical uncertainty due to stochasticity, as~discussed in~\cite{cote19PaperII}. We can use here the uncertainties reported in Table~3 of~\cite{cote19PaperII}, for~the specific case $\tau/\gamma=3.16$ and $\gamma$=31.6 Myr, which are close to the maximum uncertainty that would correspond to the case of \iso{244}Pu in this regime. The~error is almost symmetric and corresponds to variations in the ISM ratio of +1.16 and $-$0.84. These translate into error bars to be applied to each isolation time of $+$17 and $-$19~Myr.
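The logic of this comparison can be sketched numerically. The schematic steady-state form $R_{\rm ISM}=P\,K\,\tau/T_{\rm Gal}$ used below is a simplified rendering of the formula cited above, and all numerical inputs, including the production ratio $P$ and the ESS ratio, are illustrative assumptions rather than the values adopted in the paper:

```python
import math

# Schematic isolation-time estimate in Regime I (steady state).
# R_ism = P * K * tau / T_gal is a simplified rendering of the formula
# cited in the text; P, K, T_gal and the ESS ratio are illustrative
# assumptions, not the paper's adopted values.
def isolation_time(prod_ratio, K, tau, T_gal, r_ess):
    r_ism = prod_ratio * K * tau / T_gal   # steady-state ISM ratio
    return tau * math.log(r_ism / r_ess)   # free decay from R_ism to R_ess

tau_pu = 80.0 / math.log(2)                # Myr; 238U decay neglected here
t_iso = isolation_time(prod_ratio=0.25, K=2.3, tau=tau_pu,
                       T_gal=8.4e3, r_ess=7.0e-3)
print(round(t_iso, 1))                     # ~14 Myr with these assumed inputs
```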
In the case of $K_{\rm max}$, no solutions are present because all the isolation times derived from \iso{244}Pu are in the range 61-213 Myr, much higher than the range derived for the $s$-process SLR nuclei of 18-26 Myr. This is controlled by the large value of $K_{\rm max}$ combined with the long half life of \iso{244}Pu. In~the case of $K_{\rm min}$ and $K_{\rm best}$, instead, there are 18 and 14 possible solutions, respectively, which have an overlap with the ranges of isolation time derived from the $s$-process SLR nuclei. These solutions are all obtained from the models that produce \iso{244}Pu/\iso{238}U abundance ratios in the range 0.19--0.34. As~shown in Figure~\ref{fig:ratios}, these correspond mostly to the WINNET models run with the Jmhf nuclear inputs (out of the total 32 solutions, 23, i.e.,~72\%, are Jmhf solutions) and the six NS--NS merger PRISM models with SLY4, TF\_Mkt, and~UNEDF0. For~the other nuclear input choices, instead, only specific astrophysical sites result in \iso{244}Pu/\iso{238}U abundance ratios in the required~range.
Out of the 12 models that match the three ratios that involve \iso{129}I and \iso{247}Cm, seven of them also provide solutions for the isolation time from \iso{244}Pu/\iso{238}U compatible with the $s$-process SLR isotopes. However, as~mentioned above for the value of $\delta$ considered here, these two SLR isotopes may have more than one event contributing to their ESS abundances; therefore, such constraints become less strong (see also~\cite{banerjee22}).
We should also consider the case where \iso{244}Pu is in steady-state, but~the $s$-process SLRs came from one last event, which requires roughly $\delta >30$ Myr for $s$-process events in the Galaxy. In~this case, the~last $s$-process event was identified to have occurred 25 Myr before the formation of the first solids~\cite{trueman22}; therefore, the~isolation time from \iso{244}Pu/\iso{238}U is simply constrained to be smaller than this value. Also in this case, no solutions exist with $K_{\rm max}$, while there are 13 and 7 more solutions for $K_{\rm min}$ and $K_{\rm best}$, respectively. Most of these solutions overlap as they correspond to the same $r$-process models but different values of $K$; therefore, they correspond to the same range of \iso{244}Pu/\iso{238}U abundance ratios and nuclear models as reported above. The~isolation time from \iso{244}Pu in this case can vary more freely and there are a few models that, given the uncertainties, provide values down to zero, which is not a useful constraint. Finally, we note that if the $s$-process SLRs originated from a few events (i.e., $5-6<\delta <30$ Myr), then the time from the last event would increase and a few more models could produce an isolation time lower than this value. A~more detailed statistical analysis would be needed in this~case.
Finally, we note that if the ESS value of the \iso{244}Pu/\iso{238}U ratio was twice as high as the value considered here, we would need to add $+80$ Myr to every isolation time, which would make it impossible to find a solution consistent with the origin of the $s$-process \mbox{SLR isotopes.}
\section{Summary and~Conclusions}
\label{sec:conclusions}
We presented and analysed the relative production of the short-lived and long-lived $r$-process isotopes \iso{129}I, \iso{235}U, \iso{238}U, \iso{244}Pu, and~\iso{247}Cm and the stable \iso{127}I in a large set of 119 $r$-process models from two different sets calculated with the WINNET and PRISM frameworks. We then investigated if it is possible to find solutions for the origin of the ESS abundance of \iso{244}Pu that provide production at the source and time intervals (either from the last event or from the time of the isolation, depending on the $\tau/\delta$ regime) compatible with those of the other $r$-process and $s$-process SLR isotopes. {A summary of the different possibilities, solutions, and~derived time intervals is shown in Table~\ref{tab:summary}. In~brief:}
\begin{table}[H]
\caption{Summary of the different regime combinations for the different SLRs, their corresponding $\delta$ values in Myr ($\delta_r$ and $\delta_s$, for~the $r$- and $s$-process events, respectively), $r$-process model solutions, and~elapsed time ($t_{{\rm e},r}$ and $t_{{\rm e,}s}$, for~the last $r$- and $s$-process event, respectively) or isolation time ($t_{\rm i}$) in Myr. Notes: $^{a}$ For all the four ratios in Table~\ref{tab:intro}: 6 = FMdef(3xDhf,3xJhf) + FMs6Jmhf4, all valid for each of the three values of $K$. We did not check the PRISM %
models for these regimes. $^{b}$ Of which 23 have Jmhf nuclear input. $^{c}$ The two NS--NS merger models with the three nuclear inputs: SLY4, TF\_Mkt, and~UNEDF0. \label{tab:summary}}
\begin{adjustwidth}{-\extralength}{0cm}
\setlength{\cellWidtha}{\fulllength/4-2\tabcolsep+0.9in}
\setlength{\cellWidthb}{\fulllength/4-2\tabcolsep-0.3in}
\setlength{\cellWidthc}{\fulllength/4-2\tabcolsep-0.3in}
\setlength{\cellWidthd}{\fulllength/4-2\tabcolsep-0.3in}
\scalebox{1}[1]{\begin{tabularx}{\fulllength}{>{\centering\arraybackslash}m{\cellWidtha}>{\centering\arraybackslash}m{\cellWidthb}>{\centering\arraybackslash}m{\cellWidthc}>{\centering\arraybackslash}m{\cellWidthd}}
\toprule
\textbf{Regime} & \textbf{$\delta$ (Myr)} & \textbf{Solutions} & \textbf{Times (Myr)} \\
\midrule
III for \iso{129}I, \iso{247}Cm, and~\iso{244}Pu & $\delta_r$ $>$ 345 & \multirow{2}{*}{7 WINNET$^{a}$} & \multirow{2}{*}{$t_{{\rm e},r}$ $\simeq$ 100--200}\\
III for \iso{129}I and \iso{247}Cm and II for \iso{244}Pu & 68 $<$ $\delta_r$ $<$ 345 & & \\
\midrule
II for \iso{129}I and \iso{247}Cm and I for \iso{244}Pu, \iso{107}Pd, and~\iso{182}Hf & 11 $<$ $\delta_r$ $<$ 68, $\delta_s$ $<$ 5 & 32 WINNET$^{b}$, 6 PRISM$^{c}$, 0 for $K_{\rm max}$ & \multirow{2}{*}{$t_{{\rm e},r}$ $\simeq$ 80--130, $t_{\rm i}$ $\simeq$ 9--16} \\
OR III for \iso{107}Pd and \iso{182}Hf & 11 $<$ $\delta_r$ $<$ 68, $\delta_s$ $>$ 30 & 20 more than above & $t_{{\rm e},s}$$\simeq$ 25, $t_{\rm i}$ > 0 \\
\bottomrule
\end{tabularx}}
\end{adjustwidth}
\end{table}
\begin{enumerate}
\item In {Section~\ref{sec:last} (top section of Table~\ref{tab:summary}), we considered Regimes II and III for} \iso{244}Pu, {corresponding to $\delta > 68$ Myr and Regime III for} \iso{129}I and \iso{247}Cm. {More than half} of the WINNET models that were already shown to reproduce the three ratios that involve \iso{129}I and \iso{247}Cm in Paper I, also provide a self-consistent solution for \iso{244}Pu. These models all correspond to the NS--NS merger disk cases dominated by moderately neutron-rich ejecta.
\item In {Section~\ref{sec:steady} (bottom section of Table~\ref{tab:summary}), we considered Regime I for \iso{244}Pu, i.e.,~$\delta < 68$ Myr}, where this SLR reaches a steady-state value in the ISM. It is also possible to find a significant number of $r$-process models (mostly corresponding to the Jmhf nuclear input) that provide solutions for the ESS \iso{244}Pu abundance compatible with those of the SLR isotopes also produced by the $s$ process: \iso{107}Pd and \iso{182}Hf (and the current ESS upper limit of \iso{135}Cs). However, no solutions exist {in Regime I for \iso{244}Pu} if the ESS value of \iso{244}Pu was twice as high as the value used here or if the Milky Way model was represented by $K_{\rm max}$.
\end{enumerate}
We cannot determine if the solution to the origin of \iso{244}Pu in the ESS is 1. or 2. above, {and which implications on the timescales are valid,} since we still do not know how far material from $r$-process sources can travel, and, therefore, how many parcels of the ISM are affected by each of these events and the value of $\delta$. {We note, however, that recent hydrodynamical models aimed at calculating how far material travels after being ejected by a hypernova predict relatively short distances~\cite{amend22}, which would support large $\delta$ values for the $r$-process events.}
{In any case, we have established that within Point 1., WINNET solutions within the NS--NS disk ejecta favour the Dhf and Jhf nuclear models. Within~Point 2., all} solutions exclude the case of a Milky Way Galaxy with $K_{\rm max}$, therefore restricting {the isolation time} to 9-16 Myr {(if \iso{107}Pd and \iso{182}Hf are also in Regime I),} still supporting the hypothesis that the Sun was born in a massive, long-lived molecular cloud. We can also conclude that a much lower value of the \iso{244}Pu/\iso{238}U ratio in the ESS than that reported in Table~\ref{tab:intro} would be impossible to reconcile {within Regime I, and~would therefore support Regimes II and III}. New, future experiments and analysis are needed to confirm the ESS \iso{244}Pu/\iso{238}U ratio.
\vspace{6pt}
\supplementary{The following supporting information can be downloaded at: \linksupplementary{s1},
Table S1: WINNET-abundances.txt; Table S2: PRISM-ratios.txt %
}
\authorcontributions{All the authors have contributed to the conceptualization, methodology, software, validation, formal analysis,
and~investigation. The~original draft was prepared by M.L.,
with Figure~\ref{fig:ratios129247} contributed by B.S., and~Figure~\ref{fig:ratios} by A.Y.L. A.Y.L., B.S.,
B.C., M. Pet\H{o}, N.V., B.W., and M. Pignatari
contributed to the methodology, as well as the review and editing of the paper. M.L. contributed to the supervision, project
administration, and~funding acquisition for the project. All authors have read and agreed to the published
version of the~manuscript.}
\funding{This research was funded by ERC via CoG-2016 RADIOSTAR (Grant Agreement 724560). The~work of AYL
was supported by the US Department of Energy through the Los Alamos National Laboratory. Los Alamos
National Laboratory is operated by Triad National Security, LLC, for~the National Nuclear Security
Administration of U.S.\ Department of Energy (Contract No.\ 89233218CNA000001). BC acknowledges support
of the National Science Foundation (USA) under grant No. PHY-1430152 (JINA Center for the Evolution of
the Elements).} %
\institutionalreview{Not applicable.}
\informedconsent{Not applicable.}
\dataavailability{Not applicable.}
\acknowledgments{We thank Marius Eichler, Almudena Arcones, and~Thomas Rauscher for providing us with the WINNET models and their results of $r$-process nucleosynthesis. We thank Matthew Mumpower, Trevor Sprouse, and~Rebecca Surman for their contributions to the PRISM models. We also thank Jamie Gilmour for discussion on the ESS data.
MP acknowledges support of NuGrid from NSF grant PHY-1430152 (JINA Center for the Evolution of the Elements) and STFC (through the University of Hull's Consolidated Grant ST/R000840/1), and~access to {\sc viper}, the~University of Hull High Performance Computing Facility. MP acknowledges the support from the ``Lendület-2014'' Programme of the Hungarian Academy of Sciences (Hungary). We thank the ChETEC COST Action (CA16117), supported by COST (European Cooperation in Science and Technology), and~the ChETEC-INFRA project funded from the European Union’s Horizon 2020 research and innovation programme (grant agreement No 101008324), and~the IReNA network supported by NSF AccelNet.}
\conflictsofinterest{The authors declare no conflict of interest. The~funders had no role in the design of the study; in the collection, analyses, or~interpretation of data; in the writing of the manuscript, or~in the decision to publish the~results.}
\newpage
\abbreviations{Abbreviations}{
The following abbreviations are used in this manuscript:\\
\noindent
\begin{tabular}{@{}ll}
ESS & early Solar System\\
GCE & galactic chemical evolution\\
ISM & interstellar medium \\
NSM & neutron star merger \\
$r$ process & \emph{rapid} neutron-capture process \\ %
$s$ process & \emph{slow} neutron-capture process \\ %
SLR & short-lived radioactive
\end{tabular}}
\begin{adjustwidth}{-\extralength}{0cm}
\printendnotes[custom]
\reftitle{References}
\end{adjustwidth}
|
Title:
First post-Newtonian $N$-body problem in Einstein-Cartan theory with the Weyssenhoff fluid: equations of motion |
Abstract: We derive the equations of motion for an $N$-body system in the
Einstein-Cartan gravity theory at the first post-Newtonian order by exploiting
the Weyssenhoff fluid as the spin model. Our approach consists in performing
the point-particle limit of the continuous description of the gravitational
source. The final equations provide a hint for the validity of the effacing
principle at 1PN level in Einstein-Cartan model. The analogies with the general
relativistic dynamics involving the macroscopic angular momentum are also
discussed.
| https://export.arxiv.org/pdf/2208.09839 |
\title{First post-Newtonian $N$-body problem in Einstein-Cartan theory with the Weyssenhoff fluid: equations of motion}
\titlerunning{First post-Newtonian $N$-body problem in Einstein-Cartan theory with the Weyssenhoff fluid: equations of motion}
\author{Emmanuele Battista\thanksref{e1,e2,addr1}
\and
Vittorio De Falco\thanksref{e3,addr2,addr3}}
\thankstext{e1}{e-mail: [email protected]}
\thankstext{e2}{e-mail: [email protected]}
\thankstext{e3}{e-mail: [email protected]}
\authorrunning{Battista \& De Falco (2022)}
\institute{Department of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Vienna, Austria \label{addr1}
\and
Scuola Superiore Meridionale, Largo San Marcellino 10, 80138 Napoli, Italy\label{addr2}
\and
Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, Complesso Universitario di Monte S. Angelo, Via Cintia Edificio 6, 80126 Napoli, Italy \label{addr3}}
\date{Received: \today / Accepted: }
\section{Introduction}
\label{sec:intro}
The \emph{$N$-body problem} consists in describing the evolution of $N$ massive objects under their mutual gravitational attraction. If we regard the gravitational interaction \emph{\`a la Newton}, we need to solve two issues: (1) determining the equations of motion of the interacting extended bodies (represented by partial integro-differential equations); (2) solving this problem to infer their trajectories. This complex pattern can be drastically simplified if the $N$ bodies remain \emph{mutually well separated} (i.e., their separations are much greater than their typical sizes). This configuration allows one to neglect, to a good approximation, the contributions of the quadrupole and higher-order multipole moments of the bodies to their external gravitational fields. Therefore, the extended objects can be modelled as $N$ point-like masses via the \emph{point-particle procedure} \cite{Poisson-Will2014}. The dynamics is then ruled by ordinary differential equations, and numerical approaches are of fundamental importance to follow the full motion \cite{Aarseth2009}. For particular configurations it is possible to determine semi-analytical or even analytical solutions \cite{Meyer1981,Wang1991,Pupyshev1994,Goldstein2002}.
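As a concrete illustration of the point-particle idealization, the Newtonian accelerations read $\boldsymbol{a}_i = \sum_{j\neq i} m_j (\boldsymbol{x}_j-\boldsymbol{x}_i)/r_{ij}^3$; a minimal sketch in units with $G=1$ (not taken from the references above):

```python
# Minimal point-mass Newtonian N-body accelerations (the "point-particle"
# idealization described above); units with G = 1, for illustration only.
def accelerations(masses, positions):
    n = len(masses)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # a_i += m_j (x_j - x_i) / |x_j - x_i|^3
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r = sum(d * d for d in dx) ** 0.5
            for k in range(3):
                acc[i][k] += masses[j] * dx[k] / r**3
    return acc

# Two equal masses on the x-axis attract each other symmetrically
a = accelerations([1.0, 1.0], [[-0.5, 0, 0], [0.5, 0, 0]])
```

Integrating these ordinary differential equations numerically is what replaces the original partial integro-differential problem once the bodies are well separated.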
The situation completely changes when gravity is framed in general relativity (GR), because the following complications arise: (1) the \emph{non-linear geometric structure of GR}, which can spoil the well-posed mathematical formulation of the problem \cite{Ehlers1980,Bruhat1969,Bruhat2014}; (2) the \emph{self-referential controversy}, manifesting in the fact that the equations of motion are contained in the gravitational field equations \cite{Maggiore:GWs_Vol1,Blanchet2014}; (3) the \emph{finite propagation speed of the gravitational interaction} (contrary to the action at a distance of Newtonian physics), which yields retarded partial integro-differential equations \cite{Maggiore:GWs_Vol1,Blanchet2014,Poisson-Will2014}.
These conceptual and mathematical difficulties can be overcome if we exploit \emph{approximation schemes} and \emph{break the general covariance of the GR theory} by working in special classes of coordinate systems, e.g., harmonic coordinates. Simplifications occur if we assume that the \emph{gravitational source is post-Newtonian (PN)}, namely slowly moving, weakly self-gravitating, and weakly stressed \cite{Maggiore:GWs_Vol1,Blanchet2014}. This hypothesis makes it possible to apply in the near zone (which covers the whole gravitational source) the \emph{PN approximation method}, where we expand the model parameters in terms of $1/c$ \cite{Maggiore:GWs_Vol1,Blanchet2014}, leading to the appearance of \emph{static potentials} without retardation effects. Finally, if the bodies are \emph{mutually well separated}, we can apply the \emph{point-particle limit}, pursuing the same strategy as in classical physics. In this skeletonization process, the integrals underlying basic quantities exhibit divergences exactly at the location of the particles. However, \emph{self-field regularization methods} (represented by Hadamard and dimensional techniques) are employed to heal the infinities (see Ref. \cite{Blanchet2014} and references therein for details).
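The way retardation effects disappear in the near zone can be made concrete with the standard PN bookkeeping (a textbook expansion, not specific to this paper): a retarded potential sourced by $\sigma$ is Taylor-expanded in powers of $1/c$,

```latex
% Near-zone expansion of a retarded potential (schematic)
\begin{align}
V(t,\boldsymbol{x}) &= \int \mathrm{d}^3x'\,
\frac{\sigma(t-\vert\boldsymbol{x}-\boldsymbol{x}'\vert/c,\boldsymbol{x}')}
{\vert\boldsymbol{x}-\boldsymbol{x}'\vert} \nonumber \\
&= \sum_{n\geq 0} \frac{(-1)^n}{n!\,c^n}\,
\frac{\partial^n}{\partial t^n} \int \mathrm{d}^3x'\,
\vert\boldsymbol{x}-\boldsymbol{x}'\vert^{\,n-1}\,\sigma(t,\boldsymbol{x}') ,
\end{align}
```

so that each term of the series is an instantaneous (Poisson-like) potential evaluated at the current time $t$, which is the origin of the static potentials mentioned above.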
The PN approximation procedure implies that, after having chosen a coordinate system, \emph{the test particles' motion occurs in the Newtonian absolute Euclidean space} \cite{Blanchet2014}, thus reducing to the classical physical framework. Nevertheless, \emph{the equations of motion still preserve their relativistic nature}, since they remain invariant under a global PN-expanded Lorentz transformation, admit a correct perturbative limit when $N-1$ masses tend to zero, and are conservative when gravitational radiation-reaction effects are nullified \cite{Blanchet2014}.
The PN approximation scheme was pioneered in 1917 by Lorentz and Droste, who worked out the first post-Newtonian (1PN) corrections to the Newtonian dynamics within GR \cite{Droste1917,Lorentz1937}. In 1938, Einstein, Infeld, and Hoffmann (EIH) \cite{Einstein1938,Infeld1960} re-derived these results for $N$ bodies by making use of the \emph{surface integral method}. Only in 1985, Damour and Deruelle provided the first analytical solution, expressed in a quasi-Newtonian form, to the two-body problem at the 1PN level \cite{Damour1985}. Since these first solid achievements, the theoretical progress on the GR dynamics has attained very high PN orders via various methods. The works can be classified for non-spinning \cite{Blanchet2009a,Marchand2017,Bini2020r,Bern2021dq} and spinning \cite{Bohe2015a,Levi_2016,Cho2021} compact binary systems.
All these developments find crucial applications in: the motion of $N$ point-like bodies for the description of planets' dynamics in the Solar System, including also the related GR effects \cite{Einstein1938,Will1993}; the gravitational radiation-reaction force in binary pulsars \cite{Weisberg2005,Weisberg2016}; the emission of gravitational waves from inspiralling compact binaries up to very high PN orders \cite{Schmidt2020,Cho2021,Cho2022}.
In this article, we are motivated to study the $N$-body problem in the Einstein-Cartan (EC) theory, an extension of GR where, besides the curvature, which is related to the mass-energy distribution, there is also the torsion tensor, which is linked with the microscopic spin density \cite{Hehl1976_fundations}. Hereafter, the term \qm{spin} will refer to the quantum intrinsic angular momentum of bodies. This work is part of a research program aiming at modelling the gravitational-wave theory in EC geometry \cite{Paper1,Paper2}, which makes it possible to analyze the spin contributions to gravitational phenomena. Besides the latter topic, it would also be interesting to explore some further applications of our developments in other physical contexts. Our approach relies on the same assumptions as in GR (i.e., PN source and mutually well separated bodies), but it employs the Weyssenhoff fluid \cite{Weyssenhoff1947,Boehmer2006} to treat the spin effects inside the matter.
The article is essentially divided into three parts: derivation of the $N$-body equations of motion in EC theory at the 1PN order (see Sec. \ref{sec:EC}); applications of our findings to binary systems (see Sec. \ref{sec:binary_system}); discussion about our results and future perspectives (see Sec. \ref{sec:end}).
\emph{Notations.} We use metric signature $(-,+,+,+)$. Greek indices take values $0,1,2,3$, while lowercase Latin ones $1,2,3$. The determinant of the metric $g_{\mu \nu}$ is denoted by $g$. $\varepsilon_{kli}$ is the total antisymmetric Levi-Civita symbol. The spacetime coordinates are $x^\mu = (ct,\boldsymbol{x})$. Four-vectors are written as $a^\mu = (a^0,\boldsymbol{a})$, and $\boldsymbol{a} \cdot \boldsymbol{b}:= \delta_{lk}a^l b^k$, $\vert \boldsymbol{a} \vert\equiv a := \left(\boldsymbol{a} \cdot \boldsymbol{a}\right)^{1/2}$, and $\left(\boldsymbol{a} \times \boldsymbol{b}\right)^i := \varepsilon_{ilk} a^l b^k$. The symmetric-trace-free projection of a tensor $A^{ij\dots k}$ is indicated with the symbol $A^{\langle ij\dots k \rangle }$. Round (respectively, square) brackets around a pair of indices stand for the usual symmetrization (respectively, antisymmetrization) procedure, i.e., $A_{(ij)}=\frac{1}{2}(A_{ij}+A_{ji})$ (respectively, $A_{[ij]}=\frac{1}{2}(A_{ij}-A_{ji})$).
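As a quick sanity check of these conventions (purely illustrative), the symmetric and antisymmetric parts defined above reconstruct the original tensor, $A_{ij}=A_{(ij)}+A_{[ij]}$:

```python
# Check of the (anti)symmetrization conventions in the Notations:
# A_(ij) = (A_ij + A_ji)/2,  A_[ij] = (A_ij - A_ji)/2,  A = A_() + A_[].
def sym(A):
    return [[(A[i][j] + A[j][i]) / 2 for j in range(3)] for i in range(3)]

def antisym(A):
    return [[(A[i][j] - A[j][i]) / 2 for j in range(3)] for i in range(3)]

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
S, T = sym(A), antisym(A)
# The decomposition is exact for any square matrix
assert all(A[i][j] == S[i][j] + T[i][j] for i in range(3) for j in range(3))
```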
\section{Post-Newtonian $N$-body problem}
\label{sec:EC}
In this section, we first delineate briefly the Weyssenhoff fluid in Sec. \ref{sec:Weyssenhoff_fluid}, and then we deal with the $N$-body problem at 1PN level and the related point-particle procedure in Sec. \ref{sec:PPL_Continuous}.
\subsection{The Weyssenhoff fluid}
\label{sec:Weyssenhoff_fluid}
In this section, we introduce the Weyssenhoff model within the EC theory (see Sec. \ref{sec:model}) and its post-Newtonian description (see Sec. \ref{sec:PN_description}).
\subsubsection{Model and dynamics}
\label{sec:model}
The EC model is a theory of gravity defined on a four-dimensional Riemann-Cartan spacetime manifold endowed with a symmetric metric tensor $g_{\alpha \beta}$ and the most general metric-compatible affine connection $\Gamma^{\lambda}_{\mu \nu}$, whose symmetric and antisymmetric parts read as, respectively,
\begin{subequations}
\begin{align}
\Gamma^\lambda_{(\mu \nu)} &= \hat{\Gamma}^{\lambda}_{\mu \nu} +2 S^{\lambda}_{\phantom{\lambda} (\mu \nu)},
\label{eq:symmetric_part}
\\
\Gamma^\lambda_{[\mu \nu]} & := S_{\mu \nu}^{\phantom{\mu \nu} \lambda},
\label{eq:torsion_tensor}
\end{align}
\end{subequations}
where $\hat{\Gamma}^{\lambda}_{\mu \nu}$ corresponds to the \emph{Levi-Civita connection} and
$S_{\mu \nu}^{\phantom{\mu \nu} \lambda}$ is the \emph{Cartan torsion tensor} \cite{Hehl1976_fundations,Gasperini-DeSabbata,Medina2018}. This last term represents the geometrical counterpart of the spin inside the matter, which, along with the mass, fulfils a dynamical role in the EC framework. Hereafter, a hat symbol refers to quantities framed in GR. The affine connection $\Gamma^{\lambda}_{\mu \nu}$ can also be written as $\Gamma^\lambda_{\mu \nu}:=\hat{\Gamma}^{\lambda}_{\mu \nu}-K_{\mu \nu}^{\phantom{\mu \nu} \lambda}$, where $K_{\mu \nu}^{\phantom{\mu \nu} \lambda}$ is the \emph{contortion tensor}. The EC field equations assume the GR-like form
\begin{align}
\hat{G}^{\alpha\beta}&=\frac{8\pi G}{c^4}\left(T^{\alpha\beta}+\frac{8\pi G}{c^4}\mathcal{S}^{\alpha \beta}\right),\label{eq:hat-G-equals-tilde-T}
\end{align}
where $T^{\alpha \beta}$ is the metric energy-momentum tensor, while $\mathcal{S}^{\alpha \beta}$, which we may dub \emph{\qm{torsional stress-energy tensor}}, depends on the \emph{spin angular momentum tensor} $\tau_{\gamma}^{\phantom{\gamma}\beta \alpha}$ (see Eq. (5c) in Ref. \cite{Paper2}).
The Weyssenhoff semiclassical model pertains to the description of a neutral spinning perfect fluid within EC theory \cite{Obukhov1987,Boehmer2006}. First of all, the fluid is characterized by the spin angular momentum tensor
\begin{align}
\tau_{\alpha\beta}{}^\gamma&=s_{\alpha\beta}u^\gamma,
\label{eq:spin-tensor-fluid}
\end{align}
where $s_{\alpha\beta}=s_{[\alpha\beta]}$ and $u^\alpha$ represent the spin density tensor and the timelike four-velocity vector of the fluid, respectively. Furthermore, it is subject to the \emph{Frenkel condition}
\begin{equation} \label{eq:Frenkel_condition}
\tau_{\alpha\beta}{}^\beta= s_{\alpha\beta}\,u^\beta=0,
\end{equation}
which, in turn, leads to the identity \cite{Paper2}
\begin{align} \label{eq:gauge_EC}
S^{\alpha \mu}_{\phantom{\alpha \mu}\mu}=0.
\end{align}
Moreover, the metric and the torsional stress-energy tensors are, respectively, \cite{Paper2}
\begin{align}
T^{\alpha \beta} & = e \dfrac{u^\alpha u^\beta}{c^2}+ \mathcal{P}^{\alpha\beta} P
\nonumber \\
& + 2 \left(\dfrac{u_\mu u^\gamma}{c^2}-\delta^\gamma_\mu\right)\hat{\nabla}_\gamma\left[s^{\mu(\alpha}u^{\beta)}\right]
\nonumber \\
& - \dfrac{16 \pi G}{c^4} \left(s^2 u^\alpha u^\beta + c^2 s^{\alpha}_{\phantom{\alpha}\lambda} s^{\beta \lambda}\right), \label{eq:T_alpha_beta_fluid}
\\
\mathcal{S}^{\alpha \beta} &=2c^2 s^{\alpha}_{\phantom{\alpha}\lambda} s^{\beta \lambda} +s^2 u^\alpha u^\beta -\dfrac{1}{2}s^2 c^2 g^{\alpha \beta},
\label{eq:S-tensor-fluid}
\end{align}
where $e= \rho c^2 + \varepsilon$ is the fluid total energy density ($\rho$ and $\varepsilon$ being the rest-mass and the internal energy densities, respectively), $\mathcal{P}^{\mu\nu}= \frac{u^\mu u^\nu}{c^2}+g^{\mu\nu}$ the projector operator on the hypersurface orthogonal to $u^\alpha$, $P$ the fluid pressure, and $s^2 := s^{\alpha \beta}s_{\alpha \beta}$ the spin density scalar.
The dynamics of the Weyssenhoff fluid is governed by a set of translational and rotational equations \cite{Paper2,Obukhov1987}. The former is represented by the Euler equation
\begin{align} \label{eq:translational_fluid_equation_2}
& \mathcal{P}^\nu_\mu \partial_\nu P + \dfrac{1}{c^2} \left(P+ e \right) a_\mu - \dfrac{2}{c^2} \hat{\nabla}_\nu \left( u^\nu a^\rho s_{\rho \mu} \right)
\nn \\
&+\frac{16 \pi G}{c^4} a^\lambda s_{\lambda \rho} s_{\mu}^{\phantom{\mu}\rho}= - s_{\nu \rho} u^\sigma R_{\mu \sigma}^{\phantom{\mu \sigma}\nu \rho},
\end{align}
whereas the latter reads as
\begin{align} \label{eq:rotational_fluid_equation}
\hat{\nabla}_\lambda \left( s_{\mu \nu} u^\lambda \right) & = \dfrac{a^\sigma}{c^2} \left(u_\mu s_{ \sigma \nu}- u_\nu s_{ \sigma \mu} \right),
\end{align}
where $a^\mu$ is the fluid four-acceleration vector and $R_{\mu \sigma}^{\phantom{\mu \sigma}\nu \rho}$ the Riemann tensor. Note that Eq. \eqref{eq:translational_fluid_equation_2} reduces to the GR Euler equation if the spin vanishes.
\subsubsection{Post-Newtonian description}
\label{sec:PN_description}
The PN description of EC theory can be greatly simplified upon assuming that the torsion tensor has vanishing trace (see Eq. \eqref{eq:gauge_EC}), as in this way it is possible to employ a harmonic gauge having the same form as in GR \cite{Paper1,Paper2}. Therefore, we can write the 1PN-accurate metric tensor in terms of the Poisson-type potentials $U$ and $U_i$, and the superpotential $X$, which are defined by, respectively,
\begin{subequations}
\begin{align}
U\left(t,\boldsymbol{x} \right) &:= G \int \dfrac{{\rm d}^3\boldsymbol{x}^\prime}{|\boldsymbol{x}-\boldsymbol{x}^\prime|}\, \sigma^\prime, \label{eq:U-potential-def}
\\
U_i\left(t,\boldsymbol{x} \right) &:= G \int \dfrac{{\rm d}^3\boldsymbol{x}^\prime}{|\boldsymbol{x}-\boldsymbol{x}^\prime|}\, \sigma_i^\prime,
\label{eq:U-i-potential-def}
\\
X \left(t,\boldsymbol{x} \right)&:= G\int \dd^3 \boldsymbol{x}^\prime \, |\boldsymbol{x}-\boldsymbol{x}^\prime| \sigma^{\prime} ,
\label{eq:superpotential-EC-definition}
\end{align}
\end{subequations}
where the primed variables are evaluated at time $t$ and position $\boldsymbol{x}^\prime$ and
\begin{subequations}
\label{eq:def_sigma-sigma_i}
\begin{align}
\sigma &:= \frac{T^{00}+T^{kk}}{c^2} + \frac{8 \pi G}{c^6}\left(\mathcal{S}^{00}+\mathcal{S}^{kk}\right), \\
\sigma_i &:= \frac{T^{0i}}{c} + \frac{8 \pi G}{c^5} \mathcal{S}^{0i}.
\end{align}
\end{subequations}
The metric energy-momentum tensor $T_{\mu\nu}$ admits the same PN structure as in GR (see e.g. Eqs. (9.1.42)--(9.1.44) in Ref. \cite{Weinberg1972}). Moreover, starting from the PN expansion of the spin angular momentum tensor $\tau_{\lambda}^{\phantom{\lambda}\mu \nu}$, it is possible to build the PN series of the torsional stress-energy tensor $\mathcal{S}^{\mu \nu}$; further details can be found in Refs. \cite{Paper1,Paper2}.
Bearing in mind the above premises, it is possible to construct the PN expansions of the main objects underlying the Weyssenhoff model. First of all, if we write the fluid four-velocity as $ u^\mu = \frac{u^0}{c} \left(c,\boldsymbol{v}\right)$ (with $\boldsymbol{v} := {\rm d}\boldsymbol{x}/{\rm d}t$ the coordinate velocity), then it follows from Eqs. \eqref{eq:T_alpha_beta_fluid} and \eqref{eq:S-tensor-fluid} that the PN form of $\sigma$ and $\sigma_i$ reads as
\begin{subequations}
\label{eq:sigma-sigmai-sigmaii-Weyseenhoff}
\begin{align}
\sigma&=\rho^{\star \star} + \rho_{{\rm v}} -\dfrac{4}{c^2}\partial_k\left(s_{kl}v^l\right) + {\rm O}\left( c^{-4}\right),
\label{eq:sigma-Weyseenhoff}
\\
\sigma_i&=\rho^\star v^i-\partial_k s_{ki} +{\rm O}\left( c^{-2}\right),
\label{eq:sigma-i-Weyseenhoff}
\end{align}
\end{subequations}
where we have defined
\begin{align}
\rho^{\star \star} & := \rho^\star \left[1+\dfrac{1}{c^2} \left(\dfrac{v^2}{2}+ \Pi -\dfrac{U}{2}\right)\right],
\\
\rho_{\rm v}& := \dfrac{1}{c^2} \rho^\star \left(v^2 -\dfrac{U}{2}+ \dfrac{3P}{\rho^\star}\right),
\end{align}
with $\Pi := \varepsilon/\rho$ the specific internal energy and $\rho^\star := \frac{u^0}{c} \sqrt{-g} \rho = \rho + \OO\left(c^{-2}\right)$ the coordinate rest-mass density of the fluid, which, in turn, satisfies the exact conservation equation
\begin{align} \label{eq:continuity-eq-rho-star}
\dfrac{{\rm d}}{{\rm d}t} \rho^\star +\rho^\star \partial_k v^k=0,
\end{align}
where $\frac{{\rm d}}{{\rm d}t} f(t,\boldsymbol{x})= \partial_t f + v^k \partial_k f$. We note that in deriving Eq. \eqref{eq:sigma-sigmai-sigmaii-Weyseenhoff} we have exploited the Frenkel condition \eqref{eq:Frenkel_condition} and the fact that
\begin{align}
s_{ij}={}^{(1)}s_{ij} + \OO\left(c^{-2}\right),
\label{eq:PN-s-ij-and-s-ij-1}
\end{align}
${}^{(n)}s_{\mu \nu}$ denoting a factor going like $\frac{\bar{M} \bar{v}^n}{\bar{d}^2 c^{n-1}}$ ($\bar{M}$, $\bar{v}$, and $\bar{d}$ are the typical mass, internal velocity, and dimension of the source, respectively).
By virtue of Eqs. \eqref{eq:U-potential-def} and \eqref{eq:sigma-Weyseenhoff}, the instantaneous potential $U$ can be written as
\begin{align} \label{eq:U-potential-PN-form}
U = \hat{\mathscr{U}} + \dfrac{1}{c^2} \left(\hat{\psi} + \Sigma\right) + \OO\left(c^{-4}\right),
\end{align}
where we have adopted the following definitions\footnote{Although we have indicated the potential \eqref{eq:lowercase-psi-potential} with a hat symbol, we recall that the specific internal energy $\Pi$ receives contributions also from the spin tensor. Despite that, we will see that $\hat{\psi}$ assumes the same functional form as in GR.}:
\begin{subequations}
\label{eq:potentials-U-psi-Sigma}
\begin{align}
\hat{\mathscr{U}}\left(t,\boldsymbol{x}\right) &:=G \int \dfrac{{\rm d}^3\boldsymbol{x}^\prime}{|\boldsymbol{x}-\boldsymbol{x}^\prime|}\rho^{\star \prime},
\label{eq:curly-U-potential-EC-theory}
\\
\hat{\psi} \left(t,\boldsymbol{x}\right) &:=G \int \dfrac{{\rm d}^3\boldsymbol{x}^\prime}{|\boldsymbol{x}-\boldsymbol{x}^\prime|}\rho^{\star \prime} \left(\dfrac{3}{2} v^{\prime\, 2} -\hat{\mathscr{U}}^\prime + \Pi^\prime + \dfrac{3P^\prime}{\rho^{\star \prime}}\right),
\label{eq:lowercase-psi-potential}
\\
\Sigma \left(t,\boldsymbol{x}\right) &:= 4G \int \dd^3 \boldsymbol{x}^\prime \dfrac{(x - x^\prime)_k}{|\boldsymbol{x}-\boldsymbol{x}^\prime|^3} s^\prime_{kl} v^{\prime\,l}.
\label{eq:Sigma-potential-EC}
\end{align}
\end{subequations}
Furthermore, as a consequence of Eqs. \eqref{eq:U-i-potential-def} and \eqref{eq:sigma-i-Weyseenhoff}, we find for the potential $U_i$ that
\begin{align} \label{eq:U-i-potential-PN-form}
U_i= \hat{\mathscr{U}}_i + \Sigma_i + \OO\left(c^{-2}\right),
\end{align}
where
\begin{subequations}
\label{eq:potentials-U-i-Sigma-i}
\begin{align}
\hat{\mathscr{U}}_i \left(t,\boldsymbol{x}\right) &:=G \int \dfrac{{\rm d}^3\boldsymbol{x}^\prime}{|\boldsymbol{x}-\boldsymbol{x}^\prime|}\rho^{\star \prime} v^{\prime i},
\label{eq:curly-U-i-potential-EC-theory}
\\
\Sigma_i \left(t,\boldsymbol{x}\right) &:= G \int \dd^3 \boldsymbol{x}^\prime \dfrac{(x - x^\prime)_k}{|\boldsymbol{x}-\boldsymbol{x}^\prime|^3} s^\prime_{ki}.
\label{eq:Sigma-i-potential-EC}
\end{align}
\end{subequations}
For the superpotential, we have from Eqs. \eqref{eq:superpotential-EC-definition} and \eqref{eq:sigma-Weyseenhoff}
\begin{align}
X= \hat{\chi} + \OO\left(c^{-2}\right),
\end{align}
where
\begin{align} \label{eq:potential-chi}
\hat{\chi} \left(t,\boldsymbol{x}\right):= G\int \dd^3 \boldsymbol{x}^\prime \,|\boldsymbol{x}-\boldsymbol{x}^\prime| \rho^{\star \prime}.
\end{align}
To work out the potentials \eqref{eq:Sigma-potential-EC} and \eqref{eq:Sigma-i-potential-EC},
we have exploited the divergence theorem jointly with the hypothesis according to which the spin density tensor $s_{\mu \nu}$ has compact support in the region occupied by the gravitational source.
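As a quick numerical consistency check (illustrative values only): for a single point mass, $\hat{\chi} = G m \,\vert \boldsymbol{x} \vert$ and the familiar superpotential identity $\nabla^2 \hat{\chi} = 2 \hat{\mathscr{U}}$ reduces to $\nabla^2 \vert \boldsymbol{x} \vert = 2/\vert \boldsymbol{x} \vert$, which can be verified by central finite differences:

```python
import numpy as np

# For a single point mass at the origin, chi = G m |x| and the
# superpotential identity Laplacian(chi) = 2 U reduces to
# Laplacian(|x|) = 2/|x|, checked here by central finite differences.
G, m = 1.0, 1.0
chi = lambda x: G * m * np.linalg.norm(x)

x0, h = np.array([1.0, 1.0, 0.5]), 1e-4
lap = sum((chi(x0 + h * e) - 2.0 * chi(x0) + chi(x0 - h * e)) / h**2
          for e in np.eye(3))
r = np.linalg.norm(x0)   # here r = 1.5, so 2 G m / r = 4/3
```

With these values the finite-difference Laplacian matches $2Gm/r = 4/3$ to well within the discretization error.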
The PN dynamics of the fluid is obtained by expanding Eqs. \eqref{eq:translational_fluid_equation_2} and \eqref{eq:rotational_fluid_equation}. At 1PN level, the Euler equation \eqref{eq:translational_fluid_equation_2} yields, after some calculations,
\begin{align}
&\rho^\star \left(\dfrac{{\rm d}v^i}{{\rm d}t} - \partial_i \hat{\mathscr{U}} \right) + \partial_i P + \dfrac{1}{c^2} \Biggl[ v^i \partial_t P - \partial_i P \Biggl( \hat{\mathscr{U}} + \dfrac{v^2}{2}
\nn \\
&+ \dfrac{P + \varepsilon}{\rho^\star} \Biggr) \Biggr] + \dfrac{\rho^\star}{c^2} \Biggl[ \partial_i \hat{\mathscr{U}} \left(-v^2 + 4 \hat{\mathscr{U}}\right) + v^i \left( 4 v^k \partial_k \hat{\mathscr{U}}\right.
\nn \\
&\left.+3 \partial_t \hat{\mathscr{U}}\right) -4 \partial_t \hat{\mathscr{U}}_i + 4v^j \left(\partial_i \hat{\mathscr{U}}_j -\partial_j \hat{\mathscr{U}}_i\right) - \partial_i \hat{\Psi} \Biggr]
\nn \\
& +\dfrac{\rho^\star}{c^2} \Biggl[-\partial_i \Sigma -4 \partial_t \Sigma_i + 4v^j \left(\partial_i \Sigma_j -\partial_j \Sigma_i\right) \Biggr]
\nn \\
&+ \dfrac{2}{c^2} \dfrac{ s_{ki}}{\rho^{\star}} \Biggl[ \partial_t \partial_k P + \partial_j \left(v^j \partial_k P \right)\Biggr]
\nn \\
& +\dfrac{2}{c^2} s_{jk}\Biggr{[}-v^k\partial_i\partial_j \hat{\mathscr{U}} +v^l\left(\delta_{i[k}\partial_{j]}\partial_l+\delta_{l[j}\partial_{k]}\partial_i\right) \hat{\mathscr{U}}
\nn \\
&+8 \pi G \partial_{[k} s_{i|j]} +2\partial_i \partial_{[j} \hat{\mathscr{U}}_{k]}+2\partial_i \partial_{[j} \Sigma_{k]}+\delta_{i[k}\partial_{j]}\partial_t \hat{\mathscr{U}}\Biggr{]}
\nn \\
&={\rm O}\left(c^{-4}\right),
\label{eq:1PN-Euler-equation-explicit-2}
\end{align}
where we have exploited the Frenkel condition and we have defined
\begin{align} \label{Eq:Capital-PSI-potential}
\hat{\Psi}:= \hat{\psi} +\dfrac{1}{2}\partial^2_t \hat{\chi}.
\end{align}
Note that both the terms involving the product between the spin tensor $s_{jk}$ and the second order derivatives of the potentials, and those depending on the factors $s_{jk} \partial_{[k} s_{i|j]}$ are due to the contribution of the Riemann tensor occurring on the right-hand side of Eq. \eqref{eq:translational_fluid_equation_2}. For our purposes, we will need the leading-order expansion of the rotational equation, which, owing to Eq. \eqref{eq:rotational_fluid_equation}, is
\begin{align}
\dfrac{{\rm d}}{{\rm d}t} s_{ij}+s_{ij}\partial_k v^k={\rm O}\left(c^{-2}\right).
\label{eq:continuity-equation-s-ij}
\end{align}
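Equation \eqref{eq:continuity-equation-s-ij} is a continuity-type equation: the spin density of a fluid element is diluted by the divergence of the velocity field. As a toy illustration (not part of the derivation), for a uniform expansion $\boldsymbol{v} = H \boldsymbol{x}$ one has $\partial_k v^k = 3H$, so that $s_{ij}(t) = s_{ij}(0)\, e^{-3Ht}$ along the flow; the sketch below integrates this numerically:

```python
import numpy as np

# Leading-order rotational equation along a fluid element:
# ds_ij/dt = -s_ij * div(v).  For v = H x, div(v) = 3H, so the spin
# density decays as exp(-3 H t) while the comoving volume grows as
# exp(+3 H t): the integrated spin vector stays constant.
H, t_end, dt = 0.2, 1.5, 1e-4          # illustrative values
s0 = np.array([[0.0, 1.0, -0.5],
               [-1.0, 0.0, 0.3],
               [0.5, -0.3, 0.0]])       # antisymmetric s_ij (toy data)
s = s0.copy()
for _ in range(int(round(t_end / dt))):
    s = s + dt * (-3.0 * H * s)        # forward Euler
```

The forward-Euler result agrees with the closed form to a fraction of a percent, and the antisymmetry of $s_{ij}$ is preserved by the evolution.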
\subsection{$N$-body problem as the point-particle limit of the continuous description}
\label{sec:PPL_Continuous}
In this section, we derive the equations governing the 1PN-accurate dynamics of a system of $N$ gravitationally interacting bodies by performing the point-particle limit of Eq. \eqref{eq:1PN-Euler-equation-explicit-2}, which is outlined in Sec. \ref{sec:PP_limit}. Then, after having worked out the explicit form of the potentials in Sec. \ref{sec:IN_EX_POTENTIALS}, we analyze the new EC spin-dependent terms occurring in Eq. \eqref{eq:1PN-Euler-equation-explicit-2} in Sec. \ref{sec:spin-terms-in-Euler-equation}. The analysis of the derivatives of the external potentials, contained in Sec. \ref{sec:Derivative-ext-pot}, allows us to finally derive the desired 1PN equations of motion (see Sec. \ref{sec:equations_of_motion}).
The point-particle procedure is applied to a framework where the fluid distribution can be broken into a collection of $N$ separated components, usually referred to as bodies \cite{Poisson-Will2014,Blanchet-Schafer1989}. The main advantage of this pattern consists in the fact that Eq. \eqref{eq:1PN-Euler-equation-explicit-2}, which in general comprises a set of partial and integro-differential equations, is transformed into a set of ordinary differential equations, which thus can be more easily dealt with.
The terms occurring in Eq. \eqref{eq:1PN-Euler-equation-explicit-2} which are independent of the spin give rise to the well-known EIH equations (see chapter 9 of Ref. \cite{Poisson-Will2014} for details). These can be obtained in two equivalent ways: (1) by employing the point-particle limit and noting that the angular momentum of each body, which stems from a macroscopic rotation, vanishes in our framework; (2) by supposing that the fluid is made of $N$ structureless point particles. In the latter case, the mass density $\rho^\star$, being defined in terms of the Dirac delta function, is assigned a distributional nature and the ensuing divergent integrals are then regularized by means of either the Hadamard or the dimensional regularization prescription.
In EC theory, the evaluation of the new spin-dependent part of Eq. \eqref{eq:1PN-Euler-equation-explicit-2} via the Dirac-delta formalism runs into some difficulties. Indeed, the terms involving the products between the second-order spatial derivatives of the potentials and the spin density $s_{jk}$ (which assumes a distributional nature in this approach) yield factors quadratic in the Dirac delta function, which are ill-defined in the Schwartz theory of distributions \cite{Lieb(2001)} (a formal method to handle the multiplication of distributions is provided by Colombeau theory \cite{Colombeau1984,Colombeau1985})\footnote{We note that the product of distributions arises in many research fields, such as electrodynamics and particle physics \cite{Gsponer2008a,Gsponer2008b,Gsponer2008c}, and gravitational shock-waves \cite{Dray1984,Battista_Riemann_boosted}.}. In this paper, we will circumvent these issues by working out the spin-dependent terms occurring in the Euler equation \eqref{eq:1PN-Euler-equation-explicit-2} via the abovementioned point-particle limit. This method does not entail the presence of Dirac delta distributions and the related singularities.
\subsubsection{The point-particle limit}
\label{sec:PP_limit}
As pointed out before, the Weyssenhoff fluid modelling the gravitational source is supposed to be split into $N$ separated pieces. Therefore, we can express the coordinate fluid density and the spin density as, respectively,
\begin{subequations}
\begin{align}
\rho^\star &= \sum_A \rho^\star_A,
\\
s_{ik}&= \sum_A s^A_{ik},
\end{align}
\end{subequations}
where both $\rho^\star_A$ and $s^A_{ik}$ are nonvanishing only within the volume occupied by the body $A$. Hereafter, the bodies and all their related quantities are indicated with capital Latin indices $A,B,C=1, \dots, N$.
It is convenient to define the following variables:
\begin{subequations}
\begin{align}
m_A &:= \int_A \dd^3 \boldsymbol{x} \; \rho^\star,
\label{eq:material-mass-A}
\\
\varepsilon_{jki}s_A^i(t) &:=\int_A {\rm d}^3 \boldsymbol{x} \, s_{jk},
\label{eq:spin-vector-body-A}
\\
\boldsymbol{x}_A(t)&:=\dfrac{1}{m_A} \int_A \dd^3 \boldsymbol{x} \; \rho^\star \boldsymbol{x},
\\
\boldsymbol{v}_A(t)&:=\dfrac{\dd \boldsymbol{x}_A}{\dd t}=\dfrac{1}{m_A} \int_A \dd^3 \boldsymbol{x} \; \rho^\star \boldsymbol{v},
\\
\boldsymbol{a}_A(t)&:=\dfrac{\dd \boldsymbol{v}_A}{\dd t}=\dfrac{1}{m_A} \int_A \dd^3 \boldsymbol{x} \; \rho^\star \frac{\dd \boldsymbol{v}}{\dd t},
\end{align}
\end{subequations}
representing the material mass, the spin vector, the center of mass, the center of mass velocity, and the center of mass acceleration of $A$, respectively. Note that the domain of integration is independent of time and extends beyond the volume occupied by $A$. Owing to the continuity equation \eqref{eq:continuity-eq-rho-star}, the material mass \eqref{eq:material-mass-A} is constant, whereas Eq. \eqref{eq:continuity-equation-s-ij} implies that the spin vector \eqref{eq:spin-vector-body-A} is conserved modulo $\OO\left(c^{-2}\right)$ corrections. Moreover, the following notations will be employed:
\begin{align}
\boldsymbol{d}_A &:= \boldsymbol{x} - \boldsymbol{x}_A, \qquad \quad \; \boldsymbol{n}_{A} := \dfrac{\boldsymbol{d}_{A}}{d_{A}},
\nn\\
\boldsymbol{r}_{AB} &:= \boldsymbol{x}_A - \boldsymbol{x}_B, \qquad \boldsymbol{n}_{AB} := \dfrac{\boldsymbol{r}_{AB}}{r_{AB}}.
\end{align}
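In a numerical implementation, the body variables defined above can be extracted from a discretized fluid configuration. The following sketch (with randomly generated, purely illustrative data) computes the material mass, the center of mass, and the spin vector, the latter via the inversion $s_A^i = \frac{1}{2}\varepsilon_{ijk} \int_A {\rm d}^3 \boldsymbol{x}\, s_{jk}$ of Eq. \eqref{eq:spin-vector-body-A}:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, dV = 1000, 1e-3                    # grid cells of volume dV
x = rng.normal(size=(n_cells, 3))           # cell positions (toy data)
rho = rng.uniform(1.0, 2.0, size=n_cells)   # rest-mass density rho*
s = rng.normal(size=(n_cells, 3, 3))
s = 0.5 * (s - np.transpose(s, (0, 2, 1)))  # antisymmetric spin density s_jk

eps = np.zeros((3, 3, 3))                   # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

m_A = np.sum(rho) * dV                           # material mass
x_A = np.einsum('n,ni->i', rho, x) * dV / m_A    # center of mass
S_int = np.sum(s, axis=0) * dV                   # integral of s_jk
s_A = 0.5 * np.einsum('ijk,jk->i', eps, S_int)   # spin vector s_A^i
```

Contracting back, $\varepsilon_{jki} s_A^i$ reproduces the integrated spin density, as required by Eq. \eqref{eq:spin-vector-body-A}.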
The (conserved) total mass-energy of the body $A$ is \cite{Paper2}
\begin{align}
M_A &=\int_A {\rm d}^3 \boldsymbol{x} \, \rho^\star \left[1+\dfrac{1}{c^2} \left(\dfrac{w^2}{2} + \Pi -\dfrac{U_A}{2}\right)\right]
\nonumber \\
&+ {\rm O}(c^{-4}),
\label{eq:total-mass-bodyA}
\end{align}
$U_A$ being the internal self-gravity of $A$ (further details will be given in Sec. \ref{sec:IN_EX_POTENTIALS} below), and
\begin{subequations}
\begin{align}
y^i &:= x^i -x^i_A\left(t\right), \label{eq:y-i-A-def}
\\
w^i &:= \dfrac{{\rm d}}{{\rm d}t} y^i = v^i - v^i_A\left(t\right),
\label{eq.w-i-A-def}
\end{align}
\end{subequations}
denoting the position relative to the center of mass $x^i_A$ and the velocity relative to the body velocity $v^i_A$ of a fluid element, respectively.
Hereafter, we will exploit the following reasonable hypotheses regarding the bodies, which are supposed to be: (1) reflection symmetric about their center of mass; (2) in stationary equilibrium; (3) mutually well separated.
The stationary-equilibrium condition implies that any fluid element has vanishing velocity relative to the center of mass; hence, in our calculations, the terms involving $w^i$ can be ignored (see Eq. \eqref{eq.w-i-A-def}). Note that this hypothesis resembles the static-equilibrium condition used in GR. The subsequent calculations, performed \qm{as in the GR static equilibrium case}, are not spoiled by the presence of the spin as long as the intrinsic rotation of each fluid element is stationary, i.e., the spin vector associated with each fluid element neither changes direction nor varies in time \cite{Moller1962,Romano2019}. This relativistic issue arises already at the classical level, when one deals with \emph{(nonclosed) micropolar continuous systems}, which find physical applications in ferromagnetic substances and liquid crystals \cite{Romano2014}.
If the bodies are well separated, then $\ell_A/d_A \ll 1$, $\ell_A$ denoting the typical linear dimension of $A$.
For this reason, hereafter terms of fractional order $(\ell_A/d_A)^2$ or $(\ell_A/r_{AB})^2$ will be neglected. The hypotheses underlying our approach are visually summarized in Fig. \ref{fig:Fig1}.
\subsubsection{The potentials}
\label{sec:IN_EX_POTENTIALS}
The potentials \eqref{eq:potentials-U-psi-Sigma}, \eqref{eq:potentials-U-i-Sigma-i}, and \eqref{eq:potential-chi} can be divided into internal and external pieces. The former represent the potentials produced by the body $A$, while the latter refer to the potentials sourced by the remaining bodies of the system. Let $\mathscr{F}\left(t,\boldsymbol{x}\right) = \int \dd^3 \boldsymbol{x}^\prime \, f\left(t,\boldsymbol{x},\boldsymbol{x}^\prime \right) $ be a generic potential where the function $f$ has a compact support consisting of $N$ mutually disjoint connected regions. The internal and external contributions of $\mathscr{F}$ have the form
\begin{align}
\mathscr{F}_A \left(t,\boldsymbol{x}\right) &= \int_A \dd^3 \boldsymbol{x}^\prime \, f\left(t,\boldsymbol{x},\boldsymbol{x}^\prime \right),
\nn \\
\mathscr{F}_{\neg A} \left(t,\boldsymbol{x}\right) &= \sum_{B \neq A} \int_B \dd^3 \boldsymbol{x}^\prime \, f\left(t,\boldsymbol{x},\boldsymbol{x}^\prime \right),
\end{align}
respectively, so that $\mathscr{F}$ can be written as
\begin{align}
\mathscr{F}= \mathscr{F}_A + \mathscr{F}_{\neg A}.
\end{align}
For example, the potential $\Sigma$ can be decomposed as (see Eq. \eqref{eq:Sigma-potential-EC})
\begin{subequations}
\begin{align}
\Sigma_A \left(t,\boldsymbol{x}\right) &= 4G \int_A \dd^3 \boldsymbol{x}^\prime \, s^\prime_{kl} \dfrac{(x - x^\prime)_k}{|\boldsymbol{x}-\boldsymbol{x}^\prime|^3} v^{\prime\,l},
\\
\Sigma_{\neg A} \left(t,\boldsymbol{x}\right) &= \sum_{B \neq A} 4G \int_B \dd^3 \boldsymbol{x}^\prime \, s^\prime_{kl} \dfrac{(x - x^\prime)_k}{|\boldsymbol{x}-\boldsymbol{x}^\prime|^3} v^{\prime\,l}.
\end{align}
\end{subequations}
In our hypotheses, the potentials $\hat{\mathscr{U}}$, $\hat{\mathscr{U}}_i$, $\hat{\Psi}$ (cf. Eqs. \eqref{eq:curly-U-potential-EC-theory}, \eqref{eq:curly-U-i-potential-EC-theory}, and \eqref{Eq:Capital-PSI-potential}) assume the same form as in GR \cite{Poisson-Will2014}. In particular, we have
\begin{align}
\hat{\mathscr{U}}(t,\boldsymbol{x}) &= \sum_A \frac{G m_A}{d_A},
\\
\hat{\mathscr{U}}_i (t,\boldsymbol{x})&=\sum_A \frac{G m_A v_A^i}{d_A}.
\end{align}
Bearing in mind Eqs. \eqref{eq:Sigma-potential-EC} and \eqref{eq:Sigma-i-potential-EC}, and adopting the same techniques as in GR (see e.g. chapter 9 of Ref. \cite{Poisson-Will2014}), for the spin-dependent potentials we find
\begin{align}
\Sigma (t,\boldsymbol{x}) &= 4G \sum_A \left(\boldsymbol{v}_A \times \boldsymbol{s}_A\right) \cdot \frac{\boldsymbol{n}_A}{d_A^2},
\label{eq:potential-Sigma-pp-limit}
\\
\Sigma_i (t,\boldsymbol{x}) &= G \sum_A \frac{\left(\boldsymbol{s}_A \times \boldsymbol{n}_A\right)^i}{d_A^2}.
\label{eq:potential-Sigma-i-pp-limit}
\end{align}
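Equations \eqref{eq:potential-Sigma-pp-limit} and \eqref{eq:potential-Sigma-i-pp-limit} are straightforward to evaluate numerically. The sketch below (toy data, units with $G=1$) computes both spin-dependent potentials at a given field point for a collection of point bodies:

```python
import numpy as np

def sigma_potentials(x, bodies, G=1.0):
    """Spin-dependent potentials (Sigma, Sigma_i) at field point x,
    for point bodies given as tuples (position x_A, velocity v_A, spin s_A)."""
    Sigma, Sigma_i = 0.0, np.zeros(3)
    for xA, vA, sA in bodies:
        dA = x - xA
        d = np.linalg.norm(dA)
        nA = dA / d
        Sigma += 4.0 * G * np.dot(np.cross(vA, sA), nA) / d**2
        Sigma_i += G * np.cross(sA, nA) / d**2
    return Sigma, Sigma_i

# one body at the origin: spin along z, velocity along x (toy values)
bodies = [(np.zeros(3), np.array([0.1, 0.0, 0.0]),
           np.array([0.0, 0.0, 1.0]))]
S, Si = sigma_potentials(np.array([0.0, 2.0, 0.0]), bodies)
```

For the single body above, $\Sigma = -0.1$ and $\Sigma_i = (-1/4, 0, 0)$, as can be checked by hand from the cross products.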
\subsubsection{Analysis of the spin-dependent terms}
\label{sec:spin-terms-in-Euler-equation}
In this section, we work out the contributions involving the spin density $s_{ki}$ which occur in Eq. \eqref{eq:1PN-Euler-equation-explicit-2}. In the following calculations, all functions inside integrals involving $y^i$ variables are supposed to depend on $t$ and $\boldsymbol{y}+\boldsymbol{x}_A(t)$.
If we define the \emph{inner-structure-dependent quantity}
\begin{align}
\mathcal{H}^{ki}_A&:= 3G\int_A \dd^3 \boldsymbol{y} \, \dd^3 \boldsymbol{y}^\prime \, \rho^\star s^\prime_{kj} \frac{(y-y^\prime)^{\langle i}(y-y^\prime)^{j \rangle}}{\vert \boldsymbol{y}-\boldsymbol{y}^\prime \vert^5},
\label{eq:tensor-mathcal-H-A-ki}
\end{align}
then for the first group of spin-dependent terms appearing in Eq. \eqref{eq:1PN-Euler-equation-explicit-2}, we find
\begin{subequations}
\begin{align}
\int_A \dd^3 \boldsymbol{x} \, \rho^\star \partial_i \Sigma &= m_A \partial_i \Sigma_{\neg A} \left(t,\boldsymbol{x}_A \right) + 4 v^l_A \mathcal{H}^{li}_A,
\\
\int_A \dd^3 \boldsymbol{x} \, \rho^\star \partial_t\Sigma_i &=m_A \partial_t \Sigma_{i,\neg A} \left(t,\boldsymbol{x}_A \right) -v^j_A \mathcal{H}_A^{ij}
\nn \\
&+ \OO\left(c^{-2}\right),
\label{eq:integral-2}
\\
\int_A \dd^3 \boldsymbol{x} \, \rho^\star v^j \partial_i\Sigma_j &=m_A v^j_A \partial_i \Sigma_{j,\neg A} \left(t,\boldsymbol{x}_A \right) + v^j_A \mathcal{H}_A^{ji},
\end{align}
\end{subequations}
where we have exploited the continuity equation \eqref{eq:continuity-equation-s-ij} to derive Eq. \eqref{eq:integral-2}. Moreover, the spin-dependent quantities involving the second-order derivatives of the potentials give
\begin{subequations}
\begin{align}
\int_A \dd^3 \boldsymbol{x} \, s_{jk} v^k \partial_i \partial_j \hat{\mathscr{U}} &= \left(\boldsymbol{v}_A \times \boldsymbol{s}_A\right)^j \partial_i \partial_j \hat{\mathscr{U}}_{\neg A}\left(t,\boldsymbol{x}_A\right)
\nn \\
& -v^k_A \mathcal{H}_A^{ki},
\\
\int_A \dd^3 \boldsymbol{x} \, s_{jk} \partial_i \partial_j \hat{\mathscr{U}}_k &=\varepsilon_{jkl} s^l_A \partial_i \partial_j \hat{\mathscr{U}}_{k,\neg A}\left(t,\boldsymbol{x}_A\right) -v^k_A \mathcal{H}_A^{ki},
\\
\int_A \dd^3 \boldsymbol{x} \, s_{jk} \partial_i \partial_j \Sigma_k &= \varepsilon_{jkl} s^l_A \partial_i \partial_j \Sigma_{k,\neg A}\left(t,\boldsymbol{x}_A\right),
\\
\int_A \dd^3 \boldsymbol{x} \, s_{ji} \partial_j \partial_t \hat{\mathscr{U}} &=\varepsilon_{jil} s^l_A \partial_j \partial_t \hat{\mathscr{U}}_{\neg A} \left(t,\boldsymbol{x}_A\right) + v^k_A \mathcal{H}_A^{ik}.
\end{align}
\end{subequations}
Finally, the integral
\begin{align}
&\int_A \dd^3\boldsymbol{x} \left(s_{jk} \partial_k s_{ij}-s_{jk} \partial_j s_{ik}\right) =2 \int_A \dd^3\boldsymbol{x}\, s_{ij} \partial_k s_{kj},
\end{align}
and the integrals involving the derivatives of the pressure vanish owing to the reflection-symmetry condition.
\subsubsection{Derivatives of the external potentials}
\label{sec:Derivative-ext-pot}
In our hypotheses, the derivatives of $\hat{\mathscr{U}}_{\neg A}$ and $\hat{\mathscr{U}}_{j,\neg A}$ assume the same form as in GR. In particular (see chapter 9 of Ref. \cite{Poisson-Will2014}),
\begin{subequations}
\begin{align}
\partial_i \partial_j \hat{\mathscr{U}}_{\neg A} \left(t, \boldsymbol{x}_A\right)&= \sum_{B \neq A} \frac{3Gm_B}{r_{AB}^3}n_{AB}^{\langle ij \rangle},
\\
\partial_i \partial_j \hat{\mathscr{U}}_{k,\neg A} \left(t, \boldsymbol{x}_A\right)&= \sum_{B \neq A} \frac{3Gm_B}{r_{AB}^3}n_{AB}^{\langle ij \rangle}v^k_B,
\\
\partial_i \partial_t \hat{\mathscr{U}}_{\neg A} \left(t, \boldsymbol{x}_A\right)&= \sum_{B \neq A} \frac{-3Gm_B}{r_{AB}^3}n_{AB}^{\langle ij \rangle}v_B^j.
\end{align}
\end{subequations}
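These closed forms can be cross-checked by direct numerical differentiation; e.g., for a single external point mass the first expression amounts to $\partial_i \partial_j \left(G m_B / r\right) = 3 G m_B\, n^{\langle ij \rangle}/r^3$. A finite-difference sketch (illustrative values):

```python
import numpy as np

G, mB = 1.0, 1.0
xB = np.zeros(3)

def U_ext(x):
    """External Newtonian potential G m_B / |x - x_B| of a point mass."""
    return G * mB / np.linalg.norm(x - xB)

def hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar field f at x."""
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei, ej = h * np.eye(3)[i], h * np.eye(3)[j]
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    return H

xA = np.array([1.0, 2.0, 2.0])
rAB = np.linalg.norm(xA - xB)        # = 3
n = (xA - xB) / rAB
# 3 G m_B n^{<ij>} / r^3, with n^{<ij>} = n^i n^j - delta^{ij}/3
analytic = 3.0 * G * mB / rAB**3 * (np.outer(n, n) - np.eye(3) / 3.0)
```

The finite-difference Hessian agrees with the traceless analytic expression to well within the discretization error.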
For the new spin-dependent potentials, we have
\begin{subequations}
\begin{align}
\partial_i \Sigma_{\neg A} \left(t, \boldsymbol{x}_A\right)&=\sum_{B \neq A} \frac{G}{r_{AB}^3}
\Bigl[ 4\left(\boldsymbol{v}_B \times \boldsymbol{s}_B\right)^i
\nn \\
&-12 \boldsymbol{n}_{AB} \cdot \left(\boldsymbol{v}_B \times \boldsymbol{s}_B\right)n_{AB}^i\Bigr],
\\
\partial_t \Sigma_{i,\neg A} \left(t, \boldsymbol{x}_A\right)&= \sum_{B \neq A} \frac{G}{r_{AB}^3}\Bigl[\left(\boldsymbol{v}_B \times \boldsymbol{s}_B\right)^i
\nn \\
&- 3\left(\boldsymbol{n}_{AB} \times \boldsymbol{s}_B\right)^i \left(\boldsymbol{v}_B \cdot \boldsymbol{n}_{AB}\right) \Bigr]
\nn \\
&+\OO\left(c^{-2}\right),
\label{eq:EC-derivative-external-potential-2}
\\
\partial_i \Sigma_{j,\neg A} \left(t, \boldsymbol{x}_A\right)&= \sum_{B \neq A} \frac{G}{r_{AB}^3}\Bigl[ 3 n_{AB}^i \left(\boldsymbol{n}_{AB} \times \boldsymbol{s}_B\right)^j
\nn \\
&+ \varepsilon_{ijl}s^l_B \Bigr],
\\
\partial_i\partial_j \Sigma_{k,\neg A} \left(t, \boldsymbol{x}_A\right)&= \sum_{B \neq A}\frac{G}{r_{AB}^4}\Biggl\{ 3 \Bigl[\delta_{ij} \left(\boldsymbol{n}_{AB} \times \boldsymbol{s}_B\right)^k
\nn \\
&+ \varepsilon_{kil} s_B^l n_{AB}^j + \varepsilon_{kjl} s_B^l n_{AB}^i \Bigr]
\nn \\
&-15n_{AB}^i n_{AB}^j \left(\boldsymbol{n}_{AB} \times \boldsymbol{s}_B\right)^k
\Biggr\},
\end{align}
\end{subequations}
where in Eq. \eqref{eq:EC-derivative-external-potential-2} we have exploited the continuity equation \eqref{eq:continuity-equation-s-ij}.
\subsubsection{Equations of motion}
\label{sec:equations_of_motion}
By means of the calculations of the previous sections, the coordinate acceleration $a_A^i$ of the body $A$ reads as
\begin{align}
m_A a_A^i &= m_A a_{A,{\rm EIH}}^i + \frac{1}{c^2} \Biggl\{m_A \biggl[ \partial_i \Sigma_{\neg A} + 4 \partial_t \Sigma_{i,\neg A} -4 v^j_A
\nn \\
&\times \left(\partial_i \Sigma_{j,\neg A}-\partial_j \Sigma_{i,\neg A}\right) \biggr] -2 \biggl[ 2 \varepsilon_{jkl} s^l_A \partial_i \partial_j \hat{\mathscr{U}}_{k,\neg A}
\nn \\
&-2 \left(\boldsymbol{v}_A \times \boldsymbol{s}_A\right)^j \partial_i \partial_j \hat{\mathscr{U}}_{\neg A} + \varepsilon_{jik} s^k_A v^l_A \partial_j \partial_l \hat{\mathscr{U}}_{\neg A}
\nn \\
&+2 \varepsilon_{jkl} s^l_A \partial_i \partial_j \Sigma_{k,\neg A} + \varepsilon_{jil} s^l_A \partial_j \partial_t \hat{\mathscr{U}}_{\neg A} \biggr] \Biggr\}
\nn \\
&+ \OO\left(c^{-4}\right),
\label{eq:EC-body-A-equation-of-motion-1}
\end{align}
where $a_{A,{\rm EIH}}^i$ is the EIH acceleration of the object $A$ and all the external potentials are evaluated at $\boldsymbol{x}=\boldsymbol{x}_A$. Bearing in mind the results of Sec. \ref{sec:Derivative-ext-pot}, the final form of the equations of motion for the body $A$ is
\begin{align}
a_A^i &=a_{A,{\rm EIH}}^i + \frac{4}{c^2}\sum_{B \neq A} \frac{G}{r_{AB}^3} \Biggl\{ 2 \Bigl[ \left(\boldsymbol{v}_B-\boldsymbol{v}_A\right) \times \boldsymbol{s}_B\Bigr]^i
\nn \\
&+3 n_{AB}^i \; \boldsymbol{s}_B \cdot \left[\boldsymbol{n}_{AB} \times \left(\boldsymbol{v}_A-\boldsymbol{v}_B \right) \right]
\nn \\
&+3\left( \boldsymbol{n}_{AB} \times \boldsymbol{s}_B\right)^i \left(\boldsymbol{v}_A - \boldsymbol{v}_B\right) \cdot \boldsymbol{n}_{AB} \Biggr\}
\nn \\
&-\frac{6}{c^2} \sum_{B \neq A} \frac{G M_B}{M_Ar_{AB}^3} \Biggl\{ \Bigl[ \left(\boldsymbol{v}_A-\boldsymbol{v}_B\right) \times \boldsymbol{s}_A\Bigr]^i
\nn \\
&-2 n_{AB}^i \; \boldsymbol{s}_A \cdot \left[\boldsymbol{n}_{AB} \times \left(\boldsymbol{v}_A-\boldsymbol{v}_B \right) \right]
\nn \\
&+\left( \boldsymbol{n}_{AB} \times \boldsymbol{s}_A\right)^i \left(\boldsymbol{v}_B - \boldsymbol{v}_A\right) \cdot \boldsymbol{n}_{AB} \Biggr\}
\nn \\
&-\frac{12}{c^2} \sum_{B \neq A} \frac{G }{M_Ar_{AB}^4} \Biggl \{ s^i_A \left(\boldsymbol{n}_{AB}\cdot \boldsymbol{s}_B\right)+s^i_B \left(\boldsymbol{n}_{AB}\cdot \boldsymbol{s}_A\right)
\nn \\
&+ n_{AB}^i \Bigl[ \boldsymbol{s}_A \cdot \boldsymbol{s}_B - 5 \left(\boldsymbol{n}_{AB}\cdot \boldsymbol{s}_A\right)\left(\boldsymbol{n}_{AB}\cdot \boldsymbol{s}_B\right)\Bigr] \Biggr\}
\nn \\
&+ \OO\left(c^{-4}\right),
\label{eq:EC-body-A-equation-of-motion-2}
\end{align}
where we have taken into account that $M_A = m_A + \OO\left(c^{-2}\right)$ (see Eq. \eqref{eq:total-mass-bodyA}). Equation \eqref{eq:EC-body-A-equation-of-motion-2}, jointly with the conservation law $\dd \boldsymbol{s}_A / \dd t = {\rm O}\left(c^{-2}\right)$, completely determines the dynamics of the $N$-body system at 1PN level.
From the above equations, it is clear that, remarkably, the contributions of the tensor \eqref{eq:tensor-mathcal-H-A-ki} vanish identically. Furthermore, the external potentials do not couple with structure-dependent integrals (such as the mass multipole moments of the bodies), and their derivatives are written in terms of the bodies' masses and spins. In particular, Eq. \eqref{eq:EC-body-A-equation-of-motion-2} involves the total mass $M_A$ and not its decomposition (see Eq. \eqref{eq:total-mass-bodyA}), and the spin of $A$ enters only via the definition \eqref{eq:spin-vector-body-A}. In other words, no corrections stemming from the inner details of the bodies occur in the equations of motion at 1PN order, which implies that both the mass and the spin can be seen as labels characterizing the objects. This result can be interpreted as a hint of the validity of the \emph{effacing principle} of the internal structure in EC theory. Apart from the hypotheses (1)--(3) (see Fig. \ref{fig:Fig1}), which resemble the GR pattern, this achievement has been obtained by means of the Frenkel condition. This is a crucial requirement, as it gives physical significance to the Weyssenhoff model and, as a consequence, to EC theory as well.
\section{Binary systems}
\label{sec:binary_system}
We apply the results of the previous section to the case of binary systems. The relative acceleration in the barycentric frame is evaluated in Sec. \ref{sec:relative_acceleration}. Then, we estimate the new EC contributions to the GR motion in Sec. \ref{sec:estimate_EC_to_GR}. Last, we conclude the section with an interesting analysis showing the conceptually close connections between GR and EC theories (see Sec. \ref{sec:similarities_GR_EC}).
\subsection{The relative acceleration}
\label{sec:relative_acceleration}
The relative dynamics of the two bodies can be readily described by defining the vectors\footnote{In this section, $\boldsymbol{v}$ is the relative velocity of the binary system and must not be confused with the fluid velocity field. }
\begin{align}
\boldsymbol{r} &:=\boldsymbol{x}_1-\boldsymbol{x}_2, \qquad \qquad \;\;\; \boldsymbol{n}:= \boldsymbol{r}/r,
\nn \\
\boldsymbol{v} &:=\frac{\dd }{\dd t} \boldsymbol{r}=\boldsymbol{v}_1-\boldsymbol{v}_2, \qquad \boldsymbol{a} :=\frac{\dd }{\dd t} \boldsymbol{v}=\boldsymbol{a}_1-\boldsymbol{a}_2,
\label{eq:relative-vectors}
\end{align}
the spin variables
\begin{align}
\boldsymbol{s} := \boldsymbol{s}_1 + \boldsymbol{s}_2, \qquad \boldsymbol{\sigma} := \frac{M_2}{M_1}\boldsymbol{s}_1+\frac{M_1}{M_2}\boldsymbol{s}_2,
\end{align}
and the total mass $M$, the reduced mass $\mu$, and the symmetric mass ratio $\nu$ of the system
\begin{align}
M & := M_1+M_2, \qquad \mu := \frac{M_1M_2}{M}, \qquad \nu := \frac{\mu}{M}.
\label{eq:total-Mass-et-al}
\end{align}
In a mass-centered coordinate system, the motion of the bodies is related to their relative motion by the following relations \cite{Paper2}:
\begin{subequations}
\label{eq:position-vectors-r1-r2-with-spin}
\begin{align}
\boldsymbol{x}_1(t)&=\left[\frac{\mu}{M_1}+\frac{\mu (M_1-M_2)}{2M^2c^2}\left(v^2-\frac{GM}{r}\right)\right]\boldsymbol{r}(t)
\notag\\
&+\frac{2 \nu}{c^2}\left[\dfrac{\boldsymbol{s}_1(t)}{M_1} -\dfrac{\boldsymbol{s}_2(t)}{M_2}\right]\times \boldsymbol{v}(t)+{\rm O}\left(c^{-4}\right),
\\
\boldsymbol{x}_2(t)&=\left[-\frac{\mu}{M_2}+\frac{\mu (M_1-M_2)}{2M^2c^2}\left(v^2-\frac{GM}{r}\right)\right]\boldsymbol{r}(t)\notag\\
&+\frac{2 \nu}{c^2}\left[\dfrac{\boldsymbol{s}_1(t)}{M_1} -\dfrac{\boldsymbol{s}_2(t)}{M_2}\right]\times \boldsymbol{v}(t)+{\rm O}\left(c^{-4}\right).
\end{align}
\end{subequations}
Starting from Eq. \eqref{eq:EC-body-A-equation-of-motion-2} with $N=2$ and employing the quantities \eqref{eq:relative-vectors}--\eqref{eq:total-Mass-et-al} defined above, the relative acceleration reads as
\begin{align}
\boldsymbol{a} = \boldsymbol{a}_{\rm EIH} + \boldsymbol{a}_{\rm EC} + \OO \left(c^{-4}\right),
\end{align}
where the GR contribution is
\begin{align}
\boldsymbol{a}_{\rm EIH} &= -\frac{GM}{r^2} \boldsymbol{n} + \frac{GM}{c^2 r^2} \Biggl\{ \Bigl[ 2 (2 + \nu) \frac{GM}{r} + \frac{3}{2} \nu \left(\boldsymbol{n} \cdot \boldsymbol{v}\right)^2
\nn \\
&- (1+3 \nu) v^2 \Bigr] \boldsymbol{n} + 2(2-\nu) \left(\boldsymbol{n} \cdot \boldsymbol{v}\right) \boldsymbol{v} \Biggr\},
\end{align}
whereas the EC correction is given by
\begin{align}
\boldsymbol{a}_{\rm EC} &=\frac{4G}{c^2r^3} \Biggl[ - \boldsymbol{v} \times \left(2 \boldsymbol{s} + \frac{3}{2}\boldsymbol{\sigma}\right) + 3 \boldsymbol{n} \left(\boldsymbol{n} \times \boldsymbol{v}\right) \cdot \left(\boldsymbol{s} + \boldsymbol{\sigma}\right)
\nn \\
&+ 3 \boldsymbol{n} \times \left(\boldsymbol{s} +\frac{\boldsymbol{\sigma}}{2}\right)\left(\boldsymbol{n} \cdot \boldsymbol{v}\right) \Biggr] -\frac{12G}{c^2 r^4 \mu} \Biggl\{ \boldsymbol{s}_1 \left(\boldsymbol{n}\cdot \boldsymbol{s}_2\right)
\nn \\
&+ \boldsymbol{s}_2 \left(\boldsymbol{n}\cdot \boldsymbol{s}_1\right) + \boldsymbol{n} \Bigl[ \boldsymbol{s}_1 \cdot \boldsymbol{s}_2 -5 \left(\boldsymbol{n}\cdot \boldsymbol{s}_1\right) \left(\boldsymbol{n}\cdot \boldsymbol{s}_2\right) \Bigr] \Biggl\}.
\label{eq:EC-acceleration-binary}
\end{align}
The last equation shows that the EC acceleration vector has the same functional form as in GR. This result will be analyzed in Sec. \ref{sec:similarities_GR_EC}.
\subsection{Numerical comparison with general relativity}
\label{sec:estimate_EC_to_GR}
We evaluate the EC contributions to the acceleration by calculating the parameter $\epsilon:= \frac{|\boldsymbol{a}_{\rm EC}|}{|\boldsymbol{a}_{\rm EIH}| }$. We suppose that the bodies are black holes having masses $M_1=2M/3$, $M_2=M/3$, relative radius $\boldsymbol{r}=\left(100 GM/c^2,0,0\right)$, and relative velocity $\boldsymbol{v}=\left(0,0.5\sqrt{GM/r},0\right)$. Following Ref. \cite{Paper2}, the spins can be modelled as $\boldsymbol{s}_i=\frac{4\pi}{3} n \hbar \left(\frac{2 G M_i}{c^2}\right)^3(0,0,1)$ ($i=1,2$), where $n= 10^{44}\, {\rm m}^{-3}$ is estimated as the inverse of the nucleon volume. In this way, we find $10^{-23} \lesssim \epsilon \lesssim 10^{-13} $ for $M\in[6,10^{11}]M_\odot$.
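These orders of magnitude can be cross-checked numerically from the expressions for $\boldsymbol{a}_{\rm EIH}$ and $\boldsymbol{a}_{\rm EC}$ with the stated binary setup. The sketch below (approximate SI constants; the function name and structure are our own illustrative choices, not part of the paper) evaluates $\epsilon$ for a given total mass:

```python
import numpy as np

# Approximate SI constants (assumed values, not from the paper)
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
Msun = 1.989e30          # kg
n_nucl = 1e44            # m^-3, inverse nucleon volume as in the text

def epsilon(M):
    """|a_EC| / |a_EIH| for the binary black-hole setup quoted in the text."""
    M1, M2 = 2 * M / 3, M / 3
    mu = M1 * M2 / M
    nu = mu / M
    r = 100 * G * M / c**2
    n = np.array([1.0, 0.0, 0.0])                       # relative direction n_AB
    v = np.array([0.0, 0.5 * np.sqrt(G * M / r), 0.0])  # relative velocity
    zhat = np.array([0.0, 0.0, 1.0])
    # Weyssenhoff spins s_i = (4 pi / 3) n hbar (2 G M_i / c^2)^3 z-hat
    s1 = (4 * np.pi / 3) * n_nucl * hbar * (2 * G * M1 / c**2)**3 * zhat
    s2 = (4 * np.pi / 3) * n_nucl * hbar * (2 * G * M2 / c**2)**3 * zhat
    s, sig = s1 + s2, (M2 / M1) * s1 + (M1 / M2) * s2
    nv, v2 = n @ v, v @ v
    # EIH acceleration (Newtonian term plus 1PN correction)
    a_eih = (-G * M / r**2 * n
             + G * M / (c**2 * r**2)
             * ((2 * (2 + nu) * G * M / r + 1.5 * nu * nv**2
                 - (1 + 3 * nu) * v2) * n
                + 2 * (2 - nu) * nv * v))
    # EC spin correction: spin-orbit-type bracket plus spin-spin-type bracket
    a_ec = (4 * G / (c**2 * r**3)
            * (-np.cross(v, 2 * s + 1.5 * sig)
               + 3 * n * (np.cross(n, v) @ (s + sig))
               + 3 * np.cross(n, s + sig / 2) * nv)
            - 12 * G / (c**2 * r**4 * mu)
            * (s1 * (n @ s2) + s2 * (n @ s1)
               + n * (s1 @ s2 - 5 * (n @ s1) * (n @ s2))))
    return np.linalg.norm(a_ec) / np.linalg.norm(a_eih)
```

For $M = 10\,M_\odot$ this yields $\epsilon$ of order $10^{-23}$, and since the spins scale as $M^3$ while $r \propto M$, the ratio grows roughly linearly with $M$, consistent with the quoted range.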
If the bodies have macroscopic angular momenta or \qm{classic spins} $\hat{\boldsymbol{s}}_1$ and $\hat{\boldsymbol{s}}_2$, then, after having defined
\begin{equation}
\hat{\boldsymbol{s}} := \hat{\boldsymbol{s}}_1 + \hat{\boldsymbol{s}}_2, \qquad \hat{\boldsymbol{\sigma}} := \frac{M_2}{M_1}\hat{\boldsymbol{s}}_1+\frac{M_1}{M_2}\hat{\boldsymbol{s}}_2,
\end{equation}
the GR relative acceleration can be written (in the center of mass frame) as
\begin{align} \label{eq:GR-acceleration-with-SO-SS}
\boldsymbol{a}_{\rm GR}= \boldsymbol{a}_{\rm EIH}+\boldsymbol{a}_{\rm SO}+\boldsymbol{a}_{\rm SS}+ \OO \left(c^{-4}\right),
\end{align}
where \cite{Poisson-Will2014}
\begin{subequations}
\begin{align}
\boldsymbol{a}_{\rm SO}&= \frac{2G}{c^2r^3}\Biggl[-\boldsymbol{v}\times\left(2\hat{\boldsymbol{s}}+\frac{3}{2}\hat{\boldsymbol{\sigma}}\right)+3\boldsymbol{n}(\boldsymbol{n}\times\boldsymbol{v})\cdot(\hat{\boldsymbol{s}}+\hat{\boldsymbol{\sigma}})\notag\\
&+3\boldsymbol{n}\times\left(\hat{\boldsymbol{s}}+\frac{\hat{\boldsymbol{\sigma}}}{2}\right)(\boldsymbol{n}\cdot\boldsymbol{v})\Biggr],
\\
\boldsymbol{a}_{\rm SS} &=-\frac{3G}{c^2r^4\mu}\Biggl\{ \hat{\boldsymbol{s}}_1 \left(\boldsymbol{n}\cdot \hat{\boldsymbol{s}}_2\right)+ \hat{\boldsymbol{s}}_2 \left(\boldsymbol{n}\cdot \hat{\boldsymbol{s}}_1\right)
\nn \\
&+ \boldsymbol{n} \Bigl[ \hat{\boldsymbol{s}}_1 \cdot \hat{\boldsymbol{s}}_2 -5 \left(\boldsymbol{n}\cdot \hat{\boldsymbol{s}}_1\right) \left(\boldsymbol{n}\cdot \hat{\boldsymbol{s}}_2\right) \Bigr] \Biggr\}.
\end{align}
\end{subequations}
By employing the above equations, we can compute the EC contributions via the parameter $\epsilon_{\rm spin}=|\boldsymbol{a}_{\rm EC}|/|\boldsymbol{a}_{\rm SO}+\boldsymbol{a}_{\rm SS}|$. We consider the same setup as before, while for the \qm{classic spins} we write $\hat{\boldsymbol{s}}_i= \alpha \frac{GM_i^2}{c}(0,0,1)$ ($i=1,2$ and $\alpha \in (0,1)$). If $\alpha = 1/2$, we obtain $10^{-20} \lesssim \epsilon_{\rm spin} \lesssim 10^{-10}$ with $M\in[6,10^{11}]M_\odot$.
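A similar back-of-the-envelope evaluation of $\epsilon_{\rm spin}$ can exploit the fact that $\boldsymbol{a}_{\rm EC}$ and $\boldsymbol{a}_{\rm SO}+\boldsymbol{a}_{\rm SS}$ share the same functional form, differing only in the numerical prefactors ($4$ vs. $2$ for the spin-orbit part, $12$ vs. $3$ for the spin-spin part) and in the spin inputs. The constants and helper function below are our own illustrative choices:

```python
import numpy as np

# Approximate SI constants (assumed values, not from the paper)
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
Msun, n_nucl = 1.989e30, 1e44
zhat = np.array([0.0, 0.0, 1.0])

def spin_accel(kso, kss, s1, s2, M1, M2, n, v, r, mu):
    """Spin-orbit + spin-spin acceleration with numerical prefactors
    (kso, kss) = (4, 12) for a_EC and (2, 3) for a_SO + a_SS."""
    s = s1 + s2
    sig = (M2 / M1) * s1 + (M1 / M2) * s2
    nv = n @ v
    a_so = kso * G / (c**2 * r**3) * (-np.cross(v, 2 * s + 1.5 * sig)
                                      + 3 * n * (np.cross(n, v) @ (s + sig))
                                      + 3 * np.cross(n, s + sig / 2) * nv)
    a_ss = -kss * G / (c**2 * r**4 * mu) * (s1 * (n @ s2) + s2 * (n @ s1)
                                            + n * (s1 @ s2
                                                   - 5 * (n @ s1) * (n @ s2)))
    return a_so + a_ss

def eps_spin(M, alpha=0.5):
    """|a_EC| / |a_SO + a_SS| for the same binary setup as in the text."""
    M1, M2 = 2 * M / 3, M / 3
    mu = M1 * M2 / M
    r = 100 * G * M / c**2
    n = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 0.5 * np.sqrt(G * M / r), 0.0])
    # Weyssenhoff (quantum) spins
    q1 = (4 * np.pi / 3) * n_nucl * hbar * (2 * G * M1 / c**2)**3 * zhat
    q2 = (4 * np.pi / 3) * n_nucl * hbar * (2 * G * M2 / c**2)**3 * zhat
    # macroscopic ("classic") spins s_hat_i = alpha G M_i^2 / c
    h1, h2 = alpha * G * M1**2 / c * zhat, alpha * G * M2**2 / c * zhat
    a_ec = spin_accel(4, 12, q1, q2, M1, M2, n, v, r, mu)
    a_gr_spin = spin_accel(2, 3, h1, h2, M1, M2, n, v, r, mu)
    return np.linalg.norm(a_ec) / np.linalg.norm(a_gr_spin)
```

For $M = 10\,M_\odot$ and $\alpha = 1/2$ this gives $\epsilon_{\rm spin}$ of order $10^{-20}$, matching the lower end of the quoted interval.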
\subsection{Links between general relativity and Einstein-Cartan theory}
\label{sec:similarities_GR_EC}
The analysis of the equations of motion performed in Sec. \ref{sec:relative_acceleration} reveals that, up to a redefinition of the spin variables, the 1PN-accurate EC and GR accelerations coincide (recall, however, the distinct nature of the quantum spin and the classical angular momentum). Although our starting point, Eq. \eqref{eq:translational_fluid_equation_2}, differs from the GR Euler equation, we find in fact that if
\begin{align}\label{Eq:GR-EC-spin-related}
\hat{\boldsymbol{s}}\quad \leftrightarrow\quad 2 \boldsymbol{s},
\end{align}
then
\begin{align} \label{Eq:GR-EC-accel-related}
\boldsymbol{a}_{\rm SO} + \boldsymbol{a}_{\rm SS}\quad \leftrightarrow\quad \boldsymbol{a}_{\rm EC}.
\end{align}
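The correspondence can be checked term by term: substituting $\hat{\boldsymbol{s}}_i \to 2\boldsymbol{s}_i$ (hence $\hat{\boldsymbol{s}} \to 2\boldsymbol{s}$ and $\hat{\boldsymbol{\sigma}} \to 2\boldsymbol{\sigma}$) into the spin-orbit acceleration gives

```latex
\begin{align}
\boldsymbol{a}_{\rm SO}\Big|_{\hat{\boldsymbol{s}}_i \to 2\boldsymbol{s}_i}
&= \frac{4G}{c^2r^3}\Biggl[-\boldsymbol{v}\times\left(2\boldsymbol{s}+\frac{3}{2}\boldsymbol{\sigma}\right)
+3\boldsymbol{n}\left(\boldsymbol{n}\times\boldsymbol{v}\right)\cdot\left(\boldsymbol{s}+\boldsymbol{\sigma}\right)
\nn \\
&+3\boldsymbol{n}\times\left(\boldsymbol{s}+\frac{\boldsymbol{\sigma}}{2}\right)\left(\boldsymbol{n}\cdot\boldsymbol{v}\right)\Biggr],
\end{align}
```

which is exactly the first bracket of Eq. \eqref{eq:EC-acceleration-binary}, while the same substitution multiplies $\boldsymbol{a}_{\rm SS}$ by an overall factor of $4$, turning its prefactor $-3G/(c^2r^4\mu)$ into the $-12G/(c^2r^4\mu)$ of the second bracket.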
Various explanations supporting Eq. \eqref{Eq:GR-EC-accel-related} can be provided. First of all, the Frenkel condition \eqref{eq:Frenkel_condition} permits us to ignore, at the 1PN level, all contributions stemming from the torsional stress-energy tensor \eqref{eq:S-tensor-fluid}. However, at higher PN orders, $\mathcal{S}^{\mu \nu}$ introduces additional corrections which can make the EC acceleration differ from the GR one. Moreover, the terms appearing in Eq. \eqref{eq:1PN-Euler-equation-explicit-2}, which involve products of the spin with its first-order derivatives and with the derivatives of the pressure, vanish owing to the reflection symmetry (see Sec. \ref{sec:spin-terms-in-Euler-equation}). The result \eqref{Eq:GR-EC-accel-related} can also be interpreted by investigating the test-particle limit of the dynamical equations, where one body is nearly at rest while its companion has a small mass with a finite spin-to-mass ratio. In fact, within this approximation, the 1PN-accurate GR acceleration agrees with the 1PN dynamics, as described by the Mathisson-Papapetrou equations, of a test particle endowed with \qm{classic spin} in the background gravitational field of a Kerr black hole \cite{Tanaka1996,Tagoshi2000,Faye2006a}. Although the motion of a spinning test particle in EC theory is described by a set of Mathisson-Papapetrou-like equations generalizing the aforementioned GR equations (see Eq. (8) in Ref. \cite{Hehl1971}), the test-mass limit of the EC and GR accelerations leads to the same effects by virtue of Eq. \eqref{Eq:GR-EC-accel-related}. This is consistent with the following two facts: (1) we have checked that, within our hypotheses and at the 1PN level, the EC Mathisson-Papapetrou-like equations reduce to the corresponding GR equations; (2) if we employ the potentials \eqref{eq:potential-Sigma-pp-limit} and \eqref{eq:potential-Sigma-i-pp-limit} along with Eq. \eqref{Eq:GR-EC-spin-related}, the metric, when evaluated for a single body having vanishing $\boldsymbol{x}_A$ and $\boldsymbol{v}_A$, reproduces the 1PN Kerr metric in harmonic coordinates. This last result goes in the direction of the findings of Ref. \cite{Arkuszewski1974}, where it was proved that in the weak-field limit the metric tensor of a static body made of Weyssenhoff dust coincides with the linearized Kerr metric.
\section{Discussion and conclusions}
\label{sec:end}
In this paper we have investigated the $N$-body problem in EC theory at the 1PN level, exploiting the Weyssenhoff fluid to model the spin effects inside matter. To achieve this objective, our methodology builds on the point-particle limit of the Weyssenhoff fluid's continuous description to derive the related equations of motion \eqref{eq:EC-body-A-equation-of-motion-2}; see Sec. \ref{sec:PPL_Continuous}. This procedure relies on three fundamental assumptions on each body (see Fig. \ref{fig:Fig1}): (1) it is reflection symmetric about its center of mass; (2) it is in stationary equilibrium; (3) the bodies are mutually well separated. During our calculations, we have proved that the equations of motion do not depend on structure-dependent terms. This is an essential clue for the validity of the effacing principle at 1PN order in EC theory, which states that the internal (gravitational) details of each extended body in the system do not influence its own dynamics as long as hypothesis (3) holds. This also permits us to neglect tidal effects among the objects, which would surely spoil hypotheses (1) and (2) as well.
The Frenkel condition \eqref{eq:Frenkel_condition} provides a physical meaning to the Weyssenhoff fluid model and leads to a drastic simplification of the ensuing calculations. More generally, this implies that assumption \eqref{eq:gauge_EC} is vital to make the EC framework coherent. As one can observe, Eq. \eqref{eq:Frenkel_condition} leads to a wealth of beneficial consequences, not only for purely mathematical and numerical computations (see Ref. \cite{Paper2} for details), but also from a conceptual perspective. In fact, it can be exploited as a criterion to select, among all possible EC models, those endowed with physical content. It would be interesting to investigate this particular class of EC theories and check whether, besides the effacing principle, the equivalence principle (in its various formulations) holds (see Ref. \cite{Dicasola2015} for a comprehensive review of the different formulations and meanings of the equivalence principle). Some progress in this direction has already been made by von der Heyde \cite{Vonderhyde1975}. This topic plays a paramount role in building up solid extensions of GR while remaining in agreement with its foundational principles.
In Sec. \ref{sec:binary_system}, we have applied our findings to binary systems. We have numerically compared the EC spin contributions to the GR bulk dynamics via $\epsilon$, and then to the GR macroscopic angular momentum via $\epsilon_{\rm spin}$, obtaining $10^{-23} \lesssim \epsilon \lesssim 10^{-13}$ and $10^{-20} \lesssim \epsilon_{\rm spin} \lesssim 10^{-10}$ for black hole masses $M\in[6,10^{11}]M_\odot$. The effect remains physically very small as long as the bodies are widely separated (or, in gravitational-wave terminology, in the inspiral stage). Furthermore, we have found that at 1PN order the GR and EC treatments are conceptually equivalent up to a constant factor relating the quantum and \qm{classic} spins (cf. Eqs. \eqref{Eq:GR-EC-spin-related} and \eqref{Eq:GR-EC-accel-related}). Nevertheless, we strongly expect this equivalence to break down at higher PN orders, because EC theory introduces new terms (e.g., $\mathcal{S}_{\alpha\beta}$) stemming \emph{de facto} from its geometrical structure (see the conclusions of Ref. \cite{Paper2} for similar discussions).
Another important cross-check is having verified that at the 1PN level the GR Mathisson-Papapetrou equations are, surprisingly, recovered also in EC theory, even though the two approaches start from essentially dissimilar hypotheses. Moreover, we obtain, as in GR, the 1PN approximation of the Kerr metric. However, the latter result possesses two distinct physical interpretations in the GR and EC frameworks, albeit they mathematically reproduce the same metric (up to a normalization factor).
The outcomes of our paper can be compared with those obtained in the literature in the broad framework of general relativistic theories with torsion. In fact, the PN scheme has allowed the authors of Refs. \cite{Schweizer1979,Smalley1980} to discover that GR and teleparallel theories of gravitation (where curvature vanishes) agree at 1PN level, but differ at higher orders. The same conclusion holds also for the 1PN generation of the gravitational radiation, as discussed in Ref. \cite{Schweizer1980}. In particular, it is shown that the dipole catastrophe, which afflicts many alternative metric theories of gravity, is absent. The PN formalism has been applied also to EC theory by Castagnino and collaborators \cite{Castagnino1985,Castagnino1987}, who have employed the ideal spinning fluid model to derive the 1PN dynamical equations of a matter source and a test particle moving in the vacuum region outside the source distribution. In this approach, the study of the 1PN acceleration of the test particle can, in principle, lead to the possibility of distinguishing GR and EC theories. This pattern differs from the one adopted in this paper, where we have employed the point-particle limit to describe a system of bodies subject to their mutual gravitational attraction and, in addition, our starting point is represented by the 1PN Euler equation \eqref{eq:1PN-Euler-equation-explicit-2}. We also mention the paper of Gladchenko \& Zhytnikov \cite{Gladchenko1994}, who have considered the 1PN approximation of the quadratic Poincar\'e gauge theory of gravitation in its most general form, where torsion quanta are allowed. Here, differently from our study, their existence must be constrained via classical gravity effects, like light deflection and time delay tests. Last, as more recent applications we point out some works on PN and parametrized PN (PPN) expansions performed in a general class of teleparallel gravity theories \cite{Ualikhanova2019,Emtsova2020,Gonzalez2022}. 
It emerges that the two PPN parameters $\beta$ and $\gamma$ allow one to highlight the differences with GR, whereas in the limit of $f(T)$ theories ($T$ being the torsion scalar) indistinguishability from GR is again restored.
Our findings have shown that in EC theory all spin-related quantities come naturally out of the theory. Therefore, it might be interesting to calculate the macroscopic angular momentum in GR by resorting to the EC pattern (similarly to the analysis of Refs. \cite{Ray1982a,Ray1982b}). The great advantage of this approach lies in the possibility of carrying out the calculations consistently, without losing the physical nuances.
The natural next step after this study is its Lagrangian formulation, together with the analysis of the related first integrals. Although at 1PN order the EC and GR accelerations coincide, important deviations are likely to emerge in the PN analysis of the rotational equation \eqref{eq:rotational_fluid_equation}. These topics will be considered in a separate paper.
\section*{Acknowledgements}
The authors are grateful to Gruppo Nazionale di Fisica Matematica of Istituto Nazionale di Alta Matematica for partial support. E.B. acknowledges the support of the Austrian Science Fund (FWF) grant P32086. V.D.F. thanks Prof. Antonio Romano for the stimulating discussions on the internal angular momentum in continuous media. V.D.F. acknowledges the support of INFN {\it sezione di Napoli}, {\it iniziative specifiche} TEONGRAV.
Title:
A study on the Clustering Properties of Radio-Selected sources in the Lockman Hole Region at 325 MHz
Abstract: Studying the spatial distribution of extragalactic source populations is
vital in understanding the matter distribution in the Universe. It also enables
understanding the cosmological evolution of dark matter density fields and the
relationship between dark matter and luminous matter. Clustering studies are
also required for EoR foreground studies since it affects the relevant angular
scales. This paper investigates the angular and spatial clustering properties
and the bias parameter of radio-selected sources in the Lockman Hole field at
325 MHz. The data probes sources with fluxes $\gtrsim$0.3 mJy within a radius
of 1.8$^\circ$ around the phase center of a $6^\circ \times 6^\circ$ mosaic.
Based on their radio luminosity, the sources are classified into Active
Galactic Nuclei (AGNs) and Star-Forming Galaxies (SFGs). Clustering and bias
parameters are determined for the combined populations and the classified
sources. The spatial correlation length and the bias of AGNs are greater than
SFGs -- indicating that more massive haloes host the former. This study is the
first reported estimate of the clustering property of sources at 325 MHz,
intermediate between the preexisting studies at high and low-frequency bands.
It also probes a well-studied deep field at an unexplored frequency with
moderate depth and area. Clustering studies require such observations along
different lines of sight, with various fields and data sets across frequencies
to avoid cosmic variance and systematics. Thus, an extragalactic deep field has
been studied in this work to contribute to this knowledge.
https://export.arxiv.org/pdf/2208.00992
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
galaxies: active -- cosmology: large-scale structure of Universe -- cosmology: observations -- radio continuum: galaxies
\end{keywords}
\section{Introduction}
Observations of the extragalactic sky at radio frequencies are essential for the study of both large-scale structures (LSS) and the different populations of sources present in the Universe. The initial research on LSS using clustering began with reports of weak clustering signals from nearby sources \citep{seldner1981, Shaver1989}. With the advent of large-area surveys like FIRST (Faint Images of the Radio Sky at Twenty-Centimeters, \citealt{FIRST}) and NVSS (NRAO VLA Sky Survey, \citealt{Condon1998}), the studies became more precise due to the large number of sources detected in these surveys.
The extragalactic sky at radio frequencies is dominated by sources below mJy flux densities (at frequencies from MHz to a few GHz; see for example \citealt{simpson2006, mignano, seymour, Smolic2008, prandoni2018}). The source population can be divided into Active Galactic Nuclei (AGNs) and Star-Forming Galaxies (SFGs) \citep{Condon1989, Afonso_2005, simpson2006, bonzini, padovani2015, vernstrom2016}. The dominant sources at these fluxes are SFGs, AGNs of Fanaroff-Riley type I (FR I, \citealt{fanaroff_riley}), and radio-quiet quasars \citep{Padovani2016}. The emission mechanism dominating these populations at low frequencies ($\lesssim$10 GHz) is synchrotron emission, modeled as a power law of the form $S_{\nu} \propto \nu^{-\alpha}$, where $\alpha$ is the spectral index. Studying the extragalactic population using synchrotron emission can help trace the evolution of the LSS in the Universe. It also helps to map its dependence on various astrophysical and cosmological parameters \citep{blake_wall, lindsay_first,hale_cosmos}. Radio continuum surveys, both wide and deep, help constrain the overall behavior of cosmological parameters and study their evolution and relation to the environment \citep{best, ineson, hardcastle2016, rivera, williams2018}. The clustering pattern of radio sources (AGNs and SFGs) can be studied to analyze the evolution of the matter density distribution. Clustering measurements for these sources also provide a tool for tracing the underlying dark matter distribution \citep{PressSchechter1974, lacey_cole1993, lacey_cole_1994, sheth_tormen, Mo2010}. The distribution of radio sources derived from clustering is related to the matter power spectrum and thus provides insights for constraining the cosmological parameters that define the Universe.
The relationship of the various galaxy populations with the underlying dark matter distribution also helps assess the influence of the environment on their evolution. Clustering studies are also required for extragalactic foreground characterization for EoR and post-EoR science. Spatial clustering of extragalactic sources with flux densities above the sub-mJy level (at $\sim$150 MHz) dominates fluctuations at angular scales in the arcminute range. Thus, their modeling and removal allow one to detect fluctuations of the 21-cm signal on the relevant angular scales.
Clustering is defined as the excess probability, above that of a random distribution (taken to be Poisson for astrophysical sources), of finding a galaxy within a certain scale of a randomly selected galaxy. This is quantified by the two-point correlation function \citep{Peebles1980}. The angular two-point correlation function has been studied in optical surveys like the 2dF Galaxy Redshift Survey \citep{Peacock2001, percival, norberg}, the Sloan Digital Sky Survey \citep{einstein_bao,sdss_wang,sdss_simoni,Shi2016,sdss_10_bao} and the Dark Energy Survey \citep{des}. Optical surveys provide redshift information for sources through either photometry or spectroscopy. This information can be used to obtain the spatial correlation function and the bias parameter \citep{2df_spatial, Heinis2009, boss_tomography}. But for optical surveys, observing a large fraction of the sky is expensive in terms of cost and time. Additionally, optical surveys suffer from dust obscuration for high-redshift sources. At radio wavelengths, however, the incoming radiation from these sources does not suffer dust attenuation and thus can be used as a means to probe such high-$z$ sources \citep{Hao2011, cucciati, highz, Jarvis2016, saxena}. Highly sensitive radio telescopes like GMRT \citep{Swarup1991}, ASKAP \citep{askap}, and LOFAR \citep{lofar} are also able to survey large areas of the sky significantly faster. They are thus efficient for conducting large-area surveys in less time than older systems while detecting lower flux densities. Therefore, radio surveys provide an efficient method for investigating the clustering of the different AGN populations. Additionally, at low frequencies ($\lesssim$ 1.4 GHz), synchrotron radiation from SFGs provides insight into their star-formation rates \citep{Bell2003, Jarvis2010, Davies2017, Delhaize2017, Gurkan2018}. These insights have led to clustering studies of SFGs as well at low frequencies \citep{maglio2017, arnab2020}.
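In practice, the angular two-point correlation function is commonly estimated with the Landy--Szalay estimator, $w(\theta)=(DD-2DR+RR)/RR$, built from normalized data-data, data-random, and random-random pair counts. The estimator used by this paper is not specified at this point in the text, so the flat-sky sketch below with mock Poisson catalogues is purely illustrative; for an unclustered field, $w(\theta)$ should scatter around zero:

```python
# Illustrative sketch (not the paper's pipeline): Landy-Szalay estimator
# w(theta) = (DD - 2*DR + RR) / RR on a small flat-sky patch, with
# normalized pair counts and mock Poisson (unclustered) catalogues.
import numpy as np

rng = np.random.default_rng(42)

def pair_counts(a, b, bins, auto=False):
    """Normalized histogram of pair separations between point sets a and b."""
    d = np.hypot(a[:, None, 0] - b[None, :, 0], a[:, None, 1] - b[None, :, 1])
    if auto:
        iu = np.triu_indices(len(a), k=1)
        d = d[iu]                           # count each unordered pair once
        norm = len(a) * (len(a) - 1) / 2
    else:
        d = d.ravel()
        norm = len(a) * len(b)
    return np.histogram(d, bins=bins)[0] / norm

# mock "data" (Poisson) and random catalogues in a 1 deg x 1 deg patch
data = rng.uniform(0.0, 1.0, size=(500, 2))
rand = rng.uniform(0.0, 1.0, size=(2000, 2))

bins = np.linspace(0.05, 0.5, 6)            # angular separations in degrees
DD = pair_counts(data, data, bins, auto=True)
RR = pair_counts(rand, rand, bins, auto=True)
DR = pair_counts(data, rand, bins)

w = (DD - 2 * DR + RR) / RR                 # Landy-Szalay estimator
```

Because DD, DR, and RR share the same survey window, edge effects largely cancel in the ratio, which is why this estimator is preferred over the naive $DD/RR-1$.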
Through clustering studies of radio sources, deep radio surveys help reveal how the luminous matter distribution traces the underlying dark matter distribution. In addition, the two-point correlation functions can provide other information relevant for cosmology by fitting parameterized models to the data to obtain acceptable ranges of parameters. These include the bias parameter, the dark energy equation of state, and $\Omega_{m}$ (the total matter density), to name a few \citep{Peebles1980, camera, raccanelli2012, planck2013, allison}.
Extensive observations at multiple frequencies can help understand the relationship of the various source populations with their host haloes and the individual structures (stars) present. It has been inferred from clustering observations that AGNs are primarily hosted in more massive haloes than SFGs and are also more strongly clustered \citep{gilli, Donoso2014, maglio2017, hale_cosmos}. While AGNs are more clustered than SFGs, the clustering of the latter appears to depend on the star-formation rate. SFGs with higher star-formation rates are more clustered than those with lower rates (since the star-formation rate is correlated with stellar mass, which in turn is strongly correlated with the mass of the host halo; see \citet{magnelli, Delvecchio2021, bonato2021lofar} and references therein). Studying the large-scale distribution of dark matter through the clustering pattern of luminous baryonic matter is vital for understanding structure formation.
From linear perturbation theory, galaxies are ``biased'' tracers of the underlying matter density field, since they are mostly formed at the peaks of the matter distribution \citep{Peebles1980}. The bias parameter $b$ relates the overdensity of a tracer, $\delta$, to the underlying dark matter overdensity $\delta_{DM}$ via $\delta = b\,\delta_{DM}$. The square of the linear bias parameter is the ratio between the galaxy correlation function and the dark matter correlation function (\citet{Peebles1980, Kaiser1984, Bardeen1986}; see also \citet{DESJACQUES20181} for a recent review). Measurements of the bias parameter from radio surveys probe the underlying cosmology governing the LSS, as well as dark energy, modified gravity, and non-Gaussianity of the primordial density fluctuations \citep{BLAKE20041063, CARILLI2004979, seljak, Raccanelli_2015, abdalla2015cosmology}.
Analysis of the clustering pattern of extragalactic sources is also important for observations targeting the 21-cm signal of neutral hydrogen (HI) from the early Universe. Observations of these weak high-redshift signals are hindered by foregrounds many orders of magnitude brighter -- namely diffuse galactic synchrotron emission \citep{Shaver1999}, free-free emission from both within the Galaxy and extragalactic sources \citep{Cooray2004}, faint radio-loud quasars \citep{DiMatteo2002}, and extragalactic point sources \citep{dimatteo2004}. \citet{dimatteo2004} showed that spatial clustering of extragalactic sources with flux density $\gtrsim$0.1 mJy at 150 MHz (the equivalent flux density at 325 MHz is $\sim$0.05 mJy) dominates fluctuations at angular scales $\theta \gtrsim$1$\arcmin$. Thus, their modeling and removal allow one to detect fluctuations of the 21-cm signal on the relevant angular scales, and their statistical modeling is necessary to understand and quantify the effects of bright foregrounds. Many studies have modeled the extragalactic source counts as a single power law or smooth polynomial \citep{Intema16,franzen2019} and the spatial distribution of sources as Poissonian \citep{ali08} or as having simple power-law clustering. However, a Poisson distribution of foreground sources is very simplistic and may affect signal recovery for sensitive observations like those targeting the EoR signal \citep{ali08, Trott_2016}. Thus more observations are required for low-frequency estimates of the clustering pattern of compact sources.
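The conversion between the two flux-density thresholds quoted above follows from the synchrotron power law $S_{\nu} \propto \nu^{-\alpha}$. A one-line check, assuming a typical spectral index $\alpha \approx 0.8$ (our assumption; the value used in the text is not stated):

```python
# Scale the 0.1 mJy threshold from 150 MHz to 325 MHz using
# S_nu ~ nu^(-alpha); alpha = 0.8 is an assumed typical synchrotron index.
alpha = 0.8
S_150 = 0.1                                    # mJy at 150 MHz
S_325 = S_150 * (325.0 / 150.0) ** (-alpha)    # ~0.05 mJy at 325 MHz
```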
A number of studies have been carried out in recent years for observational determination of the clustering of radio-selected sources (for instance \citet{Cress_1996, overzier2003, lindsay_first,maglio2017,hale_cosmos,Hale19,rana_tgss,arnab2020,lotss_clustering}). However, more such studies are required for modeling the influence of different processes on the formation and evolution of LSS in the Universe. The sample used for such analyses should not be limited to small deep fields, since the limited number of sources makes clustering studies of different populations (AGNs/SFGs) sample-variance limited. Studies of the statistics of the source distribution are also essential for understanding the matter distribution across space. Thus, observations using sensitive instruments are required to conduct more detailed studies. At 1.4 GHz and above, many clustering studies exist (for instance \citet{Cress_1996, overzier2003, lindsay_first, maglio2017, hale_cosmos, lh_clustering_1.4}); however, extensive studies at low frequencies (and over wider areas) are still required. The TIFR GMRT Sky Survey (TGSS) \citep{Intema16} is a wide-area survey of the northern sky at 150 MHz. But the available catalog from the TGSS Alternate Data Release (TGSS-ADR) suggests that the data is systematics limited. It is thus unsuitable for large-scale clustering measurements \citep{Tiwari_2019}. The ongoing LOFAR Two-metre Sky Survey (LoTSS, \citealt{lotss_dr1}) at a central frequency of 144 MHz is expected to have very high sensitivity and cover a very wide area, and thus provide excellent data for studying source-distribution statistics at low frequencies \citep{lotss_clustering}. However, to constrain cosmological parameters, consensus on the overall behavior of sources along different lines of sight and across frequencies is also required, and for this purpose data sets like the one analyzed here become important \citep{BLAKE20041063, CARILLI2004979, Norris2013}.
Radio data has the advantage that even flux-limited samples contain high-z sources \citep{Dunlop1990}. Thus, using the entire radio band provides insights into physical processes driving the evolution of different galaxy populations and helps create a coherent picture of the matter distribution in the Universe. Therefore, studies at radio frequencies would help constrain the cosmology underlying structure formation and evolution.
The recent study of the clustering of the ELAIS-N1 field centered at 400 MHz using uGMRT by \citet{arnab2020} was extremely sensitive, with an RMS ($\sigma_{400}$)\footnote{Unless otherwise stated, $\sigma_{\rm frequency}$ is the RMS sensitivity at the quoted frequency throughout the text.} of 15 $\mu$Jy beam$^{-1}$, but the area covered ($\sim$1.8 deg$^2$) was significantly smaller than in this work. This smaller field of view makes measurement of clustering properties on large angular scales impossible. Smaller areas also lead to smaller sample sizes for statistics, resulting in studies limited by cosmic variance. Another study, of the HETDEX spring field at 144 MHz (using Data Release 1 of the LOFAR Two-metre Sky Survey) by \citet{lotss_clustering}, has a sky coverage of $\sim$350 square degrees, but the mean $\sigma_{150}$ is $\sim$91 $\mu$Jy beam$^{-1}$; despite the sensitivity achieved in the survey, the analysis by \citet{lotss_clustering} is limited to flux densities above 2 mJy. Motivated by the requirement for a study in the intermediate range (in terms of flux density, area covered, and frequency), this work aims to quantify the clustering of the sources detected in the Lockman Hole field. The data analyzed here fall in this intermediate category, with a survey area of $\sim$6 deg$^2$ and $\sigma_{325} \sim$ 50 $\mu$Jy beam$^{-1}$. They are thus well suited for clustering studies, with a sizeable area of the sky covered (so that large angular scales can be probed) and a moderately deep flux density threshold (so that catalogue fluxes remain reliable down to low values). Additionally, the Lockman Hole region has excellent optical coverage through surveys like SDSS and SWIRE; thus, associated redshift information is available to study spatial clustering and bias parameters. This frequency also has the advantage of fewer systematics than the 150 MHz band while still being sensitive to the low-frequency characteristics of sources. 
New data releases for the LoTSS surveys promise greater sensitivity and source characterization over various deep fields targeted by these observations \citep{lotss2019, tasse2020}; all these observational data at multiple frequencies will put more precise constraints on the various parameters governing the structure formation and evolution.
This work uses archival GMRT data at 325 MHz covering a field of view of $6^\circ \times 6^\circ$ through multiple pointings. The data reduction procedure is described in detail in \citet{aishrila1}; the source catalogue obtained there is used here for the clustering analyses. However, the entire dataset could not be used, owing to residual systematics that limit the measurements at large angular scales. The clustering pattern and linear bias parameter are determined for the whole population and for the sub-populations, i.e., AGNs and SFGs, separately. The previous work by \citet{aishrila1} determined the flux distribution of sources (i.e., the differential source count) and characterized the spatial properties and the angular power spectrum of the diffuse galactic synchrotron emission using the same data.
This paper is arranged in the following manner: Section \ref{observation} briefly outlines the radio data as well as the various optical data used, and describes the classification of sources into sub-populations using their radio luminosities. The following section, i.e., Section \ref{all_correlation}, quantifies the clustering - on both angular and spatial scales - and calculates the linear bias for all the detected sources. Section \ref{sep_correlation} discusses the clustering properties and bias of the classified populations, and Section \ref{discussion} briefly discusses the choice of the field of view for this analysis. Finally, the paper is concluded in Section \ref{conclusion}.
For this work, the best-fitting cosmological parameters obtained from the Planck 2018 data \citep{Planck2018I} have been used. The values are $\mathrm{\Omega_{M}}$ = 0.31, $\mathrm{\Omega_{\Lambda}}$ = 0.68, $\mathrm{\sigma_{8}}$ = 0.811, \& $H_{0}$ = 67.36 km s$^{-1}$ Mpc$^{-1}$. The spectral index used for scaling flux densities between frequencies (with $S_\nu \propto \nu^{-\alpha}$) is taken as $\alpha$=0.8.
\section{Observations and Source Catalogues}
\label{observation}
This work uses 325 MHz GMRT archival data of the Lockman Hole region. The details of the data reduction procedure have been described in \citet{aishrila1}; here they are summarised very briefly. The data were reduced using the SPAM pipeline \citep{Intema2009, Intema2014, Intema16}, which performs both direction-independent and direction-dependent calibration. The observation had 23 separate pointings, centered at ($\alpha_{2000}=10^{h}48^{m}00^{s},\delta_{2000}=58^{\circ}08'00\arcsec$), each of which was reduced separately. The final image is a $6^\circ \times 6^\circ$ mosaic having off-source RMS of 50 $\mathrm{\mu Jy\, beam^{-1}}$ at the central frequency. Figure \ref{PB} shows the primary-beam-corrected final mosaic image of the observed region. This image was used to extract a source catalogue using the Python Blob Detection and Source Finder \footnote{\url{https://www.astron.nl/citt/pybdsf/}}(P{\tiny Y}BDSF, \citet{Mohan2015}) above a minimum flux density $S^{\rm cut}_{325} = 0.3$ mJy (i.e., above 6$\sigma_{325}$). A total of 6186 sources were detected and catalogued. The readers are referred to \citet{aishrila1} for details on catalogue creation and subsequent comparison with previous observations.
The redshift information for the sources is derived by matching with optical data from the Sloan Digital Sky Survey (SDSS)\footnote{\url{https://www.sdss.org/}} and the Herschel Extragalactic Legacy Project (HELP)\footnote{\url{http://herschel.sussex.ac.uk/}}\textsuperscript{,}\footnote{\url{https://github.com/H-E-L-P}} \citep{help1}. The SDSS \citep{SDSSI, SDSSIII} has been mapping the northern sky in the form of optical images as well as optical and near-infrared spectroscopy since 1998. The latest data release (DR16) is from the fourth phase of the survey (SDSS-IV, \citet{Blanton2017}). It includes the results of various survey components like the extended Baryon Oscillation Spectroscopic Survey (eBOSS), the SPectroscopic identification of ERosita Sources (SPIDERS), the Apache Point Observatory Galaxy Evolution Experiment 2 (APOGEE-2), etc. The surveys have measured redshifts of a few million galaxies and have also obtained the highest-precision value of the Hubble parameter $H(z)$ to date \citep{sdss_hz}. An SQL query was run on the CasJobs\footnote{\url{https://skyserver.sdss.org/casjobs/}} server to obtain the optical data corresponding to the radio catalogue, and the catalogue thus obtained was used for further analysis.
HELP has produced optical to near-infrared astronomical catalogues for 23 extragalactic fields, including the Lockman Hole field. The final catalogue consists of $\sim$170 million objects obtained from the positional cross-match of 51 surveys \citep{help1}. The performance of the various templates and methods used for obtaining the photometric redshifts is described in \citet{help2, duncan2018}. Each of the individual fields is provided a separate database on the \textit{Herschel Database in Marseille} site\footnote{\url{https://hedam.lam.fr/HELP/}}, where various products, field-wise and category-wise, are made available via "data management units (DMUs)". For the Lockman Hole field, the total area covered by the various surveys is 22.41 square degrees, with 1377139 photometric redshift objects. The Lockman Hole field is covered well by the Spitzer Wide-area InfraRed Extragalactic Legacy Survey (SWIRE), with photometric redshifts obtained as discussed in \citet{robinson2008,robinson2012}. However, additional data from other survey catalogues like the Isaac Newton Telescope - Wide Field Camera (INT-WFC, \citet{int}), Red Cluster Sequence Lensing Survey (RCSLenS, \citet{rcslens}), Panoramic Survey Telescope and Rapid Response System 3pi Steradian Survey (PanSTARRS-3SS, \citet{panstarrs1}), Spitzer Adaptation of the Red-sequence Cluster Survey (SpARCS, \citet{sparcs}), UKIRT Infrared Deep Sky Survey - Deep Extragalactic Survey (UKIDSS-DXS, \citet{ukidss}), Spitzer Extragalactic Representative Volume Survey (SERVS, \citet{servs}) and UKIRT Hemisphere Survey (UHS, \citet{uhs}) resulted in more sources being detected and better photometric redshift determination. The publicly available photometric catalogue for the Lockman Hole region was used to determine the redshift information for the matched sources. The source catalogue derived from the 325 MHz observation is pre-processed and matched to add redshift information before further analysis. 
The following subsections describe these steps in detail.
\subsection{Merging multi-component sources}
The final map produced has a resolution of 9\arcsec. For such high-resolution maps, the source finder might resolve an extended source into multiple components. Such sources are predominantly radio galaxies that have a core at the center and hotspots along the direction of the jet(s) or at their ends; these structures may be catalogued as separate sources \citep{maglio98, prandoni2018, lofar_association, pink}. Using the NVSS catalogue, \citet{blake_wall_2002a} showed that large radio sources with unmerged components can significantly alter clustering measurements. Thus, for unbiased estimation of source clustering, such sources need to be identified and merged properly. A strong correlation between the angular extent of radio sources and their flux densities was found by \citet{Oort1987}: the angular extent ($\theta$) of a source is related to its flux density (S) by the $\theta$-S relation, $\theta \propto \sqrt{\textrm S}$. This relation was used to identify resolved components of multi-component sources in surveys like FIRST \citep{maglio98}.
Multi-component sources resolved as separate entries in the Lockman Hole catalogue are identified using two criteria. The maximum separation between a pair of sources (from the $\theta$-S relation) is given by $\mathrm{\theta_{max} = 20\sqrt{S_{total}}}$, where $\mathrm{S_{total}}$ is the summed flux density of the source pair \citep{Huynh_2005, prandoni2018,arnab2020}. Pairs satisfying this criterion have been considered the same source if their flux densities differ by less than a factor of 4 \citep{Huynh_2005}.
Figure \ref{merge} shows the summed flux density of nearest-neighbour pairs from the 325 MHz catalogue as a function of the separation between them. Above the black dotted line, the sources have separations less than $\mathrm{\theta_{max}}$ as defined above. Blue triangles are sources whose flux densities differ by less than a factor of 4. The two criteria together identified 683 sources (out of 6186 in total) with two or more components. After merging multi-component sources and filtering out random associations, 5489 sources remain in the revised catalogue. The positions of the merged sources are the flux-weighted mean positions of their components.
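The two merging criteria can be sketched in a few lines of Python (a minimal illustration and not the actual pipeline; the function names are ours, and we assume separations in arcsec and flux densities in mJy):

```python
import numpy as np

def is_single_source(sep_arcsec, s1_mjy, s2_mjy):
    """Apply the two merging criteria to a nearest-neighbour pair:
    (1) separation below theta_max = 20 * sqrt(S_total), and
    (2) component flux densities within a factor of 4 of each other."""
    s_total = s1_mjy + s2_mjy
    theta_max = 20.0 * np.sqrt(s_total)              # arcsec
    ratio = max(s1_mjy, s2_mjy) / min(s1_mjy, s2_mjy)
    return (sep_arcsec < theta_max) and (ratio < 4.0)

def flux_weighted_position(ra_deg, dec_deg, flux):
    """Position of a merged source: flux-weighted mean of its components."""
    w = np.asarray(flux, dtype=float)
    w /= w.sum()
    return float(np.sum(w * np.asarray(ra_deg))), float(np.sum(w * np.asarray(dec_deg)))
```

For example, a pair separated by 30\arcsec\ with component flux densities of 2 and 3 mJy has $\mathrm{\theta_{max}} \approx 44.7\arcsec$ and a flux ratio of 1.5, and would therefore be merged.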
\subsection{Adding Redshift Information}
\label{redshift_dist}
As already mentioned, optical cross-identification for the detected sources has been done using the HELP and SDSS catalogues, with a positional cross-match of 9\arcsec matching radius (the resolution of this observation). Since the positional accuracy of the catalogue is better than 1\arcsec \citep{aishrila1}, a nearest-neighbour search algorithm was used to cross-match sources with the optical catalogue within a search radius $r_s$. The rate of contamination expected due to chance proximity to optical sources is given by \citep{lindsay_first}:
\begin{equation*}
P_{c} = \pi r_{s}^{2}\sigma_{opt}
\end{equation*}
where $\sigma_{opt}$ is the surface density of the optical catalogue. For a surface density of 1.4$\times$10$^{4}$ deg$^{-2}$, a matching radius $r_s$ = 9\arcsec gives a contamination of \textless10\%. This radius was thus used to ensure valid optical identification of a large number of radio sources.
Large FoVs are helpful for observational studies of LSS like the present one, since the presence of a large number of sources provides statistically robust results and also reduces the effect of cosmic variance. Accordingly, the data used for this work had an FoV of $\rm 6^\circ \times 6^\circ$. However, cross-matching with the optical catalogues produced matches for only 70\% of the sources over the entire FoV, leaving 30\% of the sources unclassified. Further investigation also revealed the presence of unknown systematics, which resulted in excess correlation and deviation from power-law behavior at large angles. The most probable cause for such a deviation is either the presence of many sources with no redshifts at the field edge or the presence of artifacts at large distances from the phase center. Repeating the analysis over a reduced field area increased the percentage of optical matches and reduced the observed deviation from power-law behavior. Thus, the deviation has been attributed to the former cause.
Hence, the clustering properties of sources at large angular scales are not reliable for this observation. The analysis was therefore restricted to a smaller area of the Lockman Hole region around the phase center, and large-scale clustering properties could not be estimated. Taking a cut-off of 1.8$^\circ$ radius around the phase center resulted in $\sim$95\% of the sources having an optical counterpart. Hence, it is expected that the remaining unclassified sources would not affect the signal significantly (the choice of the FoV cut-off is discussed in detail in Section \ref{discussion}). This FoV cut-off yielded 2555 sources in the radio catalogue, out of which 2424 sources have optical matches within the aforementioned match radius. This is shown in Figure \ref{optical_ovreplot}, where the area considered is represented by the black dot-dashed circle and the "x" marks denote the sources in the radio catalogue; the blue circles represent the sources without any optical cross-matches in either photometry or spectroscopy. A total of 2415 photometric and 664 spectroscopic matches were obtained after the cross-match with the optical catalogues. Of these, 650 sources had both photometric and spectroscopic detections; for such cases, the spectroscopic identifications were taken. Combined photometric and spectroscopic identifications were obtained for a total of 2424 sources, of which 27 were discarded from this analysis since they were nearby objects with zero or negative redshifts \citep{lindsay_first}. The final sample thus had 2397 sources, which is $\sim$94\% of the total catalogued radio sources within 1.8$^\circ$ radius of the phase center. The redshift matching information for both the full and restricted catalogues has been summarised in Table \ref{z_summary}. The redshift information from the optical catalogues was incorporated for these sources and used for further analysis. 
Figure \ref{logz} shows the distribution of redshifts for the sources detected in both HELP and SDSS. In the left panel, the photometric redshifts are plotted as a function of the spectroscopic redshifts. As can be seen, the two values are in reasonable agreement for most cases. Additionally, to check the reliability of the obtained photometric redshifts, following \citet{duncan2018} the quantity $\mathrm{\mid z_{phot}-z_{spec}\mid /(1+z_{spec})}$ is plotted as a function of the spectroscopic redshift (right panel of Figure \ref{logz}), with outliers defined as values \textgreater0.2. For this work, the drastic outliers are the points with values \textgreater0.5. The fraction of outliers with drastically different photometric and spectroscopic redshifts is $\sim$10\%. While a detailed investigation is beyond the scope of this work, these outliers may arise from the combination of uncertainties in the different surveys used in the HELP catalogue. The median redshift of all the sources with redshift information comes out to be 0.78. The top panel of Figure \ref{zhist} shows the distribution N($z$) as a function of source redshift, with the black dashed line indicating the median redshift.
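The outlier statistic above is straightforward to compute; the following is a minimal sketch with toy redshift arrays (not the actual catalogue values):

```python
import numpy as np

def photz_outlier_stats(z_phot, z_spec, threshold=0.2):
    """Normalised photo-z deviation |z_phot - z_spec| / (1 + z_spec),
    flagging sources above the given threshold as outliers."""
    z_phot = np.asarray(z_phot, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)
    dz = np.abs(z_phot - z_spec) / (1.0 + z_spec)
    outlier = dz > threshold
    return dz, outlier, outlier.mean()

# toy example: one catastrophic outlier among four sources
dz, flag, frac = photz_outlier_stats([0.50, 0.81, 1.95, 2.0],
                                     [0.48, 0.78, 1.90, 0.5])
```

In this toy case only the last source exceeds the 0.2 threshold, giving an outlier fraction of 25\%.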
\begin{table*} %
\begin{center}
\caption{Summary of number of sources with redshift information}
\label{z_summary}
\begin{tabular}[\columnwidth]{lccccc}
\hline
\hline
Area & Number of sources & Redshift matches & Percentage of matches & AGNs & SFGs \\
\hline
$6^\circ \times 6^\circ$ & 5489 & 3628 & 66 & 2149 & 1479 \\
$3.6^\circ$ diameter around phase center & 2555 & 2397 & 95 & 1821 & 576\\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Classification using Radio Luminosity Function}
\label{classify}
The catalogued sources with optical counterparts were divided into AGNs and SFGs using their respective radio luminosities. Assuming pure luminosity evolution, the luminosity function evolves approximately as $\mathrm{(1+z)^{2.5}}$ and $\mathrm{(1+z)^{1.2}}$ for SFGs and AGNs respectively \citep{mcalpine2013}. The value for AGNs differs slightly from those of \citet{smolic_agn} and \citet{ocran_agn} for the COSMOS field at 3 GHz and the ELAIS N1 field at 610 MHz respectively; however, it is consistent with that of \citet{prescott_gama} for the GAMA fields. The value for the redshift evolution of SFGs also agrees broadly with the GAMA fields \citep{prescott_gama} and the ELAIS N1 field \citep{ocran_sfg}.
It has been shown in \citet{maglio2014, maglio2016, maglio2017} that radio-selected galaxies powered by AGNs dominate beyond a radio power $\mathrm{P_{cross}}(z)$, which is related to the redshift $z$ as:
\begin{equation}
\mathrm{log_{10}P_{cross}=log_{10}P_{0,cross}}+z
\label{logp}
\end{equation}
up to $z \sim$1.8, with P (at 1.4 GHz) in W $\mathrm{Hz^{-1} sr^{-1}}$. In the local Universe, the value of $\mathrm{P_{cross}}$ is $\mathrm{10^{21.7}}$ (W $\mathrm{Hz^{-1} sr^{-1}}$), coinciding with the observed break in the radio luminosity function of SFGs \citep{Maglio2002}, beyond which their luminosity function decreases rapidly and their numbers are greatly reduced. Thus the possibility of contamination between the two populations of radio sources is very low with the radio-luminosity-based selection criterion \citep{maglio2014, maglio2017}.
The radio luminosity has been calculated for the sources from their flux densities as \citep{maglio2014}:
\begin{equation}
\mathrm{P_{1.4GHZ} = 4 \pi S_{1.4GHz}D^{2}(1+z)^{3+\alpha}}
\label{lum}
\end{equation}
where D is the angular diameter distance and $\alpha$ is the spectral index of the sources in the catalogue. Individual spectral indices were not used, since measured values are not available for all sources. A median value of $\alpha = 0.8$ was derived by matching with high-frequency catalogues in \citet{aishrila1}. Since the probability of finding a large number of bright, flat-spectrum sources is very low \citep{maglio2017}, this median value was used to determine the luminosities of the sources detected in the Lockman Hole field.
Besides the radio luminosity criterion described above, there are several other methods to classify sources into AGNs and SFGs. X-ray luminosity can be used to identify AGNs since it directly probes their high-energy emission \citep{Szokoly_2004}. Color-color diagnostics from infrared data (like IRAC) can also be used for identifying AGNs \citep{Donley_2012}. Classification can also be done using the q$_{24}$ parameter, the ratio of the 24 $\mu$m flux density to the effective 1.4 GHz flux density \citep{bonzini}. Based on the results of \citet{mcalpine2013}, it was shown by \citet{maglio2014} that the radio luminosity function of SFGs falls off much more steeply than that of AGNs at all redshifts, which reduces the chances of contamination between the two samples. Additionally, the different multi-wavelength methods are not always consistent with each other, and a detailed investigation of any such discrepancies is beyond the scope of this work. Hence, only the radio luminosity criterion has been used for classification.
The sources with redshifts up to 1.8 were classified into AGNs and SFGs according to whether their luminosity (Equation \ref{lum}) is greater or less than the threshold $\mathrm{P_{cross}}$ of Equation \ref{logp}. At higher redshifts (i.e., $z$ \textgreater1.8), $\mathrm{P_{cross}}$ is fixed at 10$^{23.5}$ $\mathrm{W Hz^{-1}sr^{-1}}$ \citep{mcalpine2013}. Of the 2397 sources, 1821 were classified as AGNs and 576 as SFGs using the radio luminosity criterion. The median redshifts of the AGNs and SFGs are 1.02 and 0.2 respectively.
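The luminosity-based split can be sketched as follows. This is a simplified illustration using the flat-$\Lambda$CDM parameters quoted in the introduction; the numerical integration, unit conventions, and function names are our assumptions, not the paper's actual code:

```python
import numpy as np

H0, OM, OL = 67.36, 0.31, 0.68          # Planck 2018 values used in the text
C_KMS = 299792.458                      # speed of light, km/s
MPC_M = 3.0857e22                       # metres per Mpc

def angular_diameter_distance_m(z, n=4096):
    """Flat-LCDM angular diameter distance in metres (trapezoid rule)."""
    zs = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(OM * (1.0 + zs) ** 3 + OL)
    dc_mpc = (C_KMS / H0) * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zs))
    return dc_mpc / (1.0 + z) * MPC_M

def radio_power(s14_mjy, z, alpha=0.8):
    """Equation (lum): P = 4 pi S D^2 (1+z)^(3+alpha), with D the angular
    diameter distance and S the 1.4 GHz flux density
    (1 mJy = 1e-29 W m^-2 Hz^-1)."""
    d = angular_diameter_distance_m(z)
    return 4.0 * np.pi * s14_mjy * 1e-29 * d ** 2 * (1.0 + z) ** (3.0 + alpha)

def classify(p14, z):
    """AGN if log10 P exceeds log10 P_cross = 21.7 + z (Equation (logp)),
    with the threshold held fixed at 10^23.5 beyond z = 1.8."""
    log_p_cross = 21.7 + min(z, 1.8)
    return "AGN" if np.log10(p14) > log_p_cross else "SFG"
```

For instance, under these assumptions a 1 mJy source at $z=0.1$ has $\log_{10} P \approx 22.4$, above the local threshold, and is classified as an AGN, while a 0.5 mJy source at $z=0.01$ falls below the break and is classified as an SFG.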
\section{Estimation of Correlation Function: Combined Sources}
\label{all_correlation}
\subsection{The Angular Correlation Function}
The angular two-point correlation function $w(\theta)$ is used to quantify clustering on angular scales in the sky. While several estimators have been proposed in the literature (for a comparison of the different estimators see \citet{Kerscher_2000} and Appendix B of \citet{lotss_clustering}), this work uses the LS estimator proposed by \citet{Landy1993}. It is defined as:
\begin{equation}
w(\theta) = \mathrm{\frac{DD(\theta)-2DR(\theta)+RR(\theta)}{RR(\theta)}}
\label{estimator_eq}
\end{equation}
Here DD($\theta$) and RR($\theta$) are the normalised average pair counts for objects at separation $\theta$ in the original and random catalogues, respectively. Catalogue realizations generated by randomly distributing sources in the same field of view as the real observations have been used to calculate RR($\theta$). The LS estimator also includes the normalized cross-pair separation counts DR($\theta$) between the original and random catalogues, which has the advantage of effectively reducing the large-scale uncertainty in the source density \citep{Landy1993, Hamilton1993, blake_wall_2002a, overzier2003}. The uncertainty in $w(\theta)$ is calculated using the bootstrap resampling method \citep{bootstrap}, where 100 bootstrap samples are generated to quote the 16th and 84th percentile errors.
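To make the estimator concrete, the following toy sketch computes $w(\theta)$ from brute-force pair counts on a small flat patch. This is a flat-sky, Euclidean-separation approximation for illustration only; the actual analysis uses tree-based pair counting on the sphere:

```python
import numpy as np

def pair_seps(a, b=None):
    """All pair separations (deg, flat-sky) between point sets a and b;
    if b is None, the distinct pairs within a."""
    if b is None:
        d = a[:, None, :] - a[None, :, :]
        iu = np.triu_indices(len(a), k=1)
        return np.sqrt((d ** 2).sum(-1))[iu]
    d = a[:, None, :] - b[None, :, :]
    return np.sqrt((d ** 2).sum(-1)).ravel()

def ls_w(data, rand, bins):
    """Landy-Szalay estimator w = (DD - 2DR + RR) / RR, with each pair
    count normalised by the total number of pairs in that term."""
    nd, nr = len(data), len(rand)
    DD = np.histogram(pair_seps(data), bins)[0] / (nd * (nd - 1) / 2.0)
    RR = np.histogram(pair_seps(rand), bins)[0] / (nr * (nr - 1) / 2.0)
    DR = np.histogram(pair_seps(data, rand), bins)[0] / (nd * nr)
    return (DD - 2.0 * DR + RR) / RR

# an unclustered Poisson field should give w(theta) consistent with zero
rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, size=(300, 2))
rand = rng.uniform(0.0, 1.0, size=(1500, 2))
w = ls_w(data, rand, bins=np.linspace(0.05, 0.5, 10))
```

Because the data here are uniform random points, the recovered $w(\theta)$ scatters around zero within the pair-count noise, illustrating how the DR term suppresses the density-normalisation uncertainty.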
\subsubsection{Random Catalogue}
The random catalogues generated should be such that any bias due to noise does not affect the obtained values of the correlation function. The noise across the entire $\mathrm{6^\circ \times 6^\circ}$ mosaic of the field is not uniform (see Figure 3 of \citet{aishrila1}). This can introduce a bias in estimating the angular two-point correlation function since the non-uniform noise leads to the non-detection of fainter sources in the regions with higher noise.
P{\tiny Y}BDSF was used for obtaining the noise map of the image. Assuming the sources follow a flux distribution of the form dN/dS $\propto$ $S^{-1.6}$ \citep{intema2011, william2013}, random samples of 3000 sources were generated in the given flux range (with lower limit corresponding to 2 times the background RMS of the image) and assigned random positions to distribute them in the entire FoV. The sources constitute a mixture of 70\% unresolved sources and 30\% extended sources, which is roughly in the same ratio as the actual source catalogue \citep{aishrila1}. These were injected into the residual map, and using the same parameters in P{\tiny Y}BDSF as the ones used in the extraction of the original sources (see \citet{aishrila1}), the random catalogues were extracted. 100 such statistically independent realizations were used to reduce the associated statistical uncertainty.
For the clustering analysis of AGNs and SFGs, two sets of random catalogues were generated using the publicly available catalogues for these source types from the T-RECS simulation \citep{trecs}. These catalogues provide source flux densities at different frequencies between 150 MHz and 20 GHz. The flux densities at 300 MHz were considered for the randoms. They were scaled to 325 MHz using $\alpha=0.8$, and 2000 sources were randomly chosen within the flux density limits of the radio catalogues of AGNs and SFGs. They were assigned random positions within the RA, Dec limits of the original catalogues and injected into the residual maps. Then, using the same P{\tiny Y}BDSF parameters as for the original catalogue, the sources were recovered. 100 such realizations were made for AGNs and SFGs separately. The recovered random catalogues were used for the subsequent clustering analysis of the classified populations. It should also be mentioned here that the lower flux density cut-off for the random catalogues was $\sim$0.1\,mJy, which is 2 times the background RMS. As already seen in \citet{aishrila1}, even a flux limit of 0.2\,mJy (4 times the background RMS) takes care of effects like the Eddington bias \citep{Eddington}. Thus 0.1\,mJy is taken as the limiting flux for both the combined and the classified random catalogues. The final random samples for AGNs and SFGs consisted of $\sim$120000 sources each, while for the combined sample it was $\sim$200000. This is much larger than the number of sources in the radio catalogue, and thus the randoms do not dominate the errors. As already stated, the point and extended sources in the random catalogues (generated for the whole sample and for the classified sources) are taken in the same ratio as in the original radio catalogue. The drawback of this assumption is the chance of underestimating extended sources in the random catalogue, which may lead to spurious clustering signals at small angular scales. 
However, since no evidence of any spurious signal is seen, taking point and extended sources in the same ratio as the original catalogue seems reasonable.
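The flux densities of the injected random sources, drawn from dN/dS $\propto S^{-1.6}$, can be generated by inverse-CDF sampling. The sketch below is a minimal illustration; the flux limits shown are indicative of the values quoted in the text rather than the exact pipeline inputs:

```python
import numpy as np

def sample_powerlaw_flux(n, s_min, s_max, slope=-1.6, rng=None):
    """Draw n flux densities with dN/dS proportional to S**slope by
    inverting the cumulative distribution between s_min and s_max."""
    rng = np.random.default_rng() if rng is None else rng
    a = slope + 1.0                      # exponent of the integrated counts
    u = rng.uniform(size=n)
    return (s_min ** a + u * (s_max ** a - s_min ** a)) ** (1.0 / a)

# e.g. 3000 fluxes (mJy) between ~2x the background RMS and a bright cut
fluxes = sample_powerlaw_flux(3000, s_min=0.1, s_max=100.0,
                              rng=np.random.default_rng(0))
```

Because the counts are steep, most of the drawn sources sit near the faint limit, as expected for a realistic random catalogue.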
\subsection{Angular Clustering Pattern at 325 MHz}
\label{ang_325}
The angular correlation function of the sources detected in this observation is calculated using the publicly available code \texttt{TreeCorr}\footnote{\url{https://github.com/rmjarvis/TreeCorr}} \citep{treecorr}. The 325 MHz catalogue was divided into 15 equispaced logarithmic bins between $\theta \sim 36\arcsec$ and 2$^\circ$. The lower limit corresponds to four times the PSF at 325 MHz, and the upper limit is the half-power beamwidth at this frequency. Figure \ref{angular_all} shows the angular correlation function at 325 MHz in red circles; the error bars are estimated using the bootstrap method as discussed earlier. A power law of the form $w(\theta) = A\theta^{1-\gamma}$ is also fitted, with the power-law index $\gamma$ kept fixed at the theoretical value of 1.8. The parameter estimation for this fit is done using Markov chain Monte Carlo (MCMC) simulation, generating 10$^6$ samples with the Metropolis-Hastings algorithm in the $A$ parameter space. The first 10$^2$ samples have been removed from the generated chains to avoid the burn-in phase. From the sampled parameter space, $\chi^2$ is used to estimate the most likely values of the parameters. The best-fit parameter is $\mathrm{log(A) = -2.73^{+0.11}_{-0.15}}$, with the error bars being the 1-$\sigma$ errors from the 16th and 84th percentiles of the chain points.
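The amplitude fit can be illustrated with a short Metropolis-Hastings random walk over $\log_{10}A$ with $\gamma$ fixed at 1.8. This is a schematic sketch with a Gaussian $\chi^2$ likelihood; the proposal width, chain length, starting point, and synthetic data below are our choices for illustration, not the paper's actual settings:

```python
import numpy as np

def fit_log_amplitude(theta, w, w_err, gamma=1.8, nsteps=20000,
                      step=0.05, start=-2.5, seed=42):
    """Metropolis-Hastings chain for log10(A) in w(theta) = A theta**(1-gamma),
    returning the chain median and the 16th/84th percentile offsets."""
    rng = np.random.default_rng(seed)

    def chi2(log_a):
        model = 10.0 ** log_a * theta ** (1.0 - gamma)
        return np.sum(((w - model) / w_err) ** 2)

    cur, c_cur, chain = start, None, []
    c_cur = chi2(cur)
    for _ in range(nsteps):
        prop = cur + step * rng.normal()
        c_prop = chi2(prop)
        # accept with probability exp(-(chi2_prop - chi2_cur) / 2)
        if np.log(rng.uniform()) < 0.5 * (c_cur - c_prop):
            cur, c_cur = prop, c_prop
        chain.append(cur)
    chain = np.asarray(chain[100:])      # discard the burn-in samples
    p16, p50, p84 = np.percentile(chain, [16, 50, 84])
    return p50, p50 - p16, p84 - p50

# synthetic check: data generated from log10(A) = -2.7 with 10% errors
theta = np.logspace(np.log10(0.01), np.log10(2.0), 15)   # degrees
w_true = 10 ** -2.7 * theta ** -0.8
log_a, err_lo, err_hi = fit_log_amplitude(theta, w_true, 0.1 * w_true)
```

Run on the synthetic data, the chain median recovers the input amplitude, and the 16th/84th percentiles provide the asymmetric 1-$\sigma$ errors in the same way as quoted for the real fit.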
\subsubsection{Comparison with previous Observations}
The best-fit values obtained for the parameters A and $\gamma$ of the 325 MHz catalogue have been compared with those from other observations at radio frequencies. The parameters obtained for different radio surveys, namely from \citet{lindsay_first, hale_cosmos, Hale19, rana_tgss, arnab2020, lh_clustering_1.4, lotss_clustering}, are summarised in Table \ref{angular_table}. The best-fit estimate of the slope $\gamma$ of the correlation function is found to be in reasonable agreement with the theoretical prediction of \citet{Peebles1980} and with previous observations (for example see \citet{lh_clustering_1.4,lotss_clustering}).
The flux density limit of the \citet{lh_clustering_1.4} catalogue (originally at 1.4 GHz), scaled to 325 MHz using a spectral index of 0.8, is $\sim$0.4 mJy, very close to the flux limit of this work. However, their estimates are higher than all previous ones (they particularly compare with \citet{maglio2017}), which they attribute partly to sample variance. While the area probed by \citet{lh_clustering_1.4} is contained within the region probed here, the areas covered differ, the present one being larger.
This might be the reason for the differences between the estimates of this work and \citet{lh_clustering_1.4}, despite both having similar flux density cut-offs. The clustering amplitude found here is similar to that of \citet{hale_cosmos} at almost all angular scales. One possible reason is that the flux limit of that 3 GHz study, 5.5 times its 2.3 $\mu$Jy $\mathrm{beam}^{-1}$ RMS, corresponds to a flux density of $\sim$0.1 mJy at 325 MHz, close to the flux cut-off of this work (0.3 mJy); it can thus trace similar halo masses and hence clustering amplitudes.
\begin{table*} %
\begin{center}
\caption{Clustering Parameters for Observed Data. The columns indicate the name of the survey (Observation), observing frequency in MHz (Frequency), the flux density cut-off at the observing frequency (S$_\mathrm{cut,\nu}$), the equivalent 325 MHz flux-density (S$_\mathrm{cut,325}$), best fit clustering amplitude ($\mathrm{log_{10}(A)}$) and best fit power-law index ($\gamma$) respectively.}
\label{angular_table}
\begin{tabular}[\columnwidth]{lccccll}
\hline
\hline
Observation & Frequency & S$_\mathrm{cut,\nu}^\dagger$ & S$_\mathrm{cut,325}^*$ & $\mathrm{log_{10}(A)}$ & $\gamma$ & Reference \\
& (MHz) & (mJy) & & &\\
\hline
FIRST & 1400 &1.00 & 3.21 & -2.30$^\mathrm{^{+0.70}_{-0.90}}$ & 1.82$\pm$.02 & \citet{lindsay_first}\\
COSMOS & 3000 & 0.013 & 0.08& -2.83$^\mathrm{^{+0.10}_{-0.10}}$ & 1.80 & \citet{hale_cosmos} \\
XMM-LSS & 144 & 1.40 & 0.73 & -2.08$^\mathrm{^{+0.05}_{-0.04}}$ & 1.80 & \citet{Hale19} \\
TGSS-ADR & 150 & 50 & 26.9 &-2.11$^\mathrm{^{+0.30}_{-0.30}}$ & 1.82$\pm$.07 & \citet{rana_tgss}\\
ELAIS-N1 & 400 & 0.10 & 0.12 &-2.03$^\mathrm{^{+0.10}_{-0.08}}$ & 1.75$\pm$0.06 & \citet{arnab2020}\\
Lockman Hole & 1400 & 0.12 & 0.39 &-1.95$^\mathrm{^{+0.005}_{-0.005}}$ & 1.96$\pm$.15 & \citet{lh_clustering_1.4}\\
LoTSS & 144 & 2.00 & 1.04 & -2.29$^\mathrm{^{+0.6}_{-0.6}}$ & 1.74$\pm$.16 & \citet{lotss_clustering} \\
Lockman Hole & 325 & 0.30 & 0.30 & -2.73$\mathrm{^{+0.11}_{-0.15}}$ & 1.80 & This work\\
\hline
\hline
\end{tabular}
\end{center}
\flushleft{$^\dagger$ S$_\mathrm{cut,\nu}$ is the flux density limit at the respective observing frequencies; $^*$ S$_\mathrm{cut,325}$ is the scaled flux density ($\alpha$=0.8) limit at 325 MHz\\
}
\end{table*}
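The $S_\mathrm{cut,325}$ column of the table follows from the $S_\nu \propto \nu^{-\alpha}$ scaling with $\alpha = 0.8$; a one-line sketch reproduces the tabulated values to within rounding:

```python
def scale_flux(s_nu1, nu1_mhz, nu2_mhz, alpha=0.8):
    """Scale a flux density from frequency nu1 to nu2 assuming
    S proportional to nu**(-alpha)."""
    return s_nu1 * (nu1_mhz / nu2_mhz) ** alpha

# e.g. the TGSS-ADR row: 50 mJy at 150 MHz -> ~26.9 mJy at 325 MHz
s325_tgss = scale_flux(50.0, 150.0, 325.0)
```

The same call reproduces, for example, the FIRST row (1 mJy at 1400 MHz scaling to $\sim$3.2 mJy) and the LoTSS row (2 mJy at 144 MHz scaling to $\sim$1.04 mJy).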
The clustering properties of the radio sources in the VLA-FIRST survey \citep{FIRST} have been reported in \citet{lindsay_first}, where $\mathrm{log(A)}$ is -2.30$^\mathrm{^{+0.70}_{-0.90}}$. \citet{hale_cosmos} and \citet{Hale19} have reported $\mathrm{log(A)}$ values of -2.83 and -2.08 for the COSMOS and XMM-LSS fields, respectively, with $\gamma$ fixed at the theoretical value of 1.80. The clustering amplitude of the 150 MHz TGSS-ADR \citep{Intema16} has been presented by \citet{rana_tgss} for a large fraction of the sky and at different flux density cut-offs. In the recent deep survey of the ELAIS-N1 field at 400 MHz \citep{arnab2020}, $\mathrm{log(A)}$ and the best-fit power-law index have the values -2.03$^\mathrm{^{+0.10}_{-0.08}}$ and 1.75$\pm$0.06 respectively.
Comparison has also been made with the wide-area survey of LoTSS data release 1 \citep{lotss_clustering}. This study (with data obtained at a central frequency of 144 MHz) employed various masks on the data to obtain the angular clustering values. The survey covers a wider area, but the flux cut-off threshold is above 1 mJy for all of the masks due to systematic uncertainties. A wide range of angles, 0.1$^\circ \leq \theta \leq$ 32$^\circ$, was used to determine the angular clustering. Taking three different flux density limits (1, 2 and 4 mJy) and different masks, the values of $\mathrm{log(A)}$ and the power-law index were obtained (the power-law fitting was done for 0.2$^\circ \leq \theta \leq$ 2$^\circ$). \citet{lotss_clustering} concluded that the flux density cut-off of 2 mJy provides the best estimate of the angular clustering parameters, and that cut has been used here for comparison. Comparison of the present work with the LoTSS 2 mJy flux cut shows that the values of $\mathrm{log(A)}$ agree well. The best-fit power-law index is also consistent within 1$\sigma$ error bars. Hence, the angular correlation function obtained in the present work gives values of the parameters $\mathrm{log(A)}$ and $\gamma$ consistent with those reported in previous surveys. Additionally, since this survey has both wider coverage than the recent EN1 data and a lower flux density threshold than the LoTSS data used by \citet{lotss_clustering}, it provides an intermediate data set along a different line of sight to probe cosmology.
\subsection{The Spatial Correlation Function at 325 MHz}
For known angular clustering $w(\theta)$, the spatial clustering of sources is quantified by the two-point correlation function $\xi(r)$. Using the Limber inversion \citep{Limber1953}, $\xi(r)$ can be estimated for known redshift distribution. Gravitational clustering causes the spatial clustering to vary with redshift, and thus a redshift dependent power-law spatial correlation function can be defined as \citep{Limber1953, overzier2003}:
\begin{equation}
\mathrm{\xi(r,z)} = \mathrm{(r_{0}/r)^{\gamma} (1+z)^{\gamma-(3+\epsilon)}}
\label{spatial_limber}
\end{equation}
where the clustering length $r$ is in comoving units, $\epsilon$ specifies the clustering model \citep{overzier2003} and $\mathrm{r_{0}}$ is the clustering length at z=0. This work adopts the comoving clustering model, with $\epsilon$ = $\gamma$-3, in which the correlation function (and hence the comoving cluster size) is unchanged in the comoving coordinate system. The correlation length is calculated using \citep{Peebles1980}:
\begin{equation}
\mathrm{A} = \mathrm{r_{0}^{\gamma}H_{\gamma} (H_{0}/c)\frac{\int_{0}^{\infty}N^{2}(z)(1+z)^{\gamma-(3+\epsilon)}\chi^{1-\gamma}(z)E(z)dz}{[\int_{0}^{\infty}N(z)dz]^{2}}}
\label{smooth_function}
\end{equation}
where $\mathrm{H_{\gamma} = \frac{\Gamma(\frac{1}{2})\Gamma(\frac{\gamma-1}{2})}{\Gamma(\frac{\gamma}{2})}}$, \\$\mathrm{E(z)=\sqrt{\Omega_{m,0}(1+z)^{3}+\Omega_{k,0}(1+z)^{2}+\Omega_{\Lambda,0}}}$ is the cosmological factor, N(z) is the redshift distribution of the sources and $\mathrm{\chi(z)}$ is the line of sight comoving distance. Equation \ref{smooth_function} can be used to estimate $\mathrm{r_{0}}$ using the angular clustering amplitude A and the redshift distribution shown in Figure \ref{zhist}.
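As an illustration, the Limber inversion of Equation \ref{smooth_function} can be sketched numerically. The cosmology, the Gaussian $N(z)$, and the input amplitude below are toy assumptions for illustration only, not the fitted values of this work:

```python
import math

# Toy numerical Limber inversion: recover r0 from the angular clustering
# amplitude A (Peebles 1980).  Cosmology, N(z) and the input amplitude
# are illustrative assumptions, not this paper's fitted values.
GAMMA = 1.8
EPS = GAMMA - 3.0                  # comoving clustering model
OM, OL = 0.3, 0.7                  # toy flat LCDM
C_OVER_H0 = 3000.0                 # c/H0 in Mpc/h

def E(z):
    """Dimensionless Hubble parameter E(z)."""
    return math.sqrt(OM * (1.0 + z) ** 3 + OL)

def chi(z, n=200):
    """Line-of-sight comoving distance in Mpc/h (trapezoidal rule)."""
    if z == 0.0:
        return 0.0
    zs = [z * i / n for i in range(n + 1)]
    f = [1.0 / E(zz) for zz in zs]
    return C_OVER_H0 * (z / n) * (sum(f) - 0.5 * (f[0] + f[-1]))

def Nz(z):
    """Toy Gaussian redshift distribution peaking near z ~ 0.8."""
    return math.exp(-0.5 * ((z - 0.8) / 0.3) ** 2)

def r0_from_A(A_rad, zmax=3.0, n=300):
    """Invert A = r0^gamma * H_gamma * (H0/c) * I[N, chi, E] for r0.

    A_rad is the amplitude of w(theta) = A * theta^(1 - gamma), with
    theta measured in radians."""
    H_gamma = (math.gamma(0.5) * math.gamma((GAMMA - 1.0) / 2.0)
               / math.gamma(GAMMA / 2.0))
    dz = zmax / n
    zs = [dz * (i + 0.5) for i in range(n)]
    num = sum(Nz(z) ** 2 * (1.0 + z) ** (GAMMA - (3.0 + EPS))
              * chi(z) ** (1.0 - GAMMA) * E(z) * dz for z in zs)
    den = sum(Nz(z) * dz for z in zs) ** 2
    return (A_rad * C_OVER_H0 * den / (H_gamma * num)) ** (1.0 / GAMMA)
```

Note that for the comoving model ($\epsilon = \gamma - 3$) the redshift factor in the integrand reduces to unity, so only $N(z)$, $\chi(z)$ and $E(z)$ shape the result.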
The theoretical value of 1.8 for $\gamma$, as predicted by \citet{Peebles1980}, is consistent with the values across various surveys, as well as within 2$\sigma$ of the current analysis (tabulated in Table \ref{angular_table}). Thus the theoretical value of $\gamma$, the distribution of A obtained from the MCMC analysis discussed in Section \ref{ang_325} and the combined redshift distribution discussed in Section \ref{redshift_dist} are used to estimate the value of $\mathrm{r_{0}}$. Figure \ref{space_pdf} shows the probability distribution function (PDF) of the spatial clustering length. As already mentioned, the median redshift of the sample is $\sim$0.78, and at this redshift the median value of $\mathrm{r_{0}}$ is 3.50$\mathrm{^{+0.50}_{-0.50}}$ Mpc h$^{-1}$, where the errors are the 16th and 84th percentiles.
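The asymmetric uncertainty on the clustering amplitude can be propagated into the $\mathrm{r_{0}}$ PDF by simple Monte Carlo: for fixed $\gamma$ and $N(z)$, Equation \ref{smooth_function} gives $\mathrm{r_{0}} \propto A^{1/\gamma}$. The split-normal sampling of $\mathrm{log_{10}(A)}$ below is an assumed approximation to the MCMC posterior, not the paper's actual chain:

```python
import random

# Sketch: propagate log10(A) = -2.73 (+0.11 / -0.15) into the r0 PDF.
# For fixed gamma and N(z), the Limber relation gives r0 ~ A**(1/gamma),
# so the shape of the r0 distribution follows directly.  The split-normal
# draw below is an assumed stand-in for the real MCMC samples.
random.seed(42)
gamma = 1.8
mu, sig_hi, sig_lo = -2.73, 0.11, 0.15

samples = []
for _ in range(20000):
    g = random.gauss(0.0, 1.0)
    log_a = mu + (sig_hi if g > 0 else sig_lo) * g
    samples.append(10.0 ** (log_a / gamma))   # r0 up to a constant factor

samples.sort()
p16, p50, p84 = (samples[int(f * len(samples))] for f in (0.16, 0.50, 0.84))
```

The 16th/84th percentiles of the resulting distribution are what the quoted asymmetric errors on $\mathrm{r_{0}}$ correspond to.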
\subsection{The Bias Parameter}
The bias parameter is used to quantify the relation between the clustering property of luminous sources and the underlying dark matter distribution. The ratio of the galaxy to the dark matter spatial correlation function is known as the scale-independent linear bias parameter $b(z)$ \citep{Kaiser1984, Bardeen1986, peacock}. For a cosmological model with dark matter governed only by gravity, following \citet{lindsay_first,hale_cosmos,arnab2020}, $b(z)$ is calculated as:
\begin{equation}
b(z) = \Bigg(\frac{r_{0}(z)}{8}\Bigg)^{\gamma/2}\frac{J_{2}^{1/2}}{\sigma_{8}D(z)/D(0)}
\end{equation}
where $J_{2}$ = 72/[(3-$\gamma$)(4-$\gamma$)(6-$\gamma$)2$^{\gamma}$], $D(z)$ is the linear growth factor, calculated from CMB and galaxy redshift information \citep{Eisenstein_1999}, and $\sigma_{8}^{2}$ is the amplitude of the linear power spectrum on a comoving scale of 8 Mpc $\rm h^{-1}$.
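The mapping from $\mathrm{r_{0}}$ to $b(z)$ above can be sketched directly; the value $\sigma_8 = 0.8$ and the crude matter-dominated growth approximation $D(z)/D(0) \approx 1/(1+z)$ are assumptions for illustration (the paper uses $D(z)$ following \citealt{Eisenstein_1999}):

```python
import math

# Sketch of the scale-independent linear bias relation b(z).
# sigma8 = 0.8 and the growth approximation D(z)/D(0) ~ 1/(1+z)
# are assumptions; pass growth_ratio explicitly for a proper D(z).
def bias(r0, z, gamma=1.8, sigma8=0.8, growth_ratio=None):
    """Linear bias from clustering length r0 (Mpc/h) at redshift z."""
    J2 = 72.0 / ((3.0 - gamma) * (4.0 - gamma) * (6.0 - gamma) * 2.0 ** gamma)
    if growth_ratio is None:
        growth_ratio = 1.0 / (1.0 + z)   # crude D(z)/D(0)
    return ((r0 / 8.0) ** (gamma / 2.0) * math.sqrt(J2)
            / (sigma8 * growth_ratio))
```

With the measured clustering lengths, this reproduces the qualitative result below: the AGN sample is a considerably more biased tracer than the SFG sample.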
For this work, the bias parameter has been calculated at the median redshift using the median value of the r$_\mathrm{0}$ distribution, with the 16th and 84th percentile errors. The value of the bias parameter $b(z)$ at $z$=0.78 is 2.22$\mathrm{^{+0.33}_{-0.36}}$.
\section{Estimation of Correlation Function: AGNs and SFGs}
\label{sep_correlation}
This section discusses the angular and spatial correlation scales and the bias parameter obtained for the two separate populations of sources (i.e., AGNs and SFGs). The obtained values are also compared with previously reported values from radio and other bands. Following a similar procedure to that used for the entire population, the angular clustering was first calculated and a power law of the form $A\theta^{1-\gamma}$ was fitted. The best-fit value of the clustering amplitude $A$ is determined, once again keeping $\gamma$ fixed at the theoretical value of 1.8 for both the AGN and SFG populations. Figure \ref{angular_sep} shows the angular correlation function of AGNs (left panel) and SFGs (right panel). Using the MCMC simulations as discussed previously, the clustering amplitudes $\mathrm{log(A)}$ have values $\textrm -2.18^{+0.20}_{-0.20}$ and $ \textrm -1.69^{+0.10}_{-0.10}$ for AGNs and SFGs respectively. The results of the fit and the subsequent values of the clustering length and bias parameter obtained here, together with results from previous radio surveys, are tabulated in Table \ref{spatial_table}.
The spatial clustering length and bias parameter b$_{z}$ for the AGNs with $z_{median}$=1.02 are $8.30^{+0.96}_{-0.91}$ Mpc $\rm h^{-1}$ and $3.74^{+0.39}_{-0.36}$.
For SFGs with $z_{median}\approx$0.20, the values are $\mathrm{r_{0}}$= $3.22^{+0.34}_{-0.32}$ Mpc $\rm h^{-1}$ and b$_{z}$=$1.06^{+0.1}_{-0.1}$.
It is seen that the spatial clustering length, and consequently the bias factor, for AGNs is larger than for SFGs, which implies that SFGs are hosted by less massive haloes, in agreement with previous observations \citep{gilli, gilli07, Starikova_2012, Dolley_2014, maglio2017, hale_cosmos, arnab2020}.
\subsection{Comparison with previous Observations}
\begin{table*} %
\begin{center}
\caption{Spatial Clustering Length and Bias Parameter from Different Observations. The columns are respectively name of the survey field, observing frequency in MHz, type of radio source (AGNs/SFGs), median redshift, angular clustering amplitude, spatial clustering length in Mpc h$\rm ^{-1}$ \& bias parameter value.}
\label{spatial_table}
\begin{tabular}[\columnwidth]{lccccccc}
\hline
\hline
Observation & Frequency & Source type & $\textrm z_{median}$ & $\mathrm{log_{10}(A)}$ & $\mathrm{r_{0}}$ & b$_{z_{median}}$ & Reference \\
&(MHz)&& &&(Mpc h$\rm ^{-1}$)&&\\
\hline
COSMOS & 3000 & AGNs & 0.70 & $-2.30^{+0.1}_{-0.1}$ & $6.9^{+0.60}_{-0.70}$ & $2.1^{+0.2}_{-0.2}$ & \citet{hale_cosmos}\\
&& AGNs & 1.24 & $-2.60^{+0.1}_{-0.1}$ & $9.6^{+0.70}_{-0.70}$ & $3.6^{+0.2}_{-0.2}$ &\\
&& AGNs & 1.77 & $-2.60^{+0.1}_{-0.1}$ & $7.3^{+0.90}_{-0.90}$ & $3.5^{+0.4}_{-0.4}$ &\\
&& SFG & 0.62 & $-2.60^{+0.1}_{-0.1}$ & $5.0^{+0.50}_{-0.60}$ & $1.5^{+0.1}_{-0.2}$ &\\
&& SFG & 1.07 & $-2.90^{+0.1}_{-0.1}$ & $6.1^{+0.60}_{-0.70}$ & $2.3^{+0.2}_{-0.2}$ &\\
&&&&&&&\\
VLA-COSMOS &1400 & AGNs & 1.25 & $-2.79^{+0.1}_{-0.1}$ & $7.84^{+1.75}_{-2.31}$ & - & \citet{maglio2017}\\
&& SFG & 0.50 & $-2.36^{+0.3}_{-0.3}$ & $5.46^{+1.12}_{-2.10}$ & - &\\
&&&&&&&\\
ELAIS N1 & 400 & AGNs & 0.91 & $-2.22^{+0.16}_{-0.16}$ & $7.30^{+1.4}_{-1.2}$ & $3.17^{+0.5}_{-0.5}$ & \citet{arnab2020}\\
&& SFG & 0.64 & $-2.16^{+0.05}_{-0.06}$ & $4.62^{+0.39}_{-0.40}$ & $1.65^{+0.14}_{-0.14}$ &\\
&&&&&&&\\
ELAIS N1 & 612 & AGNs & 0.85 & $-2.30^{+0.02}_{-0.03}$ & $6.0^{+1.5}_{-1.3}$ & $2.6^{+0.6}_{-0.5}$ & \citet{arnab2020}\\
&& SFG & 0.87 & $-2.19^{+0.01}_{-0.02}$ & $4.16^{+0.7}_{-0.8}$ & $1.59^{+0.2}_{-0.2}$ &\\
&&&&&&&\\
Lockman Hole & 325 & AGNs & 1.02 & $-2.18^{+0.20}_{-0.20}$ & $8.30^{+0.96}_{-0.91}$ & $3.74^{+0.39}_{-0.36}$ & This work\\
&& SFG & 0.20 & $-1.65^{+0.1}_{-0.1}$ & $3.22^{+0.34}_{-0.32}$ & $1.06^{+0.10}_{-0.10}$ &\\
\hline
\hline
\end{tabular}
\end{center}
\end{table*}
Figure \ref{r0_sep} shows the observationally determined values from surveys at various wavebands for $\mathrm{r_{0}}$ as a function of their redshift, while Figure \ref{bias} shows the same for the bias parameter. The left and right panels of Figure \ref{r0_sep} are for AGNs and SFGs, respectively. Table \ref{spatial_table} summarizes the values obtained in radio surveys only, while Figures \ref{r0_sep}, \ref{bias} show the observed values for surveys at radio as well as other wavebands, e.g. IR and X-Ray.
The clustering length for AGNs in this work, at z$_{median} \approx$ 1.02, is $\textrm 8.30^{+0.96}_{-0.91}$ Mpc $\rm h^{-1}$. Using X-ray selected AGNs in the COSMOS field, \citet{gilli} obtained clustering lengths at redshifts up to $\sim$3.0. They divided their sample into a number of bins to obtain $\mathrm{r_{0}}$ at different median redshifts. For their entire sample, taking the slope of the angular correlation function as 1.80, $\mathrm{r_{0}}$ was $\textrm 8.39^{+0.41}_{-0.39}$ Mpc $\rm h^{-1}$ for a median redshift of 0.98, consistent with the value obtained here at a similar redshift. The clustering length from this work is also consistent within error bars with the AGN values at 400 MHz and 610 MHz of \citet{arnab2020}.
They obtained $\mathrm{r_{0}}$ values of $\textrm 7.30^{+1.14}_{-1.12}$ Mpc $\rm h^{-1}$ at $z \approx 0.91$ and $\textrm 6.00^{+1.5}_{-1.3}$ Mpc $\rm h^{-1}$ at $z_{median} = 0.84$. The clustering length estimates for radio-selected AGNs in the COSMOS field at 1.4 GHz \citep{maglio2017} and 3 GHz \citep{hale_cosmos} observed with the VLA also agree within error bars with the estimates obtained here. \citet{maglio2017} found a clustering length of $\textrm 7.84^{+1.75}_{-2.31}$ Mpc $\rm h^{-1}$ at z $\approx$ 1.25, while \citet{hale_cosmos} obtained $\textrm 6.90^{+0.60}_{-0.70}$, $\textrm 9.60^{+0.70}_{-0.70}$ and $\textrm 7.30^{+0.90}_{-0.90}$ Mpc $\rm h^{-1}$ at $z\approx$ 0.70, 1.24 and 1.77 respectively. Using X-ray selected AGNs in the CDFS field, \citet{gilli05} obtained a value of $\textrm 10.30^{+1.7}_{-1.7}$ Mpc $\rm h^{-1}$ at z $\approx$ 0.84. This value, though higher than the values for radio-selected AGNs, is still consistent within error bars.
For the SFG population (right panel of Figure \ref{r0_sep}), the median redshift is 0.20, where the clustering length is $\textrm 3.22^{+0.34}_{-0.32}$ Mpc $\rm h^{-1}$. This estimate is at a lower redshift than previous observations. An extensive study at mid-IR frequencies has been done by \citet{Dolley_2014} for SFGs. The lowest redshift probed in their study is 0.31, where $\mathrm{r_{0}}$ is $\textrm 3.41^{+0.18}_{-0.18}$ Mpc $\rm h^{-1}$; this is consistent with the value obtained here at a nearby redshift. \citet{maglio2013} studied the clustering of SFGs using the Herschel PACS Evolutionary Probe observations of the COSMOS and Extended Groth Strip fields, finding clustering lengths for SFGs out to $z \approx$ 2. For the ELAIS-N1 field at 400 MHz and 610 MHz, \citet{arnab2020} reported clustering lengths of $\textrm 4.62^{+0.39}_{-0.40}$ Mpc $\rm h^{-1}$ and $\textrm 4.16^{+0.70}_{-0.80}$ Mpc $\rm h^{-1}$ at redshifts 0.64 and 0.87 respectively. The 3 GHz COSMOS field studies of \citet{hale_cosmos} gave clustering lengths of $\textrm 5.00^{+0.50}_{-0.60}$ Mpc $\rm h^{-1}$ and $\textrm 6.1^{+0.60}_{-0.70}$ Mpc $\rm h^{-1}$ at z$\approx$ 0.62 and 1.07 respectively. The mid-IR selected samples for the Lockman Hole give $\mathrm{r_{0}}$ values of $\textrm 4.98^{+0.28}_{-0.28}$ and $\textrm 8.04^{+0.69}_{-0.69}$ Mpc $\rm h^{-1}$ at $z \approx$ 0.7 and 1.7 respectively. Similarly, the mid-IR sample of \citet{gilli07} has clustering lengths of $\textrm 4.25^{+0.12}_{-0.12}$ and $\textrm 3.81^{+0.10}_{-0.10}$ Mpc $\rm h^{-1}$ at $z \approx$ 0.67 and 0.73.
The results have also been compared with the assumed bias models of the semi-empirical simulated catalogue of the extragalactic sky from the Square Kilometer Array Design Studies (referred to as SKADS henceforth; \citealt{skads}). This simulation models the large-scale cosmological distribution of radio sources to aid the design of next-generation radio interferometers. It covers a sky area of $20^\circ \times 20^\circ$, with sources out to a cosmological redshift of $z\sim$20 and a minimum flux density of 10\,nJy at 151 and 610 MHz and 1.4, 4.86 and 18 GHz. The simulated sources are drawn from observed and, in some cases, extrapolated luminosity functions on an underlying dark matter density field, with biases chosen to reflect the measured large-scale clustering. It uses a numerical Press--Schechter \citep{PressSchechter1974} style filtering on the density field to identify clusters of galaxies. The SKADS catalogue has been used here for statistical inference of the variation of the spatial and angular clustering of sources with redshift. It should be mentioned that the T-RECS catalogue \citep{trecs} incorporates more updated results from recent observations; however, the evolution of the bias parameter and clustering length with redshift is not available for it, hence SKADS has been used.
The bias parameters for AGNs and SFGs at z$_{median}$ = 1.02 and 0.20 are $3.74^{+0.39}_{-0.36}$ and $1.06^{+0.10}_{-0.10}$ respectively. Although the value for AGNs is slightly higher than those obtained by \citet{hale_cosmos} and \citet{arnab2020}, it is still in reasonable agreement with the SKADS prediction for FR-I galaxies. Comparison with the population distribution of the SKADS simulation of \citet{skads}, in terms of both clustering length and the bias parameter (solid magenta pentagons in Figure \ref{bias}), shows that the AGN population is dominated by FR-I type galaxies hosted in massive haloes with $\sim M_{h}$= 5$\times$10$^{13}\rm h^{-1}M_{\odot}$. It can also be seen from Figure \ref{bias} that the mass of the haloes hosting the SFGs in the current sample is $\sim M_{h}$= 3$\times10^{12} \rm h^{-1}M_{\odot}$. Thus, the SFGs have a lower range of halo masses than the AGNs, which implies that the latter inhabit more massive haloes and are more biased tracers of the dark matter density field.
\section{Discussion}
\label{discussion}
The analysis of the clustering properties of radio-selected sources in the Lockman Hole region presented in this work is one of the first results reported at 325 MHz. A similar study was previously done at 400 MHz for the ELAIS-N1 field \citep{arnab2020}, although for a much smaller area. Besides analysing a mostly unexplored frequency, this work also covers a comparatively large area with a significant number of sources. A previous clustering study of the same area, using 1.4 GHz data from the WSRT \citep{lh_clustering_1.4}, used 1173 sources with a flux density cut-off of 0.12 mJy at 1.4 GHz (or 0.4 mJy at 325 MHz). Their clustering amplitude is slightly higher than in previous surveys, as acknowledged by the authors; further investigation is required to ascertain the reason for the deviation. Clustering analysis was also done using the recent LoTSS observation of the HETDEX spring field \citep{lotss_clustering}. The clustering analyses were produced with many flux density cut-offs and masks, and the most reliable estimate was for a flux density limit of 2 mJy at 150 MHz (or $\sim$1.0 mJy at 325 MHz). The clustering amplitude estimate at the 2 mJy limit for LoTSS is consistent with that obtained here within error bars. As seen from Figure \ref{angular_all}, the clustering amplitude obtained here agrees with previous observations. The slightly higher values of the bias parameter of the AGNs in this work, compared to those of \citet{hale_cosmos} and \citet{arnab2020}, may be attributed to the different flux limits of the studies. This implies that each of these observations probes a slightly different population of sources (with slightly different luminosities, as discussed later). Nevertheless, as seen from Figure \ref{bias}, the values are broadly consistent with each other.
The angular clustering amplitudes for this work, shown in Figure \ref{angular_sep}, agree with previous observations, as seen in Table \ref{spatial_table}. The angular clustering of sources in this work was calculated with \texttt{TreeCorr} using the default values for most parameters. It has been shown in \citet{lotss_clustering} that using the default value of 1 for the parameter \texttt{bin\_slop} gives less accurate results than values $\leq$1; they obtained the most accurate values for \texttt{bin\_slop}=0. They also showed that angular clustering amplitudes deviate considerably from precise values (calculated with a separate brute-force algorithm; see \citealt{lotss_clustering} for details) at angular scales $\gtrsim$1$^\circ$. However, the computation time also increases significantly for \texttt{bin\_slop}=0. The use of default parameters in this work might be the cause of the slight oscillation of the correlation function seen around the best-fit curves. Nevertheless, since the results obtained here are in reasonable agreement with previous observations, and owing to constraints on the available computing power, the default values have been used.
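For reference, the brute-force pair counting that \texttt{bin\_slop}=0 approximates can be sketched in a few lines. This toy Landy--Szalay estimator is illustrative only (the function names and binning are assumptions), not the \texttt{TreeCorr} implementation:

```python
import math

# Brute-force Landy-Szalay estimator of w(theta), the quantity that
# TreeCorr computes with tree-based approximations controlled by
# bin_slop.  Toy sketch: O(N^2) pair counts, fine for small samples.
def ang_sep(p, q):
    """Great-circle separation in degrees between (ra, dec) points in deg."""
    ra1, d1 = (math.radians(x) for x in p)
    ra2, d2 = (math.radians(x) for x in q)
    c = (math.sin(d1) * math.sin(d2)
         + math.cos(d1) * math.cos(d2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def pair_counts(a, b, edges):
    """Histogram of pair separations; counts each pair once if a is b."""
    counts = [0] * (len(edges) - 1)
    for i, p in enumerate(a):
        for q in (a[i + 1:] if a is b else b):
            s = ang_sep(p, q)
            for k in range(len(counts)):
                if edges[k] <= s < edges[k + 1]:
                    counts[k] += 1
                    break
    return counts

def landy_szalay(data, rand, edges):
    """w(theta) = (DD - 2DR + RR) / RR with normalized pair counts."""
    nd, nr = len(data), len(rand)
    dd = pair_counts(data, data, edges)
    rr = pair_counts(rand, rand, edges)
    dr = pair_counts(data, rand, edges)
    w = []
    for DD, RR, DR in zip(dd, rr, dr):
        if RR == 0:
            w.append(float("nan"))
            continue
        ddn = DD / (nd * (nd - 1) / 2.0)
        rrn = RR / (nr * (nr - 1) / 2.0)
        drn = DR / (nd * nr)
        w.append((ddn - 2.0 * drn + rrn) / rrn)
    return w
```

The $O(N^2)$ cost of this exact calculation is precisely why tree-based codes trade a controlled binning error (\texttt{bin\_slop}) for speed on survey-sized catalogues.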
The clustering lengths and bias parameters obtained here also agree with previous studies, as evident from Table \ref{spatial_table} and Figures \ref{r0_sep} and \ref{bias}. Comparison of the bias parameter with the SKADS simulation \citep{skads} shows that the expected mass of the dark matter haloes hosting AGNs is orders of magnitude higher than that for SFGs. This trend is consistent with previous observations (see for example \citealt{maglio2017, hale_cosmos, arnab2020}). Studies of the luminosity of AGNs and SFGs suggest that it is correlated with source clustering, and hence with the bias parameter and host halo mass \citep{hale_cosmos}. Using observations of high- and low-luminosity AGNs in the COSMOS field, \citet{hale_cosmos} showed that luminosity and clustering are correlated, with the higher luminosity AGNs residing in more massive galaxies \citep{jarvis2001a} and thus hosted by more massive haloes. Again, this points to AGNs (which are in general more luminous than SFGs) being hosted by more massive haloes. However, it should also be mentioned that the two AGN populations studied in \citet{hale_cosmos} are at different redshifts, with the low-excitation population studied at $z\lesssim0.65$, so it is possible that this population may evolve into higher mass haloes at higher redshifts. Moreover, some works (for example \citealt{Mendez_2016}) do not find any relation between clustering and luminosity. Nevertheless, following \citet{hale_cosmos}, studies using larger samples covering a wide range of luminosities are required to probe the relationship with clustering.
It is also observed in Figure \ref{bias} that the bias parameter for SFGs (solid blue square) is higher than the SKADS predicted values for this population. This trend is consistent with that observed in the previous studies of \citet{hale_cosmos} (cyan circles) and \citet{arnab2020} (light magenta triangles); the trend observed in \citet{Dolley_2014} (light blue diamonds) is similar as well. There may be two reasons for this variation: contamination of the SFG sample by star-burst (SB) galaxies, or underestimation of the halo mass in SKADS. If a few SB galaxies are present in the SFG population, comparison with SKADS (blue dotted curve in Figure \ref{bias}) shows that the overall value of the bias (as well as $\mathrm{r_{0}}$) will be higher than for an uncontaminated sample. However, the most likely reason remains the second one: the halo mass used in SKADS is not a correct representation, as also hinted at by the values obtained from previous observations (for instance \citealt{Dolley_2014, hale_cosmos, arnab2020}) in Figure \ref{bias}. However, it is also seen from Figure \ref{r0_sep} that the spatial clustering length for SFGs agrees with SKADS. The exact reason for the agreement of $\mathrm{r_{0}}$ and the disagreement of $b(z)$ for SFGs with SKADS is unclear, and will be investigated in detail in later works.
Analyses like the one presented here are important for fully understanding how bias scales with redshift as well as with source properties like luminosity. This is important for cosmology, since the bias relates to dark matter distribution, and is thus essential for understanding the underlying cosmological parameters that define the Universe.
It should be mentioned here that the current study also has certain limitations. Due to unknown systematics at large scales and the lack of optical matches, the entire observed field could not be utilised for this study. Additionally, the source classification is based solely on the radio luminosity of the sources. As already mentioned, \citet{maglio2014} showed that the chance of contamination between the populations is very low using this criterion. Nonetheless, there are several other methods that can be used to classify AGNs and SFGs in a sample (detailed in Section \ref{classify}). Future works will present a more detailed analysis using the different multi-wavelength classification schemes available. Such multi-frequency studies, combined with the present work and similar studies of other fields, will enhance the knowledge of extragalactic sources and provide more insights into the processes governing their formation and evolution.
\section{Conclusion}
\label{conclusion}
This work investigates the higher-order source statistics, namely the angular and spatial clustering, of the sources detected in the Lockman Hole field. The data were observed by the legacy GMRT at 325 MHz; the details of the data analysis and catalogue extraction are discussed in \citet{aishrila1}. The initial step involved merging the multi-component sources present in the raw catalogue. The resultant catalogue was cross-matched with the SDSS and HELP catalogues to identify sources with either spectroscopic or photometric redshift information. A region of radius 1.8$^\circ$ around the phase center was selected for optical identifications, yielding $\sim$95\% matches. All sources with redshift information were separated into AGN and SFG populations using the radio luminosity criterion. The angular correlation function was determined for the combined population for separations between $36\arcsec$ and 2$^\circ$. A power law was fitted to this function, keeping a fixed power-law index of $\mathrm{\gamma = 1.80}$, as estimated theoretically \citep{Peebles1980}. This gave a clustering amplitude of $\mathrm{log_{10}(A)} = -2.73^{+0.11}_{-0.15}$.
The source population was further divided into AGNs and SFGs based on their radio luminosity, and clustering analyses were done for these populations as well. Using the redshift information and the clustering amplitude, spatial correlation length was determined using Limber inversion for the AGNs and SFGs. The correlation length and bias parameters have been obtained for the full sample, as well as the classified AGN and SFG population. For the full sample at z$_{median}\approx$ 0.78, $\mathrm{r_{0}}$ = 3.50$\mathrm{^{+0.50}_{-0.50}}$ Mpc h$^\textrm{-1}$ and $b(z)$ = 2.22$\mathrm{^{+0.33}_{-0.36}}$. For AGNs, the values are $\mathrm{r_{0}}$ = 8.30$\mathrm{^{+0.96}_{-0.91}}$ Mpc h$\rm ^{-1}$ and $b(z)$ = 3.74$\mathrm{^{+0.39}_{-0.36}}$ at z$_{median}\approx$ 1.02. At z$_{median}\approx$ 0.20, SFGs have values $\mathrm{r_{0}}$ = 3.22$\mathrm{^{+0.34}_{-0.32}}$ Mpc h$\rm ^{-1}$ and $b(z)$ = 1.06$\mathrm{^{+0.10}_{-0.10}}$. The clustering length for AGNs is reasonably consistent with \citet{gilli} as well as \citet{arnab2020}. For SFGs, the values are consistent with \citet{Dolley_2014}.
The obtained values have also been compared with the SKADS simulation of \citet{skads}. The comparative analysis suggests that the AGNs are dominated by FR-I galaxies, with host dark matter halo masses of $M_{h}$=5-6 $\times$ 10$^{13}h^{-1}M_{\odot}$. For SFGs, the halo mass estimated from SKADS is lower than the value of $\sim M_{h}$=3 $\times10^{12}h^{-1}M_{\odot}$ obtained here. The halo masses obtained here are in agreement with previous literature \citep{Dolley_2014, hale_cosmos, arnab2020}. It is worthwhile to mention that while the current classifications are based on the radio luminosity of each source alone, the results are in agreement with the clustering properties of AGN and SFG populations in X-ray and mid-IR surveys as well. However, there are some deviations from the predictions of the SKADS simulation. This deviation, seen in other observations as well, emphasizes the need for wider and deeper low-frequency observations. These will be able to constrain the host properties better, leading to better models of the formation and evolution of the different sources, and a better understanding of the distribution of these populations over space and time (redshift).
The study done in this work aims to characterize the clustering and bias of an observed population of radio-selected sources, both as a combined population and as distinct classes of sources (namely AGNs and SFGs). This work is the first to report the clustering properties of radio-selected sources at 325 MHz. Thus the current data, being at a frequency with little previous clustering study, bridge the gap between low- and high-frequency studies. This survey also has the advantage of covering a wider area than many recent studies, with moderately deep RMS values, thus probing a larger number of sources with fluxes at the sub-mJy level ($\sim$0.3 mJy). More such studies using observational data are required for constraining cosmology and probing how different source populations are influenced by their parent haloes and how they evolve with time (redshift).
Additionally, for sensitive observations of CD/EoR and post-EoR science, accurate and realistic models for the compact source populations that comprise a significant fraction of the foregrounds are required. Studies on the effect of imperfections in foreground modeling on power spectrum estimates will require detailed observational studies of source position and flux distributions. For realistic estimates, second-order statistics like clustering also cannot be ignored. Thus many more analyses like the one done in this paper will be required for better understanding the effects of the interplay between the various cosmological parameters on the different populations of sources and for putting constraints on said parameters. Sensitive large-area surveys like MIGHTEE \citep{Jarvis2016} with the MeerKAT telescope, LoTSS \citep{lotss_dr1, tasse2020, sabater} with LOFAR, and EMU \citep{Norris2011, norris2021} with ASKAP, as well as those to be done with the upcoming SKA-mid telescope, will provide wider as well as deeper data for cosmology. The current work demonstrates that even instruments like the legacy GMRT can provide reasonable data depth and coverage for cosmological observations.
\section*{ACKNOWLEDGEMENTS}
AM would like to thank Indian Institute of Technology Indore for supporting this research with Teaching Assistantship. AM further acknowledges Akriti Sinha for helpful suggestions and pointing to the HELP catalogue and Sumanjit Chakraborty for helpful discussions. The authors thank the staff of GMRT for making this observation possible. GMRT is run by National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. The authors also acknowledge SDSS for the spectroscopic redshift catalogues. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is \url{http://www.sdss3.org/}. The authors thank the anonymous reviewer for their thorough review that has helped to improve the quality of the work.
\section*{Data Availability}
The raw data for this study are available in the GMRT archive (\url{https://naps.ncra.tifr.res.in/goa/data/search}). The spectroscopic redshift data are available from \url{https://skyserver.sdss.org/casjobs/}, and the photometric redshift data are available from \url{https://hedam.lam.fr/HELP/}. The P{\tiny Y}BDSF catalogue used here (accompanying \citet{aishrila1}) is available on VizieR at \url{https://cdsarc.cds.unistra.fr/viz-bin/cat/J/MNRAS/495/4071}.
\section*{Software}
This work relies on the Python programming language (\url{https://www.python.org/}). The packages used here are astropy (\url{https://www.astropy.org/}; \citet{astropy:2013,astropy:2018}), numpy (\url{https://numpy.org/}), scipy (\url{https://www.scipy.org/}), matplotlib (\url{https://matplotlib.org/}), TreeCorr (\url{https://github.com/rmjarvis/TreeCorr}).
\bibliographystyle{mnras}
\bibliography{references} %
\bsp %
\label{lastpage}
Title:
Spectropolarimetry of the Thermonuclear Supernova 2021rhu: High Calcium Polarization 79 Days After Peak Luminosity |
Abstract: We report spectropolarimetric observations of the Type Ia supernova (SN)
2021rhu at four epochs: $-$7, +0, +36, and +79 days relative to its $B$-band
maximum luminosity. A wavelength-dependent continuum polarization peaking at
$3890 \pm 93$ Angstroms and reaching a level of $p_{\rm max}=1.78\% \pm 0.02\%$
was found. The peak of the polarization curve is bluer than is typical in the
Milky Way, indicating a larger proportion of small dust grains along the
sightline to the SN. After removing the interstellar polarization, we found a
pronounced increase of the polarization in the CaII near-infrared triplet, from
$\sim$0.3% at day $-$7 to $\sim$2.5% at day +79. No temporal evolution in
high-resolution flux spectra across the NaID and CaIIH&K features was seen from
days +39 to +74, indicating that the late-time increase in polarization is
intrinsic to the SN as opposed to being caused by scattering of SN photons in
circumstellar or interstellar matter. We suggest that an explanation for the
late-time rise of the CaII near-infrared triplet polarization may be the
alignment of calcium atoms in a weak magnetic field through optical
excitation/pumping by anisotropic radiation from the SN.
| https://export.arxiv.org/pdf/2208.12862 |
\newcommand{\vdag}{(v)^\dagger}
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}
\newcommand*{\hy}{\color{orange} HY:}
\usepackage{CJK}
\usepackage[whole]{bxcjkjatype}
\usepackage{CJKutf8}
\shorttitle{Spectropolarimetry of SN\,2021rhu}
\shortauthors{Yang et al.}
\graphicspath{{./}{figures/}}
\begin{document}
\title{\bf Spectropolarimetry of the Thermonuclear Supernova 2021rhu: \linebreak High Calcium Polarization 79 Days After Peak Luminosity}
\correspondingauthor{Yi Yang}
\email{[email protected]}
\author[0000-0002-6535-8500]{Yi Yang
\begin{CJK}{UTF8}{gbsn}
(杨轶)
\end{CJK}}
\affiliation{Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA}
\affiliation{Bengier-Winslow-Robertson Postdoctoral Fellow}
\author[0000-0003-2560-8066]{Huirong Yan}
\affiliation{Deutsches Elektronen-Synchrotron (DESY), Platanenallee 6, 15738 Zeuthen, Germany}
\affiliation{Institut f\"ur Physik und Astronomie, Universit\"at Potsdam, Haus 28, Karl-Liebknecht-Str.\ 24/25, 14476 Potsdam, Germany}
\author[0000-0001-7092-9374]{Lifan Wang}
\affiliation{George P.\ and Cynthia Woods Mitchell Institute for Fundamental Physics \& Astronomy, Texas A\&M University, 4242 TAMU, College Station, TX 77843, USA}
\author[0000-0003-1349-6538]{J. Craig Wheeler}
\affiliation{Department of Astronomy, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712-1205, USA}
\author[0000-0003-1637-9679]{Dietrich Baade}
\affiliation{European Organisation for Astronomical Research in the Southern Hemisphere (ESO), Karl-Schwarzschild-Str.\ 2, 85748 Garching b.\ M{\"u}nchen, Germany}
\author[0000-0002-0531-1073]{Howard Isaacson}
\affil{Department of Astronomy, University of California,
Berkeley, CA 94720-3411, USA}
\author[0000-0001-7101-9831]{Aleksandar Cikota}
\affiliation{European Organisation for Astronomical Research in the Southern Hemisphere (ESO), Alonso de Cordova 3107, Vitacura, Casilla 19001, Santiago de Chile, Chile}
\author[0000-0003-0733-7215]{Justyn R. Maund}
\affiliation{Department of Physics and Astronomy, University of Sheffield, Hicks Building, Hounsfield Road, Sheffield S3 7RH, UK}
\author[0000-0002-4338-6586]{Peter Hoeflich}
\affiliation{Department of Physics, Florida State University, Tallahassee, Florida 32306-4350, USA}
\author[0000-0002-0537-3573]{Ferdinando Patat}
\affiliation{European Organisation for Astronomical Research in the Southern Hemisphere (ESO), Karl-Schwarzschild-Str.\ 2, 85748 Garching b.\ M{\"u}nchen, Germany}
\author[0000-0002-8965-3969]{Steven Giacalone}
\affil{Department of Astronomy, University of California Berkeley, Berkeley, CA 94720-3411, USA}
\author[0000-0002-7670-670X]{Malena Rice}
\affiliation{Department of Astronomy, Yale University, New Haven, CT 06511, USA}
\altaffiliation{NSF Graduate Research Fellow}
\author[0000-0003-0298-4667]{Dakotah B. Tyler}
\affil{Department of Physics \& Astronomy, University of California, Los Angeles, CA 90095, USA}
\author[0000-0001-5965-0997]{Divya Mishra}
\affiliation{George P.\ and Cynthia Woods Mitchell Institute for Fundamental Physics \& Astronomy, Texas A\&M University, 4242 TAMU, College Station, TX 77843, USA}
\affiliation{European Organisation for Astronomical Research in the Southern Hemisphere (ESO), Karl-Schwarzschild-Str.\ 2, 85748 Garching b.\ M{\"u}nchen, Germany}
\author[0000-0002-5221-7557]{Chris Ashall}
\affiliation{Department of Physics, Virginia Tech, Blacksburg, VA 24061, USA}
\author[0000-0001-5955-2502]{Thomas~G.~Brink}
\affiliation{Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA}
\author[0000-0003-3460-0103]{Alexei~V.~Filippenko}
\affiliation{Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA}
\author[0000-0002-1296-6887]{Llu\'{i}s Galbany}
\affiliation{Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans, s/n, E-08193 Barcelona, Spain}
\affiliation{Institut d'Estudis Espacials de Catalunya (IEEC), E-08034 Barcelona, Spain}
\author[0000-0002-1092-6806]{Kishore C. Patra}
\affiliation{Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA}
\affiliation{Nagaraj-Noll-Otellini Graduate Fellow}
\author[0000-0002-9301-5302]{Melissa Shahbandeh}
\affiliation{Department of Physics, Florida State University, Tallahassee, Florida 32306-4350, USA}
\author[0000-0002-4951-8762]{Sergiy~S.~Vasylyev}
\affiliation{Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA}
\affiliation{Steven Nelson Graduate Fellow}
\author[0000-0001-8764-7832]{Jozsef Vink{\'o}}
\affiliation{Department of Astronomy, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712-1205, USA}
\affiliation{ Konkoly Observatory, CSFK, Konkoly-Thege M. \'ut 15-17, Budapest, 1121, Hungary}
\affiliation{ELTE E\"otv\"os Lor\'and University, Institute of Physics, P\'azm\'any P\'eter s\'et\'any 1/A, Budapest, 1117 Hungary}
\affiliation{Department of Optics \& Quantum Electronics, University of Szeged, D\'om t\'er 9, Szeged, 6720, Hungary}
\keywords{polarization --- galaxies: individual (NGC 7814) --- supernovae: individual (SN\,2021rhu)}
\section{Introduction} \label{sec:intro}
The mechanism that destroys carbon/oxygen white dwarfs (CO WDs) in binary
or multiple systems and produces Type Ia supernovae (SNe) remains
uncertain. A key to the distinction between various possible explosion
models is offered by measuring the explosion geometry through
spectropolarimetry (see \citealp{Howell_etal_2001}, and the review
by \citealp{Wang_wheeler_2008}). The continuum polarization observed in
normal Type Ia SNe is generally low within weeks after the SN explosion;
for instance, $p \lesssim 0.2$\% from about two weeks before
(SN\,2018gv, \citealp{Yang_etal_2020}) to six weeks after
(SN\,2001el, \citealp{Wang_etal_2003_01el, Kasen_etal_2003};
SN\,2006X, \citealp{Patat_etal_2009}) the optical light-curve maximum.
This implies a high overall spherical symmetry for Type Ia SNe.
For example, when seen equator-on, an oblate ellipsoid would have an
axis ratio of $\lesssim 1.1$ \citep{Hoeflich_1991}. In contrast, certain
prominent spectral lines, such as \ion{Si}{2} and \ion{Ca}{2}, usually
show higher degrees of polarization within weeks after the SN
explosion \citep{Wang_wheeler_2008, Cikota_etal_2019}. This line
polarization can be understood as the effect of clumps of chemically
distinguished material partially blocking the underlying photosphere.
The uneven distribution of the corresponding line opacity leads to an
incomplete cancellation of electric ($E$) vectors over the range of the
absorption wavelength, and hence a net line polarization.
Ignition of an exploding WD can, in principle, be in the center, off-center,
or throughout the volume of the WD; also, it may occur at a single knot,
within a confined region, or at multiple locations in the WD (see overviews
by \citealp{Alsabti_etal_2017}). Different explosion mechanisms can shape
distinct explosion geometries and the distributions of various elements
that affect the line polarization. Here we summarize some scenarios to
account for typical Type Ia SNe that are discussed in the literature.
(i) Deflagration-to-detonation transition may be caused by turbulence in the
flame front (delayed-detonation models, \citealp{Khokhlov_etal_1991}) or
strong pulsations of the WD (pulsational delayed-detonation
models, \citealp{Khokhlov_etal_1991_pddt, Khokhlov_etal_1993,
Hoeflich_etal_1996, Plewa_etal_2004, Jordan_etal_2008,
Bravo_etal_2009_ignition, Bravo_etal_2009_explosion, Jordan_etal_2012}). The
abundance distribution of the burning products in the deflagration phase is
likely to be mixed sufficiently to produce a high degree of homogeneity in
the density distribution. Only the most prominent features such as \ion{Si}{2}
and \ion{Ca}{2} are expected to show polarization (of $\lesssim1$\%). The
amplitude of line polarization may depend on the manner in which the ignition
is initiated \citep{Seitenzahl_etal_2013, Bulla_etal_2016b}.
(ii) Dynamical mergers between two WDs (see, e.g., \citealp{Iben_Tutukov_1984,
Webbink_1984, Pakmor_etal_2010}) will show significant asymmetry in all
abundances and in the density distribution, resulting in a wealth of
significantly polarized lines ($\sim 1$\%) across the optical spectrum
\citep{Pakmor_etal_2012, Bulla_etal_2016a}.
(iii) Head-on collision of the WDs may exhibit bimodal chemical distributions
including two distinct $^{56}$Ni regions \citep{Dong_etal_2015_snia} and
significant polarization owing to strong departure from spherical symmetry.
(iv) WDs with a helium shell may explode with mass below the Chandrasekhar
mass ($M_{\rm Ch}$) limit \citep{Taam_1980, Fink_etal_2010}. In this picture,
a detonation in the helium layer will send a shock wave inward, triggering a
second, off-center detonation in the inner C/O core \citep{Shen_etal_2010}.
Two- and three-dimensional hydrodynamic simulations suggest that an off-center
detonation of the WD causes the core to be more compressed in one direction,
thus producing an aspherical distribution of the intermediate-mass elements
(IMEs; \citealp{Bulla_etal_2016b, Boos_etal_2021}) and significant line
polarization.
Multidimensional hydrodynamic computations of the polarization of a variety of
models of Type Ia SNe were conducted for the phase between $-$10 to $+$30 days
relative to peak luminosity by \citet{Bulla_etal_2016a, Bulla_etal_2016b}.
During this phase, the photosphere recedes into the layers of intermediate-mass
elements (IMEs) produced in the outer regions of typical explosion models. Line
polarization of IMEs such as \ion{Si}{2} and \ion{Ca}{2} is associated with
absorption features, and thus traces the deviation from spherical symmetry of
the corresponding elements \citep{Hoeflich_1991, Wang_wheeler_2008}. The
photosphere continues to recede over time. After $\sim 2$ months, most lines
and the majority of the continuum flux come from the inner, Fe-rich ejecta. The
polarization properties and the chemical distribution of the inner regions
could potentially be inferred from nebular-phase spectropolarimetry; however,
such datasets are very rare.
Here we report spectropolarimetric observations of the Type Ia SN\,2021rhu
between $-$7 and $+$79 days relative to the time of $B$-band maximum light.
Two epochs of high-resolution flux spectra obtained at days $+$39 and $+$74
in order to search for any circumstellar matter (CSM) around the SN will also
be discussed. SN\,2021rhu was discovered 2021 July
01 \citep[UT dates are used throughout this paper;][]{Munoz-Arancibia_etal_2021_sn2021rhu} in the alert stream of the
Zwicky Transient Facility \citep{Bellm_etal_2019, Graham_etal_2019_ztf} in
the edge-on spiral galaxy NGC 7814 and has been classified as a Type Ia
SN \citep{Atapin_etal_2021_sn2021rhu}. A detailed study of the observational
properties of SN\,2021rhu will be presented by Patra et al.\ (in prep.). \\
\section{Observations and Data Reduction} \label{sec:obs}
\subsection{VLT FORS2 Spectropolarimetry} \label{sec:fors2}
Spectropolarimetry of SN\,2021rhu was conducted with the FOcal
Reducer and low-dispersion Spectrograph 2
(FORS2; \citealp{Appenzeller_etal_1998}) on UT1 (Antu) of the ESO Very Large
Telescope (VLT) in the Polarimetric Multi-Object Spectroscopy (PMOS) mode.
With a 1$\arcsec$-wide slit, the spectral resolving power was $R\approx440$
[or 13\,\AA, full width at half-maximum intensity (FWHM)] at the center of
the wavelength range from $\sim 3500$ to 9200\,\AA. An additional
observation on day $+$0 employed the 1200R grism with
$R \approx 2140$ \citep[corresponding to $\sim 3$\,\AA\ FWHM around the
\ion{Si}{2} $\lambda$6355 feature; see, e.g.,][]{Anderson_etal_2018}. In order
to maximize the blue wavelength coverage, we decided not to use an
order-sorting filter (the standard is GG435 with a cuton wavelength of
4350\,\AA). The contamination by second-order light, which starts beyond two
times the atmospheric cutoff (i.e., $2 \times 3300$\,\AA), has an almost
negligible effect on the extraction of the true polarization signal at the red
end, unless the source is very blue (see the Appendix
of \citealp{Patat_etal_2010}). Observations at early phases indicate that the
photometric and spectroscopic behavior of SN\,2021rhu resembles that of normal
Type Ia SNe \citep{Munoz-Arancibia_etal_2021_sn2021rhu,
Atapin_etal_2021_sn2021rhu}, so the SN was not very blue. Table~\ref{Table_pol}
assembles a log of the VLT observations and the extracted polarization
properties of SN\,2021rhu, as discussed in Section~\ref{sec:snpol}. Details of
the FORS2/PMOS data reduction and the derivation of the Stokes parameters
including a debias procedure of the degree of linear
polarization introduced by \citet{Wang_etal_1997}
can be found in the FORS2 Spectropolarimetry Cookbook and Reflex
Tutorial\footnote{\url{ftp://ftp.eso.org/pub/dfs/pipelines/instruments/fors/fors-pmos-reflex-tutorial-1.3.pdf}}, \citet{Cikota_etal_2017_fors2}, and in Appendix~A of \citet{Yang_etal_2020}, following the procedures
of \citet{Patat_etal_2006_polerr} and \citet{Maund_etal_2007_05bf}.
Below $\sim 4000$\,\AA, the sensitivity of the VLT FORS2 instrument decreases
rapidly. Therefore, flux calibration and estimation of the polarization error
at the very blue end of the optical spectrum may suffer from systematic
uncertainties, and the polarization features become hard to characterize.
\subsection{Keck HIRES spectroscopy} \label{sec:hires}
We obtained two sets of spectra of SN\,2021rhu with the High-Resolution Echelle
Spectrometer (HIRES; \citealp{Vogt_etal_1994}) instrument on the Keck-I 10\,m
telescope on 2021-08-23 (day $+$39) and 2021-09-27 (day $+$74). We used the C2
decker setup (14\arcsec$\times$0\farcs{861}, $R = 45,000$) and integrated for
$3 \times 900$\,s and $2 \times 1800$\,s, respectively, at the two epochs. The
spectra were reduced following a standard routine of the California Planet
Search project \citep{Howard_etal_2010}. We corrected the velocity zero point
of the spectral orders containing the \ion{Na}{1}\,D and \ion{Ca}{2}\,H\&K
lines to the rest frame using the recession velocity of
$v_{\rm res} = 1049$\,km\,s$^{-1}$ measured for the spiral host galaxy
NGC 7814 \citep{van_Driel_etal_2016} and the barycentric velocity determined
following \citet{Wright_etal_2014} for the UTC at each HIRES observation. \\
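The rest-frame correction described above amounts to dividing the observed wavelengths by the total line-of-sight Doppler factor. A minimal Python sketch follows (illustrative only; the function name is ours, and the per-observation barycentric term is omitted here because it varies from epoch to epoch):

```python
import numpy as np

C_KMS = 299792.458   # speed of light [km/s]
V_RES = 1049.0       # recession velocity of NGC 7814 [km/s] (van Driel et al. 2016)

def to_rest_frame(wave_obs, v_total_kms):
    """Shift observed wavelengths [Angstrom] to the rest frame for a total
    line-of-sight velocity (recession + barycentric) in km/s."""
    return np.asarray(wave_obs) / (1.0 + v_total_kms / C_KMS)

# Example: the Na I D1 line (5895.92 A) redshifted by the host recession
# velocity alone; dividing out the same factor recovers the rest wavelength.
wave_obs = 5895.92 * (1.0 + V_RES / C_KMS)
print(to_rest_frame(wave_obs, V_RES))   # recovers 5895.92
```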
\section{Results} \label{sec:results}
\subsection{Interstellar Polarization} \label{sec:isp}
At all epochs, the polarization measured toward SN\,2021rhu exhibits a clear
wavelength dependence (see the middle panels of Fig.~\ref{Fig_isp}). In the
optical/near-infrared domain, the wavelength ($\lambda$) dependence of the
interstellar polarization (ISP) can be approximated by the Serkowski
law \citep{Serkowski_etal_1975},
\begin{equation}
p(\lambda)/p(\lambda_{\rm max}) = \exp[-K \ln^2 (\lambda_{\rm max} / \lambda)] ,
\label{Eqn_Ser}
\end{equation}
where $\lambda_{\rm max}$ and $p(\lambda_{\rm max})$ denote the wavelength
and the level of the maximum polarization, respectively. The parameter $K$
describes the width of the interstellar polarization peak. For the purpose
of estimating the ISP, we exploit the fact that the intrinsic continuum
polarization of Type Ia SNe is generally negligible (i.e.,
$\lesssim 0.2$--0.3\%; \citealp{Wang_wheeler_2008}). Hence, we fitted
Serkowski's law to the polarization spectra of SN\,2021rhu at days $-$7 and
$+$0, and present the results in Figure~\ref{Fig_isp}. Data points near the
blue end of the spectra or belonging to the prominent and polarized
\ion{Si}{2}\,$\lambda$6355 and \ion{Ca}{2} near-infrared triplet (NIR3)
features were excluded in the fitting process. We take the \ion{Ca}{2} NIR3
feature to have a rest wavelength of $\lambda_{\rm0, Ca}=8570$\,\AA, averaged
over the three components of the triplet (8500.36\,\AA, 8544.44\,\AA, and
8664.52\,\AA). The fitted ISP curve for day $+$0 ($K=1.06\pm0.06$,
$\lambda_{\rm max} = 3890\pm93$\,\AA, and
$p(\lambda_{\rm max})=1.778\pm0.015$\%) has been adopted for the ISP
corrections throughout the paper. The parameters are consistent with the
values determined based on observations at day $-$7 (see the left panels of
Fig.~\ref{Fig_isp}), thus confirming the assumption of constant low
continuum polarization of SN\,2021rhu around its peak luminosity. The actual
fitting and correction process has been carried out separately for the Stokes
parameters $Q$ and $U$ to obtain their values intrinsic to the SN at all
observed phases. We also fitted the ISP by adding a secondary component which
characterizes any contribution from the Galactic dust. A Galactic reddening
component can be estimated as $E(B-V)^{\rm MW} = 0.04$\,mag based
on \citet{Schlafly_etal_2011} and \citet{Cardelli_etal_1989}. This provides
an upper limit on the polarization by the Milky Way dust induced by dichroic
extinction:
$p_{\rm ISP} \textless 9\% \times E(B-V)$ \citep{Serkowski_etal_1975}, and
$p_{\rm max}^{\rm MW}\leq0.36$\%. Adopting a fixed
$K^{\rm MW} = 1.15$ \citep{Serkowski_etal_1975}, we found that
$p_{\rm max}^{\rm MW}$ is consistent with zero, suggesting that the observed
ISP toward SN\,2021rhu is mainly contributed by its host.
\subsection{Intrinsic Polarization of SN\,2021rhu} \label{sec:snpol}
In Figures~\ref{Fig_iqu_ep1}--\ref{Fig_iqu_ep5} we present the ISP-corrected
spectra of Stokes parameters, degree of polarization, and polarization
position angle obtained at four epochs from days $-$7 to $+$79. The figures
cover a wavelength range 3600--9150\,\AA\ and the data have been rebinned
to 40\,\AA\ to increase the signal-to-noise ratio while ensuring that major
broad spectral features are sampled by at least $\sim 5$--10
resolution elements. The polarization position angle in panel (e) is presented
without subtracting the ISP, since ${\rm PA} = (1/2)\,{\rm tan}^{-1}(U/Q)$
becomes essentially random once a baseline ISP has been removed and the
residual $Q$ and $U$ are close to zero. The error bars in the histograms
presenting the polarization measurements represent the 1$\sigma$ uncertainties.
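The degree of polarization and position angle follow from the Stokes parameters as $p=\sqrt{q^2+u^2}$ and ${\rm PA}=\tfrac12\tan^{-1}(u/q)$. A hedged sketch in Python (the simple step-function debias shown here is a common stand-in and not identical to the \citet{Wang_etal_1997} prescription used in the paper):

```python
import numpy as np

def polarization(q, u, sigma=None):
    """Degree of polarization p [%] and position angle PA [deg] from
    Stokes q, u [%].  If sigma is given, apply a step-function debias
    p -> sqrt(p^2 - sigma^2); the paper follows the Wang et al. (1997)
    prescription, which differs in detail."""
    q, u = np.asarray(q, float), np.asarray(u, float)
    p = np.hypot(q, u)
    if sigma is not None:
        p = np.where(p > sigma, np.sqrt(p**2 - sigma**2), 0.0)
    pa = 0.5 * np.degrees(np.arctan2(u, q))  # atan2 avoids the Q -> 0 blow-up
    return p, pa

p, pa = polarization(1.0, 1.0)
print(p, pa)   # 1.4142..., 22.5
```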
The temporal evolution of the degree of polarization of SN\,2021rhu after the
ISP correction is shown in Figure~\ref{Fig_pol}. We also estimate the
continuum polarization based on the Stokes parameters over the wavelength
range 5000--6800\,\AA\ with the highly polarized \ion{Si}{2} lines excluded.
The error-weighted mean Stokes parameters $(q^{\rm Cont}$, $u^{\rm Cont})$
across this region from days $-7$ to $+$79 are presented in
Table~\ref{Table_pol}. The error has been estimated by adding the statistical
uncertainties and the standard deviation calculated from the 40\,\AA\ binned
spectra within the continuum wavelength range in quadrature.
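The averaging just described, an error-weighted mean with the statistical error and the bin-to-bin scatter added in quadrature, can be sketched as follows (illustrative helper with hypothetical input values; the function name is ours):

```python
import numpy as np

def weighted_mean_stokes(vals, errs):
    """Error-weighted mean of binned Stokes values; the total uncertainty
    adds the propagated statistical error and the bin-to-bin standard
    deviation in quadrature, as described in the text."""
    vals, errs = np.asarray(vals, float), np.asarray(errs, float)
    w = 1.0 / errs**2
    mean = np.sum(w * vals) / np.sum(w)
    stat = np.sqrt(1.0 / np.sum(w))          # propagated statistical error
    scatter = np.std(vals, ddof=1)           # bin-to-bin standard deviation
    return mean, np.hypot(stat, scatter)

# Hypothetical 40-A-binned continuum q values [%] and their errors.
q_bins = [0.05, -0.02, 0.01, 0.03]
q_errs = [0.02, 0.02, 0.03, 0.02]
mean_q, err_q = weighted_mean_stokes(q_bins, q_errs)
print(mean_q, err_q)
```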
The $q^{\rm Cont}$ and $u^{\rm Cont}$ are consistent with zero within the
selected wavelength range, and the level of
polarization intrinsic to SN\,2021rhu across the optical continuum remains
low from days $-$7 to $+$79. The most noticeable behavior in the
spectropolarimetric evolution of SN\,2021rhu is the strong increase in the
peak polarization of the \ion{Ca}{2} lines from days $+$36 to $+$79, namely
in \ion{Ca}{2}\,H\&K from $1.1 \pm 0.3$\% to $2.3 \pm 0.7$\%, and in
\ion{Ca}{2}\,NIR3 from $0.8 \pm 0.1$\% to $2.5 \pm 0.3$\% (see
Table~\ref{Table_pol} and Fig.~\ref{Fig_pol}), as measured in the data with
40\,\AA\ bin size. At all epochs, the peak polarization across
\ion{Ca}{2}\,NIR3 is significantly higher than in the continuum.
A polarization signal is seen in \ion{Si}{2}\,$\lambda$6355 on days $-$7 and
$+$0. Thereafter, it vanished in accordance with the disappearance of the
\ion{Si}{2} feature from the total-flux spectrum a few weeks past maximum
light. This early \ion{Si}{2} polarization can be understood as an
inhomogeneous obscuration of the photosphere. With the recession of the
photosphere into the interior ejecta below the Si-rich layer, the optical depth
of the Si becomes small, and polarization by blocking portions of the
photosphere cannot occur.
The polarization profiles of the \ion{Ca}{2}\,NIR3 and
\ion{Si}{2}\,$\lambda$6355 lines exhibit rich structures that also evolve with
time. For the former complex, we infer the expansion velocity from the
absorption minimum of the \ion{Ca}{2}\,NIR3 feature in the total-flux spectrum,
namely $v_{\rm Ca}=$ 13,800, 13,200, 12,800, and 12,200\,km\,s$^{-1}$, at days
$-7$, $+$0, $+$36, and $+$79, respectively, as shown in
Figures~\ref{Fig_iqu_ep1}--\ref{Fig_pol}. We also mark the corresponding
expansion velocities of the three transitions with green vertical solid lines.
The \ion{Ca}{2}\,H\&K feature has almost the same velocity as measured from
\ion{Ca}{2}\,NIR3, and we also characterize it with $v_{\rm Ca}$. The velocity
of \ion{Si}{2}\,$\lambda$6355 is also labeled in
Figures~\ref{Fig_iqu_ep1}-\ref{Fig_iqu_ep2} and \ref{Fig_pol}--\ref{Fig_iqu_ep3},
namely $v=$ 13,100 and 12,000\,km\,s$^{-1}$ at days $-7$ and $+$0, respectively.
Identifications of major spectral lines are provided for days
$-$7 and $+$0, before the SN has entered the nebular phase. All spectral features
except for the \ion{Ca}{2} lines marked in Figures~\ref{Fig_iqu_ep1} and
\ref{Fig_iqu_ep2} are labeled for the photospheric velocity $v$. Different colors
were used to provide a better separation of the lines. Major telluric features
are marked with gray-shaded areas.
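The quoted expansion velocities follow from the blueshift of the absorption minimum relative to the adopted mean rest wavelength, and the velocity uncertainties from the width of the smallest resolution element. A minimal non-relativistic sketch (the example minimum wavelength is hypothetical, chosen to reproduce a day $-$7-like value):

```python
import numpy as np

C_KMS = 299792.458
LAM0_CA = 8570.0   # adopted mean rest wavelength of the Ca II NIR triplet [A]

def v_expansion(lam_min, lam0=LAM0_CA):
    """Expansion velocity [km/s] from a blueshifted absorption minimum
    (non-relativistic Doppler approximation)."""
    return C_KMS * (lam0 - lam_min) / lam0

# A minimum blueshifted to ~8177 A corresponds to ~13,750 km/s,
# close to the day -7 value quoted in the text.
print(v_expansion(8177.0))

# Velocity uncertainty of half a 40-A bin at this wavelength (~700 km/s):
print(C_KMS * 20.0 / LAM0_CA)
```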
At day $+$0, two peaks appear in the polarization profile of \ion{Ca}{2}\,NIR3,
at $\sim 15,400$ and $\sim 8700$\,km\,s$^{-1}$ with respect to the rest frame
of SN\,2021rhu. They bracket the \ion{Ca}{2}\,NIR3 and \ion{Ca}{2}\,H\&K
velocities of $\sim 13,200$\,km\,s$^{-1}$ measured at the same epoch. By day
$+$79, three peaks near 15,700, 11,500, and 7,500\,km\,s$^{-1}$ had developed
in the \ion{Ca}{2}\,NIR3 polarization profile. By this epoch, $v_{\rm Ca}$ had
decreased only modestly, to 12,200\,km\,s$^{-1}$. The uncertainty of the
stated velocities is dominated by the width of the smallest resolution element,
namely half of 8\,\AA\ for \ion{Si}{2}$\,\lambda$6355 in the higher-resolution
observation and half of 40\,\AA\ for \ion{Ca}{2}\,NIR3, corresponding to
$\sim 200$ and $\sim 700$\,km\,s$^{-1}$, respectively. The uncertainties
represent the maximum possible error owing to rounding of wavelengths in
spectral bins. Inspection of the raw data before spectral rebinning confirms
the reality of the structures as well as of their evolution. From the available
data, it is not possible to conclude whether the three polarization peaks
correspond to the three triplet components or have a different origin.
At days $-$7 and $+$0, the expansion velocity of the SN ejecta inferred from
the absorption minimum in the \ion{Si}{2}\,$\lambda$6355 flux profile was
13,100 and 12,000\,km\,s$^{-1}$ (marked by the vertical solid and dotted-dashed
lines in Figures~\ref{Fig_iqu_ep1}--\ref{Fig_iqu_ep2} and ~\ref{Fig_pol}),
respectively. The higher-resolution polarization profile of this line from day
$+$0 (this observation does not cover the \ion{Ca}{2}\,NIR3 region) exhibits
two peaks, at about 13,500 and 9,600\,km\,s$^{-1}$ (highlighted by the dashed
dark-green lines in Figs.~\ref{Fig_iqu_ep3}d and \ref{Fig_iqu_ep3}e); their
velocities differ from that of the absorption minimum. A peak and a trough in
$Q$ at the position of
\ion{Si}{2}\,$\lambda$6355 (Fig.~\ref{Fig_iqu_ep3}b) lead to different
position angles (Fig.~\ref{Fig_iqu_ep3}e). Without ISP subtraction, the
polarization position angles of the blue ($-50.0\pm2.5$ deg) and the red
($-57.5\pm2.0$ deg) components bracket that of the continuum
($-53.5 \pm 1.3$ deg). The position angles of the blue and the red components
were estimated by taking the error-weighted mean value in a velocity range of
$\pm 800$\,km\,s$^{-1}$ relative to the respective peak. The continuum PA was
computed between 6300\,\AA\ and 6700\,\AA. In the low-resolution observations
on the same date with grism 300V (overplotted in brown in
Figs.~\ref{Fig_iqu_ep3}(b)--(d)), only traces of the complex polarization
structure are visible as some asymmetry. The differences between
\ion{Si}{2}\,$\lambda$6355 and \ion{Ca}{2}\,NIR3 in the structure and evolution
of their multiple polarization components will be analyzed by Patra et al.\
(in prep.).
\subsection{\ion{Na}{1}\,D and \ion{Ca}{2}\,H\&K Flux Profiles}
Since any CSM will be exposed to intense ultraviolet (UV) radiation from the
SN explosion, embedded elements such as Na and Ca are likely to be ionized.
Unlike most of the broad features, which arise from the rapidly expanding
ejecta, lines from such circumstellar material would be narrow enough that
subcomponents of each feature can be resolved at high
spectral resolution. Additionally, the \ion{Na}{1}\,D line provides a good
tracer of gas and dust, and its strength is correlated with the dust reddening
along the line of sight \citep{Munari_etal_1997,Poznanski_etal_2012,
Phillips_etal_2013}. Temporal variability of such narrow absorption lines
would indicate the evolving conditions of the CSM ionization induced by the
variable SN radiation field. Therefore, this observational signature has been
used to search for CSM around Type Ia SNe \citep{Patat_etal_2007,
Simon_etal_2009, Sternberg_etal_2011, WangX_etal_2019}.
In order to investigate any temporal evolution of circumstellar Na and Ca line
profiles of SN\,2021rhu, we approximate the pseudocontinuum by a low-order
polynomial fitted to the spectrum between $\sim \pm 20$\,\AA\ of the central
wavelength with the absorption line excluded. The flux spectrum spanning each
line profile was then divided by the pseudocontinuum spectrum. The line
profiles are shown in Figure~\ref{Fig_hires}. For SN\,2021rhu, we identify no
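The normalization just described can be sketched as follows (the window sizes, polynomial order, and synthetic test line below are our assumptions for illustration, not the values used for Fig.~\ref{Fig_hires}):

```python
import numpy as np

def normalize_line(wave, flux, center, half_window=20.0, exclude=5.0, deg=2):
    """Divide a spectrum by a low-order polynomial pseudocontinuum fitted
    within +-half_window A of `center`, excluding +-exclude A around the
    line itself."""
    near = np.abs(wave - center) <= half_window
    cont = near & (np.abs(wave - center) > exclude)
    coeffs = np.polyfit(wave[cont], flux[cont], deg)
    return wave[near], flux[near] / np.polyval(coeffs, wave[near])

# Synthetic test: a sloped continuum with a Gaussian absorption at Na I D2.
wave = np.linspace(5860.0, 5920.0, 600)
cont = 1.0 + 2e-3 * (wave - 5890.0)
flux = cont * (1.0 - 0.5 * np.exp(-0.5 * ((wave - 5889.95) / 1.0) ** 2))
w, f = normalize_line(wave, flux, 5889.95)
print(f.min())   # the normalized line depth, ~0.5
```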
temporal evolution in the \ion{Na}{1}\,D doublet (5895.92, 5889.95\,\AA) and
the \ion{Ca}{2}\,H\&K doublet (3968.47, 3933.66\,\AA). At both epochs, two
complexes of absorption features with centroids at $\sim -10$ and
$\sim +10$\,km\,s$^{-1}$ plus a shallower component at $\sim +60$\,km\,s$^{-1}$
are visible in the lines of \ion{Ca}{2}\,H\&K. The \ion{Na}{1}\,D doublet
exhibits a broad, saturated profile at almost zero velocity blended with a
narrow component near 15\,km\,s$^{-1}$, plus another narrower component
centered at $\sim 52$\,km\,s$^{-1}$. The reddest component displays a slightly
higher velocity in \ion{Ca}{2}\,H\&K compared to the \ion{Na}{1}\,D doublet.
The relatively simple line profiles of SN\,2021rhu do not indicate the presence
of numerous resolved velocity components that typically span a velocity range
of a few hundred km\,s$^{-1}$ as found in some strongly reddened Type Ia SNe
(see, e.g., SN\,2006X, \citealp{Patat_etal_2007};
SN\,2014J, \citealp{Graham_etal_2015}; and
SN\,1999cl, \citealp{Blondin_etal_2009}). SN\,2007le exhibited narrow,
time-variable \ion{Na}{1}\,D absorption and a moderate amount of
reddening \citep[$E(B-V)=0.27$\,mag;][]{Simon_etal_2009}. The same feature in
SN\,2021rhu displayed a single component, while a complex absorption profile
with $\sim 7$ distinct velocity components was found in SN\,2007le. Most of the
\ion{Na}{1} and \ion{Ca}{2}\,H\&K profiles in SN\,2021rhu are confined between
$\sim -20$ and $+60$\,km\,s$^{-1}$. They most likely arise in a single
interstellar dust sheet intersecting the sight line. The lack of temporal
evolution suggests constant ionization conditions of the dust cloud exposed to
the variable SN radiation field between the observations of HIRES spectroscopy
at days $+$39 and $+74$. However, the Keck HIRES observations starting from day
$+$39 cannot rule out the possibility that the CSM may have been present at
smaller distances to the SN and evaporated by the radiation at early phases.
Alternatively, the Na absorption may arise entirely in foreground dust at
large distances. In any case, we infer a rather clean circumstellar environment
around day $+$80, and a large distance between SN\,2021rhu and the dust cloud
where the interstellar lines form. \\
\section{Discussion} \label{sec:discussion}
The polarization of SN\,2021rhu increases steeply from red to blue
wavelengths and peaks significantly blueward of the Galactic average of
$\approx 5500$\,\AA\ \citep{Whittet_etal_1992}, i.e.,
$p_{\rm max}=1.78\pm0.02$\% and $\lambda_{\rm max}=3890\pm93$\,\AA.
Such behavior has been seen in some Type Ia SNe that show strong
extinction ($E_{B-V}^{\rm host}\gtrsim$0.5 mag), but the peak polarization
of SN\,2021rhu is lower compared to such events. For example,
SNe\,1986G ($p_{\rm max}=$5.16$\pm$0.04\%,
$\lambda_{\rm max}=$4300$\pm$10 \AA, \citealp{Hough_etal_1987}),
2006X ($p_{\rm max} \textgreater$8\%,
$\lambda_{\rm max}\lesssim$4000 \AA, \citealp{Patat_etal_2009}),
2008fp ($p_{\rm max} \textgreater$2.2\%,
$\lambda_{\rm max}\lesssim$4200 \AA, \citealp{Cox_Patat_2014}), and
2014J ($p_{\rm max} \textgreater$6.6\%,
$\lambda_{\rm max}\lesssim$4000 \AA, \citealp{Kawabata_etal_2014,
Patat_etal_2015, Srivastav_etal_2016, Porter_etal_2016, Yang_etal_2018_pol}).
However, the wavelength dependence of the polarization of SN\,2021rhu still
deviates significantly from the Milky Way average, indicating a higher
proportion of small grains along the sightline to SN\,2021rhu than in
the mean Galactic dust \citep{Whittet_etal_1992, Draine_2003a}.
After the ISP correction based on the low continuum polarization of Type Ia
SNe around peak brightness, major spectral lines of SN\,2021rhu show moderate
polarization. This behavior is consistent with that generally found for Type
Ia SNe. The low continuum polarization at day +$79$ also suggests the absence
of a large amount of CSM within $\sim 2 \times 10^{17}$\,cm. This is because the
photons scattered by a circumstellar dust cloud can be highly polarized at
large scattering angles, causing a significant increase in the continuum
polarization \citep{Wang_etal_1996, Yang_etal_2018_pol}. In order to produce
a net polarized signal via CSM scattering, the CSM distribution on the plane
of the sky needs to deviate from point symmetry. Such symmetry would lead to
a complete cancellation of the electric vectors (i.e., zero net polarization).
SN\,2021rhu exhibits a highly spherical explosion, as indicated by its
continuum polarization being consistent with zero from as early as $\sim 7$
days before peak luminosity. Such a high degree of spherical symmetry is
inconsistent with the WD-WD merger-induced explosion models, which predict
modest ($\gtrsim 0.3$\%) to strong ($\sim 1$--2\%) continuum polarization as seen
along and perpendicular to the equatorial plane,
respectively \citep{Bulla_etal_2016a}. The intermediate line polarization
observed before and around the peak luminosity of SN\,2021rhu and the presence
of the small-scale polarization structures across the spectral lines are
compatible with the moderate chemical nonuniformity predicted by the
delayed-detonation and sub-$M_{\rm Ch}$ helium-shell detonation
scenarios \citep{Bulla_etal_2016b}.
Spectropolarimetry of Type Ia SNe beyond $\sim 30$\,days is still very rare.
Our observations of SN\,2021rhu offer the first opportunity to follow the
temporal evolution, and exploit the diagnostic power, of the polarization of
spectral features at such late epochs.
\subsection{Multiple Polarization Components of \ion{Si}{2}\,$\lambda$6355~\label{sec:si}}
The additional higher-resolution spectropolarimetry obtained around the peak
luminosity of SN\,2021rhu allows us to determine the geometric signatures left
behind by the propagation of the burning front at smaller physical scales,
on the order of a few hundred km\,s$^{-1}$. As shown in
Figure~\ref{Fig_iqu_ep3}, the polarization modulation, which is unresolved in
the conventional flux spectrum (although the latter has a spectral resolution
more than an order of magnitude higher), suggests that more than one major
line-forming region is present. For comparison, in Figure~\ref{Fig_iqu_ep3}
we also present the Stokes parameters obtained at the same epoch with the
low-resolution 300V grism as shown in Figure~\ref{Fig_iqu_ep2}. Such a
polarized line complex near the SN light-curve peak has also been reported for
SN\,2018gv \citep{Yang_etal_2020}. The shape of the polarized line profile of
SN\,2021rhu is not identical to that in SN\,2018gv, but both cases may arise
from multiple opacity components of clumps and/or shells.
Additionally, the small-scale modulations on the order of a few hundred
km\,s$^{-1}$ can only be discerned in the spectropolarimetry spectra obtained
with the higher-resolution grism 1200R, rather than the low-resolution grism
300V. The comparison between the low- and high-resolution observations also
demonstrates the feasibility of resolving small-scale structures in the SN
ejecta with high-resolution spectropolarimetry.
In addition to the sub-structures across the polarization spectrum of the
\ion{Si}{2}$\lambda$6355 at the SN peak luminosity, polarization peaks are also
seen across both \ion{Si}{2}$\lambda$6355 and \ion{Si}{2}$\lambda\lambda$5898,
5979 absorption features at day $-$7. These narrow peaks are no longer
distinguishable at day $+$0 (see Figures~\ref{Fig_iqu_ep1}--\ref{Fig_iqu_ep2},
and Figure~\ref{Fig_pol}). The narrow polarization components are real and they
are also seen with smaller spectral bin sizes. Their FWHMs are
$\lesssim 1000$\,km\,s$^{-1}$ as measured at day $-$7, reflecting the presence
of Si-rich clumps with a similar scale in velocity space that intersect the
photosphere. At day $+$0, the photosphere recedes into the deeper layers of the
ejecta. The disappearance of these narrow polarization peaks and the evolution
of the small-scale modulations seen in Figures~\ref{Fig_iqu_ep2},
\ref{Fig_pol}, and \ref{Fig_iqu_ep3} can be understood if the retreating
photosphere has passed the clumps of Si-line opacity that produced the narrow
polarization peaks at day $-$7. Future spectropolarimetric observations with a
higher cadence will provide greater spatial resolution in depth, thereby
enabling a more comprehensive tomography of the SN ejecta and a tracing of the
geometric properties of any small-scale structures.
\subsection{The Dominant Axes of the Continuum and the \ion{Ca}{2}\,NIR3 Line Polarization~\label{sec:qu}}
Presenting the spectropolarimetry on the Stokes $Q$--$U$ plane provides an
intuitive layout of the axisymmetry of the continuum and across different
spectral features \citep{Wang_etal_2001}. For example, an axially symmetric
structure will imprint a straight line in the Stokes $Q$--$U$ plane, the
so-called ``dominant axis'' \citep{Wang_etal_2003_01el, Maund_etal_2010_05hk}:
\begin{equation}
U = \alpha + \beta Q.
\label{Eqn_daxis}
\end{equation}
After ISP correction, departures from spherical and axial symmetries in the
ejecta are indicated in the $Q$--$U$ diagram by their distances from the origin
and deviations from the dominant axis, respectively.
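For concreteness, the error-weighted fit of Eq.~(\ref{Eqn_daxis}) can be sketched as follows (a minimal Python illustration; the function name and the position-angle convention $\theta_d = \frac{1}{2}\arctan\beta$ are ours, not taken from the actual reduction pipeline):

```python
import numpy as np

def fit_dominant_axis(q, u, sigma_u):
    """Error-weighted least-squares fit of U = alpha + beta*Q to
    ISP-corrected Stokes parameters (the 'dominant axis').

    Returns (alpha, beta, theta_d), with theta_d = 0.5*arctan(beta)
    in degrees as one common position-angle convention.
    """
    w = 1.0 / sigma_u**2
    A = np.vstack([np.ones_like(q), q]).T          # design matrix [1, Q]
    cov = np.linalg.inv(A.T @ (w[:, None] * A))    # (A^T W A)^{-1}
    alpha, beta = cov @ A.T @ (w * u)
    theta_d = 0.5 * np.degrees(np.arctan(beta))
    return alpha, beta, theta_d
```

Departures of individual points from the fitted line then quantify deviations from axial symmetry, e.g., via the $\chi^2$ about the axis.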
In Figure~\ref{Fig_qu} we present the ISP-corrected Stokes parameters in the
Stokes $Q$--$U$ plane between days $-$7 and $+$79. The dominant axes of
SN\,2021rhu were determined by performing an error-weighted linear
least-squares fitting to the continuum polarization
($3850 \leq\lambda\leq 9100$\,\AA) and to the \ion{Ca}{2}\,NIR3 feature
($7800 \lesssim\lambda\lesssim 8510$\,\AA, corresponding to a velocity range
from 27,000 to 2000\,km\,s$^{-1}$), respectively. The fitted dominant axes
for SN\,2021rhu at different epochs are plotted in Figure~\ref{Fig_qu}, and
the derived parameters are given in Table~\ref{Table_pol}. Based on the
$Q$--$U$ diagrams in the left panels of Figure~\ref{Fig_qu}, the optical ejecta
of SN\,2021rhu observed in the first three epochs (days $-$7 to $+$36) are
consistent with a dominant axis that rotates with time, while deviations from
a single axial symmetry are also notable as indicated by the scatter about
the dominant axis reflected by the large values of $\chi^2$ labeled in each
subpanel. At day $+$79, the dominant axis fitted to the optical spectrum is
mostly defined by the significantly polarized \ion{Ca}{2}\,NIR3 line. After
excluding the wavelength range covering the \ion{Ca}{2}\,NIR3 feature (i.e.,
the black-circled data points in the left rectangular panel of
Fig.~\ref{Fig_qu}), the remaining data points are scattered around (0,0)
and do not exhibit any prominent dominant axis. Therefore, we conclude that
the optical ejecta do not follow any conspicuous axial symmetry at day $+79$.
The $Q$--$U$ diagram across the \ion{Ca}{2}\,NIR3 feature exhibits complicated configurations. Loop-like structures can be seen before day $+$36. Such
patterns across the \ion{Ca}{2}\,NIR3 line are observed in other typical Type
Ia SNe and represent variations in the amplitude and orientation of the
polarization as a function of velocity or depth, indicating the deviation
from axial symmetry \citep{Wang_wheeler_2008}. At the last epoch at day $+$79
(the large left and right subpanels in Fig.~\ref{Fig_qu}), the scatter about
the dominant axis of the Ca line is smaller, but the complex \ion{Ca}{2}\,NIR3
polarization profile cannot be described by a single loop. Within the
framework of a model in which the polarization is produced by line opacity
that unevenly blocks the underlying photosphere, the \ion{Ca}{2}\,NIR3
polarization may indicate the presence of multiple Ca-rich components in the
inner layers of the ejecta. Although the photosphere of Type Ia SNe may still
persist after day $\sim 100$ as evidenced by the presence of permitted
lines \citep{Black_etal_2016}, the above interpretation of the presence of
multiple Ca-rich components obscuring the SN photosphere will be invalid if,
at late phases, the ejecta have become sufficiently optically thin that
electron scattering may not be able to produce considerable polarization
signals. In \S \ref{sec:gsa}, we propose an alternative mechanism of atomic
alignment that generates polarized signals not by patchy photospheric
electron scattering but through the alignment of atomic angular momentum in an
anisotropic radiation field.
\subsection{The Late-Time Increase in the Polarization of the \ion{Ca}{2}
Lines~\label{sec:capol}}
The apparent increasing \ion{Ca}{2}\,NIR3 polarization of SN\,2021rhu at days
$+$36 and $+$79 has previously only been reported for SN\,2006X, which showed
an increase from $\sim 0.6$\% around maximum luminosity to $\sim 1.2$\%
at day $+$39 \citep{Patat_etal_2009}. SN\,2001el, which is the only other Type
Ia SN with \ion{Ca}{2}\,NIR3 polarization measured at a considerable late
phase, did not show such polarization signals at day
$+$41 \citep{Wang_etal_2003_01el}. Although the uncertainty becomes larger
near the blue end of the spectral coverage, a similar evolution of \ion{Ca}{2}
polarization is also seen from the \ion{Ca}{2}\,H\&K lines of SN\,2021rhu. The
nondetection of the \ion{Ca}{2}\,NIR3 polarization in SN\,2001el at day
$+$41 \citep{Wang_etal_2003_01el} may be due to an orientation effect or an
intrinsic diversity among Type Ia SNe. Considering the sparse sample of
polarimetry, a meaningful conclusion is not currently feasible.
The evolution of the \ion{Ca}{2}\,NIR3 polarization from $\sim 0.3$\% around
the peak to $\sim 2.5$\% at day $+$79 is thus unprecedented. For example, the
polarization peak in the \ion{Ca}{2}\,NIR3 profile of Type Ia SNe around
maximum light is generally $\lesssim 1$\% \citep{Wang_wheeler_2008}. The only
exception to date, SN\,2004dt, displayed a peak \ion{Ca}{2}\,NIR3 polarization of
$\sim 0.7$--2\% \citep{Wang_etal_2006_04dt}. This high line polarization was
observed around the peak luminosity, and it is incompatible with the
$\lesssim 0.8$\% maximum line polarization predicted by hydrodynamic models
for two-dimensional (2D) double detonation and three-dimensional (3D) delayed
detonation \citep{Bulla_etal_2016b}.
This ``repolarization'' behavior seen in SN\,2006X has been explained in the
context of a deflagration/detonation model by a partial blocking of the
photosphere when it had receded to within the inner edge of the Ca layer at
8000--9000\,km\,s$^{-1}$ \citep{Patat_etal_2009}. Detailed modeling of the
degree of asymmetry between the bottom of the Ca layer and the outer parts of
the Fe-rich ejecta at later times is necessary to determine whether the
\ion{Ca}{2}\,NIR3 polarization of SN\,2021rhu at day $+$79 can be accommodated
in the context of an $M_{\rm Ch}$ deflagration/detonation picture. This may be
a challenge. As the ejecta expand, the optical depth decreases approximately
$\propto r(t)^{-2}$ or $\propto t^{-2}$, where $r(t)$ represents the radius
reached by freely expanding ejecta at time $t$ after the SN explosion.
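This scaling follows directly for homologous expansion with $r(t) \simeq vt$ and a fixed ejecta mass $M_{\rm ej}$ and opacity $\kappa$ (an order-of-magnitude sketch, not a radiative-transfer result):
\[
\tau \simeq \kappa\,\rho\,r \propto \kappa\,\frac{M_{\rm ej}}{r^{3}}\,r
\propto r^{-2} \propto t^{-2}.
\]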
After a few months, the size of the $\tau=2/3$ photosphere has
substantially decreased. The light emitted by the SN is dominated by numerous
overlapping line transitions of Fe-group elements. The abundant overlying
narrow lines form a `quasi-continuum' spectrum, superimposed by several strong
Fe and Co emissions. The late-time emission-dominated spectra trace the
distribution of the Fe-group burning products near the central energy source.
These emissions are essentially unpolarized, resulting in an absence of
polarized photons to be blocked in the first place.
Therefore, the decreasing scattering cross-section over time makes the
photosphere-obscuration mechanism less likely to account for the significant
\ion{Ca}{2} polarization at late times. Owing to the relatively large
systematic uncertainties in the very blue end of the optical continuum
(\S\ref{sec:fors2}), we refrain from an interpretation of the polarization
behavior measured across the \ion{Ca}{2}\,H\&K lines.
\subsection{Considerations Based on Atomic Alignment in a Weak Magnetic Field~\label{sec:gsa}}
We suggest an alternate possibility that might produce the high \ion{Ca}{2}
polarization as found in SN\,2021rhu at day $+79$. This process involves
photo-excitation in an anisotropic radiation field when the
electron-scattering opacity in the SN ejecta has decreased substantially.
The lower levels of \ion{Ca}{2}\,NIR3 are metastable states that can be
geometrically aligned through photo-excitation by an anisotropic radiation
field. In an interstellar medium that has a weak magnetic field, a subsequent
realignment of the angular momentum of the atoms in their ground state may
happen through magnetic precession (see, e.g., \citealp{Happer_1972,
Landolfi_etal_1986}). Such a magnetic realignment will take place if the
Larmor precession rate, $\nu_{\rm L}$, is greater than the photo-excitation
rate, $\tau_{R}^{-1}$, from the ground state of the atoms:
$\nu_{\rm L} > \tau_{R}^{-1}$ \citep{Yan_Lazarian_2006}. The atoms'
angular momentum will then be realigned with respect to the magnetic field.
In the case of $\nu_{\rm L} \approx \tau_{R}^{-1}$, this ``Hanle effect'' will
become effective in a relatively stronger magnetic field \citep{Hanle_1924,
Ignace_etal_1997, Yan_Lazarian_2008}.
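To give a feeling for the numbers entering the $\nu_{\rm L} > \tau_{R}^{-1}$ condition, a back-of-the-envelope estimate of the Larmor rate can be made (a sketch assuming a Land\'e factor $g_J \approx 1$; the constants and helper names below are ours, not from the cited works):

```python
MU_B = 9.2740e-24   # Bohr magneton [J/T]
H = 6.6261e-34      # Planck constant [J s]

def larmor_rate(B_gauss, g_J=1.0):
    """Larmor precession rate nu_L = g_J * mu_B * B / h, in Hz."""
    return g_J * MU_B * (B_gauss * 1e-4) / H   # 1 G = 1e-4 T

def critical_field(rate_hz, g_J=1.0):
    """Field strength [G] at which nu_L equals a given excitation rate."""
    return rate_hz * H / (g_J * MU_B * 1e-4)

# For a milligauss field, nu_L is of order a kHz:
print(f"nu_L(1 mG) ~ {larmor_rate(1e-3):.0f} Hz")
```

Comparing such a rate with the relevant photo-excitation rate out of the aligned level then indicates whether magnetic realignment (or the Hanle regime) applies for a given field strength.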
The incident flux must be anisotropic in order to differentially excite the
atoms in different magnetic sublevels (see, e.g., \citealp{Happer_1972,
Landolfi_etal_1986}; and the review by \citealp{Yan_Lazarian_2012}). In the
presence of anisotropic flux, the angular momentum of the atoms will also be
distributed anisotropically. The result is the induction of unequal
populations over the magnetic sublevels that correspond to different magnetic
quantum numbers, and hence the production of polarized radiation. This picture
is compatible with the configuration of Type Ia SNe in which \ion{Ca}{2}-rich
matter is illuminated by a central emission source. In this case, the
radiation field is primarily radial and hence naturally anisotropic, and the
photon pumping is intrinsically anisotropic. Under the framework of atomic
alignment, the spatial distribution of \ion{Ca}{2} cannot be inferred from its
polarization. The photons are polarized through the interaction between
optical pumping by an anisotropic radiation field and the ambient magnetic
field, so the polarization mechanism will be in effect regardless of the
spatial distribution of \ion{Ca}{2}. This is totally different from the
photosphere-blocking mechanism.
Particularly in the later stages of the expansion of a Type Ia SN, the ejecta
become thin enough that the impact of collisions diminishes. In the case of
\ion{Ca}{2}, the photo-excitation from the ground state $^2S_{1/2}$ is
dominated by the two E1 transitions of the \ion{Ca}{2}\,H$\&$K lines and
followed by the cascade to the two metastable states $^2D_{3/2,5/2}$. As
proposed by \cite{Yan_Lazarian_2006}, the metastable states of
\ion{Ca}{2}\,NIR3 can be aligned in an anisotropic radiation field and
magnetically realigned in the same fashion as their ground states. This
happens to some atomic species because their metastable states are also
long-lived, which makes them act similarly to the ground states and become
sensitive to a weak magnetic field.
Assuming a black-body radiation field of 5000\,K, photo-excitation from the
ground state of \ion{Ca}{2}\,NIR3 occurs at a rate of
$\sim2\times 10^4$\,s$^{-1}$ \citep{Yan_Lazarian_2006}. This corresponds to the
inverse of the lifetime of the metastable states, which in turn yields a low
magnetic sensitivity of $\sim1$\,mG for the
\ion{Ca}{2}\,NIR3 \citep{Carlin_etal_2013}. In other words, the polarization
of the \ion{Ca}{2}\,NIR3 line potentially traces the component of the local
magnetic field in the plane of the sky when the magnetic field is stronger
than $\sim 1$\,mG. Based on the optical pumping from the ground state,
particularly, we estimated the maximum degree of polarization for
\ion{Ca}{2}\,NIR3 reached in the ideal alignment case, with a beam of
radiation and a local magnetic field aligned with it. Without counting the
effect of collisions between atoms, the maximum degree of polarization for
8500.36\,\AA\ ($^2D_{3/2} \rightarrow {}^2P_{3/2}$), 8544.44\,\AA\ ($^2D_{5/2} \rightarrow {}^2P_{3/2}$), and 8664.52\,\AA\ ($^2D_{3/2} \rightarrow {}^2P_{1/2}$)
can reach 10.8\%, 8.9\%, and 12.5\%, respectively.
The highest of these values exceeds the $\sim 7$\% predicted for the
\ion{Ti}{2} lines connecting the two aligned metastable states
($^2D_{3/2,5/2}$) to the upper states ($^2P^{O}_{1/2,3/2}$), a system with
similar but more complicated transitional structure than
\ion{Ca}{2}\,NIR3 (see, e.g., Table~3 of \citealp{Yan_Lazarian_2012}). A more
detailed discussion of the \ion{Ca}{2}\,NIR3 polarization computed based on the
ground-state-alignment theory will be presented in future work (Yan et al., in
prep.).
Since all three components of the \ion{Ca}{2}\,NIR3 triplet are magnetically
alignable and can be individually polarized, the large width of the polarized
feature on day $+$79 (see Figs.~\ref{Fig_iqu_ep5}, \ref{Fig_pol}, and
\ref{Fig_qu}) is in agreement with the proposed atomic alignment mechanism.
The polarization profile and peak level at this late time differ
substantially from those around peak luminosity (see
Figs.~\ref{Fig_iqu_ep1}--\ref{Fig_iqu_ep2}), when the line polarization
was mainly due to an uneven obscuration of the photosphere by the line opacity.
If the late-time polarization is indeed caused by atomic alignment in a weak
magnetic field, the shape of the polarization profile may depend on (i) the
anisotropy of the pumping radiation field, (ii) the ambient magnetic field,
and (iii) the velocity field that affects the Ca opacity. The modeling of the
\ion{Ca}{2}\,NIR3 polarization profile is beyond the scope of this paper.
For atomic species to be aligned, the total angular momentum of their ground
state must be at least $J=1$. For example, \ion{Ca}{2}\,H\&K, which is induced
via the transition from the ground state to the upper states (i.e.,
$^2S_{1/2}\rightarrow{}^2P^{O}_{1/2, 3/2}$), is not alignable by this magnetic
mechanism since the total angular momentum of the ground state is $J=1/2$,
which does not allow an uneven occupation of the ground-state magnetic
sublevels. Therefore, the \ion{Ca}{2}\,H\&K absorption would not be polarized
by the same effect.
However, although the uncertainty in the Stokes parameters becomes larger near
the blue end of the spectral coverage, a similar evolution of increasing
late-time \ion{Ca}{2} polarization is also seen in the \ion{Ca}{2}\,H\&K lines.
In Figures~\ref{Fig_iqu_ep4} and \ref{Fig_iqu_ep5}, we present the observed
polarization position angles across both the \ion{Ca}{2}\,H\&K and
\ion{Ca}{2}\,NIR3 for SN\,2021rhu after correcting for the ISP. At day $+$79,
the two position angles exhibit significant differences, suggesting that the
polarization mechanisms of the two \ion{Ca}{2} lines are intrinsically
different. In the case of uneven photosphere obscuration, the two features
would be expected to exhibit similar polarization position angles, since both
lines are likely to originate from the same \ion{Ca}{2} opacity distribution.
\section{Summary}
Compared to the time around peak luminosity, we found a strong growth of the
\ion{Ca}{2} polarization observed in the Type Ia SN\,2021rhu on days $+$36 and
$+$79. The continuum polarization remained low at day $+$79 in spite of the
drastic increase in polarization of \ion{Ca}{2}\,NIR3; this is consistent with
the low level measured for the continuum polarization at early phases when the
photosphere has not yet receded into deep layers. We consider the possibility
of line polarization owing to partial blocking of the underlying photosphere by
Ca-rich material, and an alternative explanation that \ion{Ca}{2}\,NIR3 might
be polarized through the alignment of the atoms with respect to an ambient
weak magnetic field in an anisotropic radiation field. Detailed modeling of
the late-time polarization of \ion{Ca}{2} features by magnetic alignment
should consider (1) the geometric distribution of the \ion{Ca}{2} opacities,
(2) the shape and degree of anisotropy of the induced radiation field, and
(3) the strength and geometry of the magnetic field. Note that the radiation
field above the photosphere is intrinsically anisotropic, and the majority of
the photon pumping will happen along the radial directions. Such an anisotropic
incident flux can induce an unequal population distribution over the magnetic
sublevels. Without the flux anisotropy, the alignment of atomic angular
momentum will not happen, and no polarization will be produced.
Assuming that SN\,2021rhu is not a singular case, we briefly outline the open
questions raised by the polarimetry of SN\,2021rhu at late phases. \\
(i) Is the large \ion{Ca}{2}\,NIR3 polarization observed on day $+$79 best explained by photospheric blocking or optical pumping effects or their combination, or is another process needed? \\
(ii) Do different explosion mechanisms predict specific late-time continuum and line polarization properties that can be used for diagnostic purposes? \\
(iii) Can the magnitude and geometry of the magnetic field be determined from the late-time \ion{Ca}{2}\,NIR3 polarization profile? \\
(iv) Can the multicomponent polarization profiles of \ion{Si}{2}\,$\lambda$6355 and \ion{Ca}{2}\,NIR3 and their evolution lay the foundation for an overarching consistent model?
During the rapid expansion of the ejecta, any pre-existing magnetic field will
be frozen in the ejecta and weaken as they expand. Evidence for high initial
magnetic fields (in excess of $10^{6}$\,G at the surface of the progenitor
WD) has been reported from infrared line profiles and light curves of Type Ia SNe
(see, e.g., \citealp{Penney_etal_2014, Hristov_etal_2021}). The exact nature of
the initial magnetic field of the WD and its evolution to late SN phases may be
of great importance for the understanding of Type Ia SNe. Owing to the rapid
evolution of the SN ejecta, the radiation field may become sufficiently diluted
and anisotropic, matching the conditions necessary for an effective atomic
alignment for a wide range of atomic species and transitions. A rise of
polarized atomic lines is expected at this epoch. Perhaps this mechanism is
already at work in the observed polarization of \ion{Si}{2} and \ion{Ca}{2}
lines even around maximum brightness.
This is a new area worth pursuing both theoretically and observationally in
future studies. With a comprehensive understanding of the mechanism(s), the
\ion{Ca}{2} polarization may provide a measure in the ejecta of the magnetic
field inherited from the progenitor. This would be a valuable new diagnostic
of Type Ia SN explosion models. Future spectropolarimetry of Type Ia SNe
extending to late phases is also essential to search for any systematics, to
explain the polarization mechanism, to discriminate between orientation and
intrinsic diversity, and to further understand the radiation distribution
within the core.
\setlength{\tabcolsep}{3.5pt}
\begin{table}[!h]
\begin{center}
\caption{VLT spectropolarimetric observations of SN\,2021rhu. \label{Table_pol}}
\begin{scriptsize}
\begin{tabular}{c|ccc|cc|cc|ccc}
\hline
\hline
\# & Date (UT) / & Exp. Time$^b$ & Grism / & $q^{\rm Cont}$ & $u^{\rm Cont}$ & $\alpha$ (\%) & $\theta_{d}$ & $\alpha^{\rm Ca\,II\,NIR3}$ (\%) & $\theta_{d}^{\rm Ca\,II\,NIR3}$ & $p_{\rm max}^{\rm Ca\,II\,NIR3}$ \\
& Phase$^a$ (day) & (s) & Res.\ Power & [\%] & [\%] & $\beta$ & (deg) & $\beta^{\rm Ca\,II\,NIR3}$ & (deg) & (\%) \\
\hline
1 & 2021-07-08 / & $2\times4\times150$ & 300V/440 & 0.04$\pm$0.12 & 0.01$\pm$0.11 & 0.00322$\pm$0.00391 & 184.7$_{-1.1}^{+1.1}$ & 0.0795$\pm$0.0765 & 155.0$_{-7.1}^{+16.4}$ & 0.30$\pm$0.06 \\
& $-$7 & & & & & 0.166$\pm$0.040 & & $-$1.194$\pm$0.885 & & \\
\hline
2 & 2021-07-15 / & $2\times4\times80$ & 300V/440 & 0.00$\pm$0.12 & 0.00$\pm$0.11 & 9.60$\times10^{-5}\pm$0.00354 & 179.7$_{-1.3}^{+1.3}$ & 0.138$\pm$0.029 & 158.2$_{-3.2}^{+4.0}$ & 0.44$\pm$0.08 \\
& $+$0 & & & & & $-$0.0107$\pm$0.0439 & & $-$0.951$\pm$0.238 & & \\
\hline
2$^{c}$ & 2021-07-15 & $2\times4\times160$ & 1200R/2140 & & & N.A. & N.A. & N.A. & N.A. & N.A. \\
\hline
3 & 2021-08-20 / & $2\times4\times450$ & 300V/440 & $-$0.03$\pm$0.12 & 0.00$\pm$0.12 & 0.00894$\pm$0.00516 & 191.9$_{-1.6}^{+1.5}$ & $-$0.0817$\pm$0.0891 & 207.3$_{-5.1}^{+3.4}$ & 0.82$\pm$0.11 \\
& $+$36 & & & & & 0.441$\pm$0.063 & & 1.412$\pm$0.430 & & \\
\hline
4 & 2021-10-02 / & $2\times4\times300$ & 300V/440 & $-$0.11$\pm$0.38 & 0.14$\pm$0.41 & N.A. & N.A. & $-$1.664$\pm$7.104 & 222.2$_{-25.6}^{+1.3}$ & 2.48$\pm$0.31 \\
& $+$79 & & & & & N.A. & & 1.412$\pm$0.430 & & \\
\hline
\end{tabular}\\
{$^a$Relative to the estimated peak on UT 2021-07-15/MJD 59410.} \\
{$^b$Each set of observations consists of 2 [loops]$\times$4 [half-wave plate angles]$\times$[integration time].} \\
{$^c$A higher spectral resolution and a narrower wavelength range of 5700--7100\,\AA\ are offered by the grism 1200R.}
\\
\end{scriptsize}
\end{center}
\end{table}
\begin{acknowledgments}
We thank the anonymous referee for the careful scrutiny which resulted in
quite a few helpful suggestions that improved the quality of the manuscript.
We are grateful to the European Organisation for Astronomical Research
in the Southern Hemisphere (ESO) for the generous allocation of observing
time. The polarimetry studies in this work are based on observations made
with the VLT at ESO's La Silla Paranal Observatory under program ID
105.20AU. We especially thank the staff at Paranal for their proficient
and highly motivated support of this project in service mode.
The high-resolution spectra presented herein were obtained at the
W.\,M.\,Keck Observatory, which is operated as a scientific partnership
among the California Institute of Technology, the University of California,
and the National Aeronautics and Space Administration (NASA). The Observatory was
made possible by the generous financial support of the W.\,M.\,Keck
Foundation.
The authors wish to recognize and acknowledge the very significant
cultural role and reverence that the summit of Maunakea has always had
within the indigenous Hawaiian community. We are most fortunate to have
the opportunity to conduct observations from this mountain.
PyRAF, PyFITS, STSCI$\_$PYTHON are products of the Space Telescope Science
Institute, which is operated by AURA for NASA. STScI is operated by the
Association of Universities for Research in Astronomy, Inc., under NASA
contract NAS5-26555. This research has made use of NASA's Astrophysics
Data System Bibliographic Services, the SIMBAD database, operated at CDS,
Strasbourg, France, and the NASA/IPAC Extragalactic Database (NED) which
is operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with NASA.
The research of Y.Y.\ is supported through a
Bengier-Winslow-Robertson Fellowship.
M.B.\ acknowledges support from the Swedish Research Council (Reg. No. 2020-03330).
A.V.F.'s group at U.C.\ Berkeley acknowledges generous support from the Miller Institute for Basic Research in Science (where A.V.F. was a Miller Senior Fellow), Sunil Nagaraj, Landon Noll, Gary and Cynthia Bengier, Clark and Sharon Winslow, Sanford Robertson, and many additional donors.
L.G.\ acknowledges financial support from the Spanish Ministerio de Ciencia e Innovaci\'on (MCIN), the Agencia Estatal de Investigaci\'on (AEI) 10.13039/501100011033, and the European Social Fund (ESF) ``Investing in your future" under the 2019 Ram\'on y Cajal program RYC2019-027683-I and the PID2020-115253GA-I00 HOSTFLOWS project, from Centro Superior de Investigaciones Cient\'ificas (CSIC) under the PIE project 20215AT016, and the program Unidad de Excelencia Mar\'ia de Maeztu CEX2020-001058-M.
P.H.\ acknowledges the support from the NSF project ``Signatures of Type Ia Supernovae, New Physics, and Cosmology,'' grant AST-1715133. The supernova research by L.W.\ is supported by NSF award AST-1817099. M.R.\ is supported by the National Science Foundation Graduate Research Fellowship Program under grant DGE-1752134.
J.C.W.\ and J.V.\ are supported by NSF grant AST-1813825.
The research of J.M.\ is supported through a Royal Society University
Research Fellowship.
\end{acknowledgments}
\vspace{5mm}
\facilities{VLT(FORS2), Keck:I (HIRES)}
\software{IRAF \citep{Tody_1986, Tody_1993}
}
\bibliographystyle{aasjournal}
|
Title:
CMB lensing with shear-only reconstruction on the full sky |
Abstract: Reconstruction of gravitational lensing effects in the CMB from current and
upcoming surveys is still dominated by temperature anisotropies. Extragalactic
foregrounds in temperature maps can induce significant biases in the lensing
power spectrum obtained with the standard quadratic estimators. Techniques such
as masking cannot remove these foregrounds fully, and the residuals can still
lead to large biases if unaccounted for. In this paper, we study the
"shear-only" estimator, an example of a class of geometric methods that
suppress extragalactic foreground contamination while making only minimal
assumptions about foreground properties. The shear-only estimator has only been
formulated in the flat-sky limit and so is not easily applied to wide surveys.
Here, we derive the full-sky version of the shear-only estimator and its
generalisation to an $m=2$ multipole estimator that has improved performance
for lensing reconstruction on smaller scales. The multipole estimator is
generally not separable, and so is expensive to compute. We explore separable
approximations based on a singular-value decomposition, which allow efficient
evaluation of the estimator with real-space methods. Finally, we apply these
estimators to simulations that include extragalactic foregrounds and verify
their efficacy in suppressing foreground biases.
| https://export.arxiv.org/pdf/2208.14988 |
\title{CMB lensing with shear-only reconstruction on the full sky\\}%
\author{Frank J. Qu}
\email{[email protected]}
\affiliation{DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, UK}
\author{Anthony Challinor}
\email{[email protected]}
\affiliation{DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, UK}
\affiliation{Institute of Astronomy, Madingley Road, Cambridge, CB3 0HA, UK}
\affiliation{Kavli Institute for Cosmology Cambridge, Madingley Road, Cambridge, CB3 0HA, UK}
\author{Blake D.\ Sherwin}
\email{[email protected]}
\affiliation{DAMTP, Centre for Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WA, UK}
\affiliation{Kavli Institute for Cosmology Cambridge, Madingley Road, Cambridge, CB3 0HA, UK}
\date{\today}%
\section{\label{sec:Introduction}Introduction}
Gravitational lensing of the CMB encodes a wealth of information about our Universe. Observing the deflections produced by the intervening large-scale structure on the paths of CMB photons allows us to make integrated measurements of the projected matter distribution to high redshifts \cite{LEWIS_2006}. CMB lensing provides us with a powerful probe to constrain parameters such as neutrino masses \cite{LESGOURGUES_2006,Qu2022} and dark energy \cite{Calabrese_2009}. Analyses with Planck data, building on earlier work by the Atacama Cosmology Telescope (ACT) and South Pole Telescope \cite{Smith:2007rg,Das_2011}, have demonstrated the great potential of this approach; see~\cite{Planck:2018lbu,Carron:2022eyg} for the most recent results. Current and future surveys, such as AdvACT \cite{Henderson_2016}, SPT-3G \cite{2014} and Simons Observatory \cite{Ade_2019}, will improve the precision of CMB lensing measurements significantly, making the identification and reduction of systematic biases increasingly important. While one expects polarisation information to dominate in the reconstruction of lensing from future surveys, many current and upcoming CMB surveys will still rely heavily on temperature. In this regime, extragalactic foreground contamination from the cosmic infrared background (CIB), the thermal Sunyaev--Zel'dovich effect (tSZ), the kinematic Sunyaev--Zel'dovich effect (kSZ) and radio point sources (PS) can leak into the lensing estimator producing significant biases if unaccounted for \cite{van_Engelen_2014}. Several mitigation methods for these biases have been proposed. For example, masking out sources from a known catalogue can decrease this bias, and techniques such as bias hardening \cite{Namikawa2013,Sailer2020}, which involves reconstructing and projecting out foregrounds, are useful for cases when the statistical properties of the foregrounds are known. 
Another method is multi-frequency component separation \cite{Madhavacheril_2018}, which can reduce or null specific foregrounds, but it was found that simultaneously reducing the CIB and tSZ increases the noise by a large factor. An improved technique, building upon \cite{Madhavacheril_2018}, was introduced in \cite{Darwish_2020} to eliminate foregrounds from the tSZ while preserving most of the signal-to-noise. Finally, \cite{darwish2021optimizing,sailer2021optimal} explore which combinations of multi-frequency cleaning and geometric methods (bias hardening and shear) are most effective in controlling lensing biases with only modest reduction in signal-to-noise.
In this paper, we focus on the shear estimator introduced in \cite{Schaan2019}, which built on earlier work exploring the role of magnification and shear in CMB lensing reconstruction~\cite{2018,PhysRevD.85.043016}. The idea behind the shear estimator is to exploit the different geometric effects on the local CMB two-point function of lensing and extragalactic foregrounds to separate them. In the limit where large-scale lenses are reconstructed from small-scale temperature anisotropies, weak lensing produces local distortions in the 2D CMB power spectrum with an isotropic part (i.e., monopole) due to lensing convergence and a quadrupolar part due to lensing shear.
The quadratic estimators usually employed in lensing reconstruction~\cite{Hu_2002,Okamoto2003} optimally combine convergence and shear in the large-scale-lens limit. However, extragalactic foreground power predominantly biases the local monopole power spectrum, leaving the shear-only estimator much less affected by foregrounds than the standard quadratic estimator. Moreover, with the shear-only estimator, one can include smaller-scale temperature modes in the reconstruction without introducing unacceptable levels of bias, thus mitigating the loss of signal-to-noise from discarding convergence information in the case of high-resolution, low-noise observations~\cite{Schaan2019}.
For the reconstruction of smaller-scale lenses, it is no longer true that the lensing convergence and shear can be considered constant over the coherence scale of the CMB. In this limit, lensing not only introduces monopole ($m=0$) and quadrupole ($m=2$) couplings in the local two-point function of the lensed CMB but also higher-order couplings. Furthermore, the dependencies of the $m=0$ and $m=2$ couplings on the angular scale of the CMB fluctuations and lenses deviate significantly from their large-lens limits. For reconstruction on smaller scales, one can formulate a set of multipole estimators, each extracting information from a specific $m$~\cite{Schaan2019}. Most of the reconstruction signal-to-noise is still contained in the $m=0$ and $m=2$ estimators, and extragalactic foregrounds are expected still to bias mainly $m=0$. However, it is necessary to use the correct scale dependence of the $m=2$ estimator to avoid the poor performance of the shear estimator when it is extended directly to reconstruction of smaller-scale lenses. This makes efficient evaluation with real-space methods difficult since the $m=2$ estimator is not generally separable.
The shear reconstruction discussed in \cite{Schaan2019} is based on the flat-sky approximation. Since current and future high-resolution CMB experiments will cover a significant fraction of the sky, a full-sky formulation of the shear estimator is required. In this paper, we derive the full-sky version of the shear estimator and show how it can be evaluated efficiently in real space with spin-weighted spherical harmonic transforms. We generalise further to an $m=2$ multipole estimator to avoid the sub-optimal performance of the shear estimator on smaller scales. We suggest a simple separable approximation for this estimator using singular-value decomposition, which allows efficient evaluation with real-space methods.
This paper is organised as follows. In Sec. \ref{theory} we review CMB lensing reconstruction and multipole estimators in the flat-sky approximation and show how to generalise these to the curved sky. For the full-sky shear estimator, we
show that it provides an unbiased recovery of the lensing power spectrum using lensed CMB maps. Section~\ref{svd} then explores ways of improving the shear estimator by constructing a separable approximation to the $m=2$ estimator using singular-value decomposition. Finally, in Sec. \ref{test} we test the efficacy of the full-sky shear and $m=2$ estimators in suppressing extragalactic foreground contamination by measuring the lensing power spectrum using CMB simulations injected with foregrounds from the Websky simulation~\cite{Stein2020}.
In Appendix~\ref{appA} we discuss the estimator normalisation and reconstruction noise power.
In Appendix~\ref{appB} we derive the form of the full-sky shear estimator in spherical-harmonic space.
\section{Theory: Shear and multipole estimators}\label{theory}
In this section, we introduce the standard lensing quadratic estimator and decompose it into a series of multipole estimators in the flat-sky limit. Starting from this, we then derive the full-sky form of the multipole ($m=2$) and shear estimators.
\subsection{Multipole and shear estimators in the flat-sky limit}
Lensing produces local distortions in the CMB two-point function, breaking the statistical isotropy of the unlensed CMB field and hence introducing new correlations between different Fourier modes over a range of wavenumbers defined by the lensing potential. Averaging over an ensemble of temperature fields for a fixed lensing potential $\phi$ results in off-diagonal correlations in the observed temperature field $T$:
\begin{equation}
\langle T(\bell_1)T(\bell_2)\rangle_{\text{CMB}}=(2\pi)^2 \delta^{(2)}(\bell_1+\bell_2) C_{\ell_1}^{TT} + f^\phi(\bell_1,\bell_2) \phi(\bell_1 + \bell_2) \, ,
\end{equation}
with $f^{\phi}(\bell_1,\bell_2)=(\bell_1+\bell_2)\cdot \left(\bell_1 C^{TT}_{\ell_1}+\bell_2C^{TT}_{\ell_2}\right)$ and $C^{TT}_{\ell}$ the temperature power spectrum.\footnote{We use an improved version of the lensing response function $f^\phi(\bell_1,\bell_2)$ that describes the linear response of the CMB two-point function to variation in the lensing potential $\phi(\Vec{L})$, averaging over CMB and other lenses~\cite{Lewis:2011fk}. In particular, we use the lensed CMB power spectrum rather than the unlensed spectrum in $f^\phi$, which gives a good approximation to the true non-perturbative response function.}
The standard quadratic estimator~\cite{Hu_2002} exploits the coupling of otherwise-independent temperature modes to reconstruct the lensing field $\hat{\phi}$ by combining pairs of appropriately filtered temperature fields:
\begin{equation}
\hat{\phi}^{\text{QE}}(\Vec{L})=\frac{A^{\text{QE}}_L}{2}\int\frac{d^2\bell}{(2\pi)^2}\frac{T(\bell+\Vec{L}/2)}{C^{\text{total}}_{|\bell+\Vec{L}/2|}} \frac{T(\Vec{L}/2-\bell)}{C^{\text{total}}_{|\Vec{L}/2-\bell|}} f^\phi(\bell+\Vec{L}/2,\Vec{L}/2-\bell) \, ,
\end{equation}
where $A^{\text{QE}}_L$ is a multipole-dependent normalisation to make the reconstructed field unbiased and ${C}_{\bell}^{\text{total}}$ denotes the total temperature power spectrum, including residual foregrounds and instrumental noise. We follow the standard practice of using upper-case $L$ to denote a lensing multipole and lower-case $\ell$ to refer to CMB multipoles.
The angular dependence of the lensing response function can be expanded in a Fourier series in $\theta_{\Vec{L},\bell}$, the angle between $\Vec{L}$ and $\bell$:
\begin{equation}\label{mulres}
f^\phi(\bell+\Vec{L}/2,\Vec{L}/2-\bell) = \sum_{\text{$m$ even}}f^m_{L,\ell}\cos(m\theta_{\vec{L},\bell})\, , \quad \text{with} \quad f^m_{L,\ell}= \begin{cases}
\frac{1}{2\pi}\int d\theta_{\Vec{L},\bell} f^\phi(\bell+\vec{L}/2,\vec{L}/2-\bell) & \text{if $m=0$,}\\
\frac{1}{\pi}\int d\theta_{\Vec{L},\bell} f^\phi(\bell+\vec{L}/2,\vec{L}/2-\bell)\cos (m\theta_{\Vec{L},\bell}) & \text{otherwise.}
\end{cases}
\end{equation}
The expansion only involves even multipoles $m\in2\mathbb{N}$ because $f^\phi(\bell+\Vec{L}/2,\Vec{L}/2-\bell)$ is invariant under $\Vec{L} \rightarrow - \Vec{L}$, i.e., $\theta_{\Vec{L},\bell} \rightarrow \theta_{\Vec{L},\bell} + \pi$. In the limit that the lenses are on much larger scales than the CMB fluctuations they are lensing, $L \ll \ell$, the expansion~\eqref{mulres} is dominated by the $m=0$ (monopole) and $m=2$ (quadrupole) moments. The former corresponds to isotropic magnification or demagnification and the latter to shear. Expanding in $x \equiv L/\ell$, we have
\begin{equation}\label{response}
f^\phi(\bell+\Vec{L}/2,\Vec{L}/2-\bell) =\frac{1}{2} L^2 C^{TT}_\ell \left[\left(\frac{d \ln \ell^2 C_\ell^{TT}}{d\ln \ell} + \frac{d \ln C_\ell^{TT}}{d\ln \ell} \cos 2 \theta_{\Vec{L},\bell}\right) + \mathcal{O}(x^2)\right] \, ,
\end{equation}
which involves only $m=0$ and $m=2$ terms at leading order.
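The multipole moments and their squeezed limits are straightforward to verify numerically. The sketch below assumes a toy power-law spectrum $C_\ell = \ell^{-2.5}$ (an illustrative choice, not the real CMB spectrum) and compares the $m=0$ and $m=2$ Fourier moments of $f^\phi$, computed by direct angular integration, with the leading-order terms of the expansion above.

```python
import numpy as np

# Toy power-law spectrum C_ell = ell^n (illustrative; not the real CMB).
n = -2.5
C = lambda ell: ell**n

L, ell = 50.0, 2000.0                        # squeezed regime, x = L/ell = 0.025
theta = np.linspace(0, 2*np.pi, 20001)[:-1]  # angle between L and ell

# l1 = l + L/2 and l2 = L/2 - l, with L taken along the x-axis
l1x, l1y = ell*np.cos(theta) + L/2, ell*np.sin(theta)
l2x, l2y = L/2 - ell*np.cos(theta), -ell*np.sin(theta)
m1, m2 = np.hypot(l1x, l1y), np.hypot(l2x, l2y)

# f^phi(l1, l2) = (l1 + l2).(l1 C_{l1} + l2 C_{l2}), and l1 + l2 = L x-hat
f = L*(l1x*C(m1) + l2x*C(m2))

f0 = f.mean()                                # (1/2pi) int f dtheta
f2 = 2*(f*np.cos(2*theta)).mean()            # (1/pi)  int f cos(2 theta) dtheta

# Squeezed-limit predictions: for C_ell = ell^n,
# dln(ell^2 C)/dln ell = n + 2 and dln C/dln ell = n
f0_sq = 0.5*L**2*C(ell)*(n + 2)
f2_sq = 0.5*L**2*C(ell)*n

print(f0/f0_sq - 1, f2/f2_sq - 1)            # both close to zero
```

The fractional differences are well below a per cent here, consistent with the neglected $\mathcal{O}(x^2)$ corrections.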
The multipole expansion of the response function gives rise to a family of lensing estimators characterised by the multipole $m$, generally of the form
\begin{equation}
\hat{\phi}^{m}(\Vec{L}) = \frac{A_L^m}{2} \int \frac{d^2 \bell}{(2\pi)^2}\, g^m_{L,\ell} \cos (m \theta_{\Vec{L},\bell})
T(\bell+\Vec{L}/2) T(\Vec{L}/2-\bell) \, ,
\end{equation}
where the normalisation $A^m_L$ is chosen so that the estimator is unbiased:
\begin{equation}
1 = \frac{A^m_L}{2} \int \frac{\ell d \ell}{2\pi} \, g^m_{L,\ell} f^m_{L,\ell} \times \begin{cases} 1 & \text{if $m=0$} \\ 1/2 & \text{otherwise} \, .
\end{cases}
\label{eq:flatnorm}
\end{equation}
The multipole weight functions $g^m_{L,\ell}$ may be chosen in various ways. In Ref.~\cite{Schaan2019}, the minimum-variance estimator \emph{at each multipole} is constructed, in which case the $\ell$-dependence of $g^m_{L,\ell}$ follows
\begin{equation}\label{g_weight}
g^m_{L,\ell} \propto \left(\int \frac{d\theta_{\Vec{L},\bell}}{2\pi}\cos^2(m \theta_{\Vec{L},\bell}) C^{\text{total}}_{|\bell+\Vec{L}/2|}
C^{\text{total}}_{|\bell-\Vec{L}/2|}\right)^{-1} f^m_{L,\ell} \, ,
\end{equation}
where the integral here depends only on the magnitudes $\ell$ and $L$. We use a simpler form in this paper (with a relatively minor impact on optimality), whereby we replace the $C^{\text{total}}_{|\bell\pm \Vec{L}/2|}$ appearing explicitly in the weight function of Eq.~\eqref{g_weight} with $C_\ell^{\text{total}}$, which is correct for their product to $\mathcal{O}(x^2)$. Reference~\cite{Schaan2019} shows that most of the information in the lensing reconstruction is captured by the $m=0$ and $m=2$ multipole estimators, even for smaller-scale lenses where the squeezed limit $L \ll \ell$ does not apply.
It is convenient to split the QE estimator into this family of multipole estimators because some multipoles are more affected by foregrounds than others. The $m=2$ estimator, for instance, is expected to be more robust to extragalactic foregrounds since they primarily bias the $m=0$ estimator~\cite{Schaan2019}. We discuss this further in Sec.~\ref{subsec:foregrounds} below.
The above multipole estimators are generally not easy to implement efficiently because they are non-separable expressions of $L$ and $\ell$. To allow for fast evaluation with real-space methods, we first consider the squeezed-limit of the $m=2$ estimator. In this case, we use the approximate form of $f^2_{L,\ell}$ given by the leading-order quadrupole part of Eq.~\eqref{response}:
\begin{equation}
f^2_{L,\ell} \rightarrow f^{\text{shear}}_{L,\ell} = \frac{1}{2} L^2 C^{TT}_\ell \frac{d \ln C_\ell^{TT}}{d\ln \ell} \, ,
\end{equation}
which is clearly separable in $L$ and $\ell$. We make a further simplification~\cite{Schaan2019}, replacing $T(\bell+\Vec{L}/2) T(\Vec{L}/2-\bell)$ with $T(\bell) T(\Vec{L}-\bell)$ to allow fast evaluation of the estimator. Note that this is not simply a change of variables, since we do not change the arguments of the weight function or the angle $\theta_{\Vec{L},\bell}$. The
foreground deprojection argument still holds at leading order in this case (see Sec.~\ref{subsec:foregrounds}).
With these modifications, we obtain the shear estimator
\begin{equation}
\label{eq.flat}
\hat{\phi}^{\text{shear}}(\vec{L})=\frac{A^{\text{shear}}_L}{2} \int\frac{d^2\bell}{(2\pi)^2} \, g^{\text{shear}}_{L,\ell} \cos (2\theta_{\Vec{L},\bell}) T(\bell)T(\vec{L}-\bell) \, ,
\end{equation}
where the shear weight function is
\begin{equation}\label{eq.flatw}
g^{\text{shear}}_{L,\ell}=\frac{L^2}{2}\frac{C^{TT}_\ell}{(C^{\text{total}}_\ell)^2}\frac{d\ln{C^{TT}_\ell}}{d\ln{\ell}} \, .
\end{equation}
The shear normalisation is obtained from Eq.~\eqref{eq:flatnorm}. Equation~\eqref{eq.flat} can be evaluated efficiently by
first noting that the angular term $\cos (2 \theta_{\vec{L},\bell})$ can be expressed in terms of the contraction of the symmetric, trace-free tensors $\hat{L}_{\langle i} \hat{L}_{j\rangle}$ and $\hat{\ell}_{\langle i} \hat{\ell}_{j\rangle}$ (where overhats denote unit vectors in this context and the angular brackets denote the symmetric, trace-free part of the tensor):
\begin{align}
2\hat{L}_{\langle i} \hat{L}_{j\rangle} \hat{\ell}^{\langle i} \hat{\ell}^{j\rangle} & = 2\left(\hat{L}_{i} \hat{L}_{j} - \frac{1}{2} \delta_{ij} \right) \hat{\ell}^i \hat{\ell}^j \nonumber \\
&= 2\cos^2 \theta_{\Vec{L},\bell} - 1 \nonumber \\
&= \cos (2\theta_{\Vec{L},\bell}) \, .
\end{align}
We then simplify the shear estimator as follows:
\begin{align}
\hat{\phi}^{\text{shear}}(\vec{L}) &= \frac{1}{2}{A_{L}^{\text{shear}}}L_{\langle i}L_{j \rangle}\int\frac{d^2\bell_1}{(2\pi)^2}\int\frac{d^2\bell_2}{(2\pi)^2}\, (2\pi)^2 \delta^{(2)}(\Vec{L}-\bell_1-\bell_2)
\left(\frac{C^{{TT}}_{\ell_1}}{(C^{\text{total}}_{\ell_1})^2}\frac{1}{\ell_1^2}\frac{d\ln{C}^{{TT}}_{\ell_1}}{d\ln{\ell_1}}T(\bell_1)\right)\ell_1^{\langle{i}}\ell_1^{j\rangle}T(\bell_2) \nonumber \\
&= \frac{1}{2}{A_{L}^{\text{shear}}}L_{\langle i}L_{j \rangle}\int d^2\vec{x} \int\frac{d^2\bell_1}{(2\pi)^2}\int\frac{d^2\bell_2}{(2\pi)^2}e^{-i\Vec{L}\cdot\vec{x}}\left(\frac{C^{{TT}}_{\ell_1}}{(C^{\text{total}}_{\ell_1})^2}\frac{1}{\ell_1^2}\frac{d\ln{C}^{{TT}}_{\ell_1}}{d\ln{\ell_1}}T(\bell_1)\right)\ell_1^{\langle i}\ell_1^{j\rangle}e^{i\bell_1\cdot \vec{x}}T(\bell_2)e^{i\bell_2\cdot \vec{x}}\nonumber \\ &=-\frac{1}{2}A^{\text{shear}}_{L}L_{\langle i}L_{j \rangle}\int d^2\vec{x} e^{-i\vec{L}\cdot \vec{x}}T(\vec{x})\partial^{\langle i}\partial^{j\rangle} {T}^F(\vec{x}) \nonumber \\
&= \frac{1}{2}A^{\text{shear}}_{L} \int d^2\vec{x} \left(\partial_{\langle i}\partial_{j \rangle}e^{-i\vec{L}\cdot \vec{x}}\right)T(\vec{x})\partial^{\langle i}\partial^{j\rangle} {T}^F(\vec{x}) \, ,\label{eq1111}
\end{align}
where the filtered temperature field is
\begin{equation}
{T}^{F}(\vec{x})=\int\frac{d^2\bell}{(2\pi)^2}\frac{C^{TT}_\ell}{(C^{\text{total}}_\ell)^2}\frac{1}{\ell^2}\frac{d\ln{C^{TT}_\ell}}{d\ln{\ell}}T(\bell) e^{i\bell\cdot \vec{x}} \, .
\label{eq:flatshearfilter}
\end{equation}
The last line of Eq.~\eqref{eq1111} shows that the shear estimator is equivalent to extracting the $E$-mode part of the product of the temperature field and the symmetric, trace-free derivative of the filtered temperature field. Expressing the estimator in this form makes its translation to the curved sky rather straightforward, as we discuss in Sec.~\ref{sec:fullsky}.
The shear estimator can be evaluated very efficiently, and has excellent immunity to extragalactic foregrounds. However, approximating $f^2_{L,\ell}$ with its squeezed-limit $f^{\text{shear}}_{L,\ell}$ in the weight function results in poor performance at high $L$, where $L \ll \ell$ is not a good approximation~\cite{Schaan2019}. In Sec.~\ref{svd}, we suggest an alternative, separable approximation to $f^2_{L,\ell}$ that performs better than the shear estimator at high $L$. The approximation we develop there takes as its starting point the asymmetric form of the standard quadratic estimator:
\begin{equation}
\hat{\phi}^{\text{QE}}(\Vec{L})=\frac{A^{\text{QE}}_L}{2}\int\frac{d^2\bell}{(2\pi)^2}\frac{T(\bell)}{C^{\text{total}}_{\ell}} \frac{T(\Vec{L}-\bell)}{C^{\text{total}}_{|\Vec{L}-\bell|}} f^\phi(\bell,\Vec{L}-\bell) \, .
\end{equation}
We then replace $C^{\text{total}}_{|\Vec{L}-\bell|} \rightarrow C^{\text{total}}_\ell$ and expand the \emph{asymmetric} lensing response function in a Fourier series
\begin{equation}\label{mulresasym}
f^\phi(\bell,\Vec{L}-\bell) = \sum_{m \geq 0} \tilde{f}^m_{L,\ell}\cos(m\theta_{\vec{L},\bell})\, ,
\end{equation}
which now involves both even and odd multipoles. In the squeezed limit, the expansion is still dominated by $m=0$ and $m=2$, and the expressions for $\tilde{f}^0_{L,\ell}$ and $\tilde{f}^2_{L,\ell}$ reduce to their symmetric counterparts $f^0_{L,\ell}$ and $f^2_{L,\ell}$, respectively. We expect the majority of the signal-to-noise to remain in these multipoles even for smaller-scale lenses (as was the case for the symmetric estimator above). The argument for foreground immunity of the $m=2$ estimator also holds in this asymmetric case (see Sec.~\ref{subsec:foregrounds}).
We end this section by noting that an alternative to considering the $m=2$ estimator (plus higher-multipole estimators) is to remove the $m=0$ contribution from the standard QE, as was done in~\cite{Fabbian2019}. However, it is difficult to find an efficient implementation of this scheme since it requires a very accurate separable approximation to the monopole estimator as any error in the monopole removal will cause leakage of foreground contamination.
\subsection{Foreground immunity}
\label{subsec:foregrounds}
To see why extragalactic foregrounds predominantly affect the $m=0$ multipole estimator, we consider a simple toy model of extragalactic sources all with the same circularly symmetric angular profile $F(\vec{x})$ in the temperature map. Assume that these are Poisson sampled from a fluctuating-mean source density $n(\vec{x}) = \bar{n}[1+b\delta(\vec{x})]$ where $\bar{n}$ is the global mean source density, $\delta(\vec{x})$ is a projected density field correlated with the CMB lensing potential, and $b$ is a linear bias. If we average over the Poisson fluctuations at fixed $\delta(\vec{x})$, the two-point function of the source contribution to the temperature map, $f(\vec{x})$, satisfies
\begin{equation}
\langle f(\bell_1) f(\bell_2) \rangle_{\text{Poisson}} = \bar{n} F(\bell_1) F(\bell_2) \left[ (2\pi)^2 \delta^{(2)}(\bell_1+\bell_2) + b \delta(\bell_1+\bell_2) \right]
\end{equation}
to first order in $\delta$ and for $\bell_1 \neq 0$ and $\bell_2\neq 0$. If we consider applying a multipole estimator $\hat{\phi}^m(\vec{L})$ to a map composed of CMB, instrument noise and foregrounds $f$, the foreground bias in the correlation of the reconstructed $\phi$ with the true $\phi$ is $\langle \hat{\phi}^m_{\vec{L}}(f,f) \phi(\vec{L}')\rangle$, where $\hat{\phi}^m_{\vec{L}}(f,f)$ is the multipole estimator applied to the field $f$. At leading order, and for $\vec{L}\neq 0$, this bias is
\begin{align}
\langle \hat{\phi}^m_{\vec{L}}(f,f) \phi(\vec{L}')\rangle &=
\langle \delta(\vec{L}) \phi(\vec{L}') \rangle \frac{A_L^m}{2} \int \frac{d^2 \bell}{(2\pi)^2}\, g^m_{L,\ell} \cos (m \theta_{\Vec{L},\bell}) b \bar{n}
F(\bell+\Vec{L}/2) F(\Vec{L}/2-\bell) \nonumber \\
&=
\langle \delta(\vec{L}) \phi(\vec{L}') \rangle \frac{A_L^m}{2} \int \frac{d^2 \bell}{(2\pi)^2}\, g^m_{L,\ell} \cos (m \theta_{\Vec{L},\bell}) b \bar{n}
[F(\ell)]^2 \left[1+ \mathcal{O}(x^2)\right] ,
\label{eq:primbispecbias}
\end{align}
so the bias predominantly appears in the $m=0$ estimator. Note that this holds true even at relatively large $L$ provided that the source profile $F(\vec{x})$ is very compact. It also remains true if we replace $T(\bell+\Vec{L}/2) T(\Vec{L}/2-\bell)$ with $T(\bell) T(\Vec{L}-\bell)$ to allow fast evaluation of the estimator, as discussed above. In this case, Eq.~\eqref{eq:primbispecbias} becomes
\begin{align}
\langle \hat{\phi}^m_{\vec{L}}(f,f) \phi(\vec{L}')\rangle &\rightarrow
\langle \delta(\vec{L}) \phi(\vec{L}') \rangle \frac{A_L^m}{2} \int \frac{d^2 \bell}{(2\pi)^2}\, g^m_{L,\ell} \cos (m \theta_{\Vec{L},\bell}) b \bar{n}
F(\bell) F(\Vec{L}-\bell) \nonumber \\
&=
\langle \delta(\vec{L}) \phi(\vec{L}') \rangle \frac{A_L^m}{2} \int \frac{d^2 \bell}{(2\pi)^2}\, g^m_{L,\ell} \cos (m \theta_{\Vec{L},\bell}) b \bar{n}
[F(\ell)]^2 \left[1+ \mathcal{O}(x)\right] .
\end{align}
Although the expansion of $F(\bell)F(\vec{L}-\bell)$ now introduces terms suppressed by only one power of $x=L/\ell$, these have $m=1$ angular dependence and so do not bias the $m=2$ estimator. The $m=2$ foreground terms are still suppressed in the integrand by $(L/\bell)^2$.
If instead, we consider the power spectrum of the reconstructed lensing potential, foreground biases similar to those above arise from terms like
$\langle \hat{\phi}^m_{\vec{L}}(f,f) \hat{\phi}^m_{\vec{L}'}(T,T) \rangle$ since averaging over the unlensed CMB reduces the second quadratic estimator to $\phi(\vec{L}')$. These terms are similarly suppressed for $m \neq 0$. However, additional ``secondary bispectrum'' terms, of the form $\langle \hat{\phi}^m_{\vec{L}}(T,f) \hat{\phi}^m_{\vec{L}'}(T,f) \rangle$, where the CMB and Poisson fluctuations are averaged across quadratic estimators, are not suppressed. Nonetheless, these biases are generally small; see~\cite{Schaan2019} and Sec.~\ref{test}. Foreground trispectrum biases also arise that involve the connected four-point function of the foregrounds $\langle \hat{\phi}^m_{\vec{L}}(f,f) \hat{\phi}^m_{\vec{L}'}(f,f) \rangle_c$. These are also suppressed in the limit where shot noise dominates. In this case, our toy model gives
\begin{equation}
\langle f(\bell_1) f(\bell_2) f(\bell_3) f(\bell_4) \rangle = (2\pi)^2 \bar{n} F(\bell_1) F(\bell_2) F(\bell_3) F(\bell_4) \delta^{(2)}(\bell_1+\bell_2+\bell_3+\bell_4) \, ,
\end{equation}
and the trispectrum bias separates to involve the same integral as in Eq.~\eqref{eq:primbispecbias}:
\begin{equation}
\langle \hat{\phi}^m_{\vec{L}}(f,f) \hat{\phi}^m_{\vec{L}'}(f,f) \rangle_c = (2\pi)^2 \delta^{2}(\vec{L}+\vec{L}') \bar{n} \left[
\frac{A_L^m}{2} \int \frac{d^2 \bell}{(2\pi)^2}\, g^m_{L,\ell} \cos (m \theta_{\Vec{L},\bell})
F(\bell+\Vec{L}/2) F(\Vec{L}/2-\bell) \right]^2 .
\end{equation}
\subsection{Full-sky formalism}
\label{sec:fullsky}
In this section, we construct the full-sky versions of the shear and $m=2$ estimators. We start with the real-space form of the shear estimator in the flat-sky limit, Eq.~\eqref{eq1111}. It is easy to extend this to the curved sky, using spherical-harmonic functions as a basis, converting the partial derivatives into covariant derivatives on the sphere $\partial \rightarrow \nabla$, and the integration measure from $d^2\vec{x}\rightarrow d^2 \hat{\vec{n}}$. These changes give
\begin{equation}
\hat{\phi}^{\text{shear}}_{LM}=\frac{A^{\text{shear}}_L}{2}\int{d}^2\hat{\vec{n}}\, \left(\nabla^{\langle a}\nabla^{b\rangle}Y^{*}_{LM}\right)T(\hat{\vec{n}})\nabla_{\langle a}\nabla_{b\rangle}T^{F}(\hat{\vec{n}}) \, .
\label{eq:curvedshear}
\end{equation}
This expression can be evaluated efficiently by writing the symmetric, trace-free derivatives of the spherical-harmonic functions in terms of spin-$\pm 2$ spherical harmonics and then using (fast) spherical-harmonic transforms; see Appendix~\ref{appB}. Evaluating the resulting Gaunt integral in terms of Wigner-$3j$ symbols allows us to derive the following harmonic-space form of the full-sky shear estimator:
\begin{equation}\label{curved1}
\hat{\phi}^{\text{shear}}_{LM}=A^{\text{shear}}_L\sum_{\ell_1m_1}\sum_{\ell_2m_2}(-1)^Mg^{\text{shear}}_{\ell_1,\ell_2}(L)\begin{pmatrix}
\ell_1 & \ell_2 & L\\
m_1 & m_2 & -M
\end{pmatrix}T_{\ell_1m_1}T_{\ell_2m_2} \, .
\end{equation}
Here, the full-sky shear weight function $g^{\text{shear}}_{\ell_1,\ell_2}(L)$ has the same structure as its flat-sky counterpart
in Eq.~\eqref{eq.flatw}:
\begin{multline}
g^{\text{shear}}_{\ell_1,\ell_2}(L)=\frac{1}{2}\sqrt{\frac{(2L+1)(2\ell_1+1)(2\ell_2+1)}{16\pi}}\frac{C^{TT}_{\ell_1}}{(C^{\text{total}}_{\ell_1})^2}\frac{d\ln C^{TT}_{\ell_1}}{d\ln \ell_1}\begin{pmatrix}
\ell_1 & \ell_2 & L\\
0 & 0 & 0
\end{pmatrix} \\ \times \omega^2_L
\left[\frac{\left(\omega^2_L+\omega^2_{\ell_1}-\omega^2_{\ell_2}\right)\left(\omega^2_L+\omega^2_{\ell_1}-\omega^2_{\ell_2}-2\right)}{2\omega^2_{\ell_1}\omega^2_{L}}-1\right] \, ,
\label{eq:gshearcurved}
\end{multline}
where $\omega_\ell^2\equiv\ell(\ell+1)$. The $3j$ symbol here enforces mode coupling, the spherical analogue of $\bell_1+\bell_2=\Vec{L}$, and the expression in square brackets accounts for $\cos (2\theta_{\Vec{L},\bell_1})$ noting that, by the cosine rule,
\begin{equation}
\cos\theta_{\Vec{L},\bell_1} = \frac{L^2 + \ell_1^2 - \ell_2^2}{2 L \ell_1} \qquad
\Rightarrow \qquad \cos(2\theta_{\Vec{L},\bell_1}) = \frac{\left(L^2 + \ell_1^2 - \ell_2^2\right)^2}{2 L^2 \ell_1^2} - 1 \, .
\end{equation}
With the usual curvature correction, $\ell \rightarrow \omega_\ell$, and assuming that all multipoles are much larger than $\sqrt{2}$, we recover the correspondence to Eq.~\eqref{eq.flat}.
The explicit form of the spherical normalization, $A^{\text{shear}}_L$, is given in Appendix~\ref{appA}.
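A direct, if slow, evaluation of the weight in Eq.~\eqref{eq:gshearcurved} can be sketched with exact Wigner-$3j$ symbols from \texttt{sympy}; the spectra below are toy power laws (assumptions for illustration), not the real CMB spectrum. The $3j$ symbol with vanishing $m$'s enforces both the triangle condition and even parity of $\ell_1+\ell_2+L$.

```python
import numpy as np
from sympy.physics.wigner import wigner_3j

def g_shear(l1, l2, L, C_TT, C_tot, dlnC_dlnl):
    """Full-sky shear weight g^shear_{l1 l2}(L), with the spectra supplied
    as callables (toy power laws below, not the real CMB spectrum)."""
    w2 = lambda l: l*(l + 1.0)                   # omega_l^2
    threej = float(wigner_3j(l1, l2, L, 0, 0, 0))
    if threej == 0.0:                            # triangle or parity violated
        return 0.0
    geom = np.sqrt((2*L + 1)*(2*l1 + 1)*(2*l2 + 1)/(16*np.pi))
    s = w2(L) + w2(l1) - w2(l2)
    cos2 = s*(s - 2)/(2*w2(l1)*w2(L)) - 1        # curved-sky cos(2 theta_{L,l1})
    return 0.5*geom*(C_TT(l1)/C_tot(l1)**2)*dlnC_dlnl(l1)*threej*w2(L)*cos2

# toy spectra (assumptions for illustration)
C_TT = lambda l: (l + 10.0)**-2.5
C_tot = lambda l: C_TT(l) + 1e-7
dlnC = lambda l: -2.5*l/(l + 10.0)

print(g_shear(200, 210, 20, C_TT, C_tot, dlnC))  # parity-allowed: nonzero
print(g_shear(200, 211, 20, C_TT, C_tot, dlnC))  # odd l1+l2+L: zero
```

Even at these modest multipoles, the square-bracket factor already tracks the flat-sky $\cos(2\theta_{\Vec{L},\bell_1})$ from the cosine rule to a few per cent.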
One can similarly derive the full-sky version of the $m=2$ estimator, obtained by replacing $f^{\text{shear}}_{L,\ell}$ by $f^2_{L,\ell}$ (or the asymmetric form $\tilde{f}^2_{L,\ell}$) in the reconstruction weight function. We now have
\begin{equation}
\hat{\phi}^{m=2}_{LM}=\frac{A^{m=2}_L}{2} \int{d}^2\hat{\vec{n}}\, \left(\nabla^{\langle a}\nabla^{b\rangle}Y^{*}_{LM}\right)T(\hat{\vec{n}})\nabla_{\langle a}\nabla_{b\rangle}T^{F,m=2}_L(\hat{\vec{n}}) \, ,
\label{phishear1}
\end{equation}
where the non-separable filtered temperature field $T^{F,m=2}_L(\hat{\vec{n}})$ is given by
\begin{equation}
T^{F,m=2}_L(\hat{\vec{n}})=\frac{1}{\omega_L^2}\sum_{\ell m} \frac{2}{\omega_\ell^2(C^{\text{total}}_\ell)^2} f^2_{L,\ell}
T_{\ell{m}}Y_{\ell{m}}(\hat{\vec{n}}) \, .
\label{eq:nonsepmeqtwoweights}
\end{equation}
The harmonic-space form is
\begin{equation}\label{curvedm2}
\hat{\phi}^{m=2}_{LM}=A^{m=2}_L\sum_{\ell_1m_1}\sum_{\ell_2m_2}(-1)^Mg^{m=2}_{\ell_1,\ell_2}(L)\begin{pmatrix}
\ell_1 & \ell_2 & L\\
m_1 & m_2 & -M
\end{pmatrix}T_{\ell_1m_1}T_{\ell_2m_2} \, ,
\end{equation}
where the weight function $g^{m=2}_{\ell_1,\ell_2}(L)$ is no longer a separable function of $L$, $\ell_1$ and $\ell_2$, and is given by
\begin{multline}
g^{m=2}_{\ell_1,\ell_2}(L)=\sqrt{\frac{(2L+1)(2\ell_1+1)(2\ell_2+1)}{16\pi}}\frac{1}{(C^{\text{total}}_{\ell_1})^2}\begin{pmatrix}
\ell_1 & \ell_2 & L\\
0 & 0 & 0
\end{pmatrix} \\ \times
\left[\frac{\left(\omega^2_L+\omega^2_{\ell_1}-\omega^2_{\ell_2}\right)\left(\omega^2_L+\omega^2_{\ell_1}-\omega^2_{\ell_2}-2\right)}{2\omega^2_{\ell_1}\omega^2_{L}}-1\right] f^2_{L,\ell_1} \, .
\end{multline}
This $m=2$ estimator performs better than the shear estimator on small scales but is computationally expensive to evaluate due to the non-separability of the weight function. Reconstruction at each $L$ requires evaluation of a separate filtered temperature field, $T^{F,m=2}_L(\hat{\vec{n}})$, making the entire reconstruction a factor of $L_{\text{max}}$ slower than for the separable shear estimator. The reconstruction would therefore scale as $\mathcal{O}(L^4_\text{max})$, which is generally infeasible, particularly as the reconstruction typically has to be run many times for simulation-based removal of biases in the reconstructed lensing power spectrum. In Sec.~\ref{svd}, we suggest a simple way of writing the non-separable $f^2_{L,\ell}$ (or, actually, $\tilde{f}_{L,\ell}^2$) as a sum of separable terms, allowing reconstruction to be performed with far fewer filtered fields.
Finally, for completeness, we point out that the same prescription can be extended to calculate higher-multipole estimators even though, as noted above, most of the signal-to-noise is contained in the monopole and quadrupole. As an example, consider the $m=4$ estimator. On the flat sky, the real-space form of the estimator follows from noting that
\begin{equation}
\cos (4\theta_{\Vec{L},\bell}) = 8 \hat{L}_{\langle i} \hat{L}_j \hat{L}_k \hat{L}_{l\rangle} \hat{\ell}^{\langle{i}} \hat{\ell}^j \hat{\ell}^k \hat{\ell}^{l\rangle}\, .
\end{equation}
This allows us to express the $m=4$ response function directly in terms of $\bell$ and $\Vec{L}$, resulting in the following real-space estimator on the full sky (for $L \geq 4$):
\begin{equation}\label{m4}
\hat{\phi}^{m=4}_{LM}=\frac{A^{m=4}_L}{2} \int{d}^2\hat{\vec{n}}\, \left(\nabla^{\langle a} \nabla^b \nabla^c
\nabla^{d\rangle}Y^{*}_{LM}\right)T(\hat{\vec{n}})\nabla_{\langle a}\nabla_b \nabla_c \nabla_{d\rangle}T^{F,m=4}_L(\hat{\vec{n}}) \, ,
\end{equation}
where the filtered field is now
\begin{equation}
T^{F,m=4}_L(\hat{\vec{n}})=\frac{8}{\omega_L^4} \sum_{\ell m} \frac{1}{\omega_\ell^4(C^{\text{total}}_\ell)^2} f^4_{L,\ell}
T_{\ell m}Y_{\ell m}(\hat{\vec{n}}) \, .
\end{equation}
Here, we have simply replaced the flat-sky $\ell^4$, which comes from converting multiplication by the unit vector $\hat{\bell}$ to a derivative, with $\omega_\ell^4$ and similarly for $L^4$. The harmonic-space form is
\begin{equation}\label{curvedm4}
\hat{\phi}^{m=4}_{LM}=A^{m=4}_L\sum_{\ell_1m_1}\sum_{\ell_2m_2}(-1)^Mg^{m=4}_{\ell_1,\ell_2}(L)\begin{pmatrix}
\ell_1 & \ell_2 & L\\
m_1 & m_2 & -M
\end{pmatrix}T_{\ell_1m_1}T_{\ell_2m_2} ,
\end{equation}
with weight function
\begin{multline}
g^{m=4}_{\ell_1,\ell_2}(L)=\sqrt{\frac{(L+4)!}{(L-4)!}}\sqrt{\frac{(\ell_1+4)!}{(\ell_1-4)!}}\sqrt{\frac{(2L+1)(2\ell_1+1)(2\ell_2+1)}{16\pi}}\begin{pmatrix}
\ell_1 & \ell_2 & L\\
4 & 0 & -4
\end{pmatrix}\frac{1}{2}\left[1+(-1)^{\ell_1+\ell_2+L}\right] \\
\times \frac{1}{\omega_L^4 \omega_{\ell_1}^4} \frac{1}{\left(C^{\text{total}}_{\ell_1}\right)^2} f^4_{L,\ell_1} \, .
\end{multline}
Note that in going from the flat sky to the full sky, we could have chosen to use $\sqrt{(\ell+4)!/(\ell-4)!}$ instead of $\omega_\ell^4$ for the terms that arise with the 4th derivatives. The fractional differences are $\mathcal{O}(1/\ell^2)$ and would produce only small changes in optimality (at intermediate and small scales) while simplifying the above weight functions.
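The size of this difference is easy to check directly; a minimal sketch comparing the two choices, whose fractional difference works out to roughly $-10/\ell^2$:

```python
from math import prod, sqrt

def fac_ratio_sqrt(l):
    # sqrt((l+4)!/(l-4)!): product of the eight integers l-3, ..., l+4
    return sqrt(prod(range(l - 3, l + 5)))

for l in (100, 500, 2000):
    w4 = (l*(l + 1.0))**2                      # omega_l^4
    frac = fac_ratio_sqrt(l)/w4 - 1
    print(l, frac)                             # close to -10/l^2 in each case
```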
We test the full-sky shear estimator using full-sky lensed CMB simulations with the specifications of an upcoming Stage-3 experiment, with $1.4\,\text{arcmin}$ beam width (full-width at half maximum) and $7\,\mu\text{K-arcmin}$ instrument white noise at a frequency of $148\,\text{GHz}$. We first test the estimator using full-sky temperature maps without foreground contamination. The cross-power spectrum between the reconstructed convergence field $\kappa$ (related to the lensing field via $\kappa=-\nabla^2\phi/2$) and the input agrees with the theory lensing spectrum at the percent level. This shows that our estimator is correctly normalised (we use the full-sky normalisation given in Appendix~\ref{appA}). We further verify that we can recover an unbiased estimate of
the convergence power spectrum, $C^{\kappa\kappa}_L = L^2 (L+1)^2 C_L^{\phi\phi}/4$, by subtracting several biases from the empirical auto-spectrum of the reconstruction, $C^{\hat{\kappa}\hat{\kappa}}_L$:
\begin{equation}
\hat{C}^{\kappa\kappa}_L=C^{\hat{\kappa}\hat{\kappa}}_L-N^{(0)}_L-N^{(1)}_L .
\end{equation}
Here, $N^{(0)}_L$ is the Gaussian bias produced from the disconnected part of the CMB four-point function that enters $\langle C^{\hat{\kappa}\hat{\kappa}}_L\rangle$. This can be thought of as the power spectrum of the statistical reconstruction noise sourced by chance Gaussian fluctuations in the CMB and instrument noise that mimic the effects of lensing.
We estimate this bias by forming different pairings of the simulation that is being treated as data and independent simulations following the method described in \cite{Namikawa2013}. We use 100 different realisations of simulation pairs to obtain an average over simulations. The $N^{(1)}$ bias, which arises from the connected part of the CMB four-point function and at leading order is linear in the lensing power spectrum~\cite{Kesden_2003}, is estimated using 200 pairs of simulations, with the same lensing realisation for each member of the pair but different unlensed CMB realisations, based on \cite{Story2015}. The de-biased bandpowers of the shear reconstruction are shown in Fig.~\ref{fig:reconstruct}.
\section{Beyond the large-scale-lens regime: SVD expansion of the multipole kernels}\label{svd}
The shear estimator in Eq.~\eqref{eq:curvedshear} is separable and so efficient to evaluate, but this comes at the cost of increased noise in the reconstruction on small scales. The sub-optimality of the shear estimator is apparent from Fig.~\ref{fig:noise}, which shows that its disconnected noise bias $N_L^{(0)}$ has a spike at small scales (see also Fig.~2 in~\cite{Schaan2019}). This spike arises because the shear estimator has zero response to lenses at this particular scale. The noise biases in the figure are computed on the full sky using Eq.~\eqref{eq:fullnoise}.
The full $m=2$ estimator, which in this paper we approximate by Eq.~\eqref{phishear1} with non-separable weights~\eqref{eq:nonsepmeqtwoweights}, has better noise performance than the shear estimator for $L>100$ (for the survey specifications adopted here) as shown in Fig.~\ref{fig:noise}. In particular, if the weights are constructed from the $m=2$ component of the \emph{asymmetric} lensing response function, $\tilde{f}^2_{L,\ell}$, the noise spike is eliminated. However, such $m=2$ estimators are inefficient to evaluate since the weights are not separable.
A simple work-around, which we have found to perform quite well, is to retain the squeezed-limit, separable approximation (i.e., the shear estimator) on large scales where its performance is similar to the full $m=2$ estimator, but to approximate the $\tilde{f}^2_{L,\ell}$ as a sum of separable terms on smaller scales. We find these separable terms by singular-value decomposition (SVD) \cite{SVD}.
In detail, we construct a hybrid approximation to $\tilde{f}^2_{L,\ell}$ as follows. For $L < L_\ast$, we use the shear approximation; for $L > L_\ast$, we perform a singular-value decomposition of the block of the asymmetric, \emph{convergence} response function, $2 \tilde{f}^2_{L,\ell} / \omega_L^2$, with $L_\ast \leq L \leq L_{\text{max}}$ and $2 \leq \ell \leq \ell_{\text{max}}$, and approximate this with the first $n$ largest singular values. We found that keeping $n=20$ SVD terms gives a reasonable balance between computational efficiency and optimality. We chose the lensing multipole $L_\ast$ at which to switch such that the reconstruction noise on the SVD-based estimator is lower than that of the shear, which for our survey parameters is $L_\ast = 1000$. (Note that there is a range of $L$ in which the above condition is true, and $L_\ast$ was chosen empirically based on good SVD convergence. Other metrics could certainly be used to determine $n$ and $L_\ast$.) The complete form of our hybrid-SVD response function is
\begin{equation}
\frac{2\tilde{f}^{\text{hybrid-SVD}}_{{L},\ell}}{\omega_L^2} = \begin{cases} C^{TT}_\ell d\ln C^{TT}_\ell / d\ln{\ell} \, , & L<L_\ast \\ \sum^n_{i=1}\Lambda_i U_{L,i} V_{\ell,i} \, & L\geq L_\ast \, . \end{cases}
\end{equation}
Here $\Lambda_i$ corresponds to the $i$th singular value, $U_{L,i}$ are the components of the $i$th left singular vector and $V_{\ell,i}$ are the components of the $i$th right singular vector. We see that the SVD naturally decomposes the response into a sum of separable terms and hence the reconstruction can be performed efficiently for each component. The separable SVD estimator is given explicitly by (for $L \geq L_\ast$)
\begin{equation}
\hat{\phi}^{\text{SVD}}_{LM}=\frac{A_L^{\text{SVD}}}{2}\sum_{i=1}^n \Lambda_i U_{L,i}
\int{d}^2\hat{\vec{n}}\, \left(\nabla^{\langle a}\nabla^{b\rangle}Y^{*}_{LM}\right)T(\hat{\vec{n}})\nabla_{\langle a}\nabla_{b\rangle}T^{F,\text{SVD}}_i(\hat{\vec{n}}) \, ,
\end{equation}
where the filtered field for the $i$th singular value is
\begin{equation}
T^{F,\text{SVD}}_i(\hat{\vec{n}})=\sum_{\ell m} \frac{V_{\ell,i}}{\omega_\ell^2(C^{\text{total}}_\ell)^2}
T_{\ell{m}}Y_{\ell{m}}(\hat{\vec{n}}) \, .
\end{equation}
The normalisation $A_L^{\text{SVD}}$ is chosen, as usual, to ensure the estimator has the correct response to lensing. We show in Sec.~\ref{test} that the separable approximation is still very effective at suppressing foregrounds, as expected since we have not altered the $m=2$ geometric structure of the estimator. As can be seen from Fig.~\ref{fig:noise}, the noise performance is very close to that of the full (asymmetric) $m=2$ estimator for $L \geq L_\ast$, and around a factor of two better than the shear estimator.
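The SVD compression underlying the hybrid estimator can be illustrated with a toy numerical sketch. The smooth kernel below is an arbitrary stand-in for the block of the convergence response $2\tilde{f}^2_{L,\ell}/\omega_L^2$ with $L_\ast \leq L \leq L_{\text{max}}$ and $2 \leq \ell \leq \ell_{\text{max}}$ (the real kernel must be computed from the CMB power spectra); it shows how a smooth two-dimensional kernel is well approximated by a small number of separable (rank-one) terms:

```python
import numpy as np

# Toy stand-in for the (L, ell) block of the asymmetric convergence
# response 2*f^2_{L,ell} / omega_L^2: a smooth (hence low-rank) kernel.
L_vals = np.linspace(1000, 4000, 300)
ell_vals = np.linspace(2, 5000, 400)
resp = np.exp(-np.abs(L_vals[:, None] - ell_vals[None, :]) / 800.0)

# SVD of the response block; keeping the n largest singular values
# yields a sum of n separable terms Lambda_i U_{L,i} V_{ell,i}.
n = 20
U, Lam, Vt = np.linalg.svd(resp, full_matrices=False)
resp_sep = (U[:, :n] * Lam[:n]) @ Vt[:n, :]

# Relative Frobenius error of the separable approximation.
rel_err = np.linalg.norm(resp - resp_sep) / np.linalg.norm(resp)
print(rel_err)  # small for a smooth kernel
```

Each separable term can then be handled with one filtered map, as in the estimator above.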
\section{Testing the sensitivity to foregrounds using simulations}\label{test}
To test the sensitivity of the estimators to extragalactic foregrounds, we use the component maps of the Websky extragalactic foreground simulations~\cite{Stein2020}. These include CIB, tSZ and kSZ at $143\,\text{GHz}$. The power spectra of these foregrounds are shown in Fig.~\ref{foreground}. In a real analysis, bright galaxy clusters and sources would be dealt with either by masking (i.e., excising regions around them) or in-painting (masking, but with the resulting holes filled with constrained realizations). We mimic this for point sources in our analysis, without introducing the complications of having to deal with masked or in-painted maps, as follows. We apply a matched-filter, with the profile corresponding to the instrumental beam and noise power given by the sum of instrumental noise and foreground power, to maps including the full lensed CMB plus foregrounds plus white noise. Sources with recovered flux density greater than $5\,\text{mJy}$ are catalogued and regions around them are removed from the foreground maps only. These masked foreground maps are then combined with lensed CMB and $7\,\mu\text{K-arcmin}$ white noise to form the final temperature map given by $T_{\text{total}}=T_{\text{CMB}}+T_{f}+T_{\text{noise}}$, where we have written explicitly the contributions to the observed temperature map from the lensed CMB $T_{\text{CMB}}$, the extragalactic foregrounds $T_f$ and the detector noise $T_{\text{noise}}$. We do not mimic the masking of bright galaxy clusters. As noted below, this means that our results for the bias should be considered rather extreme, particularly for the trispectrum bias.
To assess the bias induced by foregrounds in the auto-power spectrum of the reconstruction, $\hat{C}^{\kappa\kappa}_L$, we evaluate the primary foreground bispectrum $2\langle\mathcal{Q}[T_f,T_f]\kappa\rangle$ and the foreground trispectrum term $\langle\mathcal{Q}[T_f,T_f]\mathcal{Q}[T_f,T_f]\rangle_c$, from which the disconnected (Gaussian) contribution is subtracted using simulations. Here $\mathcal{Q}[T_A,T_B]$ represents a quadratic estimator (we consider the standard quadratic, shear and hybrid-SVD estimators) applied to maps $T_A$ and $T_B$. We do not consider the secondary bispectrum bias discussed in \cite{Schaan2019}, as it was found to be subdominant to the primary bispectrum and the trispectrum biases (and we expect the same to hold for our estimator variants).
The primary bispectrum bias on the lensing power spectrum can be seen in Fig.~\ref{fig:bi} for three choices of the maximum CMB multipole used in the reconstruction: $\ell_{\text{max}}=3000$, $3500$ and $5000$. Power spectra are binned with $\Delta{L}=60$. For all the $\ell_{\text{max}}$ choices, significant biases are observed in the standard quadratic estimator, while the shear estimator can remove the bias very effectively, in agreement with the flat-sky results of \cite{Schaan2019}. Furthermore, one can see that the bias induced in the hybrid-SVD estimator is smaller than that of the shear on small scales and comparable to that of the shear on large scales. The improvement in noise performance of the hybrid-SVD estimator compared to the shear estimator can also be appreciated: the shaded $1\,\sigma$ bandpower errors (which include reconstruction noise and lensing sample variance) of the hybrid-SVD estimator (red) lie between those of the shear (green) and the standard quadratic estimator (blue), the latter having the lowest variance but a large foreground bias.
Similar improvements can be seen in Fig.~\ref{fig:tri}, where the trispectrum bias reduces significantly when switching from the standard quadratic estimator to the shear or hybrid-SVD estimator. Although for $\ell_{\text{max}}=3500$ and $\ell_{\text{max}}=5000$, the bias is no longer significantly smaller than the statistical error, the improvement compared with the standard quadratic estimator is still large. Furthermore, it should be noted that the trispectrum bias is particularly sensitive to the most massive galaxy clusters, which can be straightforwardly detected and removed by masking or inpainting in a real analysis. As we have not carried this out here, we expect a significant reduction in trispectrum bias for all estimators in practice.
\section{Conclusions}
We showed how to formulate foreground-immune multipole estimators for CMB lensing reconstruction, particularly the $m=2$ estimator that contains most of the signal-to-noise, on the spherical sky. This allows the straightforward application of the estimators proposed in~\cite{Schaan2019} to large-area surveys such as Planck, AdvACT and the forthcoming Simons Observatory. Generally, these estimators are not separable and so cannot easily be evaluated efficiently. The previous separable approximation -- the shear estimator introduced in~\cite{Schaan2019} -- has sub-optimal reconstruction noise when reconstructing small-scale lenses. We presented a simple, first attempt at producing a separable approximation to the full $m=2$ estimator based on singular-value decomposition of the part of its response function at intermediate and large lensing multipoles. We tested the performance of this hybrid-SVD estimator, along with the shear approximation and the standard quadratic estimator, on the Websky~\cite{Stein2020} foreground simulation. As in the flat-sky tests considered in~\cite{Schaan2019}, we found the shear estimator to be very effective in suppressing foreground biases even on single-frequency maps. The same is true of the hybrid-SVD estimator, but it has the advantage of higher signal-to-noise on small scales.
The field of CMB lensing has experienced a fast transition from first detection~\cite{Smith:2007rg,Das_2011} to precision measurements in the last 15 years. With current and upcoming surveys of the CMB, such as AdvACT, SPT-3G and Simons Observatory, probing the millimetre sky with increasing resolution and sky coverage, we can expect further rapid improvements in the quality of lensing products reconstructed from the CMB. However, improvements in statistical noise must be met with more stringent control of systematic effects, such as those from extragalactic foregrounds in temperature maps. The methods explored in this paper provide a robust way to measure CMB lensing, which is largely immune to the effect of these foregrounds. They can be added to the existing repertoire of methods to mitigate foregrounds, such as multi-frequency cleaning and bias hardening, and can be used in combination with these to improve optimality further. For example, it was found in~\cite{darwish2021optimizing} that a robust estimator to reduce foreground biases while having a low impact on signal-to-noise tends to consist of a combination of bias hardening (for point sources and tSZ cluster profiles), explicit tSZ deprojection in multi-frequency foreground cleaning \emph{and} the shear estimator.
\begin{acknowledgments}
We thank William Coulton for providing the code for the matched-filter and Emmanuel Schaan and Simone Ferraro for useful discussions. This work used resources of the Niagara supercomputer at the SciNet HPC Consortium and the National Energy Research Scientific Computing Center. FQ acknowledges the support from a Cambridge Trust scholarship.
AC acknowledges support from the STFC (grant numbers ST/N000927/1 and ST/S000623/1). BDS acknowledges support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 851274) and from an STFC Ernest Rutherford Fellowship.
\end{acknowledgments}
\appendix
\section{Full-sky normalization and $N^{(0)}_L$}\label{appA}
In this appendix we review the normalisation of full-sky quadratic estimators and the disconnected noise bias of their reconstructed power spectrum, following, e.g.,~\cite{Okamoto2003}.
We start with a general, full-sky quadratic estimator for $\phi$:
\begin{equation}
\hat{\phi}_{LM}=A_L\sum_{\ell_1m_1}\sum_{\ell_2m_2}(-1)^M g_{\ell_1,\ell_2}(L)\begin{pmatrix}
\ell_1 & \ell_2 & L\\
m_1 & m_2 & -M
\end{pmatrix}T_{\ell_1m_1}T_{\ell_2m_2} \, .
\label{eq:appa1}
\end{equation}
The full-sky lensing response is
\begin{equation}
\langle{T_{\ell_1 m_1}T_{\ell_2 m_2}}\rangle_{\text{CMB}}=(-1)^{m_1} C^{TT}_{\ell_1} \delta_{\ell_1\ell_2}\delta_{m_1-m_2}+\sum_{L M} (-1)^{M}\begin{pmatrix}
\ell_1 & \ell_2 & L\\
m_1 & m_2 & -M
\end{pmatrix}{f^\phi_{\ell_1 L \ell_2}}\phi_{LM} \, ,
\label{eq:appa2}
\end{equation}
where the weight $f^{\phi}_{\ell_1 L \ell_2}=C^{TT}_{\ell_1}{F_{\ell_2 L \ell_1}}+C^{TT}_{\ell_2}{F_{\ell_1 L \ell_2}}$ with
\begin{equation}
F_{\ell_1 L \ell_2}=\left[L(L+1)+\ell_2(\ell_2+1)-\ell_1(\ell_1+1)\right]\sqrt{\frac{(2L+1)(2\ell_1+1)(2\ell_2+1)}{16\pi}}\begin{pmatrix}
\ell_1 & L& \ell_2\\
0 & 0 & 0
\end{pmatrix} \, .
\end{equation}
Note that $f^\phi_{\ell_1 L \ell_2}$ is symmetric in $\ell_1$ and $\ell_2$. In practice, we use the lensed power spectrum in $f^\phi_{\ell_1 L \ell_2}$, which is a good approximation to the true non-perturbative response~\cite{Lewis:2011fk}.
The normalisation $A_L$ is determined by demanding that the estimator is unbiased, i.e., $\langle \hat{\phi}_{LM} \rangle_{\text{CMB}} = \phi_{LM}$. Evaluating the average of Eq.~\eqref{eq:appa1} over the unlensed CMB fluctuations, the first term on the right of Eq.~\eqref{eq:appa2} only contributes at $L=0$ and so can be dropped. Simplifying the contribution of the second term with the properties of the $3j$-symbols gives the normalisation as
\begin{equation}
\frac{1}{A_L} = \frac{1}{2L+1}\sum_{\ell_1 \ell_2} g_{\ell_1,\ell_2}(L) f^\phi_{\ell_1 L \ell_2} \, .
\end{equation}
We now consider the disconnected (Gaussian) noise bias $N^{(0)}_L$ on the reconstructed power spectrum. We have
\begin{multline}
\langle \hat{\phi}_{LM} \hat{\phi}_{L'M'} \rangle_G = A_L A_{L'} \sum_{\ell_1m_1}\sum_{\ell_2m_2} \sum_{\ell'_1 m'_1}\sum_{\ell'_2 m'_2} (-1)^{M} (-1)^{M'} \begin{pmatrix}
\ell_1 & \ell_2 & L\\
m_1 & m_2 & -M \end{pmatrix} \begin{pmatrix}
\ell'_1 & \ell'_2 & L'\\
m'_1 & m'_2 & -M' \end{pmatrix} \\ \times g_{\ell_1,\ell_2}(L) g_{\ell'_1,\ell'_2}(L') \langle T_{\ell_1 m_1} T_{\ell_2 m_2} T_{\ell'_1 m'_1} T_{\ell'_2 m'_2} \rangle_G
\, , \label{eq:appa3}
\end{multline}
where the subscript $G$ denotes the disconnected (Gaussian) part of the expectation value. For the CMB four-point function, we have
\begin{equation}
\langle T_{\ell_1 m_1} T_{\ell_2 m_2} T_{\ell'_1 m'_1} T_{\ell'_2 m'_2} \rangle_G = (-1)^{m_1} (-1)^{m_2} C_{\ell_1}^{\text{total}} C_{\ell_2}^{\text{total}} \left[ \delta_{\ell_1 \ell'_1} \delta_{m_1 -m'_1} \delta_{\ell_2 \ell'_2} \delta_{m_2 -m'_2} + \left(\ell'_1 m'_1 \leftrightarrow \ell'_2 m'_2 \right) \right] \, ,
\end{equation}
where we have dropped the contractions that couple the temperature fields within the same quadratic estimator as these only give $L=L'=0$ contributions. Substituting in Eq.~\eqref{eq:appa3}, and noting that parity enforces $\ell_1 + \ell_2 + L = \text{even}$ and $\ell'_1 + \ell'_2 + L' = \text{even}$, we have
\begin{equation}
\langle \hat{\phi}_{LM} \hat{\phi}_{L'M'} \rangle_G = (-1)^M \delta_{LL'} \delta_{M -M'} N_L^{(0)} \, ,
\end{equation}
where
\begin{equation}
N_L^{(0)} = \frac{A_L^2}{2L+1} \sum_{\ell_1 \ell_2} C_{\ell_1}^{\text{total}} C_{\ell_2}^{\text{total}} g_{\ell_1,\ell_2}(L)
\left[g_{\ell_1,\ell_2}(L) + g_{\ell_2,\ell_1}(L) \right] \, ,
\label{eq:fullnoise}
\end{equation}
where the factor $1/(2L+1)$ arises from the orthogonality of the $3j$-symbols.
\section{Full-sky shear estimator in harmonic space}\label{appB}
In this appendix we show how to write the full-sky shear estimator in Eq.~\eqref{eq:curvedshear},
\begin{equation}\label{BBB}
\hat{\phi}^{\text{shear}}_{LM}=\frac{A^{\text{shear}}_L}{2}\int{d}^2\hat{\vec{n}}\, \left(\nabla^{\langle a}\nabla^{b\rangle}Y^{*}_{LM}\right)T(\hat{\vec{n}})\nabla_{\langle a}\nabla_{b\rangle}T^{F}(\hat{\vec{n}}) \, ,
\end{equation}
in harmonic space as (Eq.~\ref{curved1})
\begin{equation}
\hat{\phi}^{\text{shear}}_{LM}=A^{\text{shear}}_L\sum_{\ell_1m_1}\sum_{\ell_2m_2}(-1)^Mg^{\text{shear}}_{\ell_1,\ell_2}(L)\begin{pmatrix}
\ell_1 & \ell_2 & L\\
m_1 & m_2 & -M
\end{pmatrix}T_{\ell_1m_1}T_{\ell_2m_2} \, ,
\label{eq:appB2}
\end{equation}
and determine the weight function $g^{\text{shear}}_{\ell_1,\ell_2}(L)$.
We start by converting the covariant derivatives on the sphere into expressions involving spin-weighted spherical harmonics (see~\cite{Goldberg:1966uu} and, e.g.,~\cite{Lewis:2001hp}). For $s>0$ derivatives, we have
\begin{align}
\nabla^{\langle a_1}\ldots \nabla^{a_s \rangle}Y_{\ell m}&=\left(-\frac{1}{2}\right)^s \left(m_+^{a_1} \ldots m_+^{a_s} \bar{\eth}^s Y_{\ell m} + m_-^{a_1} \ldots m_-^{a_s} \eth^s Y_{\ell m} \right) \nonumber \\
&= \left(-\frac{1}{2}\right)^s \sqrt{\frac{(\ell+s)!}{(\ell-s)!}}\left[(-1)^s m_+^{a_1} \ldots m_+^{a_s} {}_{-s}Y_{\ell m} + m_-^{a_1} \ldots m_-^{a_s} {}_s Y_{\ell m} \right] \, ,
\end{align}
where $\vec{m}_\pm \equiv \hat{\boldsymbol{\theta}} \pm i \hat{\boldsymbol{\phi}}$ are null basis vectors constructed from unit vectors along the coordinate directions of spherical-polar coordinates $(\theta,\phi)$.
Expanding the filtered field in \eqref{BBB} in terms of spherical harmonics,
\begin{equation}
T^{F}(\hat{\vec{n}})=\sum_{\ell m} \frac{1}{\omega_\ell^2} \frac{C^{TT}_\ell}{(C^{\text{total}}_\ell)^2}\frac{d\ln{C^{TT}_\ell}}{d\ln{\ell}}
T_{\ell{m}}Y_{\ell{m}}(\hat{\vec{n}}) \, ,
\end{equation}
where we have used the flat-sky expression~\eqref{eq:flatshearfilter} with $\ell^2$ replaced by its usual spherical equivalent $\omega_\ell^2 = \ell(\ell+1)$, we have the contraction
\begin{equation}
\nabla^{\langle a_1}\ldots \nabla^{a_s \rangle}Y_{LM}^\ast \nabla_{\langle a_1}\ldots \nabla_{a_s \rangle}Y_{\ell_1 m_1} = \left(\frac{1}{2}\right)^s \sqrt{\frac{(L+s)!}{(L-s)!}}\sqrt{\frac{(\ell_1+s)!}{(\ell_1-s)!}} \left({}_{-s}Y_{LM}^\ast {}_{-s} Y_{\ell_1 m_1} + {}_s Y_{LM}^\ast {}_s Y_{\ell_1 m_1} \right) \, .
\end{equation}
Multiplying by $Y_{\ell_2 m_2}$ from the expansion of the unfiltered field in Eq.~\eqref{BBB}, the resulting integral over $\hat{\vec{n}}$ can be performed with the Gaunt integral to obtain
\begin{multline}
\int d^2 \hat{\vec{n}} \, \left(\nabla^{\langle a_1}\ldots \nabla^{a_s \rangle}Y_{LM}^\ast \right) \left(\nabla_{\langle a_1}\ldots \nabla_{a_s \rangle}Y_{\ell_1 m_1}\right) Y_{\ell_2 m_2} = (-1)^M \left(-\frac{1}{2}\right)^s \sqrt{\frac{(L+s)!}{(L-s)!}}\sqrt{\frac{(\ell_1+s)!}{(\ell_1-s)!}} \\ \times \sqrt{\frac{(2L+1)(2\ell_1+1)(2\ell_2+1)}{4\pi}}
\begin{pmatrix}
\ell_1 & \ell_2 & L\\
m_1 & m_2 & -M
\end{pmatrix}
\begin{pmatrix}
\ell_1 & \ell_2 & L\\
s & 0 & -s
\end{pmatrix} \left[1+(-1)^{\ell_1+\ell_2+L}\right] \, ,
\end{multline}
which forces $\ell_1+\ell_2+L = \text{even}$, as required by parity. Finally, setting $s=2$ and comparing with Eq.~\eqref{eq:appB2}, we find
\begin{multline}
g^{\text{shear}}_{\ell_1,\ell_2}(L)=\frac{1}{2}\sqrt{\frac{(L+2)!}{(L-2)!}}\sqrt{\frac{(\ell_1+2)!}{(\ell_1-2)!}}\sqrt{\frac{(2L+1)(2\ell_1+1)(2\ell_2+1)}{16\pi}}\begin{pmatrix}
\ell_1 & \ell_2 & L\\
2 & 0 & -2
\end{pmatrix} \frac{1}{2}\left[1+(-1)^{\ell_1+\ell_2+L}\right] \\
\times \frac{1}{\omega_{\ell_1}^2} \frac{C_{\ell_1}^{TT}}{\left(C^{\text{total}}_{\ell_1}\right)^2} \frac{d\ln C_{\ell_1}^{TT}}{d\ln \ell_1} \, .
\end{multline}
This can be made to look closer to its flat-sky counterpart by making use of the recursion relations of the $3j$-symbols to show that
\begin{multline}\label{wig2}
\left[1+(-1)^{(\ell_1+\ell_2+L)}\right]
\begin{pmatrix}
\ell_1 & \ell_2 & L\\
2 & 0 & -2
\end{pmatrix}=\begin{pmatrix}
\ell_1 & \ell_2 & L\\
0 & 0 & 0
\end{pmatrix}\sqrt{\frac{(L-2)!}{(L+2)!}}\sqrt{\frac{(\ell_1-2)!}{(\ell_1+2)!}}\\ \times \left[(\omega^2_L+\omega^2_{\ell_1}-\omega^2_{\ell_2})(\omega^2_L+\omega^2_{\ell_1}-\omega^2_{\ell_2}-2)-2\omega^2_L\omega^2_{\ell_1}\right] \, ,
\end{multline}
where, as discussed in the main text, the term in square brackets on the right accounts for the $\cos (2\theta_{\vec{L},\bell_1})$ weighting in the flat-sky limit. With this simplification, the shear weight function reduces to
\begin{multline}
g^{\text{shear}}_{\ell_1,\ell_2}(L)=\frac{1}{2}\sqrt{\frac{(2L+1)(2\ell_1+1)(2\ell_2+1)}{16\pi}}\frac{C^{TT}_{\ell_1}}{(C^{\text{total}}_{\ell_1})^2}\frac{d\ln C^{TT}_{\ell_1}}{d\ln \ell_1}\begin{pmatrix}
\ell_1 & \ell_2 & L\\
0 & 0 & 0
\end{pmatrix} \\ \times \omega^2_L
\left[\frac{\left(\omega^2_L+\omega^2_{\ell_1}-\omega^2_{\ell_2}\right)\left(\omega^2_L+\omega^2_{\ell_1}-\omega^2_{\ell_2}-2\right)}{2\omega^2_{\ell_1}\omega^2_{L}}-1\right] \, ,
\end{multline}
given as Eq.~\eqref{eq:gshearcurved} in the main text.
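The $3j$-symbol identity in Eq.~\eqref{wig2} can be spot-checked numerically for specific multipoles, e.g.\ with SymPy's \texttt{wigner\_3j} (a verification sketch, not part of the derivation):

```python
from math import factorial, sqrt

from sympy.physics.wigner import wigner_3j

def omega2(l):
    # omega_ell^2 = ell(ell+1)
    return l * (l + 1)

def lhs(l1, l2, L):
    # [1 + (-1)^(l1+l2+L)] * 3j(l1 l2 L; 2 0 -2)
    return (1 + (-1) ** (l1 + l2 + L)) * float(wigner_3j(l1, l2, L, 2, 0, -2))

def rhs(l1, l2, L):
    # 3j(l1 l2 L; 0 0 0) * sqrt((L-2)!/(L+2)!) * sqrt((l1-2)!/(l1+2)!)
    #   * [(wL^2 + wl1^2 - wl2^2)(wL^2 + wl1^2 - wl2^2 - 2) - 2 wL^2 wl1^2]
    pref = sqrt(factorial(L - 2) / factorial(L + 2))
    pref *= sqrt(factorial(l1 - 2) / factorial(l1 + 2))
    x = omega2(L) + omega2(l1) - omega2(l2)
    bracket = x * (x - 2) - 2 * omega2(L) * omega2(l1)
    return float(wigner_3j(l1, l2, L, 0, 0, 0)) * pref * bracket

for (l1, l2, L) in [(2, 2, 2), (2, 4, 2), (2, 4, 4)]:
    assert abs(lhs(l1, l2, L) - rhs(l1, l2, L)) < 1e-10
print("identity verified")
```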
\bibliography{apssamp}%
\nocite{*}
Title:
The challenge of ruling out inflation via the primordial graviton background
Abstract: Recent debates around the testability of the inflationary paradigm raise the
question of how to model-independently discriminate it from competing
scenarios. We argue that a detection of the Cosmic Graviton Background (CGB),
the relic radiation from gravitons decoupling around the Planck time, would
rule out the inflationary paradigm, as realistic inflationary models would
dilute the CGB to an unobservable level. The CGB contribution to the effective
number of relativistic species, $\Delta N_{{\rm eff},g} \approx 0.054$, is well
within the reach of next-generation cosmological probes. We argue that
detecting the high-frequency stochastic gravitational wave background
associated to the CGB will be challenging but potentially feasible. We briefly
discuss expectations within alternatives to inflation, focusing on bouncing
cosmologies and emergent scenarios.
https://export.arxiv.org/pdf/2208.14088
\title{The challenge of ruling out inflation via the primordial graviton background}
\author[0000-0002-7614-6677]{Sunny Vagnozzi}
\affiliation{Department of Physics, University of Trento, Via Sommarive 14, 38123 Povo (TN), Italy}
\affiliation{Kavli Institute for Cosmology, University of Cambridge, Madingley Road, Cambridge CB3 0HA, United Kingdom}
\correspondingauthor{Sunny Vagnozzi}
\email{[email protected]}
\author[0000-0003-4330-287X]{Abraham Loeb}
\affiliation{Department of Astronomy, Harvard University, 60 Garden Street, Cambridge, MA 02138, USA}
\keywords{inflation --- gravitational waves --- cosmology: observations}
\section{Introduction}
\label{sec:intro}
Inflation, a postulated stage of quasi-de Sitter expansion in the primordial Universe, is widely regarded as the leading paradigm for the very early Universe. Originally introduced to address various fine-tuning problems of the hot Big Bang (hBB) model, inflation provides a compelling mechanism for generating the density perturbations from which structure eventually originated~\citep{Starobinsky:1980te,Guth:1980zm,Mukhanov:1981xt,Linde:1981mu,Albrecht:1982wi}. The predictions of some of the simplest inflationary models are in remarkable agreement with observations of the Cosmic Microwave Background (CMB) and the Large-Scale Structure (LSS), which in turn is commonly viewed as a sign of the inflationary paradigm's success.
Despite these successes, inflation is not free of open issues, and over the years criticisms have been raised about its status~\citep[see e.g.][]{Ijjas:2014nta,Martin:2019zia}. One of the major bones of contention stems from the large flexibility in the predictions of individual inflationary models, and concerns whether or not the inflationary paradigm is falsifiable. We use the term ``paradigm'' and not ``model'' since any given inflationary model is clearly falsifiable, whereas these doubts concern the inflationary scenario as a whole. Here we do not seek to take sides in the debate, but simply note that these issues strongly motivate the question of how to \textit{model-independently} discriminate the inflationary paradigm from alternative scenarios for the production of density perturbations.
We address the above question by identifying a signature \textit{de facto} precluded to any realistic inflationary model, and whose observation would thus rule out the inflationary paradigm. The decoupling of primordial gravitons around the Planck time should leave behind a thermal background of relic gravitons: the Cosmic Graviton Background (CGB). An inflationary phase taking place between the Planck era and today would wash out the CGB, rendering it unobservable: an unambiguous CGB detection would therefore pose a major threat to the inflationary paradigm. In this \textit{Letter}, we formalize these arguments and discuss prospects for detecting the CGB.
\section{The Cosmic Graviton Background}
\label{sec:cgb}
We now discuss the features of the CGB in the absence of inflation. We adopt the working assumption that above the Planck scale point-like four-particle interactions involving two gravitons, whose rate at temperature $T$ is of order $\Gamma_g \sim T^5/M_{\rm Pl}^4$, kept gravitons in thermal equilibrium in the primordial plasma~\citep[see also][]{Zhao:2009pt,Giovannini:2019oii}. If we assume adiabatic evolution throughout the early stages of the primordial plasma, and therefore that the Universe was radiation dominated up to then, the Hubble rate scales as $H \sim T^2/M_{\rm Pl}$. Comparing the two rates indicates that gravitons decouple at a temperature $T_{g,{\rm dec}} \sim M_{\rm Pl}$ (or equivalently around the Planck time $t_{g,{\rm dec}} \sim t_{\rm Pl}$): besides ruling out inflation, a detection of the CGB would thus provide an experimental testbed for theories attempting to unify quantum mechanics and gravity.
Being massless and thus decoupling while relativistic, primordial gravitons preserve the blackbody form of their spectrum following decoupling, with the effective CGB temperature $T_g$ redshifting with the scale factor $a$ as $T_g \propto 1/a$. Since the entropy density $s=2\pi^2g_{\star}^s(T)T^3/45$ scales as $s \propto a^{-3}$, where $g_{\star}^s(T)$ is the (temperature-dependent) effective number of entropy degrees of freedom (DoF), we can relate the present-day temperatures of the CGB and CMB, $T_{g,0}$ and $T_{\gamma,0}$ respectively, as follows:
\begin{eqnarray}
\frac{T_{g,0}}{T_{\gamma,0}} = \left ( \frac{g_{\star}^s(T_0)}{ \left ( g_{\star}^s(T_{\rm Pl})-2 \right ) } \right )^{1/3}\,,
\label{eq:tg0tgamma0}
\end{eqnarray}
where $g_{\star}^s(T_0) \simeq 3.91$ is the present-day effective number of entropy DoF \textit{excluding} gravitons (accounting for photons and neutrinos), and $g_{\star}^s(T_{\rm Pl})$ is the effective number of entropy DoF prior to graviton decoupling, \textit{including} gravitons. Accounting only for Standard Model (SM) DoF up to the Planck scale, above the electroweak (EW) scale $g_{\star}^s(T_{\rm Pl})-2 \simeq 106.75$. Precise measurements of the CMB frequency spectrum from COBE/FIRAS fix $T_{\gamma,0} \approx 2.7\,{\rm K}$ and therefore under these minimal assumptions the present-day CGB temperature is predicted to be $T_{g,0} \simeq (3.91/106.75)^{1/3}T_{\gamma,0} \approx 0.9\,{\rm K}$, making the CGB about 3 times colder than the CMB.
Lacking a precise knowledge of the type of new physics lying beyond the ${\rm TeV}$ scale, the assumption of only considering SM DoF up to the Planck scale is conservative, but likely somewhat unrealistic, as one might expect several additional DoF to appear in the ``desert'' between the EW and Planck scales. If so, $g_{\star}^s(T_{\rm Pl})$ in the denominator of Eq.~(\ref{eq:tg0tgamma0}) can only increase, decreasing $T_{g,0}$ with respect to the previous estimate $T_{g,0} \approx 0.9\,{\rm K}$, which therefore should be viewed more as a conservative upper bound on $T_{g,0}$. However, the exact numbers depend on the details of the specific new physics scenario. For instance, $T_{g,0} \approx 0.7\,{\rm K}$ in a supersymmetric-like scenario where $g_{\star}^s(T_{\rm Pl})$ doubles, whereas $T_{g,0} \approx 0.4\,{\rm K}$ in a hypothetical scenario where $g_{\star}^s(T_{\rm Pl})$ increases by an order of magnitude.
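The temperature estimates quoted above follow directly from Eq.~(\ref{eq:tg0tgamma0}); a quick numerical check (taking $T_{\gamma,0} = 2.725\,{\rm K}$ for concreteness):

```python
# Numerical check of Eq. (tg0tgamma0): present-day CGB temperature
# for different assumptions about the pre-decoupling entropy DoF.
T_gamma0 = 2.725          # CMB temperature today [K]
g_today = 3.91            # entropy DoF today, excluding gravitons

def T_g0(g_predec_minus_2):
    """CGB temperature today [K], given g_*^s(T_Pl) - 2 non-graviton DoF."""
    return (g_today / g_predec_minus_2) ** (1.0 / 3.0) * T_gamma0

print(T_g0(106.75))        # SM only: ~0.9 K
print(T_g0(2 * 106.75))    # SUSY-like doubling of DoF: ~0.7 K
print(T_g0(10 * 106.75))   # ten times the SM DoF: ~0.4 K
```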
\section{Can inflation be ruled out?}
\label{sec:inflation}
Our assumption of adiabatic evolution from $T_{\rm Pl}$ down to present times breaks down whenever comoving entropy is generated, e.g.\ during reheating at the end of inflation. An inflationary phase alters the relation between $T_{g,0}$ and $T_{\gamma,0}$ in Eq.~(\ref{eq:tg0tgamma0}), as the latter would be determined by the dynamics of reheating, which however can at most produce out-of-equilibrium graviton excitations, unless the effective gravitational constant $G_{\rm eff}$ was significantly higher at reheating. Since the scale factor increases exponentially during inflation, the CGB temperature itself is exponentially suppressed by a factor of $e^{-N}$, with $N$ the number of \textit{e}-folds of inflation.
We can obtain an extremely conservative upper limit on $\widetilde{T}_{g,0}$ in the presence of a phase of inflation (the tilde distinguishes the present-day graviton temperatures with and without inflation), using the facts that \textit{a)} solving the horizon and flatness problems requires $N \gtrsim 60$, and \textit{b)} reheating should occur at $T_{\rm rh} \gtrsim 5\,{\rm MeV}$ in order to not spoil Big Bang Nucleosynthesis predictions~\citep{deSalas:2015glj}. From these requirements we find that $\widetilde{T}_{g,0} \lesssim 50\,\mu{\rm K}$, implying that inflation would dilute the CGB to an unobservable level. More generically, we find the following upper limit:
\begin{eqnarray}
\widetilde{T}_{g,0} \ll 0.25 \left ( \frac{T_{\rm rh}}{{\rm GeV}} \right ) ^{-1}e^{60-N}\,\mu{\rm K}\,.
\label{eq:upperlimitinflation}
\end{eqnarray}
However, $\widetilde{T}_{g,0} \lesssim 50\,\mu{\rm K}$ is a very conservative upper limit, for two reasons. Firstly, in most realistic models inflation typically proceeds for more than 60 \textit{e}-folds, leading to further exponential suppression [see Eq.~(\ref{eq:upperlimitinflation})]. Secondly, although reheating at scales as low as $T_{\rm rh} \simeq {\cal O}({\rm MeV})$ is observationally allowed, models realizing this in practice are very hard to construct~\citep[see e.g.][]{Kawasaki:1999na,Hannestad:2004px,Khoury:2011ii}.~\footnote{Note, however, that a viable interpretation of the signal recently observed by various Pulsar Timing Arrays~\citep[e.g.][]{NANOGrav:2020bcs} is in terms of inflationary GWs given a rather low reheating scale~\citep{Vagnozzi:2020gtf,Kuroyanagi:2020sfw,Oikonomou:2021kql,Odintsov:2021kup,Benetti:2021uea,Oikonomou:2022ijs}.} It is far more likely that, if inflation did occur, reheating took place well above the EW scale, further tightening the upper bound on $\widetilde{T}_{g,0}$.
One may try to evade these conclusions invoking models of \textit{incomplete inflation} with a limited number of \textit{e}-folds $46 \lesssim N \lesssim 60$: however, if inflation is indeed the solution to the flatness problem, such models are essentially ruled out by current stringent bounds on spatial curvature~\citep{Vagnozzi:2020dfn}, as argued explicitly in~\cite{Efstathiou:2020wem}. Even if $N<60$, bringing $\widetilde{T}_{g,0}$ to a detectable level still requires an extremely low reheating scale, typically harder to achieve within models of incomplete inflation.
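The limits quoted above follow from Eq.~(\ref{eq:upperlimitinflation}); a minimal numerical check (in Python; the EW-scale example is our own illustrative choice):

```python
import math

# Numerical check of Eq. (upperlimitinflation): upper limit on the
# present-day CGB temperature after N e-folds of inflation with
# reheating temperature T_rh.
def Tg0_upper_muK(T_rh_GeV, N):
    """Upper limit on tilde{T}_{g,0} in micro-Kelvin."""
    return 0.25 / T_rh_GeV * math.exp(60 - N)

# Extremely conservative case: N = 60, T_rh = 5 MeV.
print(Tg0_upper_muK(5e-3, 60))   # ~50 micro-K

# Reheating at the EW scale (~100 GeV), still with only N = 60 e-folds:
print(Tg0_upper_muK(100.0, 60))  # ~2.5e-3 micro-K
```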
A caveat to our previous results is our assumption of inflation occurring at sub-Planckian scales. Specifically, $T_{\rm rh}>M_{\rm Pl}$ is required for the CGB not to be washed out by inflation. However, on general grounds there are serious concerns about the consistency of trans-Planckian effects both during inflation and at reheating~\citep[e.g.][]{Brandenberger:2012aj,Brandenberger:2022pqo}. A specific concern is given by the trans-Planckian censorship conjecture, which sets tight limits on the maximum inflationary scale $\Lambda_{\rm inf}^{\max}$ and reheating temperature: $\Lambda_{\rm inf}^{\max}\,, T_{\rm rh} \ll M_{\rm Pl}$~\citep{Bedroya:2019snp,Bedroya:2019tba,Mizuno:2019bxy,Kamali:2019gzr}.
More importantly, the lack of detection of inflationary B-modes indicates that $\Lambda_{\rm inf}^{\max}$ is at least four orders of magnitude below the Planck scale. For instantaneous reheating, the reheating temperature is obviously limited to $T_{\rm rh}<\Lambda_{\rm inf}^{\max}$, as reheating to higher temperatures would violate (covariant) stress-energy conservation. For non-instantaneous reheating, $T_{\rm rh}$ is of course even lower~\citep[see also][]{Cook:2015vqa}. Therefore, we deem it very safe to assume that $T_{\rm rh} \ll M_{\rm Pl}$, corroborating all our earlier findings. In summary, within realistic inflationary cosmologies one does not expect to be able to detect the relic thermal graviton background -- conversely, a convincing detection thereof would rule out the inflationary paradigm.
\section{Detectability of the CGB}
\label{sec:detectability}
We now investigate whether detecting the CGB is experimentally feasible, considering our benchmark $T_{g,0} \approx 0.9\,{\rm K}$ case. The contribution of the CGB to the effective number of relativistic species $N_{\rm eff}$ is given by:
\begin{eqnarray}
\Delta N_{{\rm eff},g} \equiv \frac{8}{7} \left ( \frac{11}{4} \right )^{\frac{4}{3}}\frac{\rho_g}{\rho_{\gamma}} = \frac{8}{7} \left ( \frac{11}{4} \right )^{\frac{4}{3}} \left ( \frac{g_{\star}^s(T_0)}{ \left ( g_{\star}^s(T_{\rm Pl})-2 \right ) } \right )^{\frac{4}{3}}\,.
\label{eq:neffg}
\end{eqnarray}
For $g_{\star}^s(T_{\rm Pl})-2=106.75$, we therefore find that $\Delta N_{{\rm eff},g} \approx 0.054$, as expected for a species with 2 spin DoF decoupling before the QCD phase transition.
A contribution to $N_{\rm eff}$ of this size is a factor of $3$ below the sensitivity of current probes. However, this number is well within the reach of a combination of next-generation CMB and LSS surveys. For instance, even after marginalizing over the total neutrino mass, \cite{Brinckmann:2018owf} forecast a sensitivity of $\sigma_{N_{\rm eff}} \simeq 0.021$ combining CMB data from CMB-S4 and LiteBIRD with galaxy clustering and cosmic shear data from Euclid, whereas with a PICO-like experiment in place of CMB-S4+LiteBIRD the sensitivity improves to $\sigma_{N_{\rm eff}} \simeq 0.017$. Therefore, if the benchmark $0.9\,{\rm K}$ CGB were present, CMB-S4+LiteBIRD+Euclid would be able to detect it through its imprint on $N_{\rm eff}$ at $\simeq$2.5$\sigma$, whereas PICO+Euclid would be able to do so at $\simeq$3.2$\sigma$.
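As a quick sanity check (not part of the paper), the numbers above can be reproduced in a few lines; the input $g_{\star}^s(T_0) \approx 3.91$ is the standard present-day entropy degrees of freedom, not stated explicitly in the text.

```python
# Sanity check (not from the paper): evaluate the CGB contribution to N_eff,
# Eq. (neffg), and the implied detection significance, assuming the standard
# values g_*^s(T_0) ~ 3.91 today and g_*^s(T_Pl) - 2 = 106.75.
g_star_today = 3.91
g_star_planck = 106.75

delta_neff = (8 / 7) * (11 / 4) ** (4 / 3) * (g_star_today / g_star_planck) ** (4 / 3)
print(f"Delta N_eff,g ~ {delta_neff:.3f}")   # ~0.054

# Significance for the forecasted 1-sigma sensitivities quoted above
for label, sigma in [("CMB-S4+LiteBIRD+Euclid", 0.021), ("PICO+Euclid", 0.017)]:
    print(f"{label}: {delta_neff / sigma:.1f} sigma")
```

The result matches the quoted $\Delta N_{{\rm eff},g} \approx 0.054$ and the $\simeq 2.5\sigma$ and $\simeq 3.2\sigma$ detection estimates.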
Should the CGB contribution to $N_{\rm eff}$ be detected, one may wonder how we would know that the excess radiation density is associated with the CGB, rather than with another dark radiation component. To remove this ambiguity, we consider the stochastic background of (high-frequency) gravitational waves (GWs) associated with the CGB. It is useful to think in terms of the characteristic strain $h_c$, i.e.\ the dimensionless strain which would be produced by the passing stochastic GW background (SGWB) in an interferometer with arms of equal length $L$ in the $x$ and $y$ directions, $h_c(\nu) \simeq \Delta L/L$. The characteristic CGB strain $h_g(\nu)$ is given by:
\begin{align}
h_g(\nu) = \frac{1}{\nu}\sqrt{\frac{3H_0^2}{2\pi^2}\Omega_g(\nu)} \approx 1.26 \times 10^{-27} \left ( \frac{\nu}{{\rm GHz}} \right )^{-1}\sqrt{h^2\Omega_g(\nu)}\,,
\label{eq:cgbstrain}
\end{align}
where $h^2\Omega_g(\nu)$ is the CGB spectral energy density in units of the present-day critical density:
\begin{eqnarray}
h^2\Omega_g(\nu) = \frac{15}{\pi^4}h^2\Omega_{\gamma,0} \left ( \frac{T_{g,0}}{T_{\gamma,0}} \right )^4F(x_g)\,,
\label{eq:h2omegagnu}
\end{eqnarray}
with $h$ the reduced Hubble parameter, $h^2\Omega_{\gamma,0}$ the photon density parameter today, $x_g \equiv h\nu/(k_BT_{g,0})$, and $F(x_g) \equiv x_g^4/(e^{x_g}-1)$. The CGB spectrum peaks at a frequency $\nu \approx 75\,{\rm GHz}$, making it a source of high-frequency GWs: Fig.~\ref{fig:primordial_graviton_blackbody} shows the characteristic CGB strain alongside demonstrated or forecasted sensitivities of various detector concepts~\citep[see][]{Aggarwal:2020olq}.
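Both the numerical prefactor of Eq.~(\ref{eq:cgbstrain}) and the quoted peak frequency can be checked directly; the sketch below (not from the paper) assumes only standard values for $100\,{\rm km\,s^{-1}\,Mpc^{-1}}$ and $k_B/h_{\rm Planck}$.

```python
# Check (not from the paper): the 1.26e-27 prefactor of Eq. (cgbstrain) at
# nu = 1 GHz, and the peak frequency of F(x) = x^4/(e^x - 1) for T_g0 = 0.9 K.
import math

H0_over_h = 1.0e7 / 3.0857e24        # 100 km/s/Mpc in s^-1 (CGS)
prefactor = math.sqrt(3.0 / (2.0 * math.pi**2)) * H0_over_h / 1.0e9
print(f"prefactor ~ {prefactor:.2e}")            # ~1.26e-27

# F(x) peaks where 4(1 - e^-x) = x; solve by fixed-point iteration
x = 3.0
for _ in range(50):
    x = 4.0 * (1.0 - math.exp(-x))               # converges to x ~ 3.92

k_B_over_h = 2.0837e10                           # k_B / h_Planck in Hz/K
nu_peak_GHz = x * k_B_over_h * 0.9 / 1.0e9
print(f"nu_peak ~ {nu_peak_GHz:.0f} GHz")        # ~74 GHz, i.e. the ~75 GHz quoted
```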
Aside from optically levitated sensors~\citep{Arvanitaki:2012cn} and bulk acoustic wave (BAW) devices~\citep{Goryachev:2014yra}, all probes in Fig.~\ref{fig:primordial_graviton_blackbody} exploit the \textit{inverse Gertsenshtein effect} (IGE), whereby GWs convert to photons within a strong magnetic field~\citep{Gertsenshtein:1962ghw}. While, apart from small prototypes, dedicated instruments exploiting the IGE do not exist, \cite{Ito:2019wcb} and~\cite{Ejlli:2019bqj} showed how constraints on high-frequency GWs can be obtained by re-interpreting data from ongoing or planned axion experiments: in Fig.~\ref{fig:primordial_graviton_blackbody} this includes ADMX, SQMS, IAXO SPD, JURA, OSQAR, and DMRadio$_8$-100~\citep{Domcke:2022rgu}. The IGE can also be exploited in strongly magnetized astrophysical environments~\citep{Chen:1994ch,Domcke:2020yzq}, recasting observations from radio telescopes such as EDGES and ARCADE. For more details on these detector concepts, see~\cite{Aggarwal:2020olq,Berlin:2021txa,Domcke:2022rgu}.
Unfortunately, as is clear from Fig.~\ref{fig:primordial_graviton_blackbody}, all these detector concepts fall short of the CGB signal by several orders of magnitude. The only promising probe is enhanced magnetic conversion (EMC), a proposal to enhance the efficiency of IGE-based magnetic conversion detectors by seeding the conversion volume with locally generated auxiliary EM fields, e.g.\ EM Gaussian beams (GBs) oscillating at the frequency of the GW signal searched for~\citep{Li:2004df,Baker:2008zzb}. Until recently, EMC appeared to be well beyond technological reach, particularly due to the requirement of a GB geometric purity at the $10^{-21}$ level to reach strain levels of $h_c \sim 10^{-30}$ at $\nu \sim {\cal O}(100)\,{\rm GHz}$.
However, \cite{Ringwald:2020ist} argued that reaching the above benchmark limit is feasible, exploiting state-of-the-art superconducting magnets utilized in near-future axion experiments to generate the required EM signal, then enhanced by a GB produced by a MW-scale $40\,{\rm GHz}$ gyrotron. While this still leaves us 2 orders of magnitude short of the CGB peak strain, realistic improvements in the development of gyrotrons, single-photon detectors (SPDs), and superconducting magnets can bring the projected sensitivity down to $h_c \sim 10^{-32}$, close to that required in our benchmark scenario. We estimate that an increase in the available gyrotron power to $\sim 100\,{\rm MW}$ (which is realistically achievable) over a stable running time of $\sim 1\,$month (which is much more challenging), alongside improvements in SPD dark count rates to $\sim 10^{-5}\,{\rm s}^{-1}$, would result in a sensitivity to strains of order $h_c \sim 10^{-33}$, sufficient to detect our benchmark CGB. All quoted sensitivities can be further improved by increasing the reflector size, and the intensity and length of the magnets. Therefore, measuring strains as small as $h_c \sim 10^{-33}$ at $\nu \sim {\cal O}(100)\,{\rm GHz}$, and detecting the benchmark CGB, might be feasible in the not-too-far-off future.\footnote{We recall that the quoted CGB strength assumes that the SM holds up to the Planck scale, and that the appearance of additional DoF would lower the CGB temperature and associated SGWB strength. However, even in the extremely unrealistic scenario where the number of DoF increases by an order of magnitude, the temperature of the CGB would only decrease by a factor of $\gtrsim 2$, making the SGWB signal only a factor of $\approx 5$ weaker [see the $T_{g,0}$ dependence in Eqs.~(\ref{eq:cgbstrain},\ref{eq:h2omegagnu})].}
Another interesting potential detection channel proposed very recently by~\cite{Brandenberger:2022xbu} proceeds through a parametric instability of the EM field in the presence of GWs. This would allow for conversion of high-frequency GWs to photons without the need for a strong background magnetic field. Sensitivity reach estimates for this probe, while not yet available, are worth further investigation in this context.
An important issue concerns how to distinguish the CGB from competing SGWB sources. Possible examples could be the SGWB produced during preheating~\citep{Easther:2006gt} or during oscillon formation~\citep{Zhou:2013tsa}: however, both these sources are important at lower frequencies, ${\cal O}(10^6-10^9)\,{\rm Hz}$~\citep[see][]{Aggarwal:2020olq}, and hence should not be confused with the CGB. The CGB SGWB can also be distinguished from the SGWB produced by out-of-equilibrium gravitational excitations at reheating~\citep{Ringwald:2020ist}: the latter would not be of the blackbody form, and its strength would be orders of magnitude below the CGB as long as the reheating temperature is $T_{\rm rh} \ll M_{\rm Pl}$, which as argued earlier can be safely assumed. This highlights the importance of detecting the CGB over a range of frequencies, given the clear prediction for its frequency dependence. Within the EMC experimental setup, this can be achieved by tuning the gyrotron frequency: the output frequencies available for typical gyrotrons fall within the $\sim 20-500\,{\rm GHz}$ range, perfectly suited to probe the CGB spectrum around its peak frequency. A similar tuning procedure should also be possible for the GW-photon parametric instability probe.
A caveat to our findings is the assumption of a pure blackbody spectrum for primordial gravitons. This is likely to be an approximation at best, particularly at low frequencies, whose modes would have been super-horizon at the Planck time. However, in the absence of detailed knowledge regarding the underlying theory of quantum gravity, this is among the most conservative assumptions we can make~\citep[note that the same approximation has been made in several earlier works discussing primordial gravitons, e.g.][]{Zhao:2009pt,Giovannini:2019oii}. Moreover, what is important for our results is the high-frequency tail of the CGB spectrum, where our assumption is likely to be far more realistic. Overall, it remains true that finding any trace of a GW background of the estimated amplitude at the estimated frequency would rule out the standard inflationary scenario.
\section{Alternatives to inflation}
\label{sec:alternatives}
Our previous discussion raises the question of whether an unambiguous CGB detection would also spell trouble for alternative paradigms, where density perturbations are produced during a non-inflationary phase. While the answer to this question is highly model-dependent, we wish to provide a brief qualitative assessment limited to two well-motivated paradigms: bouncing cosmologies and emergent scenarios.
Within bouncing cosmologies, the challenge is to produce a thermal CGB in the first place. This is hard to achieve during the contracting phase, when the characteristic energy scale is typically $\Lambda_c \ll M_{\rm Pl}$~\citep[e.g.][]{Brandenberger:2016vhg}. Another possibility is one where a relatively long bouncing phase with energy density around the Planck scale occurs between the initial contracting phase and the later hBB expansion~\citep[e.g.][]{Cai:2014bea}, in which case a thermal CGB would be generated and would survive the phase transition between the bouncing and expanding phases.
In emergent scenarios, the Universe emerges from an initial high-density state with matter in global thermal equilibrium, and producing the CGB is far more plausible. A particularly well-studied emergent scenario is the string gas proposal of~\cite{Brandenberger:1988aj}, where the Universe originates from a quasi-static Hagedorn phase of a string gas at temperature close to the Hagedorn temperature, before a T-dual symmetry breaking-driven phase transition connects to the hBB expansion. On general grounds, the energy density in the emergent phase is close to the Planck density, making it likely for gravitons to be in thermal equilibrium and therefore for a CGB to be generated.
However, the initial state in string gas cosmology is not a thermal state of particles but of strings, giving a different scaling of thermodynamical quantities. It is therefore unlikely that the string gas CGB takes the blackbody form, although it is in principle possible that its spectral energy density may be higher than our benchmark CGB, enhancing detection prospects. Fully exploring these points requires a dedicated study, going beyond the scope of our work.
\section{Conclusions}
\label{sec:conclusions}
Despite the enormous success of the inflationary paradigm, recent debates around it raise the question of how to \textit{model-independently} discriminate it from competing scenarios for the production of primordial density perturbations. In this \textit{Letter}, we have argued that a detection of the Cosmic Graviton Background (CGB), the left-over graviton radiation from the Planck era, would rule out the inflationary paradigm, as realistic inflationary models dilute the CGB to an unobservable level. Assuming the validity of the SM up to the Planck scale, the CGB contribution to the effective number of relativistic species $\Delta N_{{\rm eff},g} \approx 0.054$ is well within the reach of next-generation cosmological probes, whereas detecting the associated stochastic background of high-frequency GWs in the $\nu \sim {\cal O}(100)\,{\rm GHz}$ range is challenging but potentially feasible. We also argued that the CGB may be detectable within well-motivated alternatives to inflation such as bouncing and emergent scenarios. We hope that this work will spur further investigation into the possibility of model-independently confirming or ruling out the inflationary paradigm with upcoming observations~\citep[for similar endeavors see e.g.][]{Chen:2018cgg}.
\section*{Acknowledgements}
We are grateful to Robert Brandenberger, Massimo Giovannini, Will Kinney, Nick Rodd, and Luca Visinelli for useful discussions and suggestions. S.V. is partially supported by the Isaac Newton Trust and the Kavli Foundation through a Newton-Kavli Fellowship, by a grant from the Foundation Blanceflor Boncompagni Ludovisi, n\'{e}e Bildt, and by a College Research Associateship at Homerton College, University of Cambridge. A.L. is partially supported by the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation.
\bibliography{Primordial_graviton_background}{}
\bibliographystyle{aasjournal}
\label{lastpage} |
Title:
The amplification of cosmological magnetic fields in Extended $f(T,B)$ Teleparallel Gravity |
Abstract: Observations indicate that intergalactic magnetic fields have amplitudes of
the order of $\sim 10^{-6}$ G and are uniform on scales of $\sim 10$ kpc.
Despite their wide presence in the Universe, their origin remains an open
issue. Even by invoking a dynamo mechanism or a compression effect for magnetic
field amplification, the existence of seed fields before galaxy formation is
still problematic. General Relativity predicts an adiabatic decrease of the
magnetic field evolving as $|\mathbf{B}|\propto 1/a^{2}$, where $a$ is the
scale factor of the Universe. It results in very small primordial fields,
unless the conformal symmetry of the electromagnetic sector is broken. In this
paper, we study the possibility that a natural mechanism for the amplification
of primordial magnetic fields can be related to extended teleparallel gravity
$f(T, B)$ models, where $T$ is the torsion scalar, and $B$ the boundary term.
In particular, we consider a non-minimal coupling with gravity, in view of breaking
conformal symmetry in a teleparallel background, and investigate
the role of the boundary term $B$, which can be considered as a further scalar field.
We find that, after solving exactly the $f(T,B)$ field equations both in
inflation and reheating eras, a non-adiabatic behavior of the magnetic field is
always possible, and a strong amplification appears in the reheating epoch. We
also compute the ratio $r=\rho_{B}/ \rho_{\gamma}$ between the magnetic energy
density and the cosmic microwave energy density during inflation, in order to
explain the present value $r\simeq 1$, showing that, in the slow-roll
approximation, power-law teleparallel theories with $B^{n}$ have effects
indistinguishable from metric theories $R^{n}$ where $R$ is the Ricci curvature
scalar.
| https://export.arxiv.org/pdf/2208.11186 |
\preprint{APS}
\title{The amplification of cosmological magnetic fields in Extended $f(T,B)$ Teleparallel Gravity}
\author{Salvatore Capozziello$^{a,b,c}$, Amodio Carleo$^{d,e}$, Gaetano Lambiase$^{d,e}$}
\email{[email protected]}
\affiliation{$^a$Dipartimento di Fisica ``E. Pancini'', Universit\`a degli Studi di Napoli ``Federico II'', Via Cinthia, I-80126, Napoli, Italy}
\affiliation{$^{b}$Istituto Nazionale di Fisica Nucleare (INFN), sez. di Napoli, Via Cinthia 9, I-80126 Napoli, Italy}
\affiliation{$^{c}$Scuola Superiore Meridionale, Largo S. Marcellino, I-80138, Napoli, Italy}
\email{[email protected]}
\affiliation{$^{d}$Dipartimento di Fisica ``E. R. Caianiello'', Universit\`a degli Studi di Salerno, Via Giovanni Paolo II, 132, I-84084, Fisciano (SA), Italy}
\email{[email protected]}
\affiliation{$^{e}$Istituto Nazionale di Fisica Nucleare (INFN), gruppo collegato di Salerno, Italy}
\date{\today}
\section{Introduction}\label{sec1}
The presence of magnetic fields in any structure of our Universe is a consolidated issue, confirmed by abundant observational data. Our Galaxy and other spiral galaxies are endowed with coherent large-scale magnetic fields of typical length $\ge 10$ kpc and strength $\sim 3 \times 10^{-6}$ G. They play important roles in a multitude of astrophysical phenomena, such as the confinement of cosmic rays, the transfer of angular momentum away from protostellar clouds (allowing their collapse into stars), the genesis of gamma-ray bursts (GRBs) and, recently, the extraction of energy from black holes (BHs) \cite{Comisso,Carleo:2022qlv,Khodadi_2022,Wei_2022}. While strong local magnetic fields (up to $10^{14}$ G) come from stars or compact objects (like neutron stars), galactic and intergalactic magnetic fields still have no explanation, remaining one of the long-standing problems of astrophysics and cosmology. While in the case of aged galaxies one could explain them by invoking a dynamo \cite{Widrow_2002} or a compression mechanism \cite{Turner:1987bw}, the presence of fields also in protogalaxies and, above all, in the intergalactic medium (IGM) suggests a cosmological rather than local origin. This ultimately means searching for some unknown physical process generating such large-scale fields. One possibility is that they are relics from the early Universe, subsequently amplified in a pre-galactic era through dynamo or compression effects. In the first case, a galactic dynamo acting for the entire age of the galaxy could
have amplified a primordial magnetic field (PMF) by a factor of $10^{13}$, thus requiring a primordial field of $10^{-19}$ G; in the second case, the strength of the seed field required to explain today's galactic values is much greater, $10^{-9}$ G \cite{Turner:1987bw}, making compression the less efficient mechanism. Such a primordial magnetic field, also called a seed field, however, has no clear origin, and its evolution and signatures are still a matter of debate today. The size of the initial magnetic seed is also an issue, since it should not be smaller than $100$ pc after the collapse of the protogalaxy, which implies a comoving scale of approximately $10$ kpc before the collapse. Seeds generated after inflation, during the radiation era, for example, are typically too small in size because their coherence length can never exceed that of the causal horizon at the time of magnetogenesis. Inflation remains the only mechanism capable of producing super-horizon correlations, so, in principle, it could easily generate primordial fields of the required length.
Specifically, General Relativity (GR) predicts an adiabatic decay for the PMF: since the Universe is believed to have been a good conductor for much of its post-inflationary epochs, any cosmological magnetic field, from the end of inflation onwards, preserves the flux, i.e. $a^2 B \sim const$, and then $B \sim 1/a^2$ or, in terms of the magnetic energy density, $\rho_{B} = |\mathbf{B}|^{2}/(8 \pi ) \propto 1/a^4$, where $a$ is the scale factor of the (flat) Friedmann-Robertson-Walker (FRW) metric. Since the scale factor grows enormously during inflation, this type of decay implies extremely weak, practically negligible magnetic fields at the end of the inflationary period. This scaling is the same as that of every relativistic energy density in the Universe. In particular, the Universe is filled with the cosmic microwave background (CMB) radiation, a relic of the hot big bang, with a thermal spectrum at the (current) temperature of $T = 2.725$ K. The energy density of this radiation, $\rho_{\gamma} = \pi^{2} T^{4}/ 15$, which formed a dominant component of the energy density of the early Universe, was almost constant during inflation, when the total energy density was dominated by the vacuum. Immediately afterwards, it began to dilute as the Universe expanded, $\rho_{\gamma} \sim 1/a^4$ (the extra factor of $1/a$ w.r.t. matter, which decays as $\sim 1/a^{3}$, comes from the energy redshift); therefore, the ratio $r \doteq \rho_{B}/ \rho_{\gamma}$ has remained constant until today, with a current value of $r \approx 1$. It is then
standard practice to characterize the primordial field with either this ratio $r$ or the present-day value $\mathbf{B}_{0}$ as a function of its coherence scale $L$. Precisely, a present-day magnetic field strength of $3.2\ \mu$G has an energy density equal to the present-day CMB energy density, i.e. $r = 1$. In order to explain this value, one needs a pregalactic seed field with a ratio $r \simeq 10^{-34}$ if dynamo amplification is assumed, and $r \simeq 10^{-8}$ if compression occurred in the collapse of the protogalactic cloud.
This problem has been studied not only in the context of GR but also e.g. in the Poincar\'e gauge theory \cite{Kothari:2018aem}, string theory \cite{Gasperini:1995dh}, Gauss-Bonnet gravity \cite{Atmjeet:2013yta} and gravity theories with torsion \cite{Kothari:2018aem}. An adiabatic decay law on all length scales, moreover, implicitly assumes the existence of electric currents with super-horizon
correlations, thus violating causality \cite{Tsagas}. One way to overcome these critical points is to assume that the flux is not conserved and therefore that the electromagnetic sector is no longer conformally invariant, i.e. $\int d^{4}x F^{\mu \nu} F_{\mu \nu} \not= \int d^{4}\Tilde{x} \Tilde{F}^{\mu \nu} \Tilde{F}_{\mu \nu}$, where $\Tilde{F}^{\mu \nu}$ is written in terms of the new metric $\Tilde{g}_{\mu \nu} = \Omega^{-2} g_{\mu \nu}$, with $\Omega$ the conformal factor.
This would imply a non-adiabatic decay of the primordial magnetic field $\mathbf{B}$, ensuring its survival even after inflation, in the form of current large-scale fields.
On the other hand, post-inflationary scenarios consider PMFs created after inflation via either cosmological phase transitions or during the recombination era \cite{Subramanian:2015lua}, even if, in this case, the role of helicity is fundamental to transfer energy from small to large scales \cite{Durrer:2013pga}. For other mechanisms, aimed at explaining the origin and the amplification, see \cite{Giovannini:2004yn} for a review.
Moreover, it has also been pointed out that a negatively curved FRW background $(K=-1)$ allows a super-adiabatic decay, i.e. in this case the magnetic field would undergo a relative amplification w.r.t. the radiation.
In order to have a large scale field, i.e. causally disconnected, the PMF must have crossed outside the Hubble radius $\sim H^{-1}$ during de Sitter phase, i.e. with a typical wavelength $\lambda_{phys} \gg H^{-1}$ or $k \eta \ll 1$, where $\eta$ is the conformal time during inflation, leading to a static, large scale $\mathbf{B}$. These long-wavelength modes, in particular, may have been generated by quantum fluctuations which grew during inflation and the reheating eras, in a similar way to those which led to density fluctuations and thus to the large scale structure. Their effect on gravitational waves (GWs) and on the CMB has been studied in several papers, see, for example
\cite{GW-2021al,Kunze:2013kza,Addazi:2022ukh}.
In this debate, a particular role has been recently assumed by Teleparallel Gravity (TG) and its extensions which seem to fix several cosmological issues ranging from inflation to dark energy, from primordial Big Bang nucleosynthesis to cosmological perturbations, up to the $H_0$ tension
\cite{Ferraro:2006jd,Dent:2010nbw,Chen:2010va,Cai:2015emx,Benetti:2020hxp,Capozziello:2017bxm, Escamilla-Rivera:2019ulu}. The approach
starts from the so-called Teleparallel Equivalent of General Relativity (TEGR) \cite{Maluf:2013gaa}, which is an alternative formulation of GR, first conceived by Einstein himself, in which the dynamical variables are tetrads and the dynamics is given by torsion instead of curvature, in a teleparallel affine formulation of gravity.
TG and its extensions could have a prominent role in generating and amplifying primordial magnetic fields.
In this paper, we want to investigate whether an alternative theory of gravity, the $f(T,B)$ theory, where $T$ is the torsion scalar and $B$ a boundary term, with a non-minimal coupling between matter and gravity, can generate primordial magnetic fields with a non-adiabatic behaviour, exploiting the breaking of conformal symmetry which naturally arises from such non-minimal couplings. These couplings are well motivated since, according to Quantum Electrodynamics (QED) formulated on curved space-times, one-loop vacuum-polarization effects \cite{Drummond:1979pp} can lead to non-minimal gravitational couplings between the curvature and the electromagnetic field.
The electromagnetic sector $F_{\mu \nu}F^{\mu \nu}$ is assumed to be coupled to the Ricci scalar $R$ and the Riemann tensor $R^{\mu \nu \rho \sigma}$ in \cite{Turner:1987bw,Lambiase:2004zb,Lambiase:2008zz,deAndrade:2013fga}. On the other hand, curvature power-law models $R^{n}$ are considered in \cite{Garretson:1992vt,Mazzitelli:1995mp,Lambiase:2008zz,Bertolami:2022hjk}, while torsion power-law models $T^{n}$ are considered in \cite{Bamba:2013rra}. Specifically, we are going to adopt here a TG background where $T$ and $B$ are related to the curvature scalar by $R = -T + B$. In this context, it is interesting to explore the role of the boundary term $B$, which links the metric picture to the teleparallel one. In fact, $f(R)$ gravity is not equivalent to $f(T)$ gravity \cite{Bamba:2013ooa}, since the latter is not invariant under local Lorentz transformations and has second-order field equations (see \cite{Cai:2015emx} for a review). To restore fourth-order field equations, one has to introduce the boundary term $B$, which depends on the derivatives of the torsion vector $T^{\mu}$, obtaining the $f(T,B)$ theory which, as a special case, reduces to $f(R)$ \cite{Bahamonde:2016grb,Capozziello:2019msc}. This is the only framework in which the role of $B$ can be investigated separately from $R$ and $T$. Finally, as noted in \cite{Kranas:2018jdc}, a torsion-dominated early universe could go through a phase of accelerated expansion without the need for a cosmological constant or dark energy component, thus constituting an interesting theoretical framework to be studied.
In this paper, we are going to investigate the role of $T$, $B$, and possible non-minimal couplings to the matter fields in enhancing PMFs, in order to make them compatible with observations.
The layout of the paper is as follows: in Sec. \ref{sec2} we review teleparallel $f(T,B)$ gravity. Sec. \ref{sec3} is devoted to computing and solving the cosmological equations in a spatially flat FRW metric, distinguishing between the inflationary and reheating eras. In Sec. \ref{sec4} we consider a non-minimal gravity-photon coupling and obtain a differential equation for the magnetic field, which we solve for both the inflation and reheating epochs. A different approach to evaluating the amplification effect during inflation is presented in Sec. \ref{sec5}, while discussion and conclusions are drawn in Sec. \ref{sec6}.
In this work, we adopt natural units $\hbar=c=1$, and we define the reduced Planck
mass by $M^{-2}_{pl} = 8 \pi G$. The metric signature $(+, -, -, -)$ is also adopted. Greek indices are coordinate indices, contracted with the metric tensor $g_{\mu \nu}$; the
Latin indices $a,b,c$ are tetrad indices, contracted with the Kronecker delta. The Latin indices $i,j$ indicate spatial coordinates. Overdots and primes, as usual, denote derivatives with respect to the cosmic time $t$ (measured by a comoving observer) and the conformal time $\eta$, respectively, unless otherwise stated.
\section{Field equations in $f(T,B)$ gravity}
\label{sec2}
Extensions or modifications of GR are becoming an ideal arena in which longstanding cosmological problems can be solved or incorporated into the theory itself from a gravitational viewpoint \cite{Capozziello:2011et}. TG differs from metric theories in several features, among them a non-zero torsion, which arises from a non-symmetric connection \cite{Capozziello:2022zzh}. A property of TG is that the Lovelock theorem is weakened \cite{Gonzalez:2015sha}, allowing for additional theories beyond TEGR that continue to produce second-order field equations \cite{Cai:2015emx}. The teleparallel boundary term $B$ (see later) embodies the fourth-order contributions to the field equations and allows TG, in particular $f(T)$ gravity, to be compared with metric $f(R)$ gravity. This is an important aspect for many theories beyond GR \cite{Faraoni:2010pgm}. In TG, second- and fourth-order contributions can be easily decoupled from each other, unlike in metric gravity formulated with the Levi-Civita connection; in this framework, $f(T,B)$ models are useful to study further degrees of freedom beyond GR (or TEGR). On the other hand, $f(T,B)$ gravity has shown promise, providing viable models at various scales, ranging from the Solar System and the weak-field regime \cite{Bahamonde:2016grb,Farrugia:2020fcu,Capozziello:2019msc} up to cosmological scales \cite{Capozziello:2018qcp, Farrugia:2018gyz,Bahamonde:2020lsm,Bahamonde:2020bbc,Kadam:2022lxt,Briffa:2022fnv}.
In TG (and, specifically, in TEGR and its extensions), the dynamical variables are not the metric components $g_{\mu \nu}$, but tetrads, $h_{a}(x^{\mu})$. Also called {\it vierbeins}, they are orthonormal vector fields defining a basis at any point $p$ of the space-time manifold $\mathcal{M}$. One can express the tetrad basis $\{ h_{a}\}$ and its dual $\{ h^{a}\}$ in terms of the holonomic coordinate basis $\{ e_{\mu} \} = \{ \partial_{\mu} \}$ and its dual $\{ e^{\mu} \} = \{ dx^{\mu} \}$, yielding $h_{a}=h\indices{_a ^\mu}e_{\mu}$ and $h^{a}=h\indices{^a _\mu}e^{\mu}$. In this way, tetrads allow going from the space-time manifold to the Minkowski space through
\begin{equation}
g_{\mu \nu}=\eta_{a b}h\indices{^a _\mu}h\indices{^b _\nu}, \; \; \; \eta_{a b}=g_{\mu \nu}h\indices{_a ^\mu}h\indices{_b ^\nu},
\end{equation}
with the orthogonality conditions
\begin{equation}\label{eq2}
h\indices{ ^{a} _{\mu}} h\indices{_{b}^{\mu}}=\delta_{a}^{b}, \quad h\indices{^{a}_{\mu}} h\indices{_{a}^{\nu}}=\delta_{\mu}^{\nu}.
\end{equation}
The teleparallel connection can then be defined as \cite{Capozziello:2018qcp}
\begin{equation}\label{eq3}
\Gamma\indices{^{\sigma}_\nu _\mu}:=h\indices{_{a} ^{\sigma}} \partial_{\mu} h\indices{^{a} _{\nu}}+h\indices{_{a} ^{\sigma}} \omega\indices{^a _b _\mu} h\indices{^{b} _{\nu}}
\end{equation}
where $\omega \indices{^a _b _\mu}$ is the spin connection. With this connection, one can show $\nabla_{\mu} h\indices{^a _\nu} = 0$, hence the 'teleparallelism'. A particular realization of Eq.~(\ref{eq3}) is the Weitzenb\"{o}ck connection, which implies a zero curvature, $R=0$, and a vanishing Lorentz connection, i.e. $\omega \indices{^a _b _\mu}=0$ (see \cite{Capozziello:2022zzh} for details). It is defined as
\begin{equation}
\Dot{\Gamma}\indices{^\rho _\mu _\nu }:= h\indices{_a ^\rho}\partial_{\nu}h\indices{^a _\mu} = - h\indices{^a _\mu} \partial_{\nu}h\indices{_a ^\rho},
\end{equation}
which is compatible both with the metricity and teleparallelism conditions, $\Dot{\nabla}_{\rho}g_{\mu\nu}=0=\Dot{\nabla}_{\rho}h\indices{^a _\mu}$, where the overdot indicates that the covariant derivative is taken with respect to the Weitzenb\"{o}ck connection. The torsion tensor is defined as
\begin{equation}
T\indices{^\rho _\mu _\nu}:=\Dot{\Gamma}\indices{^\rho _\nu _\mu } - \Dot{\Gamma}\indices{^\rho _\mu _\nu }\,,
\end{equation}
and it is clearly antisymmetric in the last two indices. The difference between the Levi-Civita connection (designated with a ring) and the Weitzenb\"{o}ck one is given by the {\it contortion},
\begin{equation}
K\indices{^\rho _\mu _\nu} := \Dot{\Gamma}\indices{^\rho _\mu _\nu} - \mathring{\Gamma}\indices{^\rho _\mu _\nu} = -\dfrac{1}{2} \Big( T\indices{^\rho _\mu _\nu} -T\indices{_\nu ^\rho _\mu} + T\indices{_\mu _\nu ^\rho} \Big),
\end{equation}
which is instead antisymmetric on the first two indices. Finally, the torsion scalar is defined as
\begin{equation}\label{eq6}
T := \dfrac{1}{4} T\indices{^\rho ^\mu ^\nu} T\indices{_\rho _\mu _\nu} + \dfrac{1}{2} T\indices{^\rho ^\mu ^\nu} T\indices{_\nu _\mu _\rho} - T\indices{^\rho _\mu _\rho} T\indices{^\nu ^\mu _\nu},
\end{equation}
which can be shortened to $T=S\indices{^\rho ^\mu ^\nu} T\indices{_\rho _\mu _\nu}$, where
\begin{equation}
S\indices{^\rho ^\mu ^\nu}= \dfrac{1}{2}\Big( K\indices{^\mu ^\nu ^\rho} -g^{\rho \nu}T\indices{^\sigma ^\mu _\sigma} + g^{\rho \mu}T\indices{^\sigma ^\nu _\sigma} \Big)
\end{equation}
is the so-called {\it superpotential}. Imposing the zero curvature condition, $\Dot{R}\indices{^\rho _\mu _\nu _\lambda}=0$ and contracting, one finds for the Ricci scalar in the Levi-Civita connection
\begin{equation}\label{eq8}
\mathring{R} = -2\nabla^{\rho} S\indices{^\mu _\rho _\mu} -4\nabla^{\rho} T\indices{^\sigma _\rho _\sigma} -2 S\indices{^\rho ^\sigma ^\nu}K\indices{_\sigma _\rho _\nu},
\end{equation}
where all covariant derivatives are Levi-Civita. Using $S\indices{^\mu _\rho _\mu}=-T\indices{^\mu _\rho _\mu}$ and noting that the last term of Eq.~(\ref{eq8}) is equal to $-T$, one obtains the important relation
\begin{equation}
\mathring{R} = -T -2 \nabla_{\mu}T^{\mu}
\end{equation}
where $T^{\mu}$ indicates the contraction $T\indices{^\nu ^\mu _\nu}$. With $h:= \det (h\indices{^a _\rho})$, we can identify the last term with the boundary term
\begin{equation}
B= \dfrac{2}{h}\partial_{\mu}(h T \indices{^\sigma _\sigma ^\mu})
\end{equation}
thus yielding $R = -T + B$. Note that the sign convention for $B$ varies: in some papers, like in \cite{Capozziello:2018qcp}, the opposite sign is adopted. From now on, we omit rings and dots, with the understanding that all quantities are calculated in the Levi-Civita connection. \\
Let us now consider the total action
\begin{equation}\label{eq11}
\mathcal{S}_{}=\frac{1}{2 \kappa^{2}} \int d^{4} x h f(T, B)+\int d^{4} x h \mathcal{L}_{m}
\end{equation}
where $\kappa^{2}=8 \pi G$, $\mathcal{L}_{m}$ is the standard matter Lagrangian, and $h=\det(h\indices{^a _\mu})=\sqrt{-g}$ is the tetrad determinant. From the variation of the action with respect to the tetrad $h\indices{^a _\mu}$, we have \cite{Bahamonde:2020lsm}
\begin{equation} \label{eq12}
\begin{array}{c}
-f_{T} G^{\mu}_{\nu}+\delta^{\mu}_{\nu}\Box {f_{B}}-{\nabla}^{\mu} {\nabla}_{\nu} f_{B}+\frac{1}{2}\left(B f_{B}+T f_{T}-f\right) \delta^{\mu}_{ \nu} \\ + 2\left[{\nabla}^{\lambda} f_{T}+{\nabla}^{\lambda} f_{B}\right] S\indices{_\nu _\lambda ^\mu}=\kappa^{2} \mathcal{T}^{\mu}_{ \nu}
\end{array}
\end{equation}
where $\mathcal{T}\indices{_a ^\mu} = - \frac{1}{h}\frac{\delta (h\mathcal{L}_{m} )}{\delta h\indices{^a _\mu}}$ is the matter stress-energy tensor, and we used the Einstein tensor written in the form
\begin{equation}\label{eq13}
G_{\mu \nu} = S\indices{^\rho ^\sigma _\mu}K\indices{_\rho _\sigma _\nu} -2 \nabla^{\rho}S\indices{_\nu _\rho _\mu} + \dfrac{1}{2}g_{\mu \nu}T
\end{equation}
in order to have a covariant form of the field equations. In Eq.~(\ref{eq13}), we used Eq.~(\ref{eq8}) and the contracted relation $T\indices{^\sigma ^\lambda _\sigma}=-S\indices{^\sigma ^\lambda _\sigma}$. Notice that when $B=0$, Eq.~(\ref{eq12}) reduces to the field equations of $f(T)$ gravity, while putting $f(T,B)=f(-T+B)$ one obtains $f(\mathring{R})$ gravity.
\section{Cosmology from $f(T,B)$ gravity}\label{sec3}
Let us now take into account cosmology from $f(T,B)$ gravity with the aim to develop a background for cosmological magnetic fields. We consider a
(spatially) flat FRW conformal metric
\begin{equation}\label{eq14}
ds^{2} = a^{2}(\eta)\Big( d\eta^{2} - d\mathbf{x}^2 \Big) = dt^{2} - a^{2}(t) d\mathbf{x}^2
\end{equation}
where $\eta = \int_{0}^{t} a^{-1}(t') dt' $ is the conformal time and $a$ is the cosmological scale factor. A tetrad choice for this metric can be
\begin{equation}\label{eq15}
h\indices{^b _\mu} = a \cdot \mathrm{diag} (1,1,1,1), \; \; \; h\indices{_b ^\mu} = \dfrac{1}{a} \cdot \mathrm{diag} (1,1,1,1)
\end{equation}
with orthogonality conditions given in Eq.~(\ref{eq2}). In this metric, the Ricci tensor components are
\begin{equation}
R\indices{^i _j}= -\dfrac{1}{a^{2}}\Big( \dfrac{a''}{a} + \dfrac{{a'}^{2}}{a^{2}} \Big)\delta^{i}_{j}, \; \; \; R\indices{^0 _0}= -\dfrac{3}{a^{2}}\Big( \dfrac{a''}{a} - \dfrac{{a'}^{2}}{a^{2}} \Big)
\end{equation}
where $i,j=1,2,3$ and prime denotes derivative w.r.t. the conformal time $\eta$. Hence, the Ricci and torsion scalars are
\begin{equation}\label{eq17}
R=-\dfrac{6}{a^{3}}a'', \; \; \; T=-6 \mathcal{H}^{2}
\end{equation}
where $\mathcal{H}= H/a = a'/a^{2}$ is the Hubble parameter in conformal time (see Appendix for a full computation of $T$). The boundary term then is
\begin{equation}\label{eq18}
B = - \dfrac{6}{a^{3}}\Big( a'' + \mathcal{H}a' a \Big).
\end{equation}
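As an illustrative cross-check (not part of the derivation), the relation $R=-T+B$ can be verified for this background directly from Eqs.~(\ref{eq17}) and (\ref{eq18}) with a few lines of SymPy:

```python
import sympy as sp

eta = sp.symbols('eta')
a = sp.Function('a')(eta)
ap, app = sp.diff(a, eta), sp.diff(a, eta, 2)

R = -6*app/a**3             # Ricci scalar, Eq. (17)
H = ap/a**2                 # Hubble parameter in conformal time
T = -6*H**2                 # torsion scalar, Eq. (17)
B = -6/a**3*(app + H*ap*a)  # boundary term, Eq. (18)

assert sp.simplify(R - (-T + B)) == 0  # R = -T + B holds identically
```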
In the cosmological setup of metric~(\ref{eq14}), the field equations are
\begin{equation}\label{eq19}
\begin{array}{c}
-f_{T} G^{0}_{0} + \dfrac{1}{a^3}\big[ 2a'f_{B}' + af_{B}'' \big] - \dfrac{1}{a^{2}} \big[ f_{B}'' - \dfrac{a'}{a}f_{B}' \big] \\
+ \dfrac{1}{2}\Big(B f_{B} + T f_{T} -f \Big) + 2 \Big( \nabla^{0} f_{T} + \nabla^{0} f_{B} \Big) S\indices{_0 _0 ^0} = \kappa^{2} \rho
\end{array}
\end{equation}
for the time-time component, and
\begin{equation}\label{eq20}
\begin{array}{c}
-f_{T} G^{i}_{j} + \delta^{i}_{j}\dfrac{1}{a^3}\big[ 2a'f_{B}' + af_{B}'' \big] + \dfrac{1}{2} \delta^{i}_{j} \big(B f_{B} + Tf_{T} - f \big) \\
+ 2 \Big( \nabla^{\lambda} f_{T} + \nabla^{\lambda} f_{B} \Big) S\indices{_j _\lambda ^i} = -\kappa^{2} p \delta^{i}_{j}
\end{array}
\end{equation}
for the spatial components. Here, we defined the energy density and the pressure as $\rho=\mathcal{T}^{0}_{0}$ and $p \delta^{i}_{j} =-\mathcal{T}^{i}_{j}$, respectively; and we used
\begin{equation}
\Box{f_{B}}=\dfrac{1}{a^{4}}\partial_{0}\big(a^{2}f_{B}' \big), \; \; \; \; \nabla^{i}\nabla_{j}f_{B}= 0
\end{equation}
since $f_{B}$ depends on the conformal time only. \\
The components of the superpotential in the metric~(\ref{eq14}) entering Eqs.~(\ref{eq19}) and (\ref{eq20}) are
\begin{equation}
S\indices{_0 _\lambda ^0}=\dfrac{3}{2}\dfrac{a'}{a^3}(a^2-1)\delta^{0}_{\lambda}, \; \; \; \; S\indices{_j _0 ^i} = -\dfrac{3}{2}\dfrac{a'}{a^3}\delta^{i}_{j}.
\end{equation}
Besides Eqs.~(\ref{eq19}) and (\ref{eq20}), the Bianchi identities for matter, $\nabla_{\mu}\mathcal{T}^{\mu \nu}=0$, have to be taken into account. This gives the conservation condition:
\begin{equation}\label{eq22}
\rho' + 3 \dfrac{a'}{a}(\rho + p) = 0
\end{equation}
whose solution is
\begin{equation}\label{eq23}
\rho(\eta)= \Big[\dfrac{a(\eta)}{a_{0}}\Big]^{-3(1-w)} \cdot \rho_{0}
\end{equation}
where $a_{0}$ and $\rho_{0}$ are the scale factor and the energy density at a reference time, respectively. Notice that the above equation has the same form both in the cosmological time $t$ and in the conformal one $\eta$. \\
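With the convention $p=-w\rho$ adopted below, solution~(\ref{eq23}) can be checked against the conservation law~(\ref{eq22}) symbolically; a minimal sketch:

```python
import sympy as sp

eta, w, a0, rho0 = sp.symbols('eta w a_0 rho_0', positive=True)
a = sp.Function('a')(eta)

rho = rho0*(a/a0)**(-3*(1 - w))   # Eq. (23)
p = -w*rho                        # equation of state p = -w*rho

# continuity equation, Eq. (22): rho' + 3(a'/a)(rho + p) = 0
residual = sp.diff(rho, eta) + 3*sp.diff(a, eta)/a*(rho + p)
assert sp.simplify(residual) == 0
```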
In the following, we assume that, during the epochs relevant for the amplification of the primordial magnetic field, i.e. the de Sitter and reheating eras, the Universe is described by the action~(\ref{eq11}) with $f(T,B)$ to be specified. We shall consider two different models in order to highlight the role of the boundary term, namely $f(T,B)=-T + \lambda B^{n}$, which gives the Einstein--Hilbert action for $n=\{0,1\}$ (i.e. TEGR), and the non-separable model $f(T,B)=-\lambda T B^{n}$, which gives TEGR for $n=0$. In the first case the dimensions of $\lambda$ are $[\lambda]=M_{pl}^{2(1-n)}$, while in the second case $[\lambda]=M_{pl}^{-2n}$. Notice that both these models are proven to admit bouncing solutions, which are an alternative to the standard inflationary paradigm \cite{Caruana_2020}. Finally, they could govern the cosmological evolution today too, albeit with different values of the coupling constant $\lambda$, in order to fit the $\Lambda$CDM phenomenology.
\subsection{The inflationary phase}
We assume that, during inflation, the energy density $\rho$ and the pressure $p$ are related by $p=-w\rho$, where $w$ is the adiabatic index. Assuming a quasi-de Sitter evolution, the scale factor is chosen as $a(\eta)=1/(-c \eta)^{\alpha}$, where $\alpha>0$, $c=H_{dS}\simeq3 \times 10^{24}$ eV \cite{Bertolami:1999}. The minus sign compensates for the negativity of $\eta$ in this epoch. Evaluating at $c\eta=-1$, one finds the solutions for $\alpha$ and $w$ of the field Eqs.~(\ref{eq19}) and (\ref{eq20}).
For the model $f(T,B)=-T + \lambda B^{n}$, in the limit $\lambda=0$, the solution is $w=1 \Longleftrightarrow \alpha = 1$, i.e. the GR (TEGR) solution is the only possible one compatible with a cosmological constant scenario. If $\lambda/M_{pl}^{2(1-n)} \gg 1$, the general solution for $w$ is
\begin{equation}
w= \dfrac{n\big[2-\alpha+8\alpha^{2} -4n(\alpha-1)^{2}\big]}{3\alpha \big[ 1-2n+2\alpha (n+1) \big]}.
\end{equation}
Having two unknowns and a single independent equation, the only way to get a solution for $\alpha$ is to choose a value of $w$. Putting $w=1$ in the above equation, one gets the double solution
\begin{equation}
\alpha_{\pm}= \dfrac{3-5n-8n^{2}\pm 3 \sqrt{\Sigma_{1}}}{2\big(-4n^{2}+2n-6 \big)}
\end{equation}
where $\Sigma_{1} := 16n^3-15n^2+2n+1$. Notice that, in order to avoid trivial solutions, we assume $n \not=1$ for this model, in addition to $n>0$. In this range, one has $\Sigma_1>0$. The solution $\alpha_{-}$ is the only one compatible with GR, so we discard $\alpha_{+}$, which predicts a scale factor with $\alpha < 1$ for all values of $n$. In particular, $\alpha_{-}\simeq 1$ for $n \simeq 1$ (remember that $n\not=1$), close to the GR result, and $\alpha_{-}<2$ for all $n>0$. Notice that in this limit the torsion scalar $T$ drops out, leaving only a boundary-term power-law model. When $n>1$, then $\alpha_{-}>1$, implying a faster inflation w.r.t. the GR solution. The opposite result is obtained for $n<1$. Finally, for generic $\lambda$, it is quite difficult to obtain an analytical solution (which will clearly depend on $\lambda$) and a choice for $n$ is required. For $n=2$, the implicit solution for $\alpha$ and a generic $w$ is
\begin{equation}
\lambda = \dfrac{-2+\alpha(3w-1)}{18c^{2}(1+2\alpha)\big[-4+\alpha\big(6+4\alpha+7w-10w\alpha+\Delta\big)\big]}
\end{equation}
where $\Delta:=4(\alpha-1)(w-1)$. Finally, from Eq.~(\ref{eq23}) with $w=1$ and choosing the inflation epoch as the reference time, one finds the solution to Eq.~(\ref{eq22}), getting $\rho(\eta)=\rho_{dS}$, where $\rho_{dS}$ is a constant depending on the specific values of $\lambda$ and $n$ (see next section for more details). \\
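The stated properties of the $\alpha_{-}$ branch ($\Sigma_{1}>0$ for $n>0$, $\alpha_{-}\to 1$ as $n\to 1$, $\alpha_{-}<2$) can be checked numerically; an illustrative sketch:

```python
import numpy as np

def alpha_minus(n):
    """alpha_- branch for f(T,B) = -T + lambda*B^n, lambda >> 1, w = 1."""
    sigma1 = 16*n**3 - 15*n**2 + 2*n + 1
    assert sigma1 > 0                      # Sigma_1 > 0 throughout n > 0
    return (3 - 5*n - 8*n**2 - 3*np.sqrt(sigma1)) / (2*(-4*n**2 + 2*n - 6))

ns = np.linspace(0.01, 10, 2000)
al = np.array([alpha_minus(n) for n in ns])
assert np.all(al < 2)                      # alpha_- < 2 for all n > 0
assert abs(alpha_minus(0.999) - 1) < 1e-2  # alpha_- ~ 1 near n = 1 (GR-like)
assert np.all(al[ns > 1] > 1)              # n > 1: faster-than-GR inflation
```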
For the non-minimal model $f(T,B)=-\lambda T B^{n}$, a distinction between the various regimes of the coupling constant $\lambda$ is not necessary, but the solution for a generic $w$ is very involved. For brevity, we report here the interesting case $w=1$. In this case, defining $n_{0}=1.1617$, we have that for $ 0 \leq n \leq n_{0}$ the only positive solution is $\alpha=1$ (i.e. the GR result), while for $ n > n_{0}$, in addition to $\alpha_{1}=1$, one has a second positive solution, i.e.
\begin{equation}
\alpha_{2}= \dfrac{2n^{3}+n^{2}-3n-1}{2(n^{3}+2n^{2}+3n+1)}
\end{equation}
where it is clear that $\alpha_{2}(n_{0})=0$ and $\lim\limits_{n \to \infty} \alpha_{2}(n) = 1$.
Therefore, for this model, the range $\alpha\leq 1$, $\forall n>0$, is compatible with GR.
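These limits of $\alpha_{2}$ are easy to verify numerically; an illustrative sketch:

```python
import numpy as np

def alpha2(n):
    """Second branch for f(T,B) = -lambda*T*B^n during inflation (w = 1)."""
    return (2*n**3 + n**2 - 3*n - 1) / (2*(n**3 + 2*n**2 + 3*n + 1))

n0 = 1.1617
assert abs(alpha2(n0)) < 1e-3                         # alpha_2(n_0) ~ 0
assert 0.95 < alpha2(1e4) < 1                         # alpha_2 -> 1 from below
assert all(alpha2(n) <= 1 for n in np.linspace(n0, 50, 500))
```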
\subsection{The reheating phase}
During this era, the scale factor is $a(\eta)=c^{\alpha}\eta^{\alpha}$, with $\eta>0$ and $c=(1/4)M_{pl}^{2}H_{0}^{2}R_{0}^{3}$, where $R_{0}\sim 10^{26}h_{0}^{-1}$ m ($h_{0}\simeq 0.7$) is the present Hubble radius of the Universe and $H_{0}\sim 100 h_{0}$ km$\cdot$s$^{-1}\cdot$Mpc$^{-1}$ is the Hubble parameter. Since the pressure is zero, we take $w=0$ in Eq.~(\ref{eq23}). Evaluating at $c\eta=1$, the field equations (\ref{eq19}) and (\ref{eq20}) can be solved in a similar way to the previous case, but with one less unknown. \\
For the model $f(T,B)=-T + \lambda B^{n}$, in particular, Eq.~(\ref{eq20}) yields
\begin{equation}
\begin{array}{c}
\alpha c^{2}(\alpha-2)+\dfrac{\lambda(n^2-n)K^{n}}{6\alpha(2\alpha-1)}\Big[8\alpha^{2}+\alpha+2-4n(1+\alpha)^{2}\Big]=0
\end{array}
\end{equation}
where we defined $K:=-6\alpha c^{2}(2\alpha-1)$. As before, if $\lambda=0$ (or $n=1$), the only solution is the GR one, i.e. $\alpha=2$. On the other hand, in the limit $\lambda/M_{pl}^{2(1-n)} \gg 1$, the first term is negligible, and the (double) solution is ($n\not=\{1,2\}$):
\begin{equation}\label{eq29}
\alpha_{\pm}= \dfrac{1}{8}\Big(\dfrac{1-8n\pm 3\sqrt{16n-7}}{n-2}\Big)
\end{equation}
provided that $n\geq 7/16$, in addition to the solution $\{\alpha=1/2, \forall n>0\}$. By requiring $\alpha_{\pm}>0$, we find that a solution exists only in the range $7/16 \leq n<2$, as shown in Fig.~\ref{fig:1} (a). In particular, for $7/16<n<1/2$, a double solution exists (at $n=7/16$ the two branches coincide at $\alpha=1/5$). However, as before, we will consider just the branch compatible with GR, i.e. $\alpha_{-}$. \\
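The existence range just discussed can be confirmed numerically from Eq.~(\ref{eq29}); a sketch of the check:

```python
import numpy as np

def alpha_pm(n, sign):
    """Eq. (29): reheating exponents for f(T,B) = -T + lambda*B^n, lambda >> 1."""
    disc = 16*n - 7
    assert disc >= 0                       # real solutions require n >= 7/16
    return (1 - 8*n + sign*3*np.sqrt(disc)) / (8*(n - 2))

# at n = 7/16 the two branches coincide at alpha = 1/5
assert abs(alpha_pm(7/16, +1) - 0.2) < 1e-12
assert abs(alpha_pm(7/16, -1) - 0.2) < 1e-12
# double (positive) solution only for 7/16 < n < 1/2 ...
assert alpha_pm(0.48, +1) > 0 and alpha_pm(0.6, +1) < 0
# ... while the GR-compatible branch alpha_- stays positive up to n = 2
assert all(alpha_pm(n, -1) > 0 for n in np.linspace(0.45, 1.99, 200))
```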
For intermediate $\lambda$ values, an analytical expression for the expansion exponent $\alpha(n)$ is available only when $n$ is fixed. For $n=2$, the implicit form for $\alpha$ is
\begin{equation}
\lambda = \dfrac{\alpha-2}{36c^{2}(2\alpha-1)(5\alpha+2)}.
\end{equation}
Finally, putting $w=0$ into Eq.~(\ref{eq23}) and choosing the reheating epoch as the reference time, one finds $\rho(\eta)=\rho_{RH}/a^{3}$, where $\rho_{RH}$ can be obtained from Eq.~(\ref{eq19}). In detail, in the boundary-term dominated regime, one gets
\begin{equation}
\rho_{RH}= \dfrac{\lambda}{\kappa^{2}} \big[6\alpha c^{2}(1-2\alpha)\big]^{n}\Big[ \dfrac{1}{2}(n-1)+(\alpha+1)\dfrac{n(n-1)}{(2\alpha-1)} \Big]
\end{equation}
where $\alpha$ is given by $\alpha_{-}$, so as to obtain an expression which depends only on the model parameter $n$, as well as on the constants $\lambda$, $\kappa$ and $c$.
For the non-minimal model $f(T,B)=-\lambda T B^{n}$, finding a solution for $\alpha$ from Eq.~(\ref{eq20}) is more challenging. As before, there is the constant solution $\alpha=1/2$, $\forall n>1$. The equation is now of fourth degree; however, only two solutions are real and positive. We denote them $\alpha_{1,2}$ and show some values in Table~\ref{tab1}. When $n=0$, the only solution is $\alpha=2$, i.e. the GR limit. As can be seen from the table, $\alpha_{1}<1/2$ for every $n$; therefore the branch to be taken is $\alpha_{2}$, which is also in agreement with the GR solution. Furthermore, for $n>3$, just one (positive) solution exists, namely $\alpha_{2}$. Finally, from Eqs.~(\ref{eq23}) and (\ref{eq19}), one finds the expression for the energy density, i.e.
\begin{equation}
\rho_{RH}= \dfrac{\lambda(-1)^{n}6^{n}c^{2n+2}}{\kappa^{2}}\Omega_{RH}
\end{equation}
where we defined
\begin{equation}
\Omega_{RH}:= \dfrac{3 \alpha^{n+2}}{(2\alpha-1)^{1-n}}\Big[ \alpha(2\alpha-1) +2n\Big(\alpha^{}n+ n +\alpha- \dfrac{1}{2} \Big) \Big]
\end{equation}
where the values of $\alpha$ are given by $\alpha_{2}$ in Table~\ref{tab1}, so as to obtain a relation depending only on the power $n$.
\begin{table}[b]
\caption{\label{tab1}Solutions of Eq.~(\ref{eq20}) for the expansion exponent $\alpha$ in the reheating era, for the non-minimal model $f(T,B)=-\lambda T B^{n}$ as a function of the power $n$. Although the equation is of fourth degree, only two solutions are real and positive. Only the branch $\alpha_{2}$ is compatible with GR. In the last column, the exponent of the Fourier mode $F_{k}\sim a^{\gamma}$ is reported, using the $\alpha_{2}$ solution. }
\begin{ruledtabular}
\begin{tabular}{lccc}
\textrm{n}&
\textrm{$\alpha_{1}$}&
\textrm{$\alpha_{2}$}&
\textrm{$\gamma$}
\\
\colrule
1/2 & 0.35 & 1.81 & 5.21\\
1 & 0.17 & 1.70 & 6.90\\
3/2 & 0.03 & 1.74 & 8.45\\
2 & 0.08 & 1.81 & 9.87\\
5/2 & 0.18 & 1.89 & 11.23\\
3 & -0.25 & 1.97 & 12.57\\
7/2 & -0.32 & 2.05 & 13.88\\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{The non-minimal coupling}\label{sec4}
Let us now add a non-minimal coupling between matter (photons) and torsion to the action, in order to break the conformal invariance\footnote{It is possible to overcome electromagnetic conformal invariance in a \textit{minimal} context too. The main idea in this case is to generalize the electromagnetic Lagrangian and promote it to a non-linear function of $F\doteq (1/4)F_{\mu \nu}F^{\mu\nu}$. See for example \cite{Bertolami_2022,MosqueraCuesta:2009tf,MosqueraCuesta:2017iln,Otalora_2018,Dittrich_1998} and references therein.} which leads to a fast decay of any magnetic field generated in the primordial Universe. There are various ways to do this, both tensorial and scalar couplings, as for $f(R)$ theories. GR assumes that the electromagnetic stress tensor curves the space-time as any other source of energy density. However, there are strong reasons to expect that this picture should change for very strong gravitational fields, where non-minimal couplings should appear. On the one hand, it is possible that, at the very high energies of the early Universe, all forces were unified and described by a single field which then decayed through phase transitions (inflation itself could be the product of a phase transition of this field \cite{Watari_2004}). On the other hand, QED calculations \cite{Drummond:1979pp} of the vacuum polarization in curved space-times show a non-minimal coupling between gravity and electromagnetism, coming from the tidal influences of the space-time geometry on the production of electron/positron pairs in the vacuum.\\
The action ~(\ref{eq11}) can be then modified as
\begin{equation}\label{eq32}
\int \Big[ \dfrac{1}{2\kappa^{2}} f_{1}(T,B) + ( f_{2}(T,B)+1 ) \mathcal{L}_{m} \Big]\, e \, d^{4}x
\end{equation}
where $f_{1}$ and $f_{2}$ are two sufficiently smooth arbitrary functions of the torsion scalar $T$ and the boundary term $B$, and $e=a^{4}(\eta)$. As we are interested in a coupling between gravity and magnetic fields, we take $\mathcal{L}_{m}= -\frac{1}{4}F^{\mu\nu}F_{\mu\nu}$, where the electromagnetic tensor $F_{\mu\nu}$, in the metric of Eq.~(\ref{eq14}), is given by
\begin{equation}\label{eq33}
F_{\mu \nu}=a^{2}(\eta)\left(\begin{array}{cccc}
0 & E_{x} & E_{y} & E_{z} \\
-E_{x} & 0 & -B_{z} & B_{y} \\
-E_{y} & B_{z} & 0 & -B_{x} \\
-E_{z} & -B_{y} & B_{x} & 0
\end{array}\right)
\end{equation}
thanks to the conformal invariance of classical electromagnetism. The full contravariant components $F^{\mu\nu}$ are obtained from those of Eq.~(\ref{eq33}) by changing the sign of the electric field $\mathbf{E}$ and replacing $a^{2}$ with $1/a^{2}$. Notice that the definition of $F$ depends on the metric signature: if the signature is $(-,+,+,+)$, \textit{all} signs should be changed. Calling $\Tilde{F}_{\mu\nu}$ the piece in brackets, corresponding to the electromagnetic fields as measured by a comoving (inertial) observer, and $F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}$ the Faraday tensor in a curved space-time, the following relations hold for every fixed pair of spatial indices $\{i,j\}$:
\begin{equation}
F^{ij}=\dfrac{1}{a^{4}}F_{ij}=\dfrac{1}{a^{4}} \big( \partial_{i}{A}_{j}-\partial_{j}{A}_{i} \big)=\dfrac{1}{a^{2}}\Tilde{F}_{ij}\,,
\end{equation}
where ${A}_{\mu}= ({\Phi},-{\mathbf{A}})$ is the $U(1)$ gauge field in the curved space-time (a minus sign appears in the above relations for the other non-zero components $F^{j 0}$, regardless of the metric signature). To clarify, we stress that $\Tilde{F}_{\mu\nu}$ is not the Minkowski Faraday tensor: in general, the electric and magnetic fields measured by inertial (here comoving) observers in a conformally flat space-time do not coincide with their values in Minkowski space-time, as these frames are not equivalent. Here, a comment is in order. In Eq.~(\ref{eq32}), the above $\lambda$ can play the role of a coupling constant with photons. Even if these couplings played a dominant role in the primordial Universe, they could be very small for the cosmological background at the present epoch. In the case of a Riemann tensor coupling of the form $\lambda R^{\mu\nu\rho\sigma}F_{\mu\nu}F_{\rho\sigma}$, for example, it has been shown \cite{Prasanna:2003ix} that $\lambda < 10^{22}$ eV$^{-2}$ today. In this section, we take the models $f_{2}(T,B)=\lambda B^{n}$ and $f_{2}(T,B)=-\lambda TB^{n}$, with the only constraint $n\not= 0$. As in the previous section, the dimensions of $\lambda$ will depend on the specific power $n$. \\
The energy-momentum tensor corresponding to the action (\ref{eq32}) is
\begin{equation}
T^{\mu\nu} = (1+f_{2})\big[ - F^{\mu\sigma}F\indices{^\nu _\sigma}+\dfrac{1}{4}F^{\alpha\beta}F_{\alpha\beta}g^{\mu\nu} \big],
\end{equation}
where, again, we are using natural units. \\
Varying the action (\ref{eq32}) w.r.t. the conformal potential vector $A_{\mu}$, one gets the field equations
\begin{equation}\label{eq35}
\partial_{\mu}\Big[ e \Big( F^{\mu\nu} + f_{2}(T,B)F^{\mu\nu} \Big)\Big] = 0
\end{equation}
where we keep the first term coming from classical electromagnetism, differently from \cite{Bertolami:2022hjk}; classical electromagnetism is recovered for $f_{2}=0$. In the FRW background (\ref{eq14}), the electromagnetic potential vector $A_{j}$ satisfies, in the conformal time, the following differential equations:
\begin{equation}\label{eq36}
\Big[ A_{j}''(\eta,\mathbf{x}) + \dfrac{f_{2}'}{1+f_{2}} A_{j}'(\eta,\mathbf{x}) -\Delta A_{j}(\eta,\mathbf{x}) \Big]\delta^{j\nu} = 0
\end{equation}
when $\nu=j$, where $j=1,2,3$ and we defined $\Delta:=\delta^{ki}\partial_{k}\partial_{i}$ (where $k,i=1,2,3$); and
\begin{equation}
\Big[\delta^{ij} \Big(1+f_{2}(T,B)\Big) \partial_{i}\big( \partial_{0}A_{j} \big) \Big] \delta^{\nu}_{0} = 0
\end{equation}
valid when $\nu=0$. Here, we used the radiation gauge, i.e. $A_{0}(t,\mathbf{x})=0$ and $\partial_{j}A^{j}(t,\mathbf{x})=0$ (compatible with zero net charge). In particular, the above equation is nothing but the null divergence condition $\nabla \cdot \mathbf{E} = 0$, as we expect. This condition can also be obtained directly by differentiating the gauge-fixing condition. Notice that this does not imply a null electric field (see later). Using the relations
\begin{equation}
A_{i}'=a^{2}E_{i}\;, \; \; \; \; B_{i}=-a^{-2}\epsilon_{ijk}\partial_{j}A_{k}
\end{equation}
where $\epsilon$ is the totally anti-symmetric Levi-Civita symbol (implicit summation over repeated indices), then Eq.~(\ref{eq36}) can be written in function of the electric and magnetic fields as
\begin{equation}\label{eq37}
\partial_{0}\Big[ a^2 E_{k} \Big(1 + f_{2}(T,B)\Big) \Big] + a^{2}\Big(1+f_{2}(T,B)\Big)\partial_{j}\big(\epsilon_{jki}B_{i} \big) = 0.
\end{equation}
Similarly, the dual tensor $F^{*\mu\nu}=\frac{1}{2}F_{\alpha\beta}\epsilon^{\alpha\beta\mu\nu}$ obeys \footnote{To obtain the complete set of electromagnetic equations, the Bianchi identities must be considered.}
\begin{equation}
\nabla_{\mu} \Big( (-g)^{-\frac{1}{2}} F^{*\mu\nu} \Big) = 0 \; \; ,
\end{equation}
which can be recast as
\begin{equation}\label{eq38}
\partial_{0}\Big[ a^2 B_{k} \Big] - a^{2}\partial_{j}\big(\epsilon_{jki}E_{i} \big) = 0.
\end{equation}
together with the condition $\nabla \cdot \mathbf{B}=0$. Eqs.~(\ref{eq37}) and (\ref{eq38}) are the Maxwell equations in the curved space-time (\ref{eq14}) derived from the non-minimal coupling action (\ref{eq32}). One easily verifies that when $f_{2}(T,B)=0$, the usual Maxwell equations in GR are recovered. Differentiating Eq.~(\ref{eq37}) w.r.t. the spatial coordinates, multiplying by $\epsilon^{\lambda k l}$ and using Eq.~(\ref{eq38}), one finds a purely magnetic equation, i.e.
\begin{equation}\label{eq39}
\partial_{0}^{2}\Big[a^{2} \Big(1+ f_{2}(T,B)\Big) B_{k} \Big] - a^{2}\Big(1+ f_{2}(T,B)\Big)\Delta B_{k} = 0
\end{equation}
where we used the relation
\begin{equation*}
\partial_{\lambda}\partial_{j}\Big(\epsilon\indices{_\lambda _k _l} \epsilon\indices{_j _k _i} B_{i} \Big) = \delta^{ij}\partial_{i}\partial_{j} B_{l}\; \;,
\end{equation*}
and we assumed that
\begin{equation*}
\dfrac{\partial }{\partial \eta}\Big(a^2 B_{k}f_{2}(\ln{f_{2}})' \Big) \simeq 0
\end{equation*}
since $(\ln{f_{2}})'\equiv \partial_{0}(\ln{f_{2}})\simeq 0$.
In this way, we have eliminated explicitly the electric field, but it is still present as derivative of $\mathbf{B}$ in the first term of Eq.~(\ref{eq39}).
Since $\mathbf{B}(\eta,\mathbf{x})=a^{-2}\nabla \times \mathbf{A}$ and defining the spatial Fourier transform of the magnetic field vector
\begin{equation}
\mathbf{B}(\eta,\mathbf{k})=\int \dfrac{d^{3}\mathbf{x}}{2\pi} e^{i \mathbf{k}\cdot \mathbf{x}} \mathbf{B}(\eta,\mathbf{x}),
\end{equation}
then Eq.~(\ref{eq39}) becomes
\begin{equation}\label{eq41}
\partial_{0}^{2}\Big[\Big(1+f_{2}(T,B)\Big) \mathbf{F}_{k} \Big] + k^{2}\Big(1+f_{2}(T,B)\Big) \mathbf{F}_{k} = 0,
\end{equation}
i.e. the second order differential equation
\begin{equation}\label{eq42}
\begin{array}{cc}
\mathbf{F}_{k}'' \Big(1 + f_{2}(T,B)\Big) + 2 f_{2}'(T,B) \mathbf{F}_{k}' \\ + \Big[f_{2}''(T,B) + k^{2} f_{2}(T,B) + k^{2} \Big] \mathbf{F}_{k} = 0
\end{array}
\end{equation}
where we defined
\begin{equation}\label{eq43}
\mathbf{F_{k}}(\eta):=a^{2}(\eta)\mathbf{B}(\eta,\mathbf{k}) \end{equation}
with $k=|\mathbf{k}|$ (here k is not an index). When $f_{2}=0$, then from Eqs.~(\ref{eq38}) and (\ref{eq39}) it follows that
\begin{equation*}
|\mathbf{B}| \propto \dfrac{1}{a^{2}(\eta)}\;\; , \; \; |\mathbf{E}| \propto \dfrac{1}{a^{2}(\eta)}
\end{equation*}
which is the adiabatic scaling\footnote{It is not correct to derive exact solutions for the fields directly from Eq.~(\ref{eq42}), since it is not fully equivalent to Eqs.~(\ref{eq38}) and (\ref{eq39}). Indeed, to arrive at Eq.~(\ref{eq42}), we differentiated once more. By solving it, one would get a physically equivalent but mathematically different solution, namely $|\mathbf{B}| \propto 1/a^{3}$. } in the standard GR setting. Since during inflation $a(\eta) \rightarrow \infty$, this would mean a strong and fast decay, preventing any form of amplification.
Notice that the module $F_{k}:= |\mathbf{F}_{k}|=\sqrt{\mathbf{F}_{k}^{*}\mathbf{F}_{k}}$ is a good approximation of the magnetic flux through the expanding Universe. \\
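The passage from Eq.~(\ref{eq41}) to Eq.~(\ref{eq42}) is a product-rule expansion; a quick symbolic check (illustrative only):

```python
import sympy as sp

eta, k = sp.symbols('eta k')
f2 = sp.Function('f2')(eta)   # non-minimal coupling along the background
Fk = sp.Function('F')(eta)    # F_k = a^2 B(eta, k), Eq. (43)

lhs = sp.diff((1 + f2)*Fk, eta, 2) + k**2*(1 + f2)*Fk   # Eq. (41)
rhs = ((1 + f2)*sp.diff(Fk, eta, 2)
       + 2*sp.diff(f2, eta)*sp.diff(Fk, eta)
       + (sp.diff(f2, eta, 2) + k**2*f2 + k**2)*Fk)     # Eq. (42)

assert sp.simplify(lhs - rhs) == 0
```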
A comment here is in order. If the electric field is assumed to be zero, i.e. $F^{0j}=0$, then just one of the two Maxwell equations is required, namely Eq.~(\ref{eq38}). Indeed, imposing $\mathbf{E}=0$, differentiating w.r.t. the conformal time $\eta$ and Fourier transforming, this equation gives
\begin{equation}
\mathbf{F}_{k}'' \Big(1 + f_{2}(T,B)\Big) + 2 f_{2}'(T,B) \mathbf{F}_{k}' + f_{2}''(T,B)\mathbf{F}_{k} = 0
\end{equation}
which differs from Eq.~(\ref{eq42}) only for the $k$-terms. Effectively, these terms are null as evident solving the other Maxwell equation, Eq.~(\ref{eq37}), which gives the vector equation
\begin{equation}
a^{2} \Big(1 + f_{2}(T,B)\Big) \nabla \times \mathbf{B}(\eta,\mathbf{x}) = 0
\end{equation}
which, after a Fourier transform (and multiplication by $i \mathbf{k}$), becomes $(1+f_{2})k^{2}\mathbf{F}_{k} = 0$. Since $F_{k}=0$ would be a trivial solution, this means that imposing a zero electric field automatically leads to the super-horizon approximation $k \eta \leq 1$ in the Maxwell equations. However, the converse is not generally true. Indeed, imposing this approximation on the Maxwell Eqs.~(\ref{eq37}) and (\ref{eq38}) (after Fourier transforming) gives $E \propto 1/a^{2}$ (as for $\mathbf{B}$) during inflation (where we neglect the non-minimal coupling function $f_{2}$), and $E \propto 1/(a^{2}f_{2})$ during the reheating epoch. Therefore, while during inflation one could state that $E\approx 0$ (the scale factor grows exponentially), the trend in the reheating era depends on the specific function $f_{2}$, and so $E\not= 0$ in general. In any case, assuming a zero electric field from the beginning is not a natural choice, since the context is that of a quickly varying magnetic flux. \\
In order to evaluate the magnetic field during the de Sitter and reheating phases of the Universe, we concern ourselves with the evolution of magnetic field fluctuations whose wavelengths are well outside the horizon, i.e. we assume $k \eta \ll 1$. Only this condition can ensure large-scale magnetic fields. Furthermore, we will assume a strong non-minimal coupling only in the reheating era. Generally, in order to have amplification, if the magnetic flux scales as $a^{\gamma}$, then one needs $\gamma_{rh}> - \gamma_{dS}$, where $\gamma_{dS}$ and $\gamma_{rh}$ denote the powers during the inflationary and reheating epochs, respectively.
\subsection{The inflationary phase}
As a first case, we consider the purely boundary term model $f_{2}(T,B)=\lambda B^{n}$. During this phase, we assume that the non-minimal coupling term is zero (or negligible). Therefore, Eq.~(\ref{eq42}) gives the harmonic oscillator differential equation
\begin{equation}\label{eq44}
\mathbf{F}_{k}''(\eta) + k^2 \mathbf{F}_{k}(\eta) = 0
\end{equation}
whose solution, in modulus, is
\begin{equation}
F_{k}=\sin(k\eta)\sim \dfrac{1}{a}
\end{equation}
where we used the super-horizon approximation $k \eta \ll 1$. Here, we assumed $\alpha=1$ (as well as $w=1$), as found in the previous section for this regime. Therefore, no divergences from GR appear in this epoch. \\
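That $\sin(k\eta)$ solves Eq.~(\ref{eq44}) and reduces to $k\eta\propto 1/a$ on super-horizon scales is immediate to check; an illustrative sketch:

```python
import sympy as sp

eta, k = sp.symbols('eta k', positive=True)
Fk = sp.sin(k*eta)

# sin(k*eta) solves F'' + k^2 F = 0, Eq. (44)
assert sp.simplify(sp.diff(Fk, eta, 2) + k**2*Fk) == 0
# super-horizon limit: sin(k*eta) ~ k*eta, and eta ~ 1/a when alpha = 1
assert sp.limit(Fk/(k*eta), eta, 0) == 1
```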
For the non-minimal model $f_{2}(T,B)=-\lambda T B^{n}$, making the above assumptions, one arrives at the same Eq.~(\ref{eq44}), and then the same solution $F_{k}\sim \eta$. Hence, as long as $n\leq n_{0}$, one simply has $\gamma_{dS}=-1$. Otherwise, for the alternative solution $\alpha<1$, the condition becomes $\gamma_{dS}<-1$, thus requiring a stronger constraint on the $\gamma$ power of the reheating era in order to have possible amplifications. For example, when $n=3$, then $\alpha \simeq 1/2$. In this case, to have an amplification effect, one should expect a power-law solution like $a^{\gamma}$ with $\gamma > 2$ in the reheating epoch.
\subsection{The reheating phase}
During this phase, we assume the non-minimal coupling to be dominant. Therefore, Eq.~(\ref{eq42}) becomes
\begin{equation}
\begin{array}{c}
\mathbf{F}_{k}'' + 2nB^{-1}B'\mathbf{F}_{k}' \\
+ \Big[n(n-1)(B')^{2}B^{-2}+nB^{-1}B''+k^2+\dfrac{k^2}{\lambda B^{n}} \Big] \mathbf{F}_{k} = 0
\end{array}
\end{equation}
where $B$ is the boundary term of Eq.~(\ref{eq18}). Writing $\mathbf{F}_{k}$ as a function of the scale factor $a$, hence using the relations
\begin{equation}
\begin{array}{ll}
\mathbf{F}_{k}'(\eta) = \dfrac{d\mathbf{F}_{k}}{da} a' \; ,\\
\mathbf{F}_{k}''(\eta) = {a'}^{2}\dfrac{d^{2}\mathbf{F}_{k}}{da^{2}} + \dfrac{d\mathbf{F}_{k}}{da}\Big(\dfrac{\alpha-1}{\alpha} \Big) \dfrac{{a'}^{2}}{a}
\end{array}
\end{equation}
one gets
\begin{equation}
\mathbf{F}_{k}''(a) + \dfrac{\mathcal{C}(n)}{a} \mathbf{F}_{k}'(a) + \dfrac{\mathcal{D}(n)}{a^{2}} \mathbf{F}_{k}(a) = 0
\end{equation}
where the prime denotes derivative w.r.t. the new variable $a$ and we define
\begin{equation}
\mathcal{C}(n):=\dfrac{\alpha-1}{\alpha}-4n\Big(\dfrac{\alpha+1}{\alpha} \Big)
\end{equation}
and
\begin{equation}
\mathcal{D}(n):=\dfrac{4n(n-1)(\alpha+1)^{2}}{\alpha^{2}} + \dfrac{2n(3+2\alpha)(\alpha+1)}{\alpha^{2}}.
\end{equation}
In the above functions one should substitute $\alpha_{-}$ from Eq.~(\ref{eq29}). With the ansatz $F_{k}(a)=a^{\gamma}$, the following solution is found
\begin{equation}\label{eq48}
F_{k}(a)=c_{1} a^{\frac{1}{2}\big(-\sqrt{\mathcal{G}(n)}-\mathcal{C}(n)+1\big)}+c_{2}a^{\frac{1}{2}\big(\sqrt{\mathcal{G}(n)}-\mathcal{C}(n)+1\big)}
\end{equation}
where $c_{1,2}$ are constants and
\begin{equation}\label{eq49}
\mathcal{G}(n):=\mathcal{C}^{2}(n)-2\mathcal{C}(n)-4 \mathcal{D}(n)+1 .
\end{equation}
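The exponents in Eq.~(\ref{eq48}) are the roots of the indicial equation $\gamma(\gamma-1)+\mathcal{C}\gamma+\mathcal{D}=0$ of the Euler-type equation $F''+(\mathcal{C}/a)F'+(\mathcal{D}/a^{2})F=0$, where the $1/a$ accompanying the first-derivative term is needed for the power-law ansatz to close; a symbolic check (illustrative only):

```python
import sympy as sp

a, gamma, C, D = sp.symbols('a gamma C D', positive=True)
F = a**gamma                                    # power-law ansatz

residual = sp.diff(F, a, 2) + (C/a)*sp.diff(F, a) + (D/a**2)*F
indicial = sp.expand(sp.simplify(residual/a**(gamma - 2)))  # gamma(gamma-1)+C*gamma+D

G = C**2 - 2*C - 4*D + 1                        # Eq. (49)
for root in (sp.Rational(1, 2)*(1 - C - sp.sqrt(G)),
             sp.Rational(1, 2)*(1 - C + sp.sqrt(G))):
    assert sp.simplify(indicial.subs(gamma, root)) == 0  # exponents of Eq. (48)
```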
Solution~(\ref{eq48}) is valid as long as $\mathcal{G}(n)>0$. When $\mathcal{G}(n)=0$, i.e. $\mathcal{D}=(\mathcal{C}^{2}-2\mathcal{C}+1)/4$, the solution is
\begin{equation}\label{eq50}
F_{k}(a)=c_{1} a^{\frac{1-\mathcal{C}(n)}{2}}+c_{2} \big(\mathcal{C}(n)-1\big)a^{\frac{1-\mathcal{C}(n)}{2}}\log(a).
\end{equation}
For our background solution $\alpha_{-}$, the condition $\mathcal{G}(n)=0$ is reached only in the limiting case $n \rightarrow 2$, hence we neglect it in the following and consider just the solution (\ref{eq48}). Therefore, as shown in Fig.~\ref{fig:1} (b), we always have $\gamma>1$ throughout the range of existence of $\alpha_{-}$. Since this exponent exceeds minus the inflationary one ($\gamma_{dS}=-1$), a magnetic field amplification is always possible ($\gamma_{rh}>1$). Also, notice that any adiabatic decrease is avoided. \\
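The claim $\gamma>1$ over the whole existence range of $\alpha_{-}$ can be confirmed numerically from Eqs.~(\ref{eq29}) and (\ref{eq48})--(\ref{eq49}); an illustrative sketch:

```python
import numpy as np

def gamma_rh(n):
    """Growing exponent of Eq. (48), with alpha = alpha_- from Eq. (29)."""
    al = (1 - 8*n - 3*np.sqrt(16*n - 7)) / (8*(n - 2))
    C = (al - 1)/al - 4*n*(al + 1)/al                    # C(n)
    D = (4*n*(n - 1)*(al + 1)**2/al**2
         + 2*n*(3 + 2*al)*(al + 1)/al**2)                # D(n)
    G = C**2 - 2*C - 4*D + 1                             # Eq. (49)
    assert G > 0                                         # G = 0 only as n -> 2
    return 0.5*(np.sqrt(G) - C + 1)

assert abs(gamma_rh(1.0) - 3.5) < 1e-12  # n = 1: alpha_- = 2, gamma = 3.5
for n in np.linspace(7/16 + 1e-3, 1.95, 300):
    assert gamma_rh(n) > 1               # amplification: gamma_rh > -gamma_dS = 1
```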
For the non-minimal model $f_{2}(T,B)=-\lambda T B^{n}$, instead, assuming as before a dominant non-minimal gravity-photon coupling, and rewriting in the variable $a$, Eq.~(\ref{eq42}) becomes
\begin{equation}
\mathbf{F}_{k}''(a) + \dfrac{\mathcal{Q}(n)}{a} \mathbf{F}_{k}'(a) + \dfrac{\mathcal{S}(n)}{a^{2}} \mathbf{F}_{k}(a) = 0
\end{equation}
where the prime now denotes the derivative w.r.t. the scale factor $a$, and we define
\begin{equation}
\mathcal{Q}(n):= - \dfrac{\Big(5+3\alpha+4n(1+\alpha) \Big)}{\alpha}
\end{equation}
and
\begin{equation}
\mathcal{S}(n):=\dfrac{2(1+\alpha)(1+n)}{\alpha^{2}}\Big(3+2n+\alpha(1+n) \Big),
\end{equation}
where $\alpha$ is given by $\alpha_{2}$ in Table~\ref{tab1}. The solution of this differential equation is analogous to the previous case, i.e.
\begin{equation}\label{eq52}
F_{k}(a) \sim a^{\frac{1}{2}\big(\sqrt{\mathcal{H}(n)}-\mathcal{Q}(n)+1\big)}
\end{equation}
where, in a completely analogous way to Eq.~(\ref{eq49}), we define
\begin{equation}
\mathcal{H}(n):=\mathcal{Q}^{2}(n)-2\mathcal{Q}(n)-4 \mathcal{S}(n)+1 .
\end{equation}
Denoting by $\gamma$ the exponent
of Eq.~(\ref{eq52}), we list it in the last column of Table~\ref{tab1}. In the special case $n=0$, the (only) solution $\alpha=2$ leads to $\gamma=3.5$, which is high enough to allow amplification w.r.t. the inflationary period. Interestingly, as the power $n$ increases, this value increases in turn, similarly to what was found in \cite{Lambiase:2008zz} for a non-minimal coupling involving the Riemann tensor and the photon. At the very least, we have $\gamma>3$, so amplification is possible also for more \textit{exotic} (meaning far from GR) inflationary solutions, like $\alpha=1/2$ or even $\alpha=1/3$. Finally, it is worth noticing that the coupling constant $\lambda$ plays no role in this discussion, since it drops out thanks to the super-horizon approximation.
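As for the previous model, the exponent in Eq.~(\ref{eq52}) is the larger root of the indicial equation $\gamma(\gamma-1)+\mathcal{Q}(n)\gamma+\mathcal{S}(n)=0$, which can be cross-checked numerically (the pair $\alpha=2$, $n=1$ below is purely illustrative; the physical values of $\alpha$ are those listed in Table~\ref{tab1}):

```python
import math

def gamma_model2(alpha, n):
    """Growth exponent of Eq. (52) for f2 = -lambda*T*B^n (larger Euler root)."""
    Q = -(5 + 3*alpha + 4*n*(1 + alpha))/alpha
    S = 2*(1 + alpha)*(1 + n)*(3 + 2*n + alpha*(1 + n))/alpha**2
    H = Q**2 - 2*Q - 4*S + 1
    return 0.5*(math.sqrt(H) - Q + 1)

# the exponent solves the indicial equation g*(g-1) + Q*g + S = 0
alpha, n = 2.0, 1.0                        # illustrative values only
g = gamma_model2(alpha, n)
Q = -(5 + 3*alpha + 4*n*(1 + alpha))/alpha
S = 2*(1 + alpha)*(1 + n)*(3 + 2*n + alpha*(1 + n))/alpha**2
assert abs(g*(g - 1) + Q*g + S) < 1e-9
```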
\section{Amplification during inflation}\label{sec5}
In the previous sections, we assumed that the main amplification of the seed PMFs was achieved during the \textit{reheating} epoch, neglecting any gravity-photon non-minimal coupling during inflation. In this section, we generalize and extend the results of \cite{Bertolami:2022hjk} in order to estimate the amplification of the magnetic field when the non-minimal coupling is instead turned on during \textit{inflation}. Indeed, neglecting the classical electromagnetic term in Eq.~(\ref{eq35}), and assuming a negligible electric field, the resulting Eq.~(\ref{eq37}) ensures that the magnetic field intensity scales as
\begin{equation}\label{eq54}
|\mathbf{B}(\eta, \mathbf{x})| \propto \dfrac{1}{a^{2}f_{2}(T,B)}.
\end{equation}
Notice that, to derive the above relation, one can alternatively use a different definition of $\mathbf{F}_{k}$, instead of Eq.~(\ref{eq43}), to break the adiabatic decay, namely \cite{Bertolami:2022hjk}
\begin{equation}
\mathbf{F_{k}}(\eta):=a^{2} f_{2}(T,B)(\eta)\mathbf{B}(\eta,\mathbf{k})
\end{equation}
whose Fourier transform gives
\begin{equation}
\mathbf{B}(\eta,\mathbf{x})= \dfrac{1}{a^{2} f_{2}(T,B)} \int \mathbf{F}_{k}(\eta)e^{i\mathbf{k}\cdot \mathbf{x}} d\mathbf{k}.
\end{equation}
Thanks to the super-horizon approximation, the integral is bounded, thus giving the relation (\ref{eq54}). Notice that, in the GR limit, $f_{2}=1$, one would have $\mathbf{B} \propto 1/a^{2}$, i.e. an adiabatic decay. \\
The ratio of the magnetic field at the end of the inflationary period to its value at the beginning is
\begin{equation}
\dfrac{\mathbf{B}_{end}}{\mathbf{B}_{in}} \simeq \Big(\dfrac{a_{in}}{a_{end}} \Big)^{2} \dfrac{f_{2}^{in}(T,B)}{f_{2}^{end}(T,B)}
\end{equation}
where $f_{2}^{in}(T,B)$ is the non-minimal function at the beginning of inflation, while $f_{2}^{end}(T,B)$ is its evaluation at the end, which we can approximate with the beginning of the reheating epoch. Since $a_{end} \sim e^{60} a_{in}$ and considering the model $f_{2}(T,B)=\lambda B^{n}$, we have
\begin{equation}
\dfrac{\mathbf{B}_{end}}{\mathbf{B}_{in}} \simeq 10^{-53} \Big( \dfrac{H_{in}}{H_{end}}\Big)^{2n},
\end{equation}
where we used the relations
\begin{equation}
T = - 6H^{2}, \; \; \; B \simeq - 18 H^{2}
\end{equation}
($H=\Dot{a}/a$) valid in the well-known slow-roll approximation, i.e. $\Dot{H} \simeq 0$, which ensures that the inflaton evolution is sufficiently damped to allow for an accelerated expansion. \\
At the end of inflation, the Hubble constant is \cite{Bertolami:2022hjk}
\begin{equation}
H_{end} \simeq \dfrac{\pi}{\sqrt{90}}\dfrac{T^{2}_{RH}}{M_{pl}}
\end{equation}
where $T_{RH}$ is the reheating temperature. During inflation, instead, it is usually assumed $H_{in} \simeq 10^{-6} M_{pl} $.
Finally, since $10^{-8} M_{pl} \lesssim T_{RH}\lesssim 10^{-4} M_{pl}$ \cite{Bertolami:1999}, we have that
\begin{equation}
10^{-53+4n} \lesssim \dfrac{\mathbf{B}_{end}}{\mathbf{B}_{in}} \lesssim 10^{-53+20n}.
\end{equation}
Let us consider now the parameter $r = \rho_{B}/ \rho_{\gamma}$ described in Sec. \ref{sec1}. From this definition, it is easy to deduce the relation with the magnetic field intensity, namely
\begin{equation}\label{eq62}
\dfrac{\mathbf{B}_{end}}{\mathbf{B}_{in}} = \sqrt{\dfrac{r_{RH}}{r_{IN}}}.
\end{equation}
Here, we considered $\rho_{\gamma}$ constant during inflation, and we denoted by $r_{RH,IN}$ the value of the ratio in the reheating and inflationary epochs, respectively.
Although we know with reasonable confidence that the present value is $r\simeq1$, and that $r\approx10^{-34}$ or $r\approx10^{-8}$ in the pre-galactic epoch, the value of $r$ in other periods of the Universe's evolution is unknown, so some assumptions have to be made. Following \cite{Turner:1987bw}, it is possible to consider that the pre-galactic value, $r_{pg}$, is related to $r_{RH}$ by $r_{pg} \approx 10^{-14} r_{RH}$. From Eq.~(\ref{eq62}), and considering $r_{IN} = 10^{\chi}$ with $\chi$ an arbitrary power, the following constraints are deduced:
\begin{itemize}
\item if $r_{pg}=10^{-34}\;\;$ $\longrightarrow$ $\; \; \chi \geq 58 - 40 n$
\item if $r_{pg}=10^{-8}\;\;\;\,$ $ \longrightarrow$ $\; \; \chi \geq 84 - 40 n$.
\end{itemize}
Recall that in the first case a (galactic) dynamo mechanism is invoked, while in the second case the amplification is achieved during the collapse of the protogalactic cloud. The above inequalities are shown in Fig.~\ref{fig:2}. Notice that, at fixed power $n$, wide intervals are allowed for $\chi$, and they become less restrictive as $n$ increases. In particular, for $n=3$ one recovers the results of \cite{Bertolami:2022hjk}. Obtaining similar results in metric and teleparallel theories is not surprising: thanks to the slow-roll approximation, the boundary term $B$ plays the same role as the Ricci scalar $R$, thus gaining its own importance even in the absence of the torsion scalar $T$. Moreover, for a non-minimal coupling function of the form $f_{2}(T,B)=-\lambda T B^{n}$, similar calculations lead to the same inequalities with the single change $2n \mapsto 2n+2$, yielding even less tight constraints.
\section{Discussion and Conclusions}\label{sec6}
Today, it is well known that stars and compact objects (like neutron stars) are capable of generating very strong local magnetic fields (up to $10^{14}$ G). The increasingly evident presence of galactic and especially intergalactic magnetic fields, however, still has no explanation and remains one of the long-standing puzzles of astrophysics and cosmology. While in the case of aged galaxies one could explain them by invoking a dynamo \cite{Widrow_2002} or a compression mechanism \cite{Turner:1987bw}, the presence of fields also in protogalaxies and in the intergalactic medium suggests a cosmological rather than local origin. A recent observational analysis \cite{DiGennaro:2020zkz} found high radio luminosities in high-redshift clusters, suggesting that magnetic field amplification happens during the first phases of cluster formation. Indeed, a direct link between PMFs (as constrained by the CMB anisotropy power spectra) and present-day cosmic magnetism was also confirmed by simulations \cite{Vazza:2020phq}. This state of affairs suggests some as-yet-unknown physical process for generating such large-scale fields. One possibility is that they are relics from the early Universe, subsequently amplified in a pregalactic era. Inflation remains the only epoch capable of producing the super-horizon correlations needed to generate such large-scale fields. These long-wavelength modes, in particular, may have been generated by quantum fluctuations which grew during the inflationary and reheating eras, in a way similar to that which led to density fluctuations and thus to the large-scale structure of the Universe. Their effect on gravitational waves and on the CMB has been studied in \cite{GW-2021al} and \cite{Kunze:2013kza}, respectively. Standard GR predicts an adiabatic decay for this primordial field: since the Universe is believed to have been a good conductor for much of its post-inflationary history, any cosmological magnetic field, from the end of inflation onwards, preserves the flux, i.e. $a^2 \textbf{B} \sim const$, and then $\textbf{B} \sim 1/a^2$ (adiabatic decay). The same fate occurs during inflation, as is evident from solving Maxwell's equations. Since $a$ grows exponentially during inflation, GR cannot explain the survival of these fields. Hence the need to move towards extended gravity theories.
In this paper, we studied the amplification of PMFs in the context of extended teleparallel theories of gravity, i.e. $f(T,B)$ gravity, which has recently proved extremely useful in addressing several cosmological issues.
In the first part, we computed and solved the exact cosmological equations in a spatially flat FRW metric, distinguishing between the inflationary and reheating eras. We used two different models, namely $f(T,B)=-T+\lambda B^{n}$ and $f(T,B)=-\lambda T B^{n}$. Here, we found important deviations from GR, both in the inflationary and in the reheating era. In the inflationary phase, for the first model and in the strong field regime $\lambda/M_{pl}^{2(1-n)} \gg 1$, we showed that the time power $\alpha$ of the scale factor $a(\eta)=1/(-c \eta)^{\alpha}$ is such that $\alpha>1$ $\forall n>1$, implying a faster inflation. For the second model, instead, a distinction between the various regimes of the coupling constant $\lambda$ is not necessary, but the solution for a generic equation-of-state parameter $w$ is very involved. For this model, we found $\alpha \geq 1 $ $\forall n >0$. In the reheating phase, a solution for $\alpha$ exists only in the range $7/16 \leq n<2$ (see Fig.~\ref{fig:1}(a)) for the first model, while an analytical solution is not possible for the second model (see Table~\ref{tab1}).
In the second part, we adopted a non-minimal gravity-photon coupling in order to generate primordial magnetic fields with a non-adiabatic behaviour, exploiting the breaking of conformal symmetry (which naturally arises from such non-minimal couplings) and the results of the first part. These couplings are well motivated since, according to QED on curved space-time, one-loop vacuum-polarization effects can lead to non-minimal gravitational couplings between the curvature and the electromagnetic field \cite{Drummond:1979pp}. The electromagnetic sector $F_{\mu \nu}F^{\mu \nu}$ was coupled to the scalar $R$ and tensor $R^{\mu \nu \rho \sigma}$ curvatures in \cite{Turner:1987bw,Lambiase:2008zz,deAndrade:2013fga}, to curvature power-law models $R^{n}$ in \cite{Garretson:1992vt,Mazzitelli:1995mp,Lambiase:2008zz,Bertolami:2022hjk}, and to torsion power-law models $T^{n}$ in \cite{Bamba:2013rra}. It was proposed in \cite{Pavlovic:2018idi} that the signatures of such non-minimal couplings could in principle be observed (or constrained) by investigating the magnetic fields around the event horizons of black holes, and that the same effect could be exploited to constrain the size of primordial black holes.
After obtaining the corresponding Maxwell equations, we studied the evolution of the magnetic field $\mathbf{B}$ in the two epochs. We assumed a negligible non-minimal coupling during inflation, so that an amplification effect is realized only during the reheating era, where, for the first $f(T,B)$ model (now coupled to the electromagnetic sector), we found that amplification is always possible ($\gamma>1$), as shown in Fig.~\ref{fig:1}(b), and any adiabatic decrease is avoided. The amplification effect is even more evident in the second model, especially for $n>1/2$. Interestingly, we found that imposing a zero electric field automatically leads to the super-horizon approximation, $k \eta \leq 1$, while the converse is guaranteed (approximately) only during the inflationary phase.
We finally used the results of \cite{Bertolami:2022hjk} to estimate the amplification of the magnetic field when the non-minimal coupling is active during \textit{inflation} rather than during the \textit{reheating} era. In order to explain the present value of the ratio of the magnetic energy density to the cosmic microwave energy density, $r=\rho_{B}/ \rho_{\gamma} \simeq 1$, we found that during inflation it should have been $r_{IN}=10^{\chi}$, with $\chi \geq 58 - 40 n$ if a (galactic) dynamo mechanism is invoked, and $\chi \geq 84 - 40 n$ if the amplification is achieved during the collapse of the protogalactic cloud. In this context, we found that, thanks to the slow-roll approximation, the boundary term $B$ plays the same role as the Ricci scalar $R$, thus gaining its own importance even in the absence of the torsion scalar $T$.
Considering an electromagnetic tensor in a space-time with torsion is a very delicate task. Since the connection coefficients are no longer symmetric, the standard definition of $F_{\mu \nu}$ is no longer compatible with the $U(1)$ gauge invariance of QED, thus requiring a change in the minimal prescription $\partial_{\mu} \rightarrow \nabla_{\mu}$ (at the action level) used to pass from flat to curved space-times (see \cite{Fresneda:2014kua} for a discussion and references therein). This aspect is not often emphasized when dealing with electromagnetism in cosmological backgrounds with torsion, and neglecting it implicitly amounts to assuming that photons do not react to torsion. Although this aspect is not considered in this work, it certainly constitutes an important issue to be taken into account in a general discussion of magnetic fields in the presence of torsion. We plan to study this topic in a forthcoming paper.
\begin{acknowledgments}
The authors acknowledge the support by the Istituto Nazionale di Fisica Nucleare (INFN) {\it Iniziativa Specifica} QGSKY.
\end{acknowledgments}
\appendix
\section{The torsion scalar }
In this appendix, we report in detail the calculations needed to derive the second equality in Eq.~(\ref{eq17}), using the definition of the torsion tensor.
Let us start with the definition (\ref{eq6}) of the torsion scalar
\begin{equation}\label{eqA1}
T=\dfrac{1}{4}T\indices{^\rho ^\mu ^\nu}T\indices{_\rho _\mu _\nu} + \dfrac{1}{2}T\indices{^\rho ^\mu ^\nu}T\indices{_\nu _\mu _\rho} -T\indices{^\rho _\mu _\rho}T\indices{^\nu ^\mu _\nu} .
\end{equation}
The easiest quantity to compute is
\begin{equation*}
T\indices{^\rho _\mu _\nu}=h\indices{_a ^\rho}\left(\partial_{\mu} h\indices{^a _\nu}-\partial_{\nu} h\indices{^a _\mu}\right)
\end{equation*}
which, in the conformal metric Eq.~(\ref{eq14}), becomes
\begin{equation*}
T\indices{^\rho _\mu _\nu}= \frac{a'}{a}\Big(\delta_{0}^{\rho} \delta_{\nu}^{0} \delta_{\mu}^{0} -\delta_{0}^{\rho} \delta_{\mu}^{0} \delta_{\nu}^{0} \Big)+\frac{1}{a}\Big[\delta_{i}^{\rho} \partial_{\mu}\left(a \delta_{\nu}^{i}\right)-\delta_{i}^{\rho} \partial_{\nu}(a \delta_{\mu}^{i})\Big]\,,
\end{equation*}
that is
\begin{equation*}
T\indices{^\rho _\mu _\nu}= \mathcal{H}\left( \delta_{i}^{\rho}\delta_{\nu}^{i}\delta_{\mu}^{0} - \delta_{i}^{\rho}\delta_{\mu}^{i}\delta_{\nu}^{0} \right) a
\end{equation*}
where we used the definition of the Hubble parameter in conformal time, $\mathcal{H}= a'/a^{2}$.
The next component we need is
\[\arraycolsep=1.4pt\def\arraystretch{2.2}
\begin{array}{ccc}
T \indices{_\rho _\mu _\nu}= g_{\lambda\rho}T\indices{^\lambda _\mu _\nu} = \\
g_{0\rho}T\indices{^0 _\mu _\nu} + g_{j \rho}T\indices{^j _\mu _\nu}= - a^{3} \mathcal{H} \cdot \\
\Big[ \delta_{\rho}^{1}\Big(\delta_{\nu}^{1} \delta_{\mu}^{0}-\delta_{\mu}^{1} \delta_{\nu}^{0}\Big) + \delta_{\rho}^{2}\Big(\delta_{\nu}^{2} \delta_{\mu}^{0}-\delta_{\mu}^{2} \delta_{\nu}^{0}\Big)+ \delta_{\rho}^{3}\Big(\delta_{\nu}^{3} \delta_{\mu}^{0}-\delta_{\mu}^{3} \delta_{\nu}^{0}\Big) \Big].
\end{array}
\]
Similarly, one finds
\[\arraycolsep=1.4pt\def\arraystretch{2.2}
\begin{array}{ccc}
T \indices{^\rho ^\lambda _\nu}=g^{\lambda\mu}T\indices{^\rho _\mu _\nu} = \\
\dfrac{\mathcal{H}}{a} \Big( \delta_{0}^{\lambda}\delta_{i}^{\rho}\delta_{\nu}^{i} + \delta_{1}^{\lambda}\delta_{1}^{\rho}\delta_{\nu}^{0} + \delta_{2}^{\lambda}\delta_{2}^{\rho}\delta_{\nu}^{0} + \delta_{3}^{\lambda}\delta_{3}^{\rho}\delta_{\nu}^{0}\Big).
\end{array}
\]
Finally,
\[\arraycolsep=1.4pt\def\arraystretch{2.2}
\begin{array}{ccc}
T \indices{^\rho ^\lambda ^\sigma}=g^{\sigma\nu}T\indices{^\rho ^\lambda _\nu} = \\
\dfrac{\mathcal{H}}{a^{3}} \Big[ \delta_{0}^{\sigma}\delta_{1}^{\lambda}\delta_{1}^{\rho} + \delta_{0}^{\sigma}\delta_{2}^{\lambda}\delta_{2}^{\rho} + \delta_{0}^{\sigma}\delta_{3}^{\lambda}\delta_{3}^{\rho} - \delta_{0}^{\lambda} \Big( \delta_{1}^{\sigma}\delta_{1}^{\rho} + \delta_{2}^{\sigma}\delta_{2}^{\rho} + \delta_{3}^{\sigma}\delta_{3}^{\rho}\Big)\Big].
\end{array}
\]
We can now compute all the terms in Eq.~(\ref{eqA1}). The first term is
\[\arraycolsep=1.4pt\def\arraystretch{2.2}
\begin{array}{ccc}
T \indices{^\rho ^\mu ^\nu} T \indices{_\rho _\mu _\nu}= \\ -\mathcal{H}^{2}\Big[\delta_{0}^{\nu}\Big( \delta_{1}^{\mu} \delta_{1}^{\rho}+ \delta_{2}^{\mu} \delta_{2}^{\rho}+ \delta_{3}^{\mu} \delta_{3}^{\rho}\Big)- \delta_{0}^{\mu} \Big( \delta_{1}^{\nu} \delta_{1}^{\rho}- \delta_{2}^{\nu} \delta_{2}^{\rho}-\delta_{3}^{\nu} \delta_{3}^{\rho}\Big)\Big] \\
\Big[\delta_{\mu}^{0} \Big(\delta_{\rho}^{1} \delta_{\nu}^{1}
+\delta_{\rho}^{2}\delta_{\nu}^{2}
+\delta_{\rho}^{3} \delta_{\nu}^{3} \Big)
-\delta_{\nu}^{0}\Big(\delta_{\rho}^{1} \delta_{\mu}^{1}
+\delta_{\rho}^{2} \delta_{\mu}^{2}
+\delta_{\rho}^{3} \delta_{\mu}^{3} \Big) \Big]=6\mathcal{H}^{2}.
\end{array}
\]
Similarly, the second term reduces to
\[\arraycolsep=1.4pt\def\arraystretch{2.2}
\begin{array}{ccc}
T \indices{^\rho ^\mu ^\nu} T \indices{_\nu _\mu _\rho}= \\
-\mathcal{H}^{2}\Big[\delta_{0}^{\nu}\Big( \delta_{1}^{\mu} \delta_{1}^{\rho}+ \delta_{2}^{\mu} \delta_{2}^{\rho}+ \delta_{3}^{\mu} \delta_{3}^{\rho}\Big)- \delta_{0}^{\mu} \Big( \delta_{1}^{\nu} \delta_{1}^{\rho}- \delta_{2}^{\nu} \delta_{2}^{\rho}-\delta_{3}^{\nu} \delta_{3}^{\rho}\Big)\Big] \\
\Big[\delta_{\rho}^{0} \Big(\delta_{\rho}^{1} \delta_{\nu}^{1}
+\delta_{\rho}^{2}\delta_{\nu}^{2}
+\delta_{\rho}^{3} \delta_{\nu}^{3} \Big)
-\delta_{\rho}^{0}\Big(\delta_{\nu}^{1} \delta_{\mu}^{1}
+\delta_{\nu}^{2} \delta_{\mu}^{2}
+\delta_{\nu}^{3} \delta_{\mu}^{3} \Big) \Big]=3\mathcal{H}^{2},
\end{array}
\]
while the last term is
\[
T \indices{^\rho _\mu _\rho} T \indices{^\nu ^\mu _\nu}= \mathcal{H}^{2}\Big[\delta_{i}^{\rho} \delta_{\rho}^{i} \delta_{0}^{\mu}\Big]
\Big[\delta^{\mu}_{0} \delta_{i}^{\nu} \delta_{\nu}^{i}\Big]=9\mathcal{H}^{2}\,.
\]
Putting everything together in Eq.~(\ref{eqA1}), one arrives at
\begin{equation}
T=-6\mathcal{H}^{2}.
\end{equation} Notice that in cosmological time one has $T=-6H^{2}$; the two expressions are related through the change of variable $t \mapsto \eta$.
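Combining the three contractions with the weights of Eq.~(\ref{eqA1}) indeed reproduces this value; the arithmetic can be checked in one line:

```python
# T = (1/4)*T1 + (1/2)*T2 - T3, with the three contractions in units of H^2:
T1 = 6   # T^{rho mu nu} T_{rho mu nu}
T2 = 3   # T^{rho mu nu} T_{nu mu rho}
T3 = 9   # T^{rho}_{mu rho} T^{nu mu}_{nu}
T = 0.25*T1 + 0.5*T2 - T3
assert T == -6.0   # i.e. T = -6 H^2
```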
\bibliography{biblio2}
|
Title:
Have Pulsar Timing Arrays detected the Hot Big Bang? Gravitational Waves from Strong First Order Phase Transitions in the Early Universe |
Abstract: The origins of matter and radiation in the universe lie in a Hot Big Bang. We
present a number of well-motivated cosmologies in which the Big Bang occurs
through a strong first order phase transition -- either at the end of
inflation, after a period of kination ("Kination-Induced Big Bang"), or after a
second period of vacuum-domination in the early universe ("Supercooled Big
Bang"); we also propose a "Dark Big Bang" where only the dark matter in the
Universe is created in a first-order phase transition much after inflation. In
all of these scenarios, the resulting gravitational radiation can explain the
tentative signals reported by the NANOGrav, Parkes and European Pulsar Timing
Array experiments if the reheating temperature of the Hot Big Bang, and
correspondingly the energy scale of the false vacuum, falls in the range $T_*
\sim \rho_{{\rm vac}}^{1/4} $= MeV--100 GeV. All the same models at higher
reheating temperatures will be of interest to upcoming ground- and space-based
interferometer searches for gravitational waves at larger frequency.
| https://export.arxiv.org/pdf/2208.03330 |
\vspace*{0mm}
\clearpage
\section{Introduction}
In standard cosmology the Hot Big Bang denotes the reheating of the universe at the end of inflation. During this process a hot plasma of particles is created containing the photons, electrons and baryons of our present universe. Guth's pioneering ``old inflation'' featured a universe trapped in a false vacuum driving the exponential expansion of space~\cite{Guth:1980zm}. The decay of the false vacuum by quantum tunnelling was meant to terminate the inflationary epoch and to transform the vacuum energy into radiation. The original inflation model, hence, already featured the idea of the Hot Big Bang occurring through a first order phase transition. Unfortunately, old inflation is plagued by the infamous ``empty universe problem''~\cite{Guth:1982pn}: sufficient inflation requires a suppressed tunneling rate. As a consequence, the phase transition would be too slow to ever complete and the universe would never enter the radiation-dominated epoch.
The empty universe problem was resolved in slow-roll inflation~\cite{Linde:1981mu,Albrecht:1982wi} which identifies the Hot Big Bang with the perturbative or non-perturbative decay of the inflaton field, rather than with a first order phase transition.
Yet there exist equally successful theories of the early universe closer to Guth's old inflation, i.e.\ models in which inflation ends via a first order phase transition. A prime example is double field inflation~\cite{Adams:1990ds,Linde:1990gz}, which features an inflaton sector comprising two fields: one field direction requires tunneling to get from the false to the true vacuum. In the other direction the field rolls, thereby reducing the potential barrier in the tunneling direction. The tunneling rate switches from very slow to very fast, and the universe reheats suddenly and uniformly in a Big Bang phase transition. Another successful implementation of a tunneling model is chain inflation~\cite{Freese:2004vs,Freese:2005kt,Ashoorioon:2008pj}. The latter features a universe in a false vacuum, similar to old inflation. However, the false vacuum decays in a series of first order phase transitions instead of just one. Each individual transition completes quickly within a fraction of a Hubble time, while all transitions together can easily support sufficient e-foldings of inflation.
But the idea of a first order Big Bang phase transition is not only tied to the inflationary epoch. In models with a unified description of inflation and dark energy~\cite{Peebles:1998qn}, the universe typically runs through a period of kination in which the universe is dominated by the kinetic energy of the quintessence field. The Hot Big Bang may then occur through a first order phase transition at the end of kination. More generally, we dub as ``Kination-Induced Big Bang'' the scenario in which an epoch of kination ends in a first order phase transition that produces the matter and radiation of our Universe.
A complementary example is a strongly supercooled phase transition, often associated with the thermal breaking of a gauge symmetry~\cite{Witten:1980ez}. Such a transition can occur long after inflation has ended and the universe has been reheated. Due to the strong supercooling, the universe becomes vacuum-dominated for a second time before the phase transition converts the vacuum energy into a hot plasma. The resulting large entropy release dilutes the preexisting plasma, and (virtually) all radiation we observe today stems from the supercooled transition, which thus plays the role of the Hot Big Bang. Finally, we also propose the possibility that only the dark matter (and dark radiation) is created in a first order phase transition – a Dark Big Bang – while visible matter and radiation are produced earlier by the decay of the inflaton.
In this work we will present in detail these five different cosmological scenarios in which the Hot Big Bang is associated with a first order phase transition. The formation and collision of true vacuum bubbles during the phase transition induces a strong gravitational radiation signal. By determining the gravitational wave spectrum we will be able to directly link the Hot Big Bang to observational data.
A particularly intriguing possibility is that the Big-Bang-induced gravity waves are responsible for the tentative signal reported by the NANOGrav collaboration~\cite{NANOGrav:2020bcs}. NANOGrav recently found evidence for a stochastic common-spectrum process which affects pulsar timing residuals in its 12.5-year dataset. The signal was meanwhile confirmed by the Parkes (PPTA)~\cite{Goncharov:2021oub} and the European Pulsar Timing Array (EPTA)~\cite{Chen:2021rqp,Chalumeau:2021fpz} (see also~\cite{Antoniadis:2022pcn}). While proof of the characteristic quadrupolar Hellings-Downs correlations~\cite{Hellings:1983fr} is still outstanding, these observations may amount to the first detection of a stochastic gravitational wave background. Among the most plausible sources for such a background in the sensitivity window of pulsar timing arrays are mergers of super-massive black-hole binaries~\cite{Rajagopal:1994zj,Jaffe:2002rt,Wyithe:2002ep,Sesana:2008mz,Burke-Spolaor:2018bvk,Middleton:2020asl}, a cosmic-string network in the early universe~\cite{Vilenkin:1981bx,Vachaspati:1984gt,Damour:2004kw,Siemens:2006yp,Olmez:2010bi,Ringeval:2017eww,Ellis:2020ena,Blasi:2020mfx,Buchmuller:2020lbh} and a first order phase transition~\cite{Caprini:2010xv,Schwaller:2015tja,Kobakhidze:2017mru,Nakai:2020oit,Addazi:2020zcj,Ratzinger:2020koh,Brandenburg:2021tmp,NANOGrav:2021flc,Borah:2021ocu,DiBari:2021dri,Lewicki:2021xku,Ashoorioon:2022raz} -- of which the latter is the subject of this study. By linking the phase transition properties to the Hot Big Bang, we will be able to strongly
constrain the parameter space. Further, we will show that a Big Bang first order phase transition can perfectly fit the pulsar timing signals.
Fig.~\ref{fig:spectra2} shows our main results: the predictions of our five cosmological models matched to the data. Needless to say, a direct experimental probe of the Hot Big Bang would be of paramount importance.
The paper is organized as follows: in Sec.~\ref{sec:gravitywaves} we review the calculation of the gravitational wave spectrum from a first order phase transition. The derivation of the time, duration and strength of the phase transition (entering the spectrum) are also provided. In Sec.~\ref{sec:nanograv} we perform a fit to the pulsar timing signal with the focus on a Big Bang phase transition. In Sec.~\ref{sec:scenarios} we describe several cosmological scenarios in which the Big Bang occurs through a first order phase transition. We also determine the corresponding gravitational wave signals and show that they can potentially explain the pulsar timing data. Finally, Sec.~\ref{sec:conclusion} contains our concluding remarks.
\section{Gravity Waves from a First Order Phase Transition}\label{sec:gravitywaves}
\subsection{Gravitational Wave Spectrum}
We consider a first order phase transition in the early universe triggered by the decay of a false vacuum with energy density $\rho_{\text{vac}}$. Following the standard convention, we introduce the parameter~\cite{Kamionkowski:1993fg}
\begin{equation}\label{eq:alpha}
\alpha = \frac{\rho_{\text{vac}}}{\rho_{\text{r}}(T_n)}\,,
\end{equation}
which specifies the ratio of the vacuum energy density to the energy density of the surrounding radiation plasma characterized by its temperature $T_n$ right before the transition,
\begin{equation}
\rho_{\text{r}}(T_n) = \frac{\pi^2}{30}g_{\text{eff}}(T_n)T_n^4\,,
\end{equation}
where $g_{\text{eff}}$ denotes the effective number of relativistic species. The special case of a phase transition in vacuum (i.e.\ without any preexisting plasma) corresponds to $T_n=0$ and $\alpha\rightarrow \infty$.
During the phase transition, bubbles of true vacuum are formed at random nucleation sites which quickly grow and collide with other bubbles. In this process the universe is reheated, i.e. the vacuum energy is converted to thermal energy of the radiation plasma. We denote the temperature of the radiation bath right after the transition by $T_*$. If the transition time is short (compared to the Hubble time) we can approximate,
\begin{equation}\label{eq:rhotot}
\rho_{\text{r}}(T_*) \simeq \rho_{\text{tot}} \simeq \rho_{\text{r}}(T_n) + \rho_{\text{vac}},
\end{equation}
where $\rho_{\text{tot}}$ stands for the total energy density at the phase transition. This implies,
\begin{equation}\label{eq:Tst}
T_* \simeq\left(\frac{30}{\pi^2 g_{\text{eff}}(T_*)}\left(\rho_{\text{vac}}+\rho_{\text{r}}(T_n)\right)\right)^{1/4}=\left(\frac{\alpha+1}{\alpha}\right)^{1/4}\left(\frac{30\,\rho_{\text{vac}}}{\pi^2 g_{\text{eff}}(T_*)}\right)^{1/4}\,.
\end{equation}
In the case of a phase transition in vacuum $\alpha\rightarrow\infty$ and the factor $(\alpha+1)/\alpha$ simply becomes unity.
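As a consistency check of Eq.~(\ref{eq:Tst}), the temperature obtained by inverting the energy balance indeed satisfies $\rho_{\text{r}}(T_*)\simeq\rho_{\text{vac}}+\rho_{\text{r}}(T_n)$, and the vacuum limit is recovered for $T_n=0$. A minimal sketch with illustrative numbers (natural units, fixed $g_{\text{eff}}$):

```python
import math

def rho_r(T, g_eff):
    """Radiation energy density rho_r = (pi^2/30) g_eff T^4."""
    return math.pi**2/30*g_eff*T**4

def T_star(rho_vac, T_n, g_eff):
    """Reheating temperature from rho_r(T*) ~ rho_vac + rho_r(T_n),
    taking g_eff equal before and after the transition for simplicity."""
    return (30/(math.pi**2*g_eff)*(rho_vac + rho_r(T_n, g_eff)))**0.25

g_eff = 10.0
rho_vac, T_n = 2.0, 0.5                    # illustrative values only
Ts = T_star(rho_vac, T_n, g_eff)
# the energy balance rho_r(T*) = rho_vac + rho_r(T_n) is satisfied
assert abs(rho_r(Ts, g_eff) - (rho_vac + rho_r(T_n, g_eff))) < 1e-9

# vacuum transition (T_n = 0, alpha -> infinity): ((alpha+1)/alpha)^{1/4} -> 1
assert abs(T_star(rho_vac, 0.0, g_eff)
           - (30*rho_vac/(math.pi**2*g_eff))**0.25) < 1e-12
```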
First order phase transitions can source strong gravitational radiation~\cite{Witten:1984rs,Hogan:1986qda} which is generated by the collisions of true vacuum bubbles~\cite{Kosowsky:1992rz,Kosowsky:1992vn} as well as sound waves~\cite{Hindmarsh:2013xza,Hindmarsh:2015qta,Hindmarsh:2017gnf} and magneto-hydrodynamic turbulence in the surrounding plasma induced by the expanding bubbles~\cite{Kosowsky:2001xp,Dolgov:2002ra,Caprini:2009yp}. The relative importance of the different contributions depends on the underlying microphysics. In the following we will mostly focus on the case $\alpha\gtrsim 1$, in which the vacuum decay generates most (or all) of the radiation plasma in the universe, while the preexisting plasma is subdominant (or absent). Assuming, furthermore, that the field undergoing the phase transition does not couple strongly to the radiation plasma (if present), we expect the bubbles to propagate at the speed of light and their collisions to be the dominant source of gravitational radiation~\cite{Espinosa:2010hh}. A possible exception occurs if the phase transition is connected with the breaking of a gauge symmetry. The radiation of soft gauge bosons inflicts a pressure on the bubble walls which grows linearly with their Lorentz boost~\cite{Bodeker:2017cim} (or even quadratically~\cite{Hoche:2020ysm}). In this case the bubble walls may lose most of their energy to the surrounding plasma even if $\alpha \gg 1$ such that the gravitational wave emission is dominated by plasma processes.
The gravitational wave spectrum today, induced by bubble collisions at a phase transition in the early universe, normalized to the critical density today as a function of frequency $f$ takes the form~\cite{Kosowsky:1992rz,Kosowsky:1992vn}
\begin{equation}\label{eq:gravityspectrum}
\Omega_{\text{GW}} h^2(f) = \left(\frac{7.6\times 10^{-5}}{g^{1/3}_{\text{eff}}(T_*)}\right)\;\,\widetilde{\Omega}\;\left(\frac{H_*}{\beta}\right)^2\,\left(\frac{\kappa_\phi \alpha}{1+\alpha}\right)^2\, \frac{(a+b)\left(f/f_{\text{peak}}^0\right)^a}{b+a\left(f/f_{\text{peak}}^0\right)^{a+b}}
\,,
\end{equation}
where we set the bubble wall velocity to the speed of light (which is valid for all scenarios discussed in this work).
The expected spectrum corresponds to a (smoothly) broken power law with a maximum at the redshifted peak frequency $f_{\text{peak}}^0$. The parameter $\widetilde{\Omega}$ sets the overall normalization of the spectrum, while $a$, $b$ determine the power law index in the infrared ($f<f_{\text{peak}}^0$) and ultraviolet ($f>f_{\text{peak}}^0$) respectively.
The expected values of these quantities from simulations of bubble collisions (shown in Tab.~\ref{tab:gwparameters}) will be discussed in more detail shortly.
The first term in brackets on the right-hand side of Eq.~\eqref{eq:gravityspectrum} accounts for the redshift of the gravity wave amplitude from production until now.
We note that the gravitational wave amplitude depends on the square of the factor
\begin{equation}
\alpha / ( 1 + \alpha) = \rho_{\text{vac}} / \rho_{\text{tot}}\, .
\end{equation}
Furthermore, $H_*$ is the Hubble rate at the phase transition, while $\beta$ stands for the inverse time duration of the phase transition (a precise definition of $\beta$ will follow in Eq.~\eqref{eq:beta}).
The gravitational wave amplitude depends on the quantity $(H_*/\beta)$, which counts the number of e-foldings (of the scale factor) during the phase transition. As we will see, the scenarios considered in this paper are always driven to $(H_*/\beta) <1$, a requirement that suppresses the gravitational wave amplitude.
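To make the scalings explicit, the broken power law of Eq.~\eqref{eq:gravityspectrum} can be evaluated numerically. The following Python sketch (function and argument names are our own, not from any library) uses the envelope-approximation parameters of Tab.~\ref{tab:gwparameters} as defaults and measures frequencies in units of the redshifted peak frequency:

```python
def omega_gw_collisions(f_over_fpeak, g_eff, H_over_beta, alpha,
                        kappa_phi=1.0, Omega_tilde=0.077, a=2.8, b=1.0):
    """Omega_GW h^2 from bubble collisions, Eq. (eq:gravityspectrum).

    Defaults are the envelope-approximation spectral parameters;
    f_over_fpeak is the frequency in units of the redshifted peak.
    """
    redshift = 7.6e-5 / g_eff**(1.0 / 3.0)          # dilution since production
    efficiency = (kappa_phi * alpha / (1.0 + alpha))**2
    x = f_over_fpeak
    shape = (a + b) * x**a / (b + a * x**(a + b))   # broken power law, max at x = 1
    return redshift * Omega_tilde * H_over_beta**2 * efficiency * shape
```

At $f=f_{\text{peak}}^0$ the shape factor reduces to unity, so the peak amplitude is controlled entirely by $g_{\text{eff}}$, $\widetilde{\Omega}$, $(H_*/\beta)^2$ and $\kappa_\phi\alpha/(1+\alpha)$.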
The quantity $\beta$ also determines the peak frequency of the gravitational wave spectrum at the time of production~\cite{Huber:2008hg},
\begin{equation}\label{eq:peakf_emission}
f_{\text{peak}} \simeq 0.2\, \beta\,,
\end{equation}
which in the present universe has redshifted to the value,
\begin{equation}\label{eq:peakf}
f_{\text{peak}}^0 \simeq 7.7\times 10^{-8}\:\text{Hz}\;\,\left(\frac{f_{\text{peak}}}{H_*}\right)\,\left(\frac{g^{1/6}_{\text{eff}}(T_*)\,T_*}{\text{GeV}}\right)\,.
\end{equation}
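The estimate of the redshifted peak frequency, Eqs.~\eqref{eq:peakf_emission} and \eqref{eq:peakf}, can be packaged in the same way (a sketch with our own naming):

```python
def f_peak_today_hz(beta_over_H, T_star_gev, g_eff):
    """Redshifted peak frequency, Eqs. (eq:peakf_emission) and (eq:peakf)."""
    f_peak_over_H = 0.2 * beta_over_H          # f_peak ~ 0.2 beta at emission
    return 7.7e-8 * f_peak_over_H * g_eff**(1.0 / 6.0) * T_star_gev
```

For example, $\beta/H_*=10$, $T_*=1\:$GeV and $g_{\text{eff}}=10$ give $f_{\text{peak}}^0\approx 2\times 10^{-7}\:$Hz, in the general vicinity of the pulsar timing band.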
The parameter $\kappa_\phi$ in~Eq.~\eqref{eq:gravityspectrum} specifies the energy fraction carried by the bubble walls at collision. For phase transitions in vacuum or with negligible impact of the surrounding plasma one can simply set $\kappa_\phi=1$.
A particularly relevant special case is a first order phase transition in vacuum (e.g.\ at the end of inflation). In the absence of a preexisting plasma the factor $\alpha / (1+ \alpha) =\rho_{\text{vac}}/\rho_{\text{tot}}$ in the gravitational wave spectrum simply becomes unity (cf.~Eq.~\eqref{eq:alpha} and~\eqref{eq:rhotot}). In order to find a rough estimate for the gravity wave amplitude at the peak frequency in the pure vacuum case, we approximate $g_{\text{eff}}(T_*)=10$ and $\widetilde{\Omega}= 0.05$ to find
\begin{equation}\label{eq:vacuumomega}
\Omega_{\text{GW}} h^2(f_{\text{peak}}^0) \sim 1.8\times 10^{-6}\;\left(\frac{H_*}{\beta}\right)^2\quad \text{(vacuum phase transition)}\,.
\end{equation}
Note that the gravitational wave amplitude in this case is completely determined by the quantity $(H_*/\beta)$, the number of e-foldings (of the scale factor) during the tunneling transition. As mentioned above, in this paper we will find the requirement $(H_*/\beta) <1$, leading to suppression of the gravitational wave amplitude. The peak frequency for the pure vacuum case can be estimated as,
\begin{equation}\label{eq:vacuumf}
f_{\text{peak}}^0 \sim 1.7\times 10^{-8}\:\text{Hz}\;\,\left(\frac{\beta}{H_*}\right)\,\left(\frac{\rho_{\text{vac}}^{1/4}}{\text{GeV}}\right)\quad \text{(vacuum phase transition)}\,.
\end{equation}
From Eq.~\eqref{eq:gravityspectrum}, we can see that the largest value of the gravitational wave amplitude $\Omega_{\text{GW}}$ is achieved for the pure vacuum case, in which $\alpha \rightarrow \infty$ so that the factor $\alpha / ( 1 + \alpha)$ takes its largest possible value of unity. Below (see Eq.~\eqref{eq:percolation}) we will require $H_*/\beta < 1/3$; with this requirement, Eq.~\eqref{eq:gravityspectrum} leads to a maximum predicted value $\Omega_{\text{GW}} < 10^{-7}$.
Previously~\cite{Schmitz:2020syl} studied a variety of benchmark cases in agreement with this upper bound.
Let us now also briefly turn to the second potential source of gravitational radiation, which are sound waves in the plasma induced by the expanding vacuum bubbles. The corresponding acoustic gravitational wave spectrum has been computed to be~\cite{Hindmarsh:2015qta,Caprini:2018mtu},
\begin{equation}\label{eq:gravityspectrumplasma}
\Omega_{\text{GW}} h^2(f) = \left(\frac{7.6\times 10^{-5}}{g^{1/3}_{\text{eff}}(T_*)}\right)\;\,\widetilde{\Omega}\;\left(\frac{H_*}{\beta}\right)\,\left(\frac{\kappa_v \alpha}{1+\alpha}\right)^2\,
\left(\frac{f}{f_{\text{peak}}^0}\right)^a\left(\frac{7}{4+3(f/f_{\text{peak}}^0)^2}\right)^{\frac{b+a}{2}}
\,,
\end{equation}
where $\kappa_v$ denotes the fraction of vacuum energy which is converted into bulk motion of the plasma. According to the recent simulation~\cite{Hindmarsh:2017gnf}, the peak frequency of the gravitational waves from sound waves is very similar to the one from bubble collisions\footnote{A somewhat higher peak frequency $f_{\text{peak}} \simeq 1.15 \beta$ of the acoustic gravitational wave spectrum had previously been suggested in~\cite{Hindmarsh:2015qta}.}. Therefore, Eqs.~\eqref{eq:peakf_emission} and~\eqref{eq:peakf} can also be applied to the acoustic gravitational wave spectrum in Eq.~\eqref{eq:gravityspectrumplasma}.
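The acoustic spectrum can be sketched analogously (names ours); we include the sound-wave normalization $\widetilde{\Omega}=0.16$ from Tab.~\ref{tab:gwparameters} together with the spectral indices $a=3$, $b=4$:

```python
def omega_gw_sound_waves(f_over_fpeak, g_eff, H_over_beta, alpha, kappa_v,
                         Omega_tilde=0.16, a=3.0, b=4.0):
    """Acoustic Omega_GW h^2, Eq. (eq:gravityspectrumplasma).

    Note the single power of H_*/beta, in contrast to the quadratic
    dependence of the bubble-collision spectrum.
    """
    redshift = 7.6e-5 / g_eff**(1.0 / 3.0)
    efficiency = (kappa_v * alpha / (1.0 + alpha))**2
    x = f_over_fpeak
    shape = x**a * (7.0 / (4.0 + 3.0 * x**2))**((a + b) / 2.0)
    return redshift * Omega_tilde * H_over_beta * efficiency * shape
```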
Vacuum bubbles expanding through a plasma can also induce magneto-hydrodynamic turbulence which is another possible source of gravitational waves~\cite{Kosowsky:2001xp,Dolgov:2002ra,Caprini:2009yp}. Since this contribution suffers from a high degree of uncertainty we will not explicitly consider it in this work (but we will comment in case of relevance).
Let us now discuss in more detail the frequency-dependence of the gravitational spectrum from bubble collisions in Eq.~\eqref{eq:gravityspectrum} and from sound waves in Eq.~\eqref{eq:gravityspectrumplasma}.
In both cases the expected spectrum peaks at the redshifted peak frequency $f_{\text{peak}}^0$ with
$a$, $b$ giving the power law indices below and above the peak respectively. As above, in both cases the parameter $\widetilde{\Omega}$ sets the overall normalization of the spectrum.
In Tab.~\ref{tab:gwparameters} we provide the parameters obtained via numerical simulation of bubble collisions and sound waves. In the case of bubble collisions we separately quote the result of the envelope approximation~\cite{Kosowsky:1992rz,Kosowsky:1992vn,Huber:2008hg} and of the lattice simulation~\cite{Cutting:2020nla} which we denote as `thick-wall simulation' in the following.
\begin{table}[t]
\begin{center}
\begin{tabular}{|cccc|}
\hline
&&&\\[-4mm]
& $\quad\widetilde{\Omega}\quad$ & $\quad a\quad$ & $\quad b \quad$ \\
\hline
&&&\\[-4mm]
envelope & $0.077$ & $2.8$ & $1$ \\
thick-wall & $0.027$ & $0.7$ & $2.2$\\
sound waves & $0.16$ & $3$ & $4$\\ \hline
\end{tabular}
\end{center}
\vspace{-0.4cm}
\caption{Parameters entering the gravitational wave spectrum from bubble collisions in a first order phase transition (Eq.~\ref{eq:gravityspectrum}) as determined in the envelope approximation (taken from~\cite{Huber:2008hg}) and in the thick-wall simulation~\cite{Cutting:2020nla}. Also quoted are the parameters entering the acoustic gravitational wave spectrum given in Eq.~\eqref{eq:gravityspectrumplasma}~\cite{Hindmarsh:2015qta}.}
\label{tab:gwparameters}
\end{table}
In the envelope approximation, the stress-energy is assumed to be located in a thin shell at the bubble wall which disappears upon collision. The gravitational radiation is sourced only by the uncollided envelope of the spherical bubbles, ignoring the interaction region. The envelope approximation is expected to apply to phase transitions in which the tunneling field becomes trapped temporarily in the false vacuum within the bubble collision region (which justifies the neglect of the shear stress after collision)~\cite{Jinno:2019bxw}. This has been shown to occur in the thin-wall regime of vacuum tunneling, i.e.\ when the energy difference between the false and the true vacuum is small compared to the potential barrier separating the two~\cite{Hawking:1982ga,Watkins:1991zt,Falkowski:2012fb}. However, in the opposite thick-wall regime, the tunneling field does not get trapped and rather undergoes oscillations around the true vacuum in the bubble overlap region. This leads to significant propagation of the shear stress after collision -- strongly violating the basic assumptions of the envelope approximation~\cite{Cutting:2020nla}. In the thick-wall case the gravitational wave spectrum was argued~\cite{Jinno:2019bxw} to follow more closely the predictions of the bulk flow model~\cite{Konstandin:2017sat} in which the shell of shear-stress continues to propagate after collision. This picture was qualitatively confirmed by a recent lattice simulation which included an explicit modeling of the field profile during the bubble collision stage assuming a quartic potential~\cite{Cutting:2020nla}. The parameters obtained there for the thick-wall case\footnote{The thick-wall case corresponds to the smallest $\bar{\lambda}$ simulated in~\cite{Cutting:2020nla}.} shown in Tab.~\ref{tab:gwparameters} are in reasonable agreement with the predictions of the bulk flow model.
A striking observation is that the gravity wave spectrum rises more steeply in the infrared and falls more softly in the ultraviolet region in the envelope approximation compared to the thick-wall simulation. This difference is not unexpected since both derivations describe different physical realities (thin-wall bubbles vs. thick-wall bubbles). Note, however, that in both cases the simulations were optimized to predict the gravity wave spectrum around the peak frequency and may not capture well the behavior in the far-infrared ($f\ll f_{\text{peak}}^0$) and far-ultraviolet ($f\gg f_{\text{peak}}^0$) regime. Causality considerations suggest a power law index $a\rightarrow 3$ for $f\ll H_*$ (see e.g.~\cite{Caprini:2009fx}) hinting at a transition to a steeper power law at very low frequency not resolved in the simulations.
\subsection{Phase Transition Parameters}
The time and the duration of a first order phase transition can be linked to the false vacuum decay rate per volume $\Gamma$. In the microphysical realization, the latter corresponds to the transition rate of a scalar field between two minima of its potential. One finds~\cite{Coleman:1977py,Callan:1977pt,Linde:1980tt,Linde:1981zj}
\begin{equation}\label{eq:tunnelingrate}
\Gamma \simeq \text{max}\left[m^4 \left(\frac{S_4}{2\pi}\right)^2 e^{-S_4},\:
T^4 \left(\frac{S_3}{2\pi\,T}\right)^{3/2} e^{-S_3/T} \right]\,,
\end{equation}
where $S_4$ and $S_3$ stand for the 4- and 3-dimensional Euclidean actions of the bounce solution extrapolating between the two vacua, while $m$ is the mass of the scalar field (evaluated in the false vacuum).
The first term in~Eq.~\eqref{eq:tunnelingrate} corresponds to the quantum tunneling rate at zero temperature, while the second term is the thermally induced rate. In the absence of a preexisting plasma (i.e. if the phase transition occurs in vacuum), $\Gamma$ is given by the quantum tunneling rate. If a plasma with temperature $T$ is present, $\Gamma$ is determined by the faster of the two rates.
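A direct transcription of Eq.~\eqref{eq:tunnelingrate} reads as follows (our own function name; all dimensionful quantities in the same energy units):

```python
import math

def decay_rate(m, S4, T, S3):
    """False-vacuum decay rate per volume, Eq. (eq:tunnelingrate)."""
    quantum = m**4 * (S4 / (2.0 * math.pi))**2 * math.exp(-S4)
    if T <= 0.0:
        return quantum                       # no plasma: pure vacuum tunneling
    thermal = T**4 * (S3 / (2.0 * math.pi * T))**1.5 * math.exp(-S3 / T)
    return max(quantum, thermal)             # the faster rate wins
```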
The probability $P(t)$ of finding a point in the false vacuum at the time $t$ can be determined by integrating $\Gamma$ over the past light cone of the point~\cite{Guth:1979bh,Guth:1981uk},
\begin{equation}\label{eq:prob}
P(t) = e^{-I(t)}\,,\qquad I(t)=\frac{4\pi}{3}\int\limits_0^t dt^\prime\, \Gamma(t^\prime) a^3(t^\prime) r_{\rm com}^3(t,t^\prime)\,,
\end{equation}
where $I(t)$ corresponds to the expected number of bubble nucleation sites in the past light cone. The time of the phase transition $t_*$ can be defined as the (mean) decay time of the false vacuum,
\begin{equation}\label{eq:Ieq1}
I(t_*)=1\,.
\end{equation}
In Eq.~\eqref{eq:prob} the comoving radius of the past light cone $r_{\rm com}$ is obtained as,
\begin{equation}
\label{eq:comovingradius}
r_{\rm com}(t,t^\prime)= \int\limits_{t^\prime}^t \frac{d\tilde{t}}{a(\tilde{t})}\,,
\end{equation}
where the scale factor $a(t)$ of a universe containing vacuum energy and radiation reads,
\begin{equation}
a(t)=a(t_0) \exp\left(\,\int\limits_{t_0}^t dt^\prime\, H(t^\prime)\right)\,,\qquad H(t)=\sqrt{\frac{\rho_{\text{vac}}+\rho_{\text{r}}(t)}{3\,M_{\text{P}}^2}}\,.
\end{equation}
The duration of the phase transition $\beta^{-1}$ depends on how quickly the false vacuum probability $P(t)$ decreases with time. A convenient definition is,
\begin{equation}\label{eq:beta}
\beta=-\left.\frac{\dot{P}}{P}\right|_{t=t_*} =\left.\dot{I}\,\right|_{t=t_*}\,.
\end{equation}
We can compute $\beta$ for the two cases of quantum tunneling at finite temperature and at zero temperature.
If the false vacuum decay rate $\Gamma$ in~Eq.~\eqref{eq:tunnelingrate} is set by the thermal transition rate, it exhibits a strong exponential time-dependence (through the temperature of the plasma). In this case the time dependence of $I(t)$ in Eq.~\eqref{eq:prob} is determined primarily by the exponential time-dependence of $\Gamma$ (rather than by the power-law time-dependence of the light cone volume). Hence $\beta \sim \dot{\Gamma} / \Gamma\big|_{t=t_*} $.
In contrast, if the field $\phi$ driving the phase transition is (almost) decoupled from the surrounding plasma or if the phase transition occurs in vacuum, $\Gamma$ is set by the zero-temperature quantum tunneling rate. The latter is time-independent in the simplest case, where tunneling is not affected by other interactions of the tunneling field. For cases of (nearly) constant $\Gamma$, the change of the four-volume of the past light cone in~Eq.~\eqref{eq:prob} determines $\beta$. Note, however, that vacuum tunneling does not generically imply $\Gamma=\text{const}$. This is because a strong exponential time-dependence of $\Gamma$ can also arise if the tunneling field couples to a spectator field with a time-dependent evolution. Hence, for vacuum tunneling, it depends on the underlying model whether the upper or lower expression in~Eq.~\eqref{eq:beta2} below applies.
In summary,
\begin{equation}\label{eq:beta2}
\beta\simeq
\begin{cases} \left.\frac{\dot{\Gamma}}{\Gamma}\right|_{t=t_*} & \Gamma\neq \text{const}\,,\\[3mm]
\frac{4\pi\Gamma}{a(t_*)}\int\limits_0^{t_*} dt^\prime\, a^3(t^\prime)\, r_{\rm com}^2(t_*,t^\prime)
& \Gamma\simeq \text{const} \,.\end{cases}
\end{equation}
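For constant $\Gamma$ the lower line of Eq.~\eqref{eq:beta2} can be evaluated numerically. As an illustrative sketch we assume an exactly de Sitter background, $a(t)=e^{Ht}$ (i.e.\ the pure vacuum case); at late times the integration approaches the analytic value $\beta=4\pi\Gamma/(3H^3)$:

```python
import math

def beta_constant_gamma(Gamma, H, t_star, n=20000):
    """Beta from the lower line of Eq. (eq:beta2) for a(t) = exp(H t)."""
    def r_com(t, tp):
        # comoving past-light-cone radius, Eq. (eq:comovingradius)
        return (math.exp(-H * tp) - math.exp(-H * t)) / H
    dt = t_star / n
    integral = 0.0
    for i in range(n):
        tp = (i + 0.5) * dt                  # midpoint rule
        integral += math.exp(3.0 * H * tp) * r_com(t_star, tp)**2 * dt
    return 4.0 * math.pi * Gamma * integral / math.exp(H * t_star)
```

Combined with the percolation condition $\beta > 3H_*$ introduced below in Eq.~\eqref{eq:percolation}, this asymptotic value translates into $\Gamma/H^4 > 9/(4\pi)$ for a successful transition with constant $\Gamma$ in this limit.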
The successful completion of a first order phase transition requires the bubbles of true vacuum to percolate such that the energy of the bubble walls can be transferred into radiation. It may naively seem that $\beta>0$ -- i.e.\ a decreasing probability of a point to stay in the false vacuum -- would automatically ensure percolation. However, this is not true since the physical volume of the false vacuum $V_{\text{false}}\propto a^3(t) P(t)$ may increase even for decreasing $P(t)$ due to $a(t)$ growing by the Hubble expansion~\cite{Ellis:2018mja}. Therefore, the relevant criterion for effective bubble percolation is that $V_{\text{false}}$ decreases around the time of the phase transition $t_*$~\cite{Turner:1992tz},
\begin{equation}\label{eq:percolation}
\left.\frac{d}{dt} (a^3 P)\right|_{t=t_*} < 0\quad\Longrightarrow\quad \beta > 3H_*\,.
\end{equation}
The above condition limits the amplitude of gravitational wave emission by a first order phase transition which scales with $(H_*/\beta)^2$.
In the special case $\Gamma=\text{const}$ the duration of the phase transition can be calculated explicitly from Eq.~\eqref{eq:beta2}. The resulting $\beta$ as a function of $\alpha$ is shown in Fig.~\ref{fig:betaalpha}. As a reminder, $\alpha$ is the ratio of the vacuum energy density to the energy density of the surrounding radiation plasma right before the transition (see Eq.~\eqref{eq:alpha}); for a single first order phase transition as in ``old inflation'', $\alpha \rightarrow \infty$. It can be seen that the bubble percolation condition imposes an upper limit $\alpha \lesssim 20$ by which vacuum energy dominates over the preexisting plasma in a successful phase transition with constant $\Gamma$.\footnote{A similar conclusion for cases with a slowly varying $\Gamma$ was drawn in~\cite{Ellis:2018mja}.} Note, however, that this constraint does not apply to cases with $\Gamma\neq \text{const}$ for which $\alpha$ can take any value (including $\alpha=\infty$ as for a phase transition in vacuum).
\section{Pulsar Timing Array Signal from a Phase Transition}\label{sec:nanograv}
The NANOGrav, PPTA and EPTA collaborations have reported strong evidence for a spectrally-similar low-frequency stochastic process which affects pulsar timing residuals~\cite{NANOGrav:2020bcs,Goncharov:2021oub,Chen:2021rqp}. Searches for the quadrupolar Hellings-Downs correlations~\cite{Hellings:1983fr} which would establish a gravitational wave origin are not yet conclusive due to limited statistics. However, the spectral properties of the signal are consistent with a stochastic gravitational wave background at frequencies $f\sim 1\:\text{yr}^{-1}$.
\subsection{Fitting the Pulsar Timing Signal}\label{sec:fitting}
Within the accessible frequency band, the observed power spectrum of the characteristic strain $h_c(f)$ is consistent with a power law,
\begin{equation}\label{eq:charstrain}
h_c(f) = A_{\text{CP}} \:\left(\frac{f}{\text{yr}^{-1}}\right)^{\alpha_{\text{CP}}}\,.
\end{equation}
The preferred regions in terms of the power law index $\alpha_{\text{CP}}$ and the normalization $A_{\text{CP}}$ obtained in the NANOGrav, PPTA and EPTA analyses~\cite{NANOGrav:2020bcs,Goncharov:2021oub,Chen:2021rqp} are shown in Fig.~\ref{fig:nanogravPL}.
The power spectrum of the characteristic strain is directly related to the gravity wave spectrum in terms of the critical density,
\begin{equation}\label{eq:omegahc}
\Omega_{\text{GW}} h^2(f) = \frac{2\pi^2}{3 (H_0/h)^2} f^2 \,h_c^2(f)\,,
\end{equation}
where $H_0 = 100\,h\:\text{km}\,\text{s}^{-1}\text{Mpc}^{-1}$ denotes the Hubble constant. If we model $\Omega_{\text{GW}} h^2(f)$ as a power law,
\begin{equation}\label{eq:omegapl}
\Omega_{\text{GW}} h^2(f) = \overline{\Omega} \left(\frac{f}{\text{yr}^{-1}}\right)^{\bar{\gamma}}\,,
\end{equation}
Eq.~\eqref{eq:omegahc} allows us to directly map the preferred region from the $\alpha_{\text{CP}}$-$A_{\text{CP}}$-plane into the $\bar{\gamma}$-$\overline{\Omega}$ plane as is also shown in Fig.~\ref{fig:nanogravPL}.
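The conversion between the characteristic strain and $\Omega_{\text{GW}} h^2$ is easily scripted (constants and names are ours; frequencies in Hz):

```python
import math

H0_OVER_H_HZ = 3.2408e-18                   # 100 km s^-1 Mpc^-1 in Hz
F_YR_HZ = 1.0 / (365.25 * 24.0 * 3600.0)    # 1 yr^-1 in Hz

def omega_gw_from_strain(f_hz, A_cp, alpha_cp):
    """Omega_GW h^2(f) for the power-law strain of Eq. (eq:charstrain),
    converted via Eq. (eq:omegahc)."""
    h_c = A_cp * (f_hz / F_YR_HZ)**alpha_cp
    return 2.0 * math.pi**2 / (3.0 * H0_OVER_H_HZ**2) * f_hz**2 * h_c**2
```

A power-law strain with index $\alpha_{\text{CP}}$ maps onto $\bar{\gamma}=2+2\alpha_{\text{CP}}$ in Eq.~\eqref{eq:omegapl}.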
We now turn to the interpretation of the pulsar timing signals in terms of a first order phase transition. The corresponding gravitational wave spectrum follows a broken power law with the break (= a maximum) at the peak frequency $f_{\text{peak}}^0$ (see Eq.~\eqref{eq:gravityspectrum}). In most of the parameter space $f_{\text{peak}}^0$ falls outside the frequency band of the pulsar timing arrays which would only measure the rising or falling part of the spectrum and, hence, a single power law. This means that the analyses for the power law case as shown in Fig.~\ref{fig:nanogravPL} can directly be applied. In order to cover also cases with the peak of the gravity wave spectrum inside the experimental frequency bands, we use the following procedure to derive an `average power law':
\begin{enumerate}
\item we determine the power law index $\gamma_i$ and the normalization $\Omega_i$ separately for each of the measurement frequencies,
\begin{equation}
\gamma_i=\left.\frac{d\log \Omega_{\text{GW}}}{ d\log f}\right|_{f=f_i}\,\qquad
\Omega_i=\left.\frac{\Omega_{\text{GW}} h^2}{ (f/\text{yr}^{-1})^{\gamma_i}}\right|_{f=f_i}
\end{equation}
\item we define the averaged power law index and normalization by weighting the $\gamma_i$ and $\Omega_i$ with the experimental sensitivity $w_i$ at each of the frequencies,
\begin{equation}
\bar{\gamma} = \sum\limits_{i=1}^5 w_i \gamma_i\,\qquad \log\overline{\Omega} = \sum\limits_{i=1}^5 w_i \log\Omega_i\,.
\end{equation}
The $w_i$ are approximated by the inverse error in the frequency bin normalized such that $\sum_i w_i =1$.
\end{enumerate}
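The two steps above can be sketched as follows (our own implementation; the experimental weights $w_i$ are taken as given input):

```python
import math

def average_power_law(freqs_yr, weights, omega_gw, eps=1e-4):
    """Sensitivity-weighted power law (gamma_bar, Omega_bar), steps 1 and 2.

    freqs_yr: measurement frequencies in units of yr^-1;
    omega_gw: callable returning Omega_GW h^2 at a frequency in yr^-1.
    """
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must be normalized"
    gamma_bar, log10_omega_bar = 0.0, 0.0
    for f, w in zip(freqs_yr, weights):
        # local log-log slope via a central finite difference
        gamma_i = (math.log(omega_gw(f * (1 + eps)) / omega_gw(f * (1 - eps)))
                   / math.log((1 + eps) / (1 - eps)))
        omega_i = omega_gw(f) / f**gamma_i
        gamma_bar += w * gamma_i
        log10_omega_bar += w * math.log10(omega_i)
    return gamma_bar, 10.0**log10_omega_bar
```

For a spectrum that is an exact power law across the band, the procedure returns the underlying index and normalization, as it should.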
Defining an averaged amplitude and power law index is only reasonable for a small number of measurement frequencies in a relatively narrow band. Therefore, we will only apply the described method to the NANOGrav and PPTA 5-frequency data sets and not to EPTA, for which only a 30-frequency analysis is available.\footnote{The 5-frequency PPTA analysis is presented in Fig.\ 1 (left panel) of~\cite{Goncharov:2021oub}. The error in each bin (which determines $w_i$) is extracted from the right panel of the same figure for PPTA and from the interactive version of Fig. 2 in~\cite{NANOGrav:2021flc} for NANOGrav.}
After matching the gravitational wave spectrum to the effective power law form we can apply the constraints from~\cite{NANOGrav:2020bcs} as shown in Fig.~\ref{fig:nanogravPL}. This allows us to estimate the NANOGrav and PPTA signal regions for a first order phase transition (see Fig.~\ref{fig:spectra}). The signal region for EPTA, which we do not explicitly derive (for the reason stated above), is expected to fall in a very similar range.
\subsection{Implications for a first order Big Bang Phase Transition}\label{sec:implications}
Our main focus is on cosmological scenarios in which the Big Bang occurs through a first order phase transition. This goes back to Guth's seminal idea that the universe was initially trapped in a metastable vacuum driving cosmic inflation~\cite{Guth:1980zm}. Quantum tunneling into the true vacuum then triggers a first order phase transition which was meant to terminate the inflationary epoch. Guth's original old inflation model, however, suffers from the empty-universe problem -- the phase transition is too slow to ever complete~\cite{Guth:1982pn}. True vacuum bubbles form too far apart from one another to ever percolate and reheat the universe.
Yet, the failure of old inflation does not rule out a first order Big Bang phase transition which produces all (or most of) the matter and radiation in our present universe. Old inflation corresponds to a phase transition with $\Gamma=\text{const}$, $\alpha=\infty$. Relaxing either of these two assumptions -- i.e.\ considering a time-dependent tunneling rate and/or a (subdominant) preexisting radiation plasma -- can reconcile a Big Bang phase transition with the percolation condition. We will later present a number of well-motivated cosmological scenarios with these properties. In order to capture a wide class of Big Bang phase transitions, we will thus consider the following two cases:
\begin{enumerate}
\item A phase transition in vacuum ($\alpha=\infty$) with a time-dependent vacuum decay rate $\Gamma\neq\text{const}$.
\item A phase transition within a preexisting plasma ($\alpha\neq \infty$) with a constant vacuum decay rate $\Gamma=\text{const}$.
\end{enumerate}
For Big Bang phase transitions we can focus on the gravitational wave spectrum from bubble collisions given in Eq.~\eqref{eq:gravityspectrum} (with $\kappa_\phi$ set to unity). In Fig.~\ref{fig:spectra} we present the range of ($T_*$, $\beta$) and ($T_*$, $\alpha$) for which the NANOGrav and PPTA signals can be explained by a Big Bang phase transition for the two cases described above. The signal regions (following from the derivation in Sec.~\ref{sec:fitting}) are depicted separately for the gravity wave emission predicted by the envelope approximation and by the thick-wall simulation (cf. Tab.~\ref{tab:gwparameters}). Also shown in the figure are the constraints imposed by bubble percolation (Eq.~\eqref{eq:percolation}) and by primordial nucleosynthesis (BBN). Successful BBN requires the phase transition to reheat the universe to a temperature $T_*>1.8\:\text{MeV}$~\cite{Hannestad:2004px,Hasegawa:2019jsa}.
A striking observation is that the favored phase transition temperature $T_*$ (or correspondingly the energy scale of the phase transition) strongly depends on the implemented gravitational wave spectrum. If the spectrum follows the envelope approximation, $T_*\lesssim 100\:\text{MeV}$ is required to fit the pulsar timing signal -- just barely consistent with BBN. In contrast a higher $T_*\simeq \text{MeV}-100\:\text{GeV}$ is preferred for the spectrum predicted by the thick-wall simulation. The origin of this discrepancy is easy to understand: the envelope approximation predicts the gravitational wave spectrum to rise with a power law index $a=2.8$ for $f<f_{\text{peak}}^0$ which is outside the NANOGrav, PPTA and EPTA $2\sigma$-windows independent of the amplitude (see Fig.~\ref{fig:nanogravPL}). Therefore, in order to fit the pulsar timing signal in the envelope approximation, the peak frequency $f_{\text{peak}}^0$ must reside inside or below the covered frequency band ($f\simeq 1-10\:\text{nHz}$). This translates to the upper limit of $T_*$ in the MeV-range (cf.\ Eq.~\eqref{eq:peakf}).
The thick-wall simulation, on the other hand, predicts a much softer gravity wave spectrum in the infrared (power law index $a=0.7$) consistent with the pulsar timing signal. At the same time, the spectrum falls very quickly in the ultraviolet (power law index $b=2.2$) which strongly suppresses the signal above $f_{\text{peak}}^0$. Hence -- contrary to the envelope approximation -- the thick-wall simulation favors a peak frequency within or above the frequency band of the pulsar timing arrays. This general trend has already been noted in a previous analysis~\cite{NANOGrav:2021flc}.\footnote{The consistency of NANOGrav with $T_*> \text{GeV}$ is noted in the main text of~\cite{NANOGrav:2021flc}, but due to the specific priors not fully visible in Fig.\ 1 of this reference, which shows the preferred NANOGrav region for gravity waves from bubble collisions (in blue).}
Compared to NANOGrav, PPTA and EPTA have a stronger preference for a rising ($\bar{\gamma}\gtrsim 0$) gravitational wave spectrum at the measured frequencies (see Fig.~\ref{fig:nanogravPL}). Therefore, some of the parameter space at low $T_*$ consistent with NANOGrav does not provide a good fit for the other two pulsar timing arrays (for PPTA this is directly visible in Fig.~\ref{fig:spectra}). Since most of the low-$T_*$ regime is, however, in any case excluded by BBN, this difference is of minor importance.
\subsection{Properties of Potentials for First Order Transitions Required by Pulsar Timing Array Data}\label{sec:properties}
We can now ask what our main results, illustrated in Fig.~\ref{fig:spectra}, imply for the general properties of potentials responsible for first order phase transitions. Five examples of potentials will be discussed in the subsequent sections, but
we can already determine what the scales of the potentials must be in order to explain the pulsar timing array data.
For this purpose we use~Eq.~\eqref{eq:Tst} in order to obtain the preferred range of $\rho_{\text{vac}}$ for the signal regions shown in Fig.~\ref{fig:spectra}. For the gravitational wave spectrum from the envelope approximation and the thick-wall simulation we find
\begin{equation}
\rho_{\text{vac}}^{1/4}\simeq \begin{cases}
2\:\text{MeV} - 0.2 \:\text{GeV}& \text{(envelope approximation)\,,}\\
2\:\text{MeV} - 300 \:\text{GeV}& \text{(thick-wall simulation)\,,}
\end{cases}
\end{equation}
which is very similar to the allowed range in $T_*$ shown in Fig.~\ref{fig:spectra}. An ingredient in obtaining these scales
is that the number of e-foldings during the tunneling transition
must satisfy $(H_*/\beta) \sim 1/150- 1/3$ as shown in Fig.~\ref{fig:spectra}. The upper bound arises from the percolation condition in Eq.~\eqref{eq:percolation}, while the lower bound comes from requiring a large enough normalization of the gravitational wave signal.
In model realizations, the vacuum energy $\rho_{\text{vac}}$ corresponds to the energy density difference between the false and true vacuum in the potential of the tunneling field.
\section{Cosmological Scenarios with a first order Big Bang Phase Transition}\label{sec:scenarios}
Old inflation~\cite{Guth:1980zm} provides the best-known example of a first order phase transition associated with the Big Bang. While the original model fails the bubble percolation condition, simple modifications successfully reheat the universe in a Big Bang phase transition. Furthermore, well-motivated models of the early universe exist, in which the universe becomes vacuum-dominated for a second time after inflation and undergoes a ``late'' Big Bang phase transition. Below we will describe five complementary cosmological scenarios which feature a Big Bang phase transition consistent with the signal observed at NANOGrav, PPTA and EPTA.
\subsection{Double Field Inflation}\label{sec:doublefield}
Double-field inflation~\cite{Adams:1990ds,Linde:1990gz} is a successful model of the early universe, in which inflation ends through a first order phase transition. Just as in old inflation the exponential expansion of space is driven by a scalar field which is initially trapped in a false vacuum. However, double-field inflation evades the empty-universe problem through the inclusion of a second scalar field which introduces a time-dependence in the vacuum decay rate $\Gamma(t)$. At the beginning, $\Gamma$ is suppressed -- thus permitting enough e-folds of inflation -- but later it becomes so large that the phase transition completes rapidly in a Hot Big Bang.
The bubble collisions during the phase transition induce gravitational radiation which can potentially be probed by interferometers and pulsar timing arrays~\cite{Lopez:2013mqa}. We will consider the phase transition which ends inflation as the origin of the NANOGrav, PPTA and EPTA signal (see~\cite{Ashoorioon:2022raz} for a related idea\footnote{In~\cite{Ashoorioon:2022raz} a phase transition after slow-roll inflation is considered as the origin of the NANOGrav signal. Instead, we will focus on the complementary case of double field inflation, in which the phase transition itself terminates the inflationary epoch.}).
From our estimate in Sec.~\ref{sec:properties} it follows that fitting the pulsar timing signals (by a phase transition at the end of inflation) requires an inflation scale $\lesssim 100\:\text{GeV}$. While slow-roll inflation at such a low scale would (typically) require extreme fine-tuning, this is not the case in double-field inflation. The tuning in low-scale slow-roll inflation is linked to the challenge that an extremely flat potential ($M_{\text{P}}\,V'/V\lesssim 10^{-30}$ with $M_{\text{P}}$ denoting the reduced Planck mass) is required for the density fluctuations to match the Cosmic Microwave Background (CMB) amplitude.\footnote{In rolling models, CMB normalization requires
$A_s = V /(24 \pi^2 M_{\text{P}}^4 \epsilon)
= 2.1 \times 10^{-9}$
so that the slow-roll parameter must satisfy
$\epsilon \equiv (M_{\text{P}}^2/2) (V'/V)^2 \sim 5.7 \times 10^{-76} \, [V / (10\:\text{MeV})^4] . $}
However, in double-field inflation a constant contribution to the potential during inflation is provided by the energy density of the false vacuum. Hence, an effectively very flat potential -- as needed for low-scale inflation -- can more naturally be realized.
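The flatness requirement quoted above can be checked numerically (a sketch; we use the reduced Planck mass $M_{\text{P}}=2.435\times 10^{18}\:\text{GeV}$ and the observed scalar amplitude $A_s=2.1\times 10^{-9}$):

```python
import math

M_P_GEV = 2.435e18     # reduced Planck mass in GeV
A_S = 2.1e-9           # observed CMB scalar amplitude

def epsilon_from_cmb(V_gev4):
    """Slow-roll epsilon enforced by A_s = V / (24 pi^2 M_P^4 epsilon)."""
    return V_gev4 / (24.0 * math.pi**2 * M_P_GEV**4 * A_S)
```

For $V=(10\:\text{MeV})^4$ this indeed gives $\epsilon\sim 5.7\times 10^{-76}$, illustrating the extreme flatness required for low-scale slow-roll inflation.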
The basic mechanism of double field inflation is illustrated in Fig.~\ref{fig:potential3d} which depicts the two-field potential. Initially, the inflaton field $\phi$ is displaced far from its minimum. The tunneling field $\chi$ is held in a false vacuum through its coupling to the inflaton. While $\phi$ slowly rolls down its potential, a deeper (=true) minimum in $\chi$-direction appears, and simultaneously the barrier between the two minima becomes shallower. The tunneling rate of $\chi$ into the true minimum increases with time. Once the inflaton reaches a critical field-value $\phi_*$ -- roughly when one true-vacuum bubble is formed per Hubble patch -- inflation ends in a first order phase transition with $\chi$ tunneling into the true vacuum. The vacuum bubbles collide quickly and reheat the universe successfully.
The idea of double-field inflation can be implemented in a plethora of model realizations. As a simple example, we consider the following two-field Lagrangian,
\begin{equation}\label{eq:doubleL}
\mathcal{L}= \frac{1}{2}\left(1-\frac{\phi^2}{\Lambda^2}\right)^{-2}\partial_\mu \phi\partial^\mu \phi + \frac{1}{2}\partial_\mu \chi\partial^\mu \chi - V(\phi,\chi)\,,
\end{equation}
with
\begin{equation}\label{eq:doubleV}
V(\phi,\chi)= \frac{m_{\phi}^2}{2}\phi^2 + \kappa \phi^2\chi^2 + V_0+\frac{m_\chi^2}{2}\chi^2 - \mu \chi^3 + \lambda^2 \chi^4\,.
\end{equation}
The potential exhibits a metastable minimum with energy density $V_0$ at $\chi=0$, while the global minimum is located at $\phi=0$, $\chi=(3\mu+\sqrt{9\mu^2-16\lambda^2 m_\chi^2})/(8\lambda^2)$.\footnote{We assumed $\mu > 4\lambda m_\chi/3$.} We chose $V_0$ such that the potential energy vanishes in the true minimum.
Double field inflation with the potential in Eq.~\eqref{eq:doubleV} but with canonical kinetic terms has previously been discussed in~\cite{Copeland:1994vg,Cortes:2009ej}. Since this minimal realization is now in tension with CMB constraints\footnote{We note that models with convex potentials for rolling fields are in general no longer a good fit to the data.}, one needs to slightly modify the original scheme. A simple possibility, which we adopt in Eq.~\eqref{eq:doubleL}, is a non-canonical kinetic term for the inflaton, as motivated in the context of $\alpha$-attractor inflation~\cite{Kallosh:2013hoa}.\footnote{The resulting double field inflation model also bears some resemblance to hybrid $\alpha$-attractor inflation recently proposed in~\cite{Kallosh:2022ggf}.}
The following field redefinition allows us to express the Lagrangian in terms of the canonically normalized inflaton field $\hat{\phi}$,
\begin{equation}
\phi = \Lambda \tanh\left(\frac{\hat{\phi}}{\Lambda}\right)\,.
\end{equation}
During inflation $\chi$ is trapped in the metastable minimum at $\chi=0$ and the potential in inflaton direction (in the canonically normalized basis) is given as
\begin{equation}\label{eq:infpotdf}
V = V_0 +\frac{m_\phi^2 \Lambda^2}{2} \tanh^2\left(\frac{\hat{\phi}}{\Lambda}\right)\,.
\end{equation}
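One can verify with a short computer algebra sketch that this field redefinition indeed renders the kinetic term of Eq.~\eqref{eq:doubleL} canonical, using $1-\tanh^2 = \mathrm{sech}^2$:

```python
# Sketch (sympy): phi = Lambda*tanh(phihat/Lambda) canonically normalizes
# the kinetic prefactor (1 - phi^2/Lambda^2)^(-2) of Eq. (doubleL).
import sympy as sp

phihat, Lam = sp.symbols('phihat Lambda', positive=True)
phi = Lam * sp.tanh(phihat / Lam)

# (1 - phi^2/Lambda^2)^(-2) * (dphi/dphihat)^2 must equal 1
kin = (1 - phi**2 / Lam**2)**(-2) * sp.diff(phi, phihat)**2
assert sp.simplify(kin) == 1
print("kinetic term is canonical in phihat")
```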
Successful inflation can be realized for any value of $V_0$. However, if $V_0$ is subdominant we would essentially be left with a slow-roll inflation model, which is not the focus of this work. Instead we will concentrate on the regime of ``true'' double-field inflation, where the energy density during inflation is dominated by the false vacuum energy $V_0$.
The inflaton is initially displaced from its minimum, thereby contributing to the effective mass of the tunneling field,
\begin{equation}\label{eq:effmass}
m_{\chi,\text{eff}}^2 = m_\chi^2 + 2 \kappa \Lambda^2 \tanh^2\left(\frac{\hat{\phi}}{\Lambda}\right)\,.
\end{equation}
For large $\hat{\phi}$ the $\chi$-field is strongly stabilized at $\chi=0$. However, as the inflaton rolls down its potential, a second minimum in $\chi$-direction develops which eventually becomes energetically favorable (see Fig.~\ref{fig:potential3d}). The universe still remains in the false vacuum for some time due to the potential barrier separating the two minima. But eventually $\chi$ tunnels into the true minimum and inflation ends in a first order phase transition.
Given that $V_0$ is dominant compared to all other energy scales in the problem, the vacuum transition occurs mostly in $\chi$-direction. In order to obtain the tunneling rate we can thus employ the analytic approximation for single-field tunneling in a quartic potential~\cite{Adams:1993zs},
\begin{equation}\label{eq:tunneling_doublefield}
\Gamma \simeq m_{\chi,\text{eff}}^4 \left(\frac{S_4}{2\pi}\right)^2 e^{-S_4}\,,\qquad
S_4 = \frac{\pi^2\mu^6}{24\lambda^2(\mu^2-2\lambda^2 m_{\chi,\text{eff}}^2)^3}\sum\limits_{i=1}^3 A_i \left(\frac{\lambda m_{\chi,\text{eff}}}{\mu}\right)^{2i}\,,
\end{equation}
with $A_1=55.328$, $A_2=- 173.104$ and $A_3=132.896$.
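The steep growth of the rate as the inflaton rolls down can be made explicit with a numerical sketch of Eq.~\eqref{eq:tunneling_doublefield} (function names and the sample parameter values are ours, in arbitrary units; the regime $\mu^2 > 2\lambda^2 m_{\chi,\text{eff}}^2$ is assumed):

```python
# Sketch of Eq. (tunneling_doublefield): quartic-potential bounce action and rate.
import math

A = (55.328, -173.104, 132.896)  # fit coefficients A_1, A_2, A_3

def S4(m_eff, mu, lam):
    """Euclidean bounce action (valid for mu^2 > 2*lam^2*m_eff^2)."""
    x = (lam * m_eff / mu)**2    # expansion variable (lam*m_eff/mu)^2
    pref = math.pi**2 * mu**6 / (24 * lam**2 * (mu**2 - 2 * lam**2 * m_eff**2)**3)
    return pref * sum(A[i] * x**(i + 1) for i in range(3))

def Gamma(m_eff, mu, lam):
    """Tunneling rate per unit volume."""
    s = S4(m_eff, mu, lam)
    return m_eff**4 * (s / (2 * math.pi))**2 * math.exp(-s)

# as the inflaton rolls down, m_eff in Eq. (effmass) decreases, the barrier
# shrinks, and the tunneling rate grows steeply
assert S4(0.8, 1.0, 0.3) < S4(1.0, 1.0, 0.3)
assert Gamma(0.8, 1.0, 0.3) > Gamma(1.0, 1.0, 0.3)
```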
The duration of the phase transition $\beta^{-1}$ is obtained from Eq.~\eqref{eq:beta2}. We can approximate
\begin{equation}\label{eq:betadoublefield}
\beta \simeq \left.\frac{\dot{\Gamma}}{\Gamma}\right|_{t=t_*}
\simeq -\left.\dot{S_4}\right|_{t=t_*}\simeq \left.\frac{1}{3H}\frac{\partial S_4}{\partial\hat{\phi}}\frac{\partial V}{\partial \hat{\phi}}\right|_{\hat{\phi}=\hat{\phi}_*} \,,
\end{equation}
where we used that the time-dependence of $\Gamma$ dominantly arises from the time-dependence of the Euclidean action of the bounce. Furthermore, we employed the equation of motion $3H\dot{\hat{\phi}}+\partial V/\partial \hat{\phi}\simeq 0$ in the last step.
In order to derive the critical inflaton field-value $\hat{\phi}_*$ at which the tunneling is triggered, we need to determine the time of the phase transition $t_*$. The latter is defined by the condition $I(t_*)=1$ with the integral $I$ from Eq.~\eqref{eq:prob}. We note that $I(t_*)$ is strongly dominated by times around $t_*$.
Thus we can replace $a(t^\prime) r_{\rm com}(t,t^\prime)$ by $(t-t^\prime)$ in the integral. Furthermore, expanding the bounce action in the exponent of Eq.~\eqref{eq:tunneling_doublefield}
around $t=t_*$ and using Eq.~\eqref{eq:betadoublefield} we approximate $\Gamma(t) \simeq \Gamma(t_*) e^{\beta(t-t_*)}$. We obtain $I(t_*)=8\pi \Gamma(t_*)/\beta^4 = 1$ and, hence,
\begin{equation}\label{eq:Gammatstar}
\Gamma(t_*) = \frac{\beta^4}{8\pi}\,.
\end{equation}
Plugging Eq.~\eqref{eq:tunneling_doublefield} and~\eqref{eq:betadoublefield} into~Eq.~\eqref{eq:Gammatstar} yields an implicit equation for $\hat{\phi}_*$ which can be solved numerically.
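The chain of approximations leading to Eq.~\eqref{eq:Gammatstar} can be verified symbolically: with $\Gamma(t)\simeq\Gamma(t_*)e^{\beta(t-t_*)}$ and bubble radii $\simeq (t_*-t')$, the percolation integral evaluates exactly to $8\pi\Gamma(t_*)/\beta^4$. A sketch (substituting $u=t_*-t'$):

```python
# Sketch: verify I(t_*) = 8 pi Gamma_*/beta^4 for Gamma(t) = Gamma_* e^{beta(t-t_*)}
# with a(t') r_com(t_*,t') replaced by (t_* - t') as in the text.
import sympy as sp

u, beta, Gstar = sp.symbols('u beta Gamma_star', positive=True)  # u = t_* - t'
I = sp.integrate(Gstar * sp.exp(-beta * u) * sp.Rational(4, 3) * sp.pi * u**3,
                 (u, 0, sp.oo))
assert sp.simplify(I - 8 * sp.pi * Gstar / beta**4) == 0
```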
Let us now turn to the cosmological predictions of double field inflation. The perturbations seeding the CMB anisotropies are generated by quantum fluctuations of $\phi$ during the slow-roll phase before the phase transition. Therefore, CMB observables are calculated in the slow-roll formalism. Defining the slow roll parameters,
\begin{equation}
\epsilon=\left.\frac{M_{\text{P}}^2}{2}\left(\frac{\partial V/\partial \hat{\phi}}{V}\right)^2\right|_{\chi=0}\,,
\quad\eta =\left.M_{\text{P}}^2\,\frac{\partial^2 V/\partial \hat{\phi}^2}{V}\right|_{\chi=0}\,,
\end{equation}
we can employ the standard expressions for the normalization $A_s$ and the spectral index $n_s$ of the scalar power spectrum. Taking into account that the energy density during inflation is dominated by $V_0$, we arrive at,
\begin{equation}\label{eq:observables}
A_s\simeq \left.\frac{V_0}{24\pi^2\,M_{\text{P}}^4\,\epsilon}\right|_{aH=k_{\text{pivot}}}\,,\qquad
n_s\simeq 1 - 6 \epsilon + 2\eta\Big|_{aH=k_{\text{pivot}}}\,,\qquad
r \simeq 16\epsilon\Big|_{aH=k_{\text{pivot}}}
\, .
\end{equation}
The quantities above are evaluated at horizon crossing of the pivot scale
$k_{\text{pivot}}=0.05\:\text{Mpc}^{-1}$ of density fluctuations observable in the CMB.
The number of e-foldings between the horizon crossing of the pivot scale and the end of inflation is given by
\begin{equation}\label{eq:Nk1}
N(k_{\text{pivot}}) = \int\limits_{\hat{\phi}_*}^{\hat{\phi}_{\text{pivot}}} \frac{d\hat{\phi}}{M_{\text{P}}\sqrt{2\epsilon}}\,,
\end{equation}
where we defined $\hat{\phi}_{\text{pivot}}$ as the field value at which the inflaton resides when $aH=k_{\text{pivot}}$. The critical field value $\hat{\phi}_*$ at which tunneling is triggered determines the end of inflation.
This is in contrast to conventional slow-roll inflation where the lower boundary of the integral (the end of inflation) is set by the field-value at which the slow-roll conditions are violated. $N(k_{\text{pivot}})$ is fixed by the energy scale of inflation
\begin{equation}\label{eq:Nk2}
N(k_{\text{pivot}}) = \log\left(\frac{a_*}{a_{\text{pivot}}}\right)
= \log\left(\frac{a_* H_*}{k_{\text{pivot}}}\right)\simeq 19.2 - \frac{1}{12}\log\left(g_{\text{eff}}(T_*)\right)+ \log \left( \frac{V_0^{1/4}}{\text{GeV}} \right)\,,
\end{equation}
where we denoted the scale factor and Hubble scale at horizon crossing of the pivot scale during inflation by $a_{\text{pivot}}$ and $H_{\text{pivot}}$. In the second step we approximated $H$ as being constant throughout the epoch of inflation so that $H_{\text{pivot}} \simeq H_*$. Since $V_0$ dominates the inflaton potential in Eq.~\eqref{eq:infpotdf} (by many orders of magnitude),
this approximation is very accurate.
Here $T_*$ is the reheating temperature (= the temperature of the radiation plasma directly after the phase transition),
\begin{equation}
T_* =\left(\frac{30\,V_0}{\pi^2 g_{\text{eff}}(T_*)}\right)^{1/4}\,.
\end{equation}
Since the phase transition completes within a small fraction of a Hubble time, we approximated it as instantaneous for deriving $N(k_{\text{pivot}})$ above. The corresponding error on $N(k_{\text{pivot}})$ is negligible.
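The constant in Eq.~\eqref{eq:Nk2} can be cross-checked by evaluating $N=\log(a_* H_*/k_{\text{pivot}})$ directly. A sketch with assumed inputs ($T_0 = 2.35\times10^{-13}\:$GeV, $g_{s,0}=3.91$, $M_{\text{P}}=2.435\times10^{18}\:$GeV, $1\:\text{Mpc}=1.56\times10^{38}\:\text{GeV}^{-1}$):

```python
# Sketch: check the constant 19.2 in Eq. (Nk2) against N = ln(a_* H_*/k_pivot),
# normalizing a_0 = 1 and using entropy conservation for a_*.
import math

M_P, T0, gs0 = 2.435e18, 2.35e-13, 3.91   # GeV, GeV, entropy d.o.f. today
kpiv = 0.05 / 1.56e38                      # pivot scale 0.05/Mpc in GeV

def N_direct(V0, g_star):
    T_star = (30 * V0 / (math.pi**2 * g_star))**0.25  # reheating temperature
    H_star = math.sqrt(V0 / 3) / M_P                  # Hubble rate during inflation
    a_star = (gs0 / g_star)**(1 / 3) * T0 / T_star    # scale factor at reheating
    return math.log(a_star * H_star / kpiv)

def N_approx(V0, g_star):
    return 19.2 - math.log(g_star) / 12 + math.log(V0**0.25)

# agreement at the sub-percent level, e.g. for V0 = (10 GeV)^4 and g_* = 80
assert abs(N_direct(1e4, 80.0) - N_approx(1e4, 80.0)) < 0.1
```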
The inflaton-field value $\hat{\phi}_{\text{pivot}}$ can now be obtained by combining Eq.~\eqref{eq:Nk1} and Eq.~\eqref{eq:Nk2}.
The inflaton potential in Eq.~\eqref{eq:infpotdf} is suitable for low-scale inflation required to fit the signal observed by pulsar timing arrays. The plateau in $\hat{\phi}$-direction resulting from the pole in the kinetic term amounts to an inflationary attractor even if the initial energy density of the universe strongly exceeds $V_0$. In order to arrive at a viable model we impose the correct normalization of the scalar power spectrum $A_s=2.1\times 10^{-9}$~\cite{Planck:2018jri} (cf.~Eq.~\eqref{eq:observables}) and the e-fold condition~\eqref{eq:Nk2} which allows us to eliminate $m_\phi$ and $\hat{\phi}_{\text{pivot}}$. The spectral index is then determined by $V_0$ and $\Lambda$ (or more conveniently $\hat{\phi}_{\text{pivot}} / \Lambda$) as depicted in Fig.~\ref{fig:nsplot}.\footnote{The spectral index also exhibits a mild dependence on $\hat{\phi}_*$ which we fixed to $\hat{\phi}_{\text{pivot}}/10$ in Fig.~\ref{fig:nsplot}.} It can be seen that fitting the NANOGrav signal, while simultaneously fulfilling the CMB constraints on $n_s$~\cite{Planck:2018jri}, requires $\hat{\phi}_{\text{pivot}} / \Lambda\simeq 0.7 -0.8$.
We emphasize, however, that the tensor-to-scalar ratio is highly suppressed in double field inflation models which can fit the pulsar timing signals. By using~Eq.~\eqref{eq:observables} and imposing again the correct normalization of the scalar power spectrum, we obtain
\begin{equation}\label{eq:r}
r \simeq 9.1\times 10^{-67} \:\left(\frac{V_0}{\text{GeV}^4}\right)\,.
\end{equation}
For the range of scales favored by the NANOGrav signal (green band in Fig.~\ref{fig:nsplot}) we find $r=10^{-78}-10^{-56}$. Hence, we can conclude that whenever the gravitational waves from the phase transition at the end of double field inflation are observable (with pulsar timing arrays), tensor modes from inflation are completely negligible.
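The coefficient in Eq.~\eqref{eq:r} follows directly from $r=16\epsilon$ and the normalization $A_s = V_0/(24\pi^2 M_{\text{P}}^4\epsilon)$; a one-line numerical check (assuming $M_{\text{P}}=2.435\times10^{18}\:$GeV):

```python
# Sketch: r = 16 eps with eps = V_0/(24 pi^2 M_P^4 A_s) fixed by the CMB,
# i.e. r = 2 V_0 / (3 pi^2 M_P^4 A_s).
import math

M_P, A_s = 2.435e18, 2.1e-9
coeff = 16 / (24 * math.pi**2 * M_P**4 * A_s)  # r per unit V_0 [GeV^-4]
print(f"{coeff:.1e}")  # -> 9.1e-67, as in Eq. (r)
```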
Fig.~\ref{fig:nsplot} also shows the original rolling $\alpha$-attractor inflation model (dashed line). In this case there is only a single scalar field. For the purposes of this figure we take the y-axis (labeled as $V_0$) to represent the scale of inflation for that model, i.e.\ only the $\tanh$ term in the potential in Eq.~\eqref{eq:infpotdf}. One can see that the original slow roll $\alpha$-attractor inflation fails to reproduce the observed $n_s$ in CMB data for potentials at low energy scales.
Further, since it is a slow roll model of inflation, there are no bubbles produced and hence no gravitational waves capable of explaining pulsar timing data.
On the other hand, $\alpha$-attractor variants at low inflation scales can succeed in two-field models.
First, the double-field model presented here (which uses the non-canonical kinetic term of $\alpha$-attractor models but ends in a first order phase transition) can be successful for potentials of any energy scale, including the range $V_0^{1/4} \sim \text{MeV}-100\:\text{GeV}$ required by the NANOGrav data. Secondly, hybrid $\alpha$-attractor inflation~\cite{Kallosh:2022ggf} can also give rise to low-energy inflation, although there are no bubble collisions and hence that model cannot explain the pulsar timing data.
In these two-field models, the potential in Eq.~\eqref{eq:infpotdf} is dominated by the $V_0$ term set by the second field (the tunneling field in the double field inflation case), a term not present in single field $\alpha$-attractor inflation.
\begin{table}[htp]
\begin{center}
\begin{tabular}{|ll|ll|}
\hline
&&&\\[-4mm]
\multicolumn{2}{|c|}{Input Parameters}& \multicolumn{2}{c|}{Inflation/ CMB} \\
\hline
&&&\\[-4mm]
$\Lambda$~[meV] & $3.2$ & $V^{1/4}(\hat{\phi}_{\text{pivot}})$~[GeV] & $13.7$\\[1mm]
$m_\phi$~[$\mu$eV] & $0.028$ & $N(k_{\text{pivot}})$ & $21.4$\\[1mm]
$m_\chi$~[$\mu$eV] & $36.2$ & $A_s$ & $2.1\times 10^{-9}$\\[1mm]
$\mu$~[$\mu$eV] & $53.6$ & $n_s$ & $0.965$\\[1mm]
$\lambda$ & $1.7\times 10^{-10}$ & $r$ & $3.2\times 10^{-62}$\\
\cline{3-4}
&&&\\[-4mm]
$\kappa$ & $0.02$ & \multicolumn{2}{c|}{Phase Transition} \\
\hline
&&&\\[-4mm]
\multicolumn{2}{|c|}{Derived Parameters} & $T_*$~[GeV] & $5.9$ \\
\cline{1-2}
&&&\\[-4mm]
$\hat{\phi}_{\text{pivot}} / \Lambda$ & $0.76$ & $\beta/H_*$ & $4.7$ \\[1mm]
$\hat{\phi}_* / \hat{\phi}_{\text{pivot}}$ & $0.1$ & $\alpha$ & $\infty$\\
\hline
\end{tabular}
\end{center}
\vspace{-0.4cm}
\caption{Benchmark point for the double field model containing the inflaton $\phi$ and the tunneling field $\chi$ as defined in Eq.~\eqref{eq:doubleL}. The benchmark point can explain the tentative gravitational wave signal observed at several pulsar timing arrays. Input parameters and predictions for the scale and e-foldings of inflation, CMB observables and phase transition parameters are shown. The corresponding gravitational wave spectrum is depicted in Fig.~\ref{fig:spectra2} (labelled by P1).}
\label{tab:dfiparameters}
\end{table}
In Tab.~\ref{tab:dfiparameters} we provide a benchmark point for successful double field inflation ending in a Big Bang phase transition. The corresponding gravitational wave spectrum is shown together with the NANOGrav data in Fig.~\ref{fig:spectra2} (labelled by P1). The location of the benchmark point in the thick-wall regime of vacuum tunneling suggests employing the spectrum of the thick-wall simulation (left panel of the figure). As can be seen, a good fit to the NANOGrav signal is obtained. Since only the lower tail of the signal falls into the frequency band of pulsar timing arrays, the measured spectrum is well described by a single power law with amplitude $\overline{\Omega}=3\times 10^{-9}$ and index $\bar{\gamma}=0.7$. Fig.~\ref{fig:nanogravPL} immediately reveals that such a power law spectrum also describes the PPTA and EPTA data well. Hence, double field inflation is a good candidate for generating the tentative gravitational wave signal observed by the pulsar timing arrays.
\subsection{Chain Inflation}\label{sec:chaininflation}
Chain Inflation~\cite{Freese:2004vs,Freese:2005kt,Ashoorioon:2008pj} is another well-motivated model of the early universe with a first order phase-transition origin of matter and radiation. In contrast to old inflation, chain inflation features a series of consecutive first order phase transitions instead of a single one. Each individual transition proceeds rapidly within a small fraction of a Hubble time such that the bubble percolation condition is easily satisfied. And yet -- due to the presence of many individual vacua -- inflation can easily last for the $15-60$ e-folds required to resolve the horizon problem.
Radiation, matter and gravity waves are generated at each of the phase transitions along the chain -- there are thus many consecutive Hot Big Bangs. However, since matter and radiation produced early during inflation are quickly redshifted away, it is the last few Big Bangs which generate the energy content observed in our present universe.
In order to fit the pulsar timing array signals in chain inflation we will again be drawn to a low inflation scale in the sub-TeV regime. In this light, it is important to point out that low-scale chain inflation can be realized without parameter tuning. This is different from low-scale slow roll inflation which requires an extremely flat (typically tuned) potential in order to match the observed CMB amplitude. The advantage of chain inflation arises due to the origin of the CMB anisotropies which (in contrast to slow roll inflation) is not linked to quantum fluctuations of the inflaton -- the latter are suppressed by the inflaton mass in each of the vacua. Rather, the probabilistic nature of tunneling -- different patches of the universe undergo tunneling at slightly different times -- causes density perturbations in the primordial plasma which later manifest as the anisotropies in the CMB. As we will see below, the CMB amplitude in chain inflation is determined by the tunneling rate normalized to the Hubble rate. Hence, no particular requirements on the flatness of the potential arise in low-scale chain inflation.
The CMB observables of chain inflation have recently been derived through dedicated simulations in~\cite{Winkler:2020ape}. In the following, we denote the vacuum in which the universe resides during horizon crossing of the pivot scale of the CMB by $n=0$, the next vacuum in the chain by $n=1$, the next-to-next vacuum by $n=2$ and so on. An index $n$ indicates that a quantity is evaluated in the $n$th vacuum.
This choice of definition of $n=0$ at horizon crossing of the pivot scale has been made for convenience of notation, since this is the scale at which $n_s$ and $r$ are determined from CMB observations. We note, however, that chain inflation began earlier, with the vacuum residing at higher values of the potential; i.e.\ in the current notation chain inflation began already at negative values of $n$. Indeed, CMB observables at the largest length scales arise from these earlier phase transitions, which would be relevant for determining e.g.\ the running of the spectral index.
In our notation, the scalar power spectrum and the scalar spectral index are given as
\begin{equation}\label{eq:chainobservables}
A_s \simeq 0.06 \left(\frac{\Gamma_0^{1/4}}{H_0} \right)^{-5/3}\,,\qquad
n_s \simeq 1+ 0.58\,\left(\frac{\Gamma_0^{1/4}}{H_0} \right)\,\left( \frac{2\Delta V_0}{V_0} - \frac{\Delta \Gamma_0}{\Gamma_0} \right)\,,
\end{equation}
where $\Delta \Gamma_n=\Gamma_{n+1}-\Gamma_n$ and $\Delta V_n=V_{n+1}-V_n$. Note that in the above expression $H_0$ stands for the Hubble rate in the $0$th vacuum and not for the Hubble constant today.
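These relations can be inverted numerically. A sketch (assuming, as in the tilted-cosine model below, a constant rate $\Delta\Gamma_0=0$ and equal vacuum steps $\Delta V_0/V_0 = -1/n_{\text{tot}}$) shows how the CMB normalization fixes $\Gamma_0^{1/4}/H_0$ and yields the prefactor appearing in Eq.~\eqref{eq:nc} below:

```python
# Sketch: inverting Eq. (chainobservables). A_s = 0.06 x^{-5/3} fixes
# x = Gamma_0^{1/4}/H_0; with Delta Gamma = 0 and Delta V_0/V_0 = -1/n_tot,
# 1 - n_s = 2*0.58*x / n_tot.
A_s = 2.1e-9
x = (0.06 / A_s)**(3 / 5)      # Gamma_0^{1/4}/H_0
prefactor = 2 * 0.58 * x       # 1 - n_s = prefactor / n_tot; ~3.45e4
print(f"Gamma_0^(1/4)/H_0 ~ {x:.1e}, 1 - n_s ~ {prefactor:.3g}/n_tot")
```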
A prime candidate for the inflaton in chain inflation is an axion in a quasi-periodic potential. We consider the following simple realization
\begin{equation}\label{eq:chainmodel}
V = \Lambda^4 \cos\left( \frac{\phi}{f} \right) - \mu^3 \phi + V_{\text{stop}}\,,
\end{equation}
where the parameters $\Lambda$, $\mu$ and $f$ are chosen such that the potential exhibits a series of metastable minima (which implies $\Lambda^4 > f\mu^3$). During inflation $\phi$ tunnels along the minima of the tilted cosine. The last term is irrelevant for tunneling during inflation, but ensures that the inflaton stops in a (quasi)stable minimum once the vacuum energy has been dissipated. A possible choice -- familiar from the relaxion mechanism~\cite{Graham:2015cka} -- is\footnote{This model has recently been considered as a realization of early dark energy~\cite{Freese:2021rjq}.}
\begin{equation}\label{eq:relax}
V_{\text{stop}} = (M_1^2 - M_2 \phi)\chi^2 + {\Lambda^\prime}^2 \chi^2\cos\frac{\phi}{f} + \lambda\chi^4+\text{const} \,.
\end{equation}
The auxiliary field $\chi$ is initially stabilized at $\chi=0$ and decouples from inflation. But once the inflaton passes the critical field value $\phi_c \simeq M_1^2/M_2$ the $\chi$-field gets displaced. Thereby it raises the potential barriers in $\phi$-direction and quickly stops the tunneling in a minimum with vanishing vacuum energy (the latter is ensured through appropriate choice of the constant in Eq.~\eqref{eq:relax}). The inflaton potential with $\chi$ set to its minimum is depicted in Fig.~\ref{fig:chainplot}.
At $\phi<\phi_c$ the inflaton potential is a pure tilted cosine and the tunneling rate remains constant.\footnote{This strictly holds if temperature effects on the tunneling rate can be neglected, which is justified if the coupling between $\phi$ and the radiation generated by earlier phase transitions is sufficiently suppressed. We emphasize, however, that the absence of temperature effects on $\Gamma$ is not crucial for realizing chain inflation. We merely avoided the additional model-dependence in the presence of temperature effects for the sake of a simple discussion.} But once the stopping mechanism is triggered, the tunneling rate decreases exponentially due to the exponential dependence of $\Gamma$ on the Euclidean action of the bounce (cf.\ Eq.~\eqref{eq:tunnelingrate}). We can parametrize $\Gamma_n$ in the following way\footnote{The Euclidean action of the bounce scales approximately as $S_4\propto (\Lambda^4+{\Lambda^\prime}^2 \chi^2 )^2 $ for the stopping potential in Eq.~\eqref{eq:relax}. Expanding $S_4$ around $\chi=0$ and taking into account $\chi\propto (n-n_c)$ suggests a quadratic dependence $S_4= S_{4,0} + S^\prime (n-n_c)^2$ on $n$ after the inflaton passes the critical field value.},
\begin{equation}\label{eq:Gamma0}
\Gamma_n = \begin{cases}
\Gamma_0 &\;\; n\leq n_c\,,\\
\Gamma_0 \, e^{-S^\prime (n-n_c)^2} &\;\; n> n_c\,,
\end{cases}
\end{equation}
where $n_c$ is the number of the vacuum in which the stopping mechanism is triggered, i.e.\ the vacuum corresponding to the field value $\phi_c$. If $\phi_c$ lies between two minima of the potential, $n_c$ becomes a non-integer number (e.g.\ $n_c=1000.5$ if $\phi_c$ lies in the middle between the 1000th and 1001st minimum of the potential)\footnote{We note that the field always resides in a minimum of the potential, corresponding to an integer value of $n$. However, the tunneling rate may be different at two adjacent minima due to the fact that $\phi_c$ lies in between these two minima.}.
The parameters specifying the tunneling rate in Eq.~\eqref{eq:Gamma0} can easily be linked to the potential parameters through the expressions for the tunneling rate in a quasi-periodic potential as given in Ref.~\cite{Winkler:2020ape}. In the following -- due to the absence of strong theoretical priors on the potential parameters -- we avoid this step and simply define our chain inflation model in terms of $\Gamma_0$, $S^\prime$, $n_c$ and the scale of inflation $V_0$. This choice is most convenient for the comparison with observation.
After imposing the correct normalization of the power spectrum $A_s=2.1\times 10^{-9}$, we use Eq.~\eqref{eq:chainobservables} to relate the spectral index to the total number of transitions after horizon crossing of the pivot scale in the CMB, $n_{\text{tot}} \simeq V_0/\Delta V_0$. Since only a very small number of transitions occurs after the inflaton passes the critical field value we can set $n_{\text{tot}}\simeq n_c$. We hence obtain
\begin{equation}\label{eq:nc}
n_c = \frac{3.45\times 10^4}{1-n_s} \simeq (0.8-1.3)\times 10^6\,.
\end{equation}
In the last step we imposed the Planck $2\sigma$-constraint $n_s=0.956-0.973$~\cite{Planck:2018jri}. CMB constraints thus require chain inflation to feature a relatively large number of vacuum transitions in the range of $10^6$ (as was previously noted in~\cite{Winkler:2020ape,Freese:2021noj}).
The number of e-foldings between horizon crossing of the pivot scale in the CMB and the time $t_c$ when the stopping mechanism is triggered (at $\phi=\phi_c$) is given as~\cite{Winkler:2020ape}
\begin{equation}\label{eq:Nkchain}
N_c \simeq 0.7 \sum\limits_{n=1}^{n_c} \frac{H_n}{\Gamma_n^{1/4}}\simeq 2.4\times 10^{-5} \sum\limits_{n=1}^{n_c}\sqrt{\frac{V_n}{V_0}}\simeq 2.4\times 10^{-5} \sum\limits_{n=1}^{n_c} \sqrt{1-\frac{n}{n_c}}
\simeq 16\,\left(\frac{n_c}{10^6}\right)\,,
\end{equation}
where we again employed the normalization of the scalar power spectrum. Furthermore, we neglected the radiation contribution to $H_n$. A refined estimate taking into account the radiation plasma increases $N_c$ by $\sim 1/2$ compared to Eq.~\eqref{eq:Nkchain}. Plugging Eq.~\eqref{eq:nc} into Eq.~\eqref{eq:Nkchain} and including the small correction yields
\begin{equation}\label{eq:Nc}
N_c =13-21\,,
\end{equation}
for the spectral index in the Planck-observed range. The number $N_c$ is very similar but not identical to the total number of e-folds during observable inflation $N(k_{\text{pivot}})$. As a reminder, we take $N(k_{\text{pivot}})$ to be the number of e-folds between the horizon crossing of the CMB pivot scale and the end of inflation i.e. the onset of the radiation-dominated epoch (shown in Fig.~\ref{fig:rhochain} at
the point where the vacuum (blue) and radiation (yellow) lines cross). On the other hand we take $N_c$ to be the number of e-folds between the horizon crossing of the CMB pivot scale and the trigger of the stopping mechanism at $t_c$ (shown in Fig.~\ref{fig:rhochain} at the point where the vacuum (blue) line takes a 90$^{\circ}$ turn). During chain inflation a radiation background with energy density $\rho_r\sim V_0/N_c$ is present since it takes $\sim 1$ e-fold to redshift away radiation from earlier phase transitions. Hence, radiation domination starts about one Hubble time before $t_c$.
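The range quoted in Eq.~\eqref{eq:Nc} can be reproduced by evaluating the sum in Eq.~\eqref{eq:Nkchain} directly for the $n_c$ range of Eq.~\eqref{eq:nc}; a sketch including the $\sim 0.5$ e-fold radiation correction mentioned above:

```python
# Sketch: evaluate N_c = 2.4e-5 * sum_{n=1}^{n_c} sqrt(1 - n/n_c) + 0.5
# (the +0.5 is the radiation-background correction quoted in the text).
import math

def N_c(n_c):
    s = sum(math.sqrt(1 - n / n_c) for n in range(1, n_c + 1))
    return 2.4e-5 * s + 0.5

print(N_c(800_000), N_c(1_300_000))  # -> roughly 13.3 and 21.3
```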
The temperature of the universe $T_c$ at the time $t_c$ can be obtained by summing up the contributions to the radiation density from all previous phase transitions (taking into account their redshift). We find
\begin{equation}\label{eq:Tcchain}
T_c = \left(\frac{30}{\pi^2} \frac{\rho_r(T_c)}{g_{\text{eff}}(T_c)}\right)^{1/4}\quad\text{with}\quad
\rho_r(T_c)\simeq 0.7\, \frac{V_0}{N_c}\,.
\end{equation}
After $t_c$ the universe undergoes a small number of ever slower vacuum transitions before it settles in a quasistable vacuum for its remaining lifetime. While the universe is (strongly) radiation-dominated at $t_c$ it may become vacuum-dominated for a second time if the last vacuum transition occurs sufficiently late. The second vacuum-domination -- if it occurs -- can only last a fraction of an e-fold since the percolation condition would otherwise be violated (reintroducing the empty universe problem of old inflation). For relating the number of e-folds to the scale of inflation we can, hence, neglect this small episode and obtain an expression similar to Eq.~\eqref{eq:Nk2},
\begin{equation}\label{eq:NcV0}
N_c =\log\left(\frac{a_c H_c}{k_{\text{pivot}}}\right)\simeq 19.2 + \log \left( \frac{\rho_r^{1/4}(T_c)}{g_{\text{eff}}^{1/12}(T_c)\:\text{GeV}} \right)\simeq 19.1 + \log \left( \frac{V_0^{1/4}}{N_c^{1/4}\,g_{\text{eff}}^{1/12}(T_c)\:\text{GeV}} \right)\,,
\end{equation}
where we used Eq.~\eqref{eq:Tcchain} in the last step. As previously found, the correct normalization and spectral index of the scalar power spectrum imposes $N_c=13-21$. The corresponding scale of inflation derived from Eq.~\eqref{eq:NcV0} is
\begin{equation}\label{eq:V0range}
V_0^{1/4} = 5\:\text{MeV}- 20\:\text{GeV}\,.
\end{equation}
We can thus conclude that the simple chain inflation model defined in Eq.~\eqref{eq:chainmodel} is consistent with all cosmological constraints for a low inflation scale in the MeV-GeV-range. Such a low inflation scale immediately suggests a gravitational wave signal in the frequency band of pulsar timing arrays.
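The range in Eq.~\eqref{eq:V0range} follows from inverting Eq.~\eqref{eq:NcV0}; a numerical sketch (the $g_{\text{eff}}$ values, $10.75$ near the MeV scale and $\sim 90$ near tens of GeV, are assumed for illustration):

```python
# Sketch: inverting Eq. (NcV0) for the inflation scale,
# ln(V0^{1/4}/GeV) = N_c - 19.1 + (1/4) ln N_c + (1/12) ln g_eff.
import math

def V0_quarter_GeV(N_c, g_eff):
    return math.exp(N_c - 19.1 + math.log(N_c) / 4 + math.log(g_eff) / 12)

print(f"{V0_quarter_GeV(13, 10.75) * 1e3:.1f} MeV")  # lower end, ~5 MeV
print(f"{V0_quarter_GeV(21, 90.0):.1f} GeV")         # upper end, ~21 GeV
```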
In Fig.~\ref{fig:rhochain} we depict the evolution of the vacuum and radiation densities for the benchmark parameter point in Tab.~\ref{tab:chainiparameters}. It can be seen that the vacuum energy initially dominates and drives the rapid expansion of space. Before the inflaton reaches the critical field value at $t=t_c$, the density $\rho_{\text{vac}}$ decreases linearly with time due to the constant tunneling rate (which looks almost like a step-function in the figure due to the log-log-scale). After $t_c$ the barriers in the inflaton potential start to increase, thereby stopping the tunneling after a few more transitions in a vacuum whose lifetime exceeds the age of the universe. For the benchmark point only two transitions occur after $t_c$. The first one is too close to $t_c$ to be resolved in the figure, whereas the last transition causes the second step in $\rho_{\text{vac}}$ at the time $t_*\simeq 0.01\:\text{s}$.
\begin{table}[htp]
\begin{center}
\begin{tabular}{|ll|ll|ll|}
\hline
&&&&&\\[-4mm]
\multicolumn{2}{|c|}{Input Parameters}& \multicolumn{2}{c|}{Inflation/ CMB}& \multicolumn{2}{c|}{Phase Transition} \\
\hline
&&&&&\\[-4mm]
$\Gamma_0^{1/4}$ & $1.2\times 10^9\:\text{s}^{-1}$ & $V^{1/4}(\phi_{\text{pivot}})$~[GeV] & $0.33$& $T_*$~[MeV] & $9.4$\\[1mm]
$n_c$ & $1.1\times 10^6$ & $N(k_{\text{pivot}})$ & $18.5$& $\beta/H_*$ & $6.7$\\[1mm]
$S^\prime$ & $63.5$ & $A_s$ & $2.1\times 10^{-9}$& $\alpha$ & $0.6$\\[1mm]
& & $n_s$ & $0.969$& &\\
\hline
\end{tabular}
\end{center}
\vspace{-0.4cm}
\caption{Chain inflation model parameters entering Eq.~\eqref{eq:Gamma0} and predictions for inflation, CMB observables. Also given are the parameters characterizing the final Hot Big Bang phase transition. The corresponding gravitational wave spectrum (P2 in Fig.~\ref{fig:spectra2}) is consistent with the NANOGrav signal.}
\label{tab:chainiparameters}
\end{table}
Each phase transition along the chain generates new vacuum bubbles which seed radiation and gravity waves upon collision. Also shown in Fig.~\ref{fig:rhochain} is the radiation density which remains approximately constant at $\rho_{\text{r}}\sim V_0/N_c$ during inflation. This is because the continuous increase of $\rho_{\text{r}}$ by bubble collisions cancels with the decrease of $\rho_{\text{r}}$ by redshifting. Shortly before $t_c$ the vacuum energy drops below $V_0/N_c$ and the universe becomes radiation-dominated. However, since the transitions after $t_c$ -- in particular the last one -- take longer, a second era of vacuum significance occurs (falling slightly short of vacuum-domination). This era ends by the last vacuum transition, where $\phi$ tunnels into its present minimum. Until today, the universe stays in this vacuum and evolves according to the cosmological standard model. The time of the final transition $t_*$ is set by $S^\prime$ which determines how quickly the life-time increases from vacuum to vacuum after the stopping mechanism is triggered (cf.\ Eq.~\eqref{eq:Gamma0}).
As noted, in chain inflation there is a large number of Hot Big Bangs in the sense that all phase transitions can contribute to the radiation and matter density of the universe. However it is the last Big Bang phase transition which yields the largest contribution to today's radiation density since it is the least affected by redshifting. The gravitational wave signal for the evolution shown in Fig.~\ref{fig:rhochain} is even entirely dominated by the last phase transition.
This can be understood from two factors in Eq.~\eqref{eq:gravityspectrum} for the gravitational wave amplitude.
First, $\Omega_{GW} \propto \bigl( {H_* / \beta} \bigr)^2$, i.e.\ the gravitational wave amplitude decreases as the square of the inverse of the number of phase transitions per e-fold.
We have seen that, during most of the phase transitions in chain inflation with a tilted cosine, matching CMB data requires $\sim 10^6$ transitions per e-fold, leading to a strong suppression ($\sim 10^{-12}$) of the gravitational wave amplitude. Only in the last phase transition, which is much slower due to the stopping mechanism, is a substantial gravitational wave amplitude produced.
Secondly, the gravitational wave amplitude scales with the fraction of the total energy participating in the phase transition which is maximized for the last transition.
Concentrating on the gravity waves from the last Big Bang, the problem effectively reduces to a single-phase-transition case with a radiation background as discussed in Sec.~\ref{sec:implications}. In order to derive the gravitational wave spectrum we simply need to determine the vacuum and radiation densities $\rho_{\text{vac}}$, $\rho_r(T_n)$ right before the phase transition. While $\rho_{\text{vac}}\simeq V_0/n_c$, $\rho_r(T_n)$ is obtained by adding the contributions from previous phase transitions taking into account their redshift (see Fig.~\ref{fig:rhochain}). The ratio $\alpha = \rho_{\text{vac}}/\rho_r(T_n)$ then also fixes the duration of the phase transition $H_*/\beta$ (see Fig.~\ref{fig:betaalpha}) and the reheating temperature $T_*$ through Eq.~\eqref{eq:Tst}. The corresponding gravitational wave spectrum follows from Eq.~\eqref{eq:gravityspectrum}.
The gravitational wave spectrum for the benchmark point in Tab.~\ref{tab:chainiparameters} is shown in Fig.~\ref{fig:spectra2} for the thick-wall simulation and the envelope approximation (P2 in the figure). For both cases, the predicted spectrum is compatible with the NANOGrav signal.\footnote{In the envelope approximation the amplitude of the predicted spectrum is slightly above the NANOGrav measurement.} As can be seen in Fig.~\ref{fig:spectra} the temperature of the benchmark point is slightly below the PPTA-preferred region. We have checked, however, that a good fit to PPTA can be obtained if one e.g.\ increases the scale of inflation by a factor of a few compared to the benchmark point. Hence, we can conclude that the last Big Bang phase transition at the end of chain inflation could well be the origin of the tentative stochastic gravitational wave background seen at pulsar timing arrays.
More generally, other variants of chain inflation could also produce gravitational waves consistent with the pulsar timing array data. As described above, in this paper we have considered the case of a constant (time-independent) tunneling rate (here via a tilted cosine potential), together with a relaxion stopping mechanism that slows the tunneling down. Another alternative, that could also explain the pulsar timing signals, would be the case of a potential that gives rise to a time-dependent tunneling rate $\Gamma \equiv \Gamma(t)$, but this latter case is the purview of future work.
\subsection{Kination-Induced Big Bang}\label{sec:kinBB}
In a standard cosmological evolution, the universe enters the radiation-dominated era once inflation has completed. However, there also exist well-motivated alternative cosmologies, in which the expansion history is altered. A prime example of non-standard evolution is an epoch of kination in which the universe is dominated by the kinetic energy of a scalar field.
Here we propose a model of ``Kination-Induced Big Bang'', in which a period of kination domination ends via a first order
phase transition that reheats into the ordinary radiation-dominated early history of our universe.
The occurrence of kination is predicted, for example, in models of quintessential inflation, which offer a unified explanation of inflation and dark energy. Since the inflationary expansion is driven by the potential energy of a scalar field, there has long been speculation that the same could be true for the accelerated expansion of our present universe. Scalar field models of dark energy -- which predict a small deviation of the dark energy equation-of-state parameter from $w=-1$ -- go under the name of quintessence~\cite{Wetterich:1987fm,Ratra:1987rm,Caldwell:1997ii}. The idea of quintessential inflation is to unify the description of inflation and quintessence dark energy in terms of a single scalar field $\phi$. The required potential $V(\phi)$ is depicted in Fig.~\ref{fig:quintV}. Inflation occurs while $\phi$ slowly rolls along the plateau on the left side of the figure. Once it reaches the steeper part of the potential, inflation ends and the potential energy is converted into kinetic energy of $\phi$. In contrast to standard slow-roll inflation, $\phi$ does not oscillate around a minimum, but rather continues to `shoot' along the flat bottom of the potential on the right side of the figure. The universe becomes kinetic-energy dominated for some time. However, the kinetic energy redshifts quickly with the sixth power of the scale factor and therefore eventually becomes subdominant to other forms of energy existing in the universe~\cite{Spokoiny:1993kt}. This is when the kination epoch ends. Much later, when (virtually) all kinetic energy has been dissipated, the (tiny) potential energy of $\phi$ once again dominates the energy content of the universe, commencing the era of quintessence.
In the following, we will assume that the universe went through an epoch of kination. While we consider quintessential inflation as a prime motivation for kination, let us note that the following discussion can apply to any cosmological scenario running through a kination phase.
A common assumption in kination cosmologies is that the Hot Big Bang occurs prior to the kination phase. If reheating is caused by gravitational particle production at the end of inflation~\cite{Ford:1986sy}, the plasma produced this way is initially subdominant to the kinetic energy, but dominates at a later time due to its slower redshift. However, gravitational reheating has been found to be too inefficient to comply with BBN constraints (see e.g.~\cite{Figueroa:2018twl}) and alternative, more complicated mechanisms have been considered.
In this section we propose a new model which we call a Kination-Induced Big Bang. Here, a first order phase transition triggered by the kination field $\phi$ is able to successfully reheat the universe after the kination stage. This new model provides a mechanism for successful reheating that was hard to achieve via gravitational particle production in quintessential inflation,
but (as mentioned above) applies more generally to any cosmological scenario with a kination period. A kination-induced Hot Big Bang can be realized through a derivative coupling of $\phi$ to an auxiliary scalar $\chi$ (=the tunneling field). Such a derivative coupling is particularly attractive if the kination field is identified with the quintessence field in the late universe (as in quintessential inflation), since it avoids strong fifth-force constraints on quintessence.
We consider the effective two-field Lagrangian,
\begin{equation}\label{eq:Lquint}
\mathcal{L} = \left( \frac{1}{2} + \frac{\chi^2}{M^2} \right)\partial_\mu \phi\partial^\mu \phi
+\frac{1}{2} \partial_\mu \chi\partial^\mu \chi - V(\phi) -\frac{m_\chi^2}{2}\chi^2 + \mu \chi^3 - \lambda^2 \chi^4 + V_0\,,
\end{equation}
valid below the scale $M$. Since we are mostly interested in the Hot Big Bang phase transition after kination, it is sufficient to describe this period in the effective theory given here (there is no need to consider an explicit ultraviolet completion). As initial condition we can set $\dot{\phi}=M^2$ and then follow the evolution of $\dot{\phi}$ through its equation of motion. During kination we can neglect $V(\phi)$ since the potential energy of the kination field must be subdominant (otherwise it would not be kination).
The potential of the auxiliary field features a metastable minimum at $\chi=0$ and a global minimum at $\chi=(3\mu+\sqrt{9\mu^2-16\lambda^2 m_\chi^2})/(8\lambda^2)$. We fix $V_0$ such that the potential energy of $\chi$ vanishes in the true minimum. For this choice, $V_0$ is equal to the false vacuum energy density. During kination the coupling between $\chi$ and $\partial_\mu \phi$ increases the effective mass of the auxiliary field,
\begin{equation}\label{eq:mchikination}
m_{\chi,\text{eff}}^2 = m_\chi^2 + 2 \frac{\dot{\phi}^2}{M^2}\,,
\end{equation}
which stabilizes $\chi$ in the metastable minimum. This bears resemblance to double field inflation, where a direct coupling to the inflaton was used to stabilize the auxiliary field in a false vacuum (see Sec.~\ref{sec:doublefield}). However, we emphasize that the mechanism described above does not operate during inflation, but rather during the kination stage.
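As an illustrative sketch (not part of the analysis above), the following Python snippet evaluates Eq.~\eqref{eq:mchikination} while the kinetic energy redshifts as $\dot{\phi}\propto a^{-3}$ during kination; the initial condition $\dot{\phi}=M^2$ and the values of $M$ and $m_\chi$ (taken from the benchmark in Tab.~\ref{tab:quparameters}) are assumptions for illustration.

```python
import math

def m_chi_eff(m_chi, phidot, M):
    """Effective auxiliary-field mass during kination: m_eff^2 = m_chi^2 + 2*phidot^2/M^2."""
    return math.sqrt(m_chi**2 + 2.0 * phidot**2 / M**2)

# Assumed illustrative values (GeV): M and m_chi as in the kination benchmark;
# initial condition phidot = M^2, redshifting as a^-3 during kination.
M, m_chi = 1.8, 1.2e-4
phidot0 = M**2
for a in (1.0, 1e1, 1e2, 1e3):
    print(a, m_chi_eff(m_chi, phidot0 / a**3, M))
```

The effective mass drops from the $\mathcal{O}(M)$ scale toward $m_\chi$ as the universe expands, which is when the stabilization of the false vacuum ceases.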
At the beginning of kination -- when $\dot{\phi}$ is maximal -- the minimum at $\chi=0$ is energetically favorable due to the large effective mass of the auxiliary field. Even if $\chi$ was displaced during inflation it quickly settles in this minimum once kination starts. Subsequently, the Hubble friction reduces $\dot{\phi}$, $m_{\chi,\text{eff}}$ and the second deeper minimum at $\chi\neq 0$ starts showing up. For some time, the universe still remains in a metastable state until $\dot{\phi}$ falls below a critical value $\dot{\phi}_c$ at which the universe tunnels into the true minimum of $\chi$. This critical moment $t_*$ is defined in terms of the tunneling rate by Eq.~\eqref{eq:Gammatstar}, where the tunneling rate is determined by Eq.~\eqref{eq:tunneling_doublefield} (with $m_{\chi,\text{eff}}$ taken from Eq.~\eqref{eq:mchikination}). The phase transition leads to the formation of true vacuum bubbles which collide and reheat the universe in a Hot Big Bang. The energy density of the universe at the time of the phase-transition can be estimated as
\begin{equation}
\rho_{\text{tot}}(t_*) = \frac{\dot{\phi}_c^2}{2} + V_0\,.
\end{equation}
We note that -- depending on the parameter choice -- the phase transition may occur during kination or shortly after kination. In the second case, the universe undergoes a second period of vacuum domination driven by $V_0$. This second vacuum domination -- if it occurs -- can, however, only last very briefly. Otherwise $\dot{\phi}$ would be completely redshifted away, making it implausible that the evolution of $\dot{\phi}$ triggers the phase transition.
Finally, the duration of the phase transition $\beta^{-1}$ is given as (cf. Eq.~\eqref{eq:beta2}),
\begin{equation}
\beta \simeq \left.\frac{\dot{\Gamma}}{\Gamma}\right|_{t=t_*}
\simeq -\left.\dot{S_4}\right|_{t=t_*}\simeq \left. 3H\dot{\phi}\,\frac{\partial S_4}{\partial\dot{\phi}}\right|_{\dot{\phi}=\dot{\phi}_c}\,,
\end{equation}
with $S_4$ again taken from Eq.~\eqref{eq:tunneling_doublefield}. In the last step we employed the equation of motion $\ddot{\phi} + 3 H \dot{\phi}\simeq 0$, where we used that $V(\phi)$ is negligible at the time of the phase transition. Only in the late universe, long after the Big Bang phase transition, $V(\phi)$ starts to become important again (potentially playing the role of dark energy if $\phi$ is identified with the quintessence field).
\begin{table}[htp]
\begin{center}
\begin{tabular}{|ll|ll|}
\hline
&&&\\[-4mm]
\multicolumn{2}{|c|}{Input Parameters}& \multicolumn{2}{c|}{Phase Transition} \\
\hline
&&&\\[-4mm]
$M$~[GeV] & $1.8$ & $V_0^{1/4}$~[MeV] & $27.3$\\[1mm]
$m_\chi$~[MeV] & $0.12$ & $T_*$~[MeV] & $19.6$\\[1mm]
$\mu$~[MeV] & $0.048$ & $\beta/H_*$ & $38.4$\\[1mm]
$\lambda$ & $0.01$ & $\alpha$ & $1070$\\
\hline
\end{tabular}
\end{center}
\vspace{-0.4cm}
\caption{Parameter example yielding a kination-induced Big Bang consistent with the gravitational wave signal at several pulsar timing arrays. Input parameters entering the Lagrangian defined in Eq.~\eqref{eq:Lquint} are shown on the left side, predictions for the phase transition parameters on the right side ($\alpha$ in this case was defined as the ratio of vacuum energy to kinetic energy). The gravitational wave spectrum is shown in Fig.~\ref{fig:spectra2} (labelled by P3).}
\label{tab:quparameters}
\end{table}
The gravitational wave spectrum from the phase transition is derived from Eq.~\eqref{eq:gravityspectrum}. In Tab.~\ref{tab:quparameters} we provide a parameter example for a kination-induced Big Bang. The corresponding gravitational wave signal is depicted in Fig.~\ref{fig:spectra2} (labelled by P3). As can be seen, a good fit to the signal of the NANOGrav experiment is obtained. The same is true for PPTA as visible in Fig.~\ref{fig:spectra}, where the benchmark point is also indicated by P3.
We also wish to point out that our model, of kination ending in a first order phase transition, can be used as a mechanism to reheat Quintessential Inflation.
\subsection{Supercooled Big Bang}\label{sec:supercooledBB}
The Hot Big Bang is commonly identified with the reheating process at the end of inflation. However, there exist attractive cosmological scenarios in which the early universe went through a second (short) period of vacuum domination, lasting for e.g. 1 - 10 e-folds. Whereas the earlier epoch of inflation is required to solve the cosmological and horizon problems as
well as generate the density perturbations for the CMB, this much shorter second phase of vacuum domination may serve to dilute unwanted fields (e.g. the moduli problem) as well as give rise to a second period of reheating of the universe (see e.g.~\cite{Lyth:1995hj,Lyth:1995ka}). A prime example of a second vacuum domination is a strongly supercooled first order phase transition (see e.g.~\cite{Barreiro:1996dx}). The latter often occurs in connection with the breaking of gauge symmetries. While supercooling does not arise for the electroweak phase transition, simple and well-motivated gauge extensions of the Standard Model can trigger a supercooled phase transition (see e.g.~\cite{Jaeckel:2016jlh,Jinno:2016knw,Addazi:2017gpt,Hashino:2018zsi,Croon:2018erz,Marzo:2018nov,Breitbach:2018ddu,Baratella:2018pxi,Azatov:2019png,Lewicki:2020azd}). In the regime of strong supercooling such a transition reheats the universe a second time and releases a great amount of entropy, which dilutes the preexisting plasma. In the language of Eq.~\eqref{eq:alpha}, any model with $\alpha\gg 1$ reheats the universe when the vacuum energy is converted to radiation. There may still be some residual radiation from before the phase transition, but most of the radiation content of the universe arises as a result of the reheating from the supercooled transition. The Hot Big Bang in this case is associated with the supercooled phase transition rather than with the end of inflation.
The reheating temperature $T_*$ after the supercooled phase must be high enough for BBN to take place, i.e., we again require $T_*>1.8\:\text{MeV}$~\cite{Hannestad:2004px,Hasegawa:2019jsa}.
In addition, there arises a CMB constraint that the second vacuum domination should last $\lesssim 10$ e-folds. This constraint ensures that the scales observable in the CMB exited the horizon during standard inflation, and not during the second vacuum domination (which would be a problem since the perturbations generated during the second vacuum domination have a very different spectrum compared to what is observed in the CMB, see e.g.~\cite{Lewicki:2021xku}). In the specific example we consider below, the second vacuum domination lasts only $\sim 1$~e-fold such that the CMB constraint is easily satisfied.\footnote{Because of the additional e-folds due to the second period of vacuum domination, the production of perturbations on CMB observable scales occurs at a later point in inflation, farther down the inflaton potential, compared to the standard inflationary scenario (where there is no second vacuum-dominated epoch). However, in the example considered below, this shift is very small.}
As a simple example we consider a U(1)-gauge extension of the Standard Model commonly referred to as the Abelian Higgs model. The Lagrangian containing the complex charged scalar field $\Phi$ and the U(1) vector field $A_\mu$ -- the dark photon -- reads\footnote{The Abelian Higgs model without an explicit mass term has also been considered in the context of the NANOGrav signal~\cite{Lewicki:2021xku}.},
\begin{equation}\label{eq:abelianhiggs}
\mathcal{L} = - \frac{1}{4} F_{\mu\nu} F^{\mu\nu} + \left|D_\mu \Phi \right|^2 - V(\Phi)\,,\qquad
V(\Phi) = - \mu^2 | \Phi|^2 + \lambda| \Phi|^4 + V_0\,,
\end{equation}
with $V_0 = \mu^4/(4\lambda)$. Here we employed the standard definitions of the field tensor $F^{\mu\nu}=\partial^\mu A^\nu - \partial^\nu A^\mu$ and the gauge covariant derivative $D_\mu = \partial_\mu - i g A_\mu$ with $g$ denoting the gauge coupling. For convenience we can express the complex scalar field in terms of the real scalar $\phi=\sqrt{2}\,|\Phi|$ (and a phase field).
In addition to the Lagrangian terms in Eq.~\eqref{eq:abelianhiggs} we invoke a (small) coupling between the Abelian Higgs sector and the Standard Model (e.g.\ through the Higgs and/or vector portal of the Standard Model). The latter establishes thermal equilibrium between both sectors in the early universe.
At zero temperature, the potential in Eq.~\eqref{eq:abelianhiggs} features a minimum with vanishing vacuum energy at $\phi = \mu/\sqrt{\lambda} \equiv v$. In this minimum the gauge symmetry is broken and the dark photon and the scalar receive masses of $m_A = g v$ and $m_\phi= \sqrt{2} \mu$ respectively. However, in the hot early universe, the induced thermal potential $\Delta V_{\text{thermal}}$ stabilizes the scalar field at $\phi=0$ and thereby restores the gauge symmetry. As the universe cools down and temperature effects decrease, $\phi$ either rolls or tunnels into its symmetry-breaking minimum in a crossover or a phase transition. Today, the universe resides in the broken phase.
Considering the full thermal potential of the Abelian Higgs model~\cite{Dolan:1973qd,Dine:1992wr,Arnold:1992rz} it turns out that a supercooled first order phase transition arises if the transition temperature $T_n$ is small compared to the dark photon mass in the true vacuum, $T_n \ll m_A$~\cite{Niedermann:2021ijp,Niedermann:2021vgd}. As shown in these references as well as illustrated below (see the discussion following Eq.~\eqref{eq:rat}), this situation is realized if $\lambda \ll g^4$ -- a relatively mild constraint given $g$ is of order unity. In the low-temperature/high-mass regime the thermal potential can be written as~\cite{Niedermann:2021vgd}
\begin{equation}
\Delta V_{\text{thermal}}(\phi)\simeq 3 T^4 K(g\phi/T) e^{-g\phi/T}\,,
\end{equation}
where we skipped field-independent terms. The function $K$ is approximated by the following fit~\cite{Niedermann:2021vgd},
\begin{equation}
K(x)\simeq -0.1134 (1+x) - 0.113 x^2 + 4.32\times 10^{-6} \log(x) x^{3.58} + 0.0038 e^{-x(x-1)}\,.
\end{equation}
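As a numerical cross-check (a sketch based on the fit above, with the benchmark values $g=2$, $\lambda=2\times 10^{-4}$, $\mu=1.2$~MeV of Tab.~\ref{tab:scparameters} assumed), the following Python snippet evaluates $V_{\text{tot}}=V(\phi)+\Delta V_{\text{thermal}}(\phi)$ and confirms that the symmetric minimum is energetically favored at high temperature while the broken minimum takes over as the universe cools:

```python
import math

def K(x):
    """Fit to the thermal function in the low-temperature/high-mass regime."""
    if x == 0.0:
        return -0.1134 + 0.0038          # x -> 0 limit of the fit
    return (-0.1134 * (1.0 + x) - 0.113 * x**2
            + 4.32e-6 * math.log(x) * x**3.58
            + 0.0038 * math.exp(-x * (x - 1.0)))

def V_tot(phi, T, g, lam, mu):
    """Zero-temperature potential plus thermal correction (natural units, GeV)."""
    V0 = mu**4 / (4.0 * lam)
    V = -0.5 * mu**2 * phi**2 + 0.25 * lam * phi**4 + V0
    return V + 3.0 * T**4 * K(g * phi / T) * math.exp(-g * phi / T)

# Assumed benchmark values: g = 2, lambda = 2e-4, mu = 1.2 MeV.
g, lam, mu = 2.0, 2e-4, 1.2e-3
v = mu / math.sqrt(lam)                  # broken minimum, ~85 MeV
print(V_tot(0.0, 0.020, g, lam, mu), V_tot(v, 0.020, g, lam, mu))  # symmetric favored
print(V_tot(0.0, 0.005, g, lam, mu), V_tot(v, 0.005, g, lam, mu))  # broken favored
```

For these assumed values the two minima become degenerate at roughly $T\approx(V_0/0.33)^{1/4}\approx 9$~MeV, above the nucleation range $T_n=(2\text{--}10)\,\mu/g$ quoted below, consistent with a supercooled transition.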
In Fig.~\ref{fig:vthermal} we depict the full potential $V_{\text{tot}}= V(\Phi)+ \Delta V_{\text{thermal}}(\phi)$ including the zero-temperature and thermal parts at different temperatures. In the hot early universe the global minimum is located at $\phi=0$. But as the universe cools down the minimum at $\phi\neq 0$ shows up and eventually becomes energetically preferred. The thermal transition rate from the symmetry-preserving into the symmetry-breaking minimum is given by (cf. Eq.~\eqref{eq:tunnelingrate})
\begin{equation}
\Gamma = T^4 \left(\frac{S_3}{2\pi\,T}\right)^{3/2} e^{-S_3/T}\,.
\end{equation}
The Euclidean action $S_3$ needs to be determined numerically by solving the differential equation of the bounce. For simplicity we consider the case $\lambda\ll 1$ for which $S_3$ becomes independent of $\lambda$. In this regime, we find that the following fit function agrees well with the full numerical result,
\begin{equation}\label{eq:S3Tfit}
\frac{S_3}{T} \simeq \frac{1}{g^3}\left[ 603.4 \left(\frac{g\,T}{2\mu}-1\right)^{1.8}+344.3 \left(\frac{g\,T}{2\mu}-1\right)^3\right]\,.
\end{equation}
The (inverse) duration of the phase transition is obtained from Eq.~\eqref{eq:beta2},
\begin{equation}
\label{eq:dur}
\frac{\beta}{H_*} \simeq \left.\frac{d(S_3/T)}{H_*\,dt}\right|_{t=t_*} = \left.\frac{d(S_3/T)}{H_*\,dT}\dot{T}\right|_{T=T_n} \simeq T_n \left.\frac{d(S_3/T)}{dT}\right|_{T=T_n}\,.
\end{equation}
Here we used that the time-dependence of $\Gamma$ dominantly arises through the temperature-dependence of $S_3/T$.
The radiation temperature $T_n$ right before the phase transition is fixed by Eq.~\eqref{eq:Gammatstar}. Combining Eqs.~\eqref{eq:S3Tfit} and \eqref{eq:dur}, we see that $\beta/H_*$ decreases monotonically for growing $g$. Imposing a perturbative gauge coupling strength $g^2<4\pi$, therefore, leads to the constraint
\begin{equation}\label{eq:betaHmin}
\frac{\beta}{H_*} \gtrsim 500\,.
\end{equation}
As a reminder, the minimal value of $\beta/H_*$ (reached here for the maximal gauge coupling strength) corresponds to the largest gravitational wave amplitude, see Eq.~\eqref{eq:gravityspectrumplasma}.
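This behavior can be reproduced numerically. The sketch below solves the nucleation condition for the fit Eq.~\eqref{eq:S3Tfit} by bisection and evaluates Eq.~\eqref{eq:dur}; the threshold $S_3/T_n\approx 185$ (roughly $4\ln(M_{\text{P}}/T)$ for MeV-scale temperatures) is an assumption for illustration, and shifting it changes the numbers only mildly:

```python
import math

def s3_over_t(u, g):
    """Fit to S3/T with u = g*T/(2*mu)."""
    y = u - 1.0
    return (603.4 * y**1.8 + 344.3 * y**3) / g**3

def beta_over_H(g, s3_crit=185.0):
    """Solve S3/T = s3_crit by bisection and return beta/H_* = T d(S3/T)/dT."""
    lo, hi = 1.0 + 1e-9, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if s3_over_t(mid, g) < s3_crit:
            lo = mid
        else:
            hi = mid
    u = 0.5 * (lo + hi)
    y = u - 1.0
    # T d/dT acts as u d/du on the fit
    return u * (603.4 * 1.8 * y**0.8 + 344.3 * 3.0 * y**2) / g**3

g_max = math.sqrt(4.0 * math.pi)         # perturbativity bound g^2 < 4*pi
print(beta_over_H(1.0), beta_over_H(2.0), beta_over_H(g_max))
```

With this assumed threshold, $g=2$ gives $\beta/H_*$ close to the benchmark value $720$ of Tab.~\ref{tab:scparameters}, the result decreases monotonically with $g$, and at the perturbativity bound one stays above the limit of Eq.~\eqref{eq:betaHmin}.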
Once $T_n$ is known, the ratio of vacuum-to-radiation energy follows from Eq.~\eqref{eq:alpha},
\begin{equation}
\label{eq:rat}
\alpha = \frac{V_0}{\rho_{\text{r}}(T_n)} = \frac{\mu^4/(4\lambda)}{(\pi^2/30)g_{\text{eff}}(T_n)\,T_n^4}\,.
\end{equation}
Independent of the coupling choice we find that $T_n=(2-10)\times(\mu/g)$ which implies $\alpha=\mathcal{O}(g^4/\lambda)$. This confirms that the regime of strong supercooling ($\alpha\gg 1$) is indeed accessed for $\lambda \ll g^4$.
In Tab.~\ref{tab:scparameters} we provide an example parameter choice yielding $\alpha = 14.5$. For this large value of $\alpha$, the universe was strongly vacuum-dominated just before the phase transition: $V_0$ made up 94\% of the energy density of the universe and $\rho_r$ made up 6\%.
The phase transition then converts $V_0$ to a new (dominant) component of radiation.
Thus most of the radiation density of the present universe is produced by the supercooled phase transition, which hence plays the role of the Hot Big Bang.\footnote{We note that the actual phase transition was virtually instantaneous, with duration $H_*/\beta = 1/720 \simeq 0.0014$ e-folds.}
The number of e-folds of the scale factor during the vacuum-dominated epoch is approximately given by $\log(T_*/T_n)$. If the transition time is short (compared to the Hubble time) we can approximate Eq.~\eqref{eq:Tst} as $T_* \simeq (1+\alpha)^{1/4}\,T_n$. For the choice $\alpha=14.5$ the epoch of vacuum domination (during the supercooling stage) produces roughly one e-fold of expansion.
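These statements are easy to verify numerically; a minimal sketch using only the benchmark value $\alpha=14.5$ of Tab.~\ref{tab:scparameters}:

```python
import math

alpha = 14.5                       # benchmark value
vac_frac = alpha / (1.0 + alpha)   # vacuum share of the total energy density
ratio = (1.0 + alpha) ** 0.25      # T_* / T_n for a fast transition
efolds = math.log(ratio)           # e-folds of the vacuum-dominated stage
print(vac_frac, ratio, efolds)
```

One recovers the $94\%$ vacuum fraction quoted above, a temperature jump $T_*/T_n\approx 2$, and about $0.7$ e-folds of vacuum-dominated expansion.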
The addition to the Lagrangian of even a small portal coupling (of the dark sector to the Standard Model) will be sufficient to ensure that -- in the symmetry-breaking vacuum -- the Abelian Higgs fields decay promptly into electromagnetic radiation (given the decay to electron-positron pairs is kinematically accessible). Therefore, we can assume that the phase transition also reheats the visible sector, which subsequently evolves according to the cosmological standard model.\footnote{The condition $m_\phi > 2m_e$ also ensures that $\phi$ does not (significantly) alter the number of relativistic species during BBN.} Baryons and dark matter may either be produced in the phase transition, or may stem from the preexisting plasma.\footnote{Baryons and dark matter present in the preexisting plasma also get diluted by the phase transition. However, in their case, the entropy production can be compensated by enhancing the baryon/dark matter fraction prior to the phase transition.}
In the supercooled regime the true-vacuum bubbles propagate (virtually) at the speed of light. However, since the phase transition involves the breaking of a gauge symmetry, the bubble walls experience a pressure which grows linearly with their Lorentz boost~\cite{Bodeker:2017cim}. Unless the supercooling is extremely strong (which would require $\alpha\gg 10^5$) the bubble walls do not reach the runaway regime in which they carry most of the energy density upon collision. Instead most of the available energy gets converted into plasma bulk motion and thermal energy. Hence, the gravitational wave signal from bubble collisions is suppressed. On the other hand, the interactions of the bubble walls with the plasma induce sound waves which themselves source gravitational waves. The corresponding spectrum is determined by Eq.~\eqref{eq:gravityspectrumplasma} with $\kappa_v\simeq 1$ for $\alpha\gg 1$~\cite{Espinosa:2010hh}. Notice that, in contrast to the gravitational waves from bubble collisions, the peak amplitude is only suppressed by one power of $H_*/\beta$. Therefore, the range of $\beta$ consistent with the pulsar timing signals is slightly extended in the case of acoustic gravitational waves.
\begin{table}[htp]
\begin{center}
\begin{tabular}{|ll|ll|}
\hline
&&&\\[-4mm]
\multicolumn{2}{|c|}{Input Parameters}& \multicolumn{2}{c|}{Phase Transition} \\
\hline
&&&\\[-4mm]
$g$ & $2$ & $T_*$~[MeV] & $5.0$\\[1mm]
$\lambda$ & $2\times 10^{-4}$ & $\beta/H_*$ & $720$\\[1mm]
$\mu$~[MeV] & $1.2$ & $\alpha$ & $14.5$\\
\hline
\end{tabular}
\end{center}
\vspace{-0.4cm}
\caption{Example choice of couplings and mass in the Abelian Higgs model (cf.~Eq.~\eqref{eq:abelianhiggs}) resulting in a supercooled phase transition. The phase transition parameters are also given. See Fig.~\ref{fig:spectra2} (line P5) for the corresponding gravitational wave spectrum.}
\label{tab:scparameters}
\end{table}
In Fig.~\ref{fig:spectra2} (line P5) we depict the acoustic gravitational wave spectrum of the Abelian Higgs model with the parameter choice of Tab.~\ref{tab:scparameters}. We have chosen a large value of the gauge coupling in order to minimize the suppression of the peak amplitude by $H_*/\beta$ (see discussion around Eq.~\eqref{eq:betaHmin}). The obtained spectrum falls in the right range to explain the pulsar timing signals, with the normalization a bit low in the first NANOGrav bin (and similarly for the other pulsar timing arrays). We note, however, that the fit can potentially be further improved by including the gravitational waves from magneto-hydrodynamic turbulence induced by bubble collisions. While the magnitude of this contribution is somewhat uncertain, it is expected to soften the infrared tail of the spectrum, which is favorable for fitting the NANOGrav, PPTA and EPTA signals. We also emphasize that further parameter space can be accessed in gauge extensions beyond the Abelian Higgs model. In this light a supercooled Big Bang phase transition provides an attractive explanation for the pulsar timing signals.
\subsection{A Dark Big Bang}\label{sec:darkbigbang}
We have so far described a number of cosmological scenarios featuring a Hot Big Bang phase transition consistent with the NANOGrav, PPTA and EPTA signals. In this section we will turn to a complementary case, in which visible radiation/matter and dark matter are of different origin. While the Hot Big Bang at the end of inflation generates the Standard Model plasma, dark matter is only produced much later in a `Dark Big Bang' -- a first order phase transition in the dark sector. In the following we will consider the Dark Big Bang (rather than the Hot Big Bang) as the explanation for the signals observed by the pulsar timing array experiments. Related ideas of linking dark phase transitions, dark radiation and pulsar timing signals have appeared in~\cite{Schwaller:2015tja,Nakai:2020oit,Addazi:2020zcj,Ratzinger:2020koh,Borah:2021ocu,Lewicki:2021xku}.
Below, we will assume that inflation and reheating to the visible sector have already taken place at an earlier epoch in the Universe, prior to the Dark Big Bang phase transition described here.
In a minimal realization, the dark sector comprises the tunneling field $\varphi$, the dark matter field $\psi$ and one or several massless (or very light) degrees of freedom $\xi_i$ playing the role of dark radiation. The particle nature of $\psi$ is irrelevant for the following discussion, but for concreteness we will take $\psi$ to be a Majorana fermion. Furthermore, we assume that the dark sector is decoupled from ordinary matter (other than through gravity). The dark sector Lagrangian reads,
\begin{align}\label{eq:LDS}
\mathcal{L}_{\text{DS}}&= \frac{1}{2}\partial_\mu \varphi\partial^\mu \varphi - V(\varphi)
+ \frac{i}{2}\bar{\psi}\cancel\partial\psi - \frac{m_\psi}{2} \bar{\psi}\psi- \kappa\, \varphi \bar{\psi}{\psi}+ \mathcal{L}_{\text{DR}}\,,\nonumber\\
V(\varphi)&= \frac{m_\varphi^2}{2}\varphi^2 - \mu \varphi^3 + \lambda^2 \varphi^4 + V_0\,,
\end{align}
where $\mathcal{L}_{\text{DR}}$ contains kinetic and interaction terms of the dark radiation fields (self-interactions as well as interactions with the other dark sector fields). We left this Lagrangian part unspecified since it merely enters the early universe dynamics by fixing the annihilation cross section $\langle \sigma v \rangle_\psi$ of dark matter into dark radiation. In the following, we will simply take $\langle \sigma v \rangle_\psi$ to be a free parameter. The potential exhibits a false vacuum with energy density $V_0$ at $\varphi=0$ and the true vacuum at $\varphi=(3\mu+\sqrt{9\mu^2-16\lambda^2 m_\varphi^2})/(8\lambda^2)$.\footnote{We assumed $\mu > 4\lambda m_\varphi/3$.} We chose $V_0$ such that the potential energy vanishes in the true minimum.
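As a quick consistency check of this vacuum structure (a sketch with illustrative parameter values chosen to satisfy $\mu > 4\lambda m_\varphi/3$; they are not a benchmark of this section):

```python
import math

def V_and_dV(phi, m, mu, lam):
    """V(phi) - V0 = m^2 phi^2/2 - mu phi^3 + lam^2 phi^4 and its derivative."""
    V = 0.5 * m**2 * phi**2 - mu * phi**3 + lam**2 * phi**4
    dV = m**2 * phi - 3.0 * mu * phi**2 + 4.0 * lam**2 * phi**3
    return V, dV

# Illustrative values (MeV units) satisfying mu > 4*lam*m/3:
m, mu, lam = 0.12, 0.048, 0.01
phi_true = (3.0 * mu + math.sqrt(9.0 * mu**2 - 16.0 * lam**2 * m**2)) / (8.0 * lam**2)
V_true, dV_true = V_and_dV(phi_true, m, mu, lam)
V0 = -V_true          # chosen so the potential energy vanishes in the true minimum
print(phi_true, dV_true, V0)
```

The analytic root indeed satisfies $V'(\varphi_{\text{true}})=0$ (up to floating-point cancellation), and the true minimum lies below the false vacuum, fixing $V_0>0$.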
Let us now turn to the cosmological evolution. We assume a standard inflationary epoch followed by the Hot Big Bang. The latter creates a thermal plasma of Standard Model particles, while reheating to dark sector particles is taken to be absent (or suppressed).\footnote{This is a natural choice since comparable reheating of both sectors would require a very non-generic choice of inflaton couplings.} Due to the absence of couplings to visible matter the dark sector remains cold for some time. The universe is assumed to populate the metastable minimum of $\varphi$ after inflation.\footnote{This situation is realized if inflation blows up a false vacuum patch to contain the entire observable universe.} The false vacuum energy is negligible at the beginning of the radiation-dominated epoch, but becomes more significant with time due to its slower redshift.
Long after the Hot Big Bang, at the time $t_*$, $\varphi$ tunnels into the true vacuum in a first order phase transition. We call this instant the `Dark Big Bang' since it creates a hot plasma of dark sector fields. Henceforth subscript $*$ refers to the time right after the Dark Big Bang phase transition, $T$ refers to the temperature of the visible sector, and $T_d$ refers to the temperature of the dark sector. Since the phase transition is fast compared to the Hubble time (which we will show below) we can estimate the dark sector temperature $T_{d*}$ right after the Dark Big Bang by setting
\begin{equation}
\rho_{\text{vac}}= V_0 = \frac{\pi^2}{30}g_d(T_{d*})T_{d*}^4\,,
\end{equation}
where $g_d$ counts the relativistic dark sector degrees of freedom which include $\xi_i$, $\psi$ and possibly $\varphi$ (depending on its mass). At the same time, the phase transition does not cause any entropy transfer from the dark to the visible sector due to the absence of any direct couplings. Hence, the temperature $T$ of the Standard Model plasma is not affected by the Dark Big Bang, implying $T_*=T_n$, where $T_n$ is the temperature just before the phase transition.
Parametrizing the ratio of vacuum to visible-radiation density at the phase transition by $\alpha$ as in Eq.~\eqref{eq:alpha} we can relate $T_{d*}$ and $T_*$,
\begin{equation}\label{eq:TdsTs}
\frac{T_{d*}}{T_*}=\alpha^{1/4}\left(\frac{g_{\text{eff}}(T_*)}{g_d(T_{d*})}\right)^{1/4}\,.
\end{equation}
During the subsequent evolution of the universe the entropies of visible and dark sector are separately conserved. Therefore, the temperature ratio remains approximately constant up to changes in the effective number of degrees of freedom,
\begin{equation}\label{eq:TdT}
\frac{T_{d}}{T} = \left(\frac{g_{\text{eff}}(T)}{g_{\text{eff}}(T_*)}\right)^{1/3}\left(\frac{g_d(T_{d*})}{g_d(T_d)}\right)^{1/3}\frac{T_{d*}}{T_*}\,.
\end{equation}
It is convenient to express the dark radiation density as an extra contribution to the effective neutrino number. Employing~Eq.~\eqref{eq:TdsTs} and~\eqref{eq:TdT} one finds (see also~\cite{Nakai:2020oit}),
\begin{equation}
\Delta N_{\text{eff}} = 0.63\times \left(\frac{\alpha}{0.1} \right)\left(\frac{10}{g_{\text{eff}}(T_*)} \right)^{1/3}\left(\frac{g_{d}(T_{d*})}{g_{d}(T_{d})} \right)^{1/3}\,.
\end{equation}
Planck data combined with local measurements of the Hubble constant suggest $\Delta N_{\text{eff}}=0.22\pm 0.15$. While a small dark radiation contribution to $N_{\text{eff}}$ is allowed (and even marginally preferred), the latter should not exceed $\Delta N_{\text{eff}} = 0.5$. For a phase transition at the MeV-GeV scale (i.e.\ in the frequency band of pulsar timing arrays) we, therefore, obtain the constraint
\begin{equation}\label{eq:alphamax}
\alpha \lesssim 0.1\,.
\end{equation}
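The constraint can be made explicit with a short sketch of the $\Delta N_{\text{eff}}$ formula above (assuming $g_{\text{eff}}(T_*)=10$ and no change in the dark degrees of freedom):

```python
def delta_neff(alpha, g_eff_star=10.0, g_ratio=1.0):
    """Dark-radiation contribution to N_eff; g_ratio = g_d(T_d*)/g_d(T_d)."""
    return 0.63 * (alpha / 0.1) * (10.0 / g_eff_star) ** (1.0 / 3.0) * g_ratio ** (1.0 / 3.0)

# Inverting Delta N_eff <= 0.5 reproduces the bound alpha <~ 0.1:
alpha_max = 0.1 * 0.5 / 0.63
print(delta_neff(0.1), alpha_max)
```

For $\alpha=0.1$ one obtains $\Delta N_{\text{eff}}=0.63$, so the bound $\Delta N_{\text{eff}}\leq 0.5$ corresponds to $\alpha\lesssim 0.08$, i.e. $\alpha\lesssim 0.1$ at the level of accuracy used here.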
We can conclude that the universe needs to be radiation-dominated at the time of the Dark Big Bang.
In order to determine the gravitational wave signal from the Dark Big Bang we need to express the phase transition parameters $\alpha$, $T_*$ and $\beta$ in terms of the Lagrangian parameters in Eq.~\eqref{eq:LDS}. Since we are considering a quartic potential of the tunneling field, we can use Eq.~\eqref{eq:tunneling_doublefield} (with $m_{\chi,\text{eff}}$ replaced by $m_\varphi$) to derive the tunneling rate $\Gamma$. The latter then fixes the time of the phase transition by the condition $I(t_*)=1$ with the integral $I$ as defined in Eq.~\eqref{eq:prob}. We can pull $\Gamma$ out of the integral since it has no time dependence. In evaluating the integral, we can approximate the time-dependence of the scale factor by $a\propto t^{1/2}$, since the Dark Big Bang occurs during radiation domination (cf.\ Eq.~\eqref{eq:alphamax}). Thus the condition $I(t_*)=1$ implies
\begin{equation}\label{eq:tstar}
t_*\simeq\left(\frac{105}{8\pi\,\Gamma}\right)^{1/4}\,.
\end{equation}
Employing the time-temperature relation of radiation domination, we furthermore obtain
\begin{equation}
T_* \simeq \left(\frac{45\,M_{\text{P}}^2}{2\pi^2\,g_{\text{eff}}(T_*)\,t_*^2}\right)^{1/4}\,,
\end{equation}
and
\begin{equation}
\alpha\simeq \frac{4 \,t_*^2 V_0}{3 M_{\text{P}}^2}\,.
\end{equation}
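The chain $\Gamma \rightarrow t_* \rightarrow T_*,\,\alpha$ can be sketched numerically (in natural units with the reduced Planck mass $M_{\text{P}}\simeq 2.4\times 10^{18}\:\text{GeV}$; the input values of $\Gamma$ and $V_0$ below are purely illustrative, not model predictions):

```python
import math

M_P = 2.435e18  # reduced Planck mass in GeV (natural units, hbar = c = 1)

def t_star(Gamma):
    """Transition time from I(t_*) = 1 during radiation domination
    (Gamma in GeV^4, t_* in GeV^-1)."""
    return (105.0 / (8.0 * math.pi * Gamma)) ** 0.25

def T_star(Gamma, g_eff=10.0):
    """Visible-sector temperature at t_* from the radiation-era
    time-temperature relation above."""
    return (45.0 * M_P**2 / (2.0 * math.pi**2 * g_eff * t_star(Gamma)**2)) ** 0.25

def alpha(Gamma, V0):
    """Vacuum-to-radiation energy ratio at the transition (V0 in GeV^4)."""
    return 4.0 * t_star(Gamma)**2 * V0 / (3.0 * M_P**2)
```

Note that the relations are mutually consistent: with $H = 1/(2t_*)$ during radiation domination, $T_*$ as defined here satisfies the Friedmann equation $\pi^2 g_{\text{eff}} T_*^4/30 = 3 M_{\text{P}}^2 H^2$.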
For a given $\alpha$, the duration of the phase transition is obtained from Fig.~\ref{fig:betaalpha}.
During the Dark Big Bang phase transition bubbles of true vacuum are formed. Since the dark sector is decoupled from the Standard Model plasma, the expansion of the bubbles is not affected by the surrounding plasma. Therefore, the bubble walls can reach the runaway regime in which the entire gravitational wave signal stems from the bubble collisions (while acoustic gravitational waves are absent). The gravitational wave spectrum from the Dark Big Bang is thus determined by Eq.~\eqref{eq:gravityspectrum} with $\kappa_\phi=1$.
The dark matter abundance in the Dark Big Bang scenario can be set by a thermal freeze-out in the dark sector~\cite{Feng:2008mu}.
After the bubble walls have collided, the dark sector quickly reaches a thermal state with temperature $T_{d*}$ given by~Eq.~\eqref{eq:TdsTs}.\footnote{The evolution of a universe with decoupled visible and dark sectors at different temperatures has been studied in the context of asymmetric reheating~\cite{Hodges:1993yb,Berezhiani:1995am,Adshead:2016xxj}.}
The dark plasma contains the dark radiation degrees of freedom $\xi_i$ and the dark matter field $\psi$ (which we assume to be lighter than $T_{d*}$).\footnote{Quanta of the tunneling field $\phi$ may initially also be contained in the plasma, but decay away quickly to other dark sector particles. Since $m_{\phi}$ is typically of the same order as $T_{d*}$, the $\phi$ particles are nonrelativistic after the phase transition and their abundance is suppressed.} Reactions $\psi\psi \leftrightarrow \xi_i\xi_i$ keep dark matter in thermal equilibrium (approximately) until the Hubble rate of expansion drops below the dark matter annihilation rate. At this moment $\psi$ freezes out and the total number of $\psi$ particles remains fixed. We denote the freeze-out dark sector temperature by $T_{d,f}$.
It is convenient to introduce the abundance as the ratio of $\psi$ number density over dark entropy density, $\Upsilon_\psi= n_\psi/s_{\text{dark}}$. Employing dark entropy conservation, the Boltzmann equation for $\Upsilon_\psi$ takes the form~\cite{Lee:1977ua},
\begin{equation}\label{eq:boltzmann}
\frac{d\Upsilon_\psi}{dx}=-\frac{(\sigma v)_\psi \,s_{\text{dark}}}{Hx}\left(\Upsilon_\psi^2-\Upsilon_{\psi,eq}^2\right)\,,
\end{equation}
where we introduced $x=m_\psi/T_d$. Notice that the only way the visible sector enters Eq.~\eqref{eq:boltzmann} is by contributing to the Hubble expansion rate.
The equilibrium abundance $\Upsilon_{\psi,eq}$ can be obtained by integrating the Fermi-Dirac distribution. In the following we focus on a freeze-out in the non-relativistic regime ($x_{f}=m_\psi/T_{d,f}\gtrsim 3$) which allows us to approximate
\begin{equation}
\Upsilon_{\psi,eq} = \frac{45}{4\pi^4} \frac{g_\psi\, x^2\,K_2(x)}{g_d(x)}\,,
\end{equation}
where $g_\psi$ counts the internal degrees of freedom ($g_\psi=2$ for a Majorana fermion) and $K_2$ denotes the modified Bessel function of the second kind of order two.
The solution to Eq.~\eqref{eq:boltzmann} initially follows the equilibrium abundance before smoothly turning into a constant at the time of freeze-out. The terminal abundance $\Upsilon_{\psi}(\infty)$ can be found by solving Eq.~\eqref{eq:boltzmann} numerically. The corresponding relic density of $\psi$-particles reads
\begin{equation}\label{eq:omegah2dm}
\Omega_{\psi} h^2= \frac{m_\psi \,\Upsilon_{\psi}(\infty)\,s_{\text{dark}}(T_{d,0})}{3 (H_0/h)^2 M_{\text{P}}^2} = 2.74 \times 10^5 \:\left( \frac{m_\psi}{\text{MeV}} \right)\,\alpha^{3/4} \left(\frac{g_d(T_{d*})}{g_{\text{eff}}(T_*)}\right)^{1/4}\: \Upsilon_{\psi}(\infty)\,,
\end{equation}
where $H_0/h=100\:\text{km}/(\text{s}\,\text{Mpc})$. In the second step, we employed Eq.~\eqref{eq:TdT} and today's visible sector temperature $T=2.73\:\text{K}$ to obtain the dark entropy. In a viable dark sector freeze-out scenario $\Omega_{\psi} h^2$ needs to match the observed dark matter relic density $\Omega_{\text{DM}} h^2 = 0.1198 \pm 0.0012$~\cite{Planck:2018vyg}. This imposes a constraint on the dark matter annihilation cross section $\langle\sigma v\rangle_\psi$.\footnote{By $\langle \sigma v \rangle_\psi$ we denote the thermally averaged cross section at the time of freeze-out.} For sizeable $\alpha$ (say $\alpha \gtrsim 10^{-3}$) we find that the required cross section is of order $\langle \sigma v \rangle_\psi=\mathcal{O}(10^{-26}\:\text{cm}^3/\text{s})$ -- similar to that of a standard WIMP (i.e.\ visible sector freeze-out) scenario. This is not surprising since the dark sector temperature is not too different from the visible sector temperature in this case.
An important distinction, however, is that the dark freeze-out scenario can successfully be implemented with dark matter masses $m_\psi<\text{MeV}$. Such low masses imply that dark matter contributes to the number of relativistic species at the time of BBN. If $\psi$ was part of the visible sector it would add (at least) a full degree of freedom\footnote{A relativistic particle in equilibrium with the Standard Model plasma increases $g_{\text{eff}}(T)$ by the number of internal degrees of freedom (multiplied by $7/8$ in the case of a fermion).} which is in conflict with BBN constraints. However, as $\psi$ resides in a colder dark sector its contribution to the total energy density is reduced by $\alpha$. Hence, a relativistic $\psi$ (and additional relativistic dark radiation) at the time of BBN is viable as long as $\alpha$ is sufficiently small. Since CMB bounds already impose $\alpha\lesssim 0.1$ (cf.~Eq.~\eqref{eq:alphamax}) BBN does not provide an additional constraint.
The fact that dark matter in this model receives the correct adiabatic density perturbations required by CMB observations will be shown in a follow-up paper. Clearly, the usual production of DM perturbations cannot take place during inflation, since the DM does not yet exist at that time. Instead, perturbations in the visible sector that are produced during inflation can later be transmitted gravitationally to the dark matter.
\begin{table}[htp]
\begin{center}
\begin{tabular}{|ll|ll|}
\hline
&&&\\[-4mm]
\multicolumn{2}{|c|}{Input Parameters}& \multicolumn{2}{c|}{Cosmology} \\
\hline
&&&\\[-4mm]
$m_\varphi$~[MeV] & $26.04$ & $\Omega_\psi h^2$ & $0.119$\\[1mm]
$\mu$~[MeV] & $40.72$ & $\Delta N_{\text{eff}}$ & $0.40$\\
\cline{3-4}
&&&\\[-4mm]
$\lambda$ & $1$ & \multicolumn{2}{c|}{$\;\;$Phase Transition$\;\;$}\\
\cline{3-4}
&&&\\[-4mm]
$m_\psi$~[MeV] & $0.200$ & $T_*$~[MeV] & $20$\\[1mm]
$g_d(T_{d*})$ & $6.75$ & $\beta/H_*$& $7.8$\\[1mm]
$\langle \sigma v \rangle_\psi$~[$\text{cm}^3/\text{s}$]$\;\;$ & $1.74\times 10^{-26}\;$ & $\alpha$ & $0.06$\\
\hline
\end{tabular}
\end{center}
\vspace{-0.4cm}
\caption{Parameter example in the Dark Big Bang scenario containing the tunneling field $\varphi$, the dark matter field $\psi$ and light dark radiation fields (the model is defined in Eq.~\eqref{eq:LDS}). The resulting predictions for the dark matter relic density, dark radiation density (expressed in terms of $\Delta N_{\text{eff}}$) and phase transition parameters are shown on the right side. The resulting gravitational wave spectrum is depicted in Fig.~\ref{fig:spectra2} (line P4).}
\label{tab:dbbparameters}
\end{table}
In Tab.~\ref{tab:dbbparameters} we provide a parameter example for the Dark Big Bang model defined in Eq.~\eqref{eq:LDS}. The example point features a Dark Big Bang phase transition at $T_*=20\:\text{MeV}$ which converts the dark vacuum energy into a hot dark plasma of $\xi_i$ and $\psi$ particles. In Fig.~\ref{fig:darkfreeze} we depict the evolution of the visible radiation, dark radiation ($\xi_i$) and dark matter ($\psi$) energy densities after the Dark Big Bang. Both radiation densities decrease as $T^4$ until the present epoch.\footnote{Slight deviations from $\rho_{\text{r}}\propto T^4$ occur due to changes in $g_{\text{eff}}(T)$.} The dark matter density $\rho_{\text{DM}}$ evolves parallel to the radiation densities as long as the $\psi$-particles are highly relativistic. But once $T_d \lesssim m_\psi$ the Boltzmann suppression sets in and $\rho_{\text{DM}}$ starts to decrease exponentially with $m_\psi/T_d$. Later, at $T_d\sim m_\psi/10$, annihilations become inefficient and the number of dark matter particles remains fixed. After the freeze-out $\rho_{\text{DM}}$ decreases with $T^3$ as in standard cold dark matter scenarios. For the example point, the relic density of $\psi$-particles agrees with the observed dark matter density.
Apart from the dark plasma, the Dark Big Bang generates strong gravitational radiation. In Fig.~\ref{fig:spectra2} (line P4) we depict the gravitational wave spectrum for the parameter point in Tab.~\ref{tab:dbbparameters}. Since the benchmark point resides close to the thin-wall regime of vacuum tunneling, we expect the spectrum to follow approximately the prediction of the envelope approximation (left panel of the figure). As can be seen, a good fit to the NANOGrav signal is obtained. The benchmark point is also indicated in Fig.~\ref{fig:spectra} (P4 in the right panel), where one can see that it is also consistent with the PTA signal. Intriguingly, the Dark Big Bang explanation of the NANOGrav signal simultaneously predicts a non-negligible dark radiation density in the universe ($\Delta N_{\text{eff}}\sim 0.4$) which will be tested by future CMB experiments.
\section{Conclusion}\label{sec:conclusion}
The origin of the Hot Big Bang remains one of the big mysteries in cosmology. In this work we provided strong motivation that the Big Bang occurred through a strong first order phase transition. In this scenario the universe is initially trapped in a false vacuum which eventually decays through quantum tunneling. The latter triggers the formation of true vacuum bubbles in the sea of false vacuum. Bubble collisions generate a hot plasma of particles heralding the entrance into the radiation-dominated era.
A common feature of all Big Bang first order phase transition cosmologies is the presence of strong gravitational radiation which is formed by the collision of true-vacuum bubbles.
In this work we investigated whether the Hot Big Bang could be responsible for the tentative observation of a stochastic gravitational wave background by the NANOGrav, PPTA and EPTA pulsar timing array experiments. By performing a fit to the pulsar timing array data we identified the range of phase transition temperatures, durations and strengths compatible with the signal (see Fig.~\ref{fig:spectra}). In particular, we found that the pulsar timing signals can be explained if the reheating temperature of the Hot Big Bang, and correspondingly the energy scale of the false vacuum, falls in the range $T_* \sim \rho_{{\rm vac}}^{1/4} =\text{MeV}-100\:\text{GeV}$.
The idea of a first-order Big Bang phase transition originally emerged within Guth's ``old inflation'' proposal. While the original model fails because of the empty-universe problem, modifications can reconcile the vacuum transition picture with cosmological data and -- at the same time -- support the low vacuum-energy scale required to fit the pulsar timing signals. In Sec.~\ref{sec:scenarios} we present a number of well-motivated cosmologies with a successful Big Bang first order phase transition which reheats the universe -- either at the end of inflation, after a period of kination, or after a second period of vacuum-domination long after inflation:
\begin{itemize}
\item In Double Field Inflation (Sec.~\ref{sec:doublefield}) the tunneling field is coupled to a rolling field which catalyzes a very rapid first order phase transition (resolving the empty universe problem). We showed that the low inflation scale required to fit the pulsar timing signals can be accessed without running into the fine-tuning problems plaguing low-scale slow roll inflation. A low-scale double field version of $\alpha$-attractor inflation is introduced which satisfies all cosmological constraints.
\item Chain Inflation (Sec.~\ref{sec:chaininflation}) features a Universe tunneling through a series of ever lower vacuum energies. Each individual transition completes quickly within a fraction of a Hubble time (avoiding the empty universe problem), while all transitions together support sufficient e-foldings of inflation. Since the origin of CMB perturbations in chain inflation lies in the probabilistic nature of tunneling (rather than quantum fluctuations of the inflaton as in slow roll inflation), the low inflation scales favored by the pulsar timing arrays are shown to be accessible without the requirement of an extremely flat (i.e.\ tuned) potential (in contrast to slow roll inflation).
\item The proposed ``Kination-Induced Big Bang'' (Sec.~\ref{sec:kinBB}) corresponds to a strong first-order phase transition after a period of kinetic-energy domination of the universe. Such a kination period is predicted e.g.\ by quintessential inflation for which the Kination-Induced Big Bang provides a new reheating mechanism.
\item A ``Supercooled Big Bang'' (Sec.~\ref{sec:supercooledBB}) refers to a strongly supercooled thermal first-order phase transition. We present an example model in which the latter occurs after a short second period of vacuum-domination long after inflation and reheats the universe a second time.
\item Finally in Sec.~\ref{sec:darkbigbang}, we proposed that the Hot Big Bang at the end of inflation generates the Standard Model plasma, but dark matter is only produced much later in a ``Dark Big Bang'' -- a first order phase transition in the dark sector.
\end{itemize}
For the five complementary models with a Big Bang phase transition we derived the spectrum of gravitational waves and compared them to the pulsar timing signal (see Fig.~\ref{fig:spectra2}). In all cases we found parameter examples featuring a gravitational wave signal in agreement with the pulsar timing arrays. We concluded that a Big Bang phase transition provides an attractive explanation for the NANOGrav, PPTA and EPTA results.
Nevertheless, there is still a long way to go to establish the detection of a Big Bang first order phase transition. First, the unambiguous discovery of a stochastic gravitational wave background by NANOGrav, PPTA, EPTA or any other pulsar timing array experiment requires the measurement of the quadrupolar spatial correlations predicted by General Relativity. In the optimistic case -- since pulsar timing arrays are continuously improving their statistics -- the detection of the quadrupolar correlations could be just around the corner. If a gravitational wave signal is confirmed, the Big Bang origin must be discriminated against other astrophysical and cosmological gravitational wave sources. In this respect it will be crucial to further improve the prediction of the gravitational wave spectrum from phase transitions beyond the simplified assumptions entering e.g.\ the envelope approximation. Moreover, it will be important to investigate complementary cosmological probes of a Big Bang phase transition. Such probes could include an increased $\Delta N_{\text{eff}}$ (see Sec.~\ref{sec:darkbigbang}), correlations between inflationary and phase transition observables -- the chain inflation scenario of Sec.~\ref{sec:chaininflation} e.g.\ correlates $n_s$ and $T_*$ -- as well as other impacts on BBN and CMB observables.
The search for gravitational wave signals from a first order phase transition is not limited to pulsar timing arrays (see~\cite{Caprini:2019egz} for a recent review). With future space- and ground-based interferometers there is hope to detect a stochastic gravitational wave background in the $\text{mHz} - \text{kHz}$-regime. Simple estimates based on~Eq.~\eqref{eq:vacuumomega} and~\eqref{eq:vacuumf} suggest that (e)LISA can potentially probe a Big Bang first order phase transition with an energy density of the false vacuum $\rho_{\text{vac}}^{1/4}\sim (10^2-10^5)\:\text{GeV}$, while the next stage of LIGO-Virgo-KAGRA~\cite{Harry:2010zz,VIRGO:2014yos,Somiya:2011np} (or possibly next-generation experiments like Einstein Telescope~\cite{Punturo:2010zz} and Cosmic Explorer~\cite{Reitze:2019iox}) could access $\rho_{\text{vac}}^{1/4}\sim (10^8-10^9)\:\text{GeV}$. All the first order phase transition models presented in this paper can also produce gravitational waves detectable in these upcoming searches.
Our findings motivate a dedicated experimental and theoretical program to test a Big Bang first order phase transition through the associated gravitational radiation. Needless to say, the prospect of directly probing the Hot Big Bang through its gravitational wave signature is extremely exciting.
\section*{Acknowledgements}
K.F.\ is Jeff \& Gail Kodosky Endowed Chair in Physics at the
University of Texas at Austin, and K.F.\ and M.W.\ are grateful for
support via this Chair. K.F.\ and M.W.\ acknowledge support by
the Swedish Research Council (Contract No. 638-2013-8993).
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics program under Award Number DE-SC-0002424.
\bibliography{nanobib}
\bibliographystyle{h-physrev}
|
Title:
Liger at Keck Observatory: Design of Imager Optical Assembly and Spectrograph Re-Imaging Optics |
Abstract: Liger is an adaptive optics (AO) fed imager and integral field spectrograph
(IFS) designed to take advantage of the Keck All-sky Precision Adaptive-optics
(KAPA) upgrade for the W.M. Keck Observatory. We present the design and
analysis of the imager optical assembly including the spectrograph Re-Imaging
Optics (RIO) which transfers the beam path from the imager focal plane to the
IFS slicer module and lenslet array. Each imager component and the first two
RIO mechanisms are assembled and individually aligned on the same optical
plate. Baffling suppresses background radiation and scattered light, and a
pupil viewing camera allows the imager detector to focus on an image of the
telescope pupil. The optical plate mounts on an adapter frame for alignment of
the overall system. The imager and RIO will be characterized in a cryogenic
test chamber before installation in the final science cryostat.
| https://export.arxiv.org/pdf/2208.07936 |
\keywords{Imager, Integral Field Spectrograph, Adaptive Optics, Pupil Viewing Camera, Re-Imaging Optics, Baffling, Infrared, Cryostat}
\section{INTRODUCTION}
\label{sec:1} %
Liger is an adaptive optics (AO) fed imager and integral field spectrograph (IFS) for the Keck I telescope at the W.M. Keck Observatory atop Maunakea in Hawaii. Liger will have larger fields of view, higher spectral resolving power ($R\sim 8,000-10,000$), and wider wavelength coverage ($0.8-2.4$ $\mu$m) than any current IFS \cite{Wright_2022}. Taking advantage of the improved observational capabilities provided by the Keck All-sky Precision Adaptive-optics (KAPA) upgrade to the current AO system, Liger will be a revolutionary instrument for a broad range of science cases\cite{Wizinowich_2020}\cite{Lu_2020}.
Liger makes use of a sequential imager and IFS design. The optical path for the spectrograph uses pick-off mirrors placed above the imager detector that allow the spectrograph and imager to be used concurrently in every mode\cite{Wright_2022}. The imager is the first major subsystem in the overall Liger instrument, placed directly after the cryostat entrance window. It refocuses light onto the imager detector and pick-off mirrors, and contains the pupil mask, filter wheel, and pupil viewing camera for the full system\cite{Cosens_2020}\cite{Cosens_2022}. Because the imager feeds the IFS subsystem, its fabrication and alignment are crucial not just for the performance of the imager, but for the Liger instrument as a whole.
Imager optical components and spectrograph Re-Imaging Optics (RIO) are housed on the same optical plate. After all imager components are assembled and aligned, baffling is placed that blocks background radiation and scattered light from being seen by the imager detector and IFS subsystems. This baffling also provides a field stop at the AO focal plane which limits the field of view entering the optical system. A pupil viewing camera lens is placed 545.5mm away from the pupil mask, which allows the imager detector to focus on the telescope pupil plane for alignment purposes. The mechanism for this camera flips a single lens in-and-out of the beam path. The optical plate rests on an adapter frame used to align the overall assembly within the imager test cryostat and for integration into the science cryostat.
The RIO refocuses the beam from the imager focal plane to the IFS slicer and lenslet subsystems. The RIO consists of three mechanisms for rotating air-spaced doublets in-and-out of position and five fold mirrors along the beam path. Two of the three RIO mechanisms are mounted directly above the imager detector on the optical plate. The RIO allows selection of IFS plate scales for the slicer and lenslet subsystems, and in addition serves to block light to the unselected IFS assembly. The RIO allows selection between the 14 and 31 mas modes for the lenslet and the 75 and 150 mas modes for the slicer.
A vibrationally suppressed, cryogenically cooled vacuum chamber was designed to characterize the Liger imager and RIO. This chamber was enlarged from a previous design \cite{Wiley_2020} to fit both the imager optical assembly and the first two RIO mechanisms. The chamber consists of a steel body and aluminum lid and contains an aluminum cold shield separated from the body of the chamber with G10 A-frames. A custom made cart houses the vacuum, cryogenic, and electrical systems that support this experimental setup.
The remainder of this paper is organized as follows: \S\ref{sec:2} covers the Liger imager optical assembly and focuses on the design and analysis of the baffling and pupil viewing camera. The RIO is explained in more detail in \S\ref{sec:3}, and updates to the Liger imager test chamber are covered in \S\ref{sec:4}. A summary and future work section is given in \S\ref{sec:5}.
\section{Imager Optical Assembly}
\label{sec:2}
The Liger imager is custom designed but takes heritage from Keck OSIRIS \cite{Larkin_2006}. It is optimized for low wavefront error and high throughput\cite{Wright_2020}. It provides a $20.5''\times20.5''$ field of view with 10 mas spatial sampling. Thanks to Liger's sequential design, the imager is used in parallel with the IFS in all observing modes. It is the first major optical assembly along the beam path in the Liger instrument and is located directly behind the entrance window. The simple optical design uses two Off-Axis Parabolic (OAP) mirrors to transfer the beam path from the AO focal plane to the imager detector.
The imager optical assembly consists of two off-axis parabolas (OAPs), a flat mirror, a pupil wheel, a filter wheel, a pupil viewing camera, and a Teledyne Hawaii-2RG detector. These optical assemblies are mounted to the same light-weighted optical plate as the first two RIO mechanisms. Each component can be aligned on the optical plate individually. An adapter frame provides adjustment for the optical platform as a whole. For a more detailed overview of the pupil wheel and filter wheel see Cosens et al. 2020 \cite{Cosens_2020}, and for a more detailed overview of the detector mount see Cosens et al. 2022 \cite{Cosens_2022}. A rendering of the full imager optical assembly is given in Figure \ref{fig:1} showing the first OAP mount, the pupil wheel, filter wheel, detector mount, baffling, and adapter frame.
The adapter frame rests on the cold shield base and has three pedestals for mounting to the optical plate. The optical plate uses shims for height adjustment and lowers onto canoe spheres that rest on top of the adapter pedestals, allowing for positional repeatability. The adapter frame has two tabs that allow for $\pm 4$mm of movement, which is within the tolerance stack-up of the imager test cryostat. Fine threaded M5 set screws are used for this adjustment, giving a precision of 125$\mu$m per quarter screw turn.
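The quoted adjustment resolution follows directly from the thread pitch (a trivial sanity-check sketch; we assume the standard ISO fine pitch M5$\times$0.5, which is not stated explicitly in the text):

```python
# M5 fine-pitch set screw: 0.5 mm of axial travel per full turn
# (assumption: ISO fine pitch M5x0.5).
pitch_mm = 0.5
quarter_turn_um = pitch_mm * 1000.0 / 4.0
print(quarter_turn_um)  # 125.0 micrometers per quarter turn, as quoted above
```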
After installation of each major optical component, baffling is installed to cover all optics and the imager beam path. The baffling consists of eight major components: four baffle boxes, two baffle tubes, and two shrouds that cover the OAPs. Baffling bolts to the imager optical plate and sufficiently suppresses scattered light. \S\ref{sec:2.1} shows the baffling design in more detail and presents the scattered light analysis performance.
The pupil viewing camera mechanism moves in-and-out of the beam path, which allows the imager detector to focus on the telescope pupil for alignment. This is achieved with a single lens placed 545.5mm away from the pupil mask. This lens is mounted on a flip mechanism similar to the RIO. \S\ref{sec:2.2} shows the preliminary design and analysis in Zemax for the pupil viewing camera.
\subsection{Baffling}
\label{sec:2.1}
The baffling is a critical component of the overall imager design as it blocks scattered light and background radiation from the imager detector and sequential IFS. The current design uses 8 gauge 6061 T6 aluminum sheet metal that is folded and welded together. There is a 0.5" wide lip along the bottom of the baffling that will provide stability and be used to fasten the baffling assembly to the optical plate. Bolting to the optical plate provides sufficient positional tolerance.
The baffling covers all optical components on the imager with only small gaps after the filter wheel, between the baffling boxes, and before the detector. Baffling holes are cut in the sheet metal for the beam path which are small enough to block outside light while large enough to avoid vignetting on the detector. Multiple faces inside the baffling suppress specular reflection. The baffling will be painted a special black that is sufficient for cryogenic operation to further suppress reflections.
Only two of the baffle boxes contain components under them. This allows for easier access to the imager components by removing a smaller baffle box rather than larger baffling as a whole. This also minimizes interferences with other components on the imager optical assembly and in the science cryostat as there are already necessary cutouts on parts of the baffling.
The first baffle tube bolts to the first baffle box and contains the field stop for the assembly. The second baffle tube bolts to the second baffle box and allows the baffling to be placed closer to the filter wheel exit. The two baffle boxes contain a bolt hole pattern and PEM nuts are pressed into these holes. The first OAP and flat mirror are covered by the first baffle box and the second OAP and the pupil viewing camera are covered by the large, third baffle box.
A simplified version of the baffling including the imager optical plate and the pupil and filter wheel was included in a Zemax model of the imager to analyze the scattered light seen by the detector. Figure \ref{fig:2} shows this simplified model and the ray trace analysis. The baffling allows the full field of view to reach the detector without vignetting. The resultant ghosting is due to internal reflections off the cryostat entrance window and can only be mitigated with a high quality window and Anti-Reflection (AR) coating. The detector is tilted $1^{\circ}$ and the filters are tilted $3^{\circ}$ to suppress reflections, the remainder are suppressed by the baffling.
\subsection{Pupil Viewing Camera}
\label{sec:2.2}
The pupil viewing camera is a necessary component of the Liger imager optical assembly. It is located between the pupil wheel and second OAP and focuses an image of the telescope pupil onto the imager focal plane. The pupil viewing camera accomplishes this by simply moving a single calcium fluoride (CaF$_2$) lens in and out of the beam path. The magnification of this lens and the OAP combined must be less than 1.412 to avoid overfilling the imager detector.
\begin{wrapfigure}{r}{0.45\textwidth}
\vspace{0.15in}
\begin{center}
\includegraphics[width=0.43\textwidth]{Figures/RIO.png}
\end{center}
\caption[Figure4]{SolidWorks rendering of the first two RIO mechanisms attached to the bracket for alignment on the optical plate. This view does not show the fold mirrors between the two mechanisms.}
\label{fig:4}
\end{wrapfigure}
An optical model for the pupil viewing camera was created in Zemax and an optimization routine was run to find the ideal placement for the lens at a wavelength of 2$\mu$m. For a CaF$_2$ lens at a temperature of 77 K, it was found that the ideal placement is 545.5mm from the pupil plane. The shaded model of this is shown in Figure \ref{fig:3} as well as a spot diagram detailing the optical performance of the pupil viewing camera at each of five field points on the imager detector. The RMS spot size of the pupil viewing camera is about half a pixel or 12$\mu$m in radius at the center of the field and 12 pixels or 250$\mu$m in radius at the edges of the imager detector. Further analysis will be completed to determine whether this is sufficient or if this spot size can be reduced.
While the pupil viewing camera has not been fully designed, it will use the same flip mechanism as the RIO to switch the lens in-and-out of position. The lens does not need to be as precisely placed as the other optical components, so a simple mount design will suffice. The pupil viewing camera is located in the large baffle box on the imager optical plate and the design may change slightly to accommodate the camera. The pupil viewing camera will be used during alignment of the optical components of the imager, and when the imaging camera is integrated with the IFS in the science cryostat.
\section{Re-Imaging Optics}
\label{sec:3}
The spectrograph RIO houses three mechanisms that use simple re-imaging optics, two of which are placed directly above the imaging detector, to preserve the pristine image quality provided by KAPA\cite{Wright_2020}. It transfers the beam from the imager focal plane to either the lenslet or slicer subsystems. As well as choosing between the lenslet and slicer, the RIO also allows choosing between the 14 and 31 mas mode for the lenslet and the 75 and 150 mas mode for the slicer.
In total, the RIO contains six air-spaced doublets which are mounted on the three separate mechanisms as well as up to five fold mirrors along the beam path. For the lenslet modes, the beam passes through a single doublet, and both slicer modes pass through two doublets. The first mechanism is placed vertically above the detector and contains the doublet for the 14 mas lenslet mode. The first RIO mechanism also blocks the beam from entering the unselected spectrograph path. After it, fold mirrors reflect light horizontally into the second mechanism which contains doublets for the 31 mas lenslet mode and the 75 and 150 mas slicer modes.
The third mechanism is further down the beam path and contains a second pair of doublets for the 75 and 150 mas slicer modes only. Fold mirrors send light from the third mechanism to the image slicer. For the lenslet path, fold mirrors feed the light to the lenslet array after the second RIO mechanism.
The first two of the three RIO mechanisms are aligned relative to each other on a bracket that is then aligned on the imager optical plate. These two mechanisms will be tested along with the imager in the test cryostat. Figure \ref{fig:4} shows a SolidWorks rendering of the first two RIO mechanisms mounted on this bracket. The view shown does not include the two fold mirrors between the first and second RIO mechanisms.
\section{Experimental Setup}
\label{sec:4}
The experimental setup, as shown in Figure \ref{fig:5}, consists of an AISI 304 stainless steel vacuum chamber that houses a 6061-T6 aluminum cold shield. The optical assembly rests on, and is adjusted from, the base of the cold shield. The cold shield is thermally isolated from the chamber base with G10 A-frames. Multi-Layer Insulation (MLI) is used between the cold shield and chamber walls to reduce radiative heat transfer. The top shell of the cold shield lowers over the optical assembly and bolts to guides along the edges. The vacuum chamber rests on a Newport vibration isolation system consisting of a passively damped optical table on active pneumatic isolators. A custom-made cart houses the cryogenic, vacuum, and electrical components of the setup. The experimental setup operates below a pressure of $10^{-5}$ Torr and a temperature of 77 K, with a maximum vibration-induced deflection between two points of 1 $\mu$m.
The interior working dimensions of the setup are 1053$\times$1053$\times$670 mm, large enough to fit the full imager optical assembly as well as the first two RIO mechanisms. This leaves a minimum of half an inch of clearance around the full assembly when installed. The cold head raises up into the chamber, and a copper extension rises through a hole in the middle of the cold shield. Cold straps clamp to the extension and the cold shield base. Getters are placed near the cold straps to absorb condensation during cool down.
A previous version of the vacuum chamber was designed and analyzed in Wiley et al. 2020 \cite{Wiley_2020}. This chamber was enlarged to fit the first two RIO mechanisms in addition to the imager optical assembly. The larger chamber has a 1" thick base and lid to withstand the larger vacuum force on them. The walls of the chamber are 5/8" thick, welded together and onto the base. A top flange, 3" wide and 1" thick, is welded on top of the chamber walls to allow the lid to lower down and form a vacuum seal against its O-ring surface. From a SolidWorks simulation, the maximum stress in the chamber is 104 MPa, about a factor of two below the yield strength of AISI 304, and the maximum deflection is 1.4 mm.
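The quoted margin can be sanity-checked against a typical yield strength for AISI 304; the value used below (about 215 MPa for annealed material) is an assumed handbook figure, not taken from this paper:

```python
# Sanity check of the quoted safety margin. The yield strength below is
# an assumed typical value for annealed AISI 304 (not from the paper).
yield_strength_mpa = 215.0
max_stress_mpa = 104.0      # peak stress from the SolidWorks simulation

safety_factor = yield_strength_mpa / max_stress_mpa
assert 1.9 < safety_factor < 2.2  # "about a factor of two" below yield
```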
The vacuum chamber is positioned on the Newport optical bench with guides and bolted in place with brackets after installation. Earthquake straps may be used to further secure the chamber to the optical platform if needed. The vacuum and cryogenic lines are connected to the chamber through bellows and vibration isolation pads to reduce induced vibrations. Electrical feedthroughs, vacuum gauges, and other components are installed on NW-100 and NW-160 flanges along the chamber sides. The vacuum line is connected over an NW-160 flange and a gate valve. The CaF$_2$ entrance window is mounted on an NW-100 flange. The imager will be aligned inside the test cryostat to a telescope simulator mounted on the optical bench in front of the entrance window.
The experimental setup will be located in a fourth-floor laboratory on the University of California San Diego campus. It will require a crane that can lift up to one ton and a clean room for assembling the components that are installed in vacuum. After characterization of the imager, it will be moved to the University of California Los Angeles, where it will be installed in the final science cryostat. Analysis has been performed to ensure the assembly survives shipping between these two locations and to the final location on Maunakea.
\section{Summary and Future Work}
\label{sec:5}
This paper describes the design of the imager optical assembly, focusing on the baffling and pupil viewing camera, as well as the spectrograph RIO system and the experimental test setup. The analysis included shows that the baffling successfully suppresses internal reflections and background radiation. The optimal position for the pupil viewing camera is 545.5 mm from the pupil plane for a CaF$_2$ lens at 77 K, providing sufficient magnification and spot size at that location. The assembly as a whole will be tested in a vibration-suppressed, cryogenic vacuum chamber before being assembled in the final science cryostat.
The major components of the imager optical assembly (the two OAPs, flat mirror, pupil wheel, filter wheel, detector, and pupil viewing camera) rest on a light-weighted optical plate. The first two RIO mechanisms also mount to this plate. After alignment of the individual components, the baffling is installed. This optical assembly is lowered onto an adapter frame that allows the full optical system to then be aligned.
Future work for the baffling includes incorporating the Re-Imaging Optics into the overall design, as the RIO rests directly above the detector and needs baffling of its own. Future work also includes finishing the design of the pupil viewing camera mechanism, which will move the CaF$_2$ lens in and out of the beam path to allow the detector to focus on an image of the telescope pupil. The current design of this mechanism fits within the large baffling box.
The Liger imager optical assembly and RIO meet the requirements for the overall Liger system. The Liger imager and RIO sequentially feed the Liger IFS, and the imager serves its own unique science case for AO imaging.
\acknowledgments
The Liger instrumentation program is supported by the Heising-Simons Foundation, the Gordon and Betty Moore Foundation, University of California Observatories, and W. M. Keck Observatory.
\bibliography{report}
\bibliographystyle{spiebib}
|
Title:
Characterizing the Daytime Sextantids Meteor Shower and Unveiling the Nature of the Phaethon-Geminid Stream Complex |
Abstract: The Daytime Sextantids (DSX) meteor shower, part of the Phaethon-Geminid Stream
Complex (PGC), is closely related to the Geminids, currently the strongest
meteor shower visible at the Earth. The DSX share a similar orbit to asteroid
2005 UD, but the nature of the association remains unclear. From optical data
we find that DSX meteors ablate similarly to Geminids, suggesting that they are
also high density and consistent with a common origin. From radar data we have
isolated 19,007 DSX orbits through application of a novel convex hull approach
to determine stream membership. We find at the peak the mean semi-major axis is
near 1 AU, eccentricity is 0.86 and that both decrease as a function of solar
longitude. The inclination averages 25 degrees at the peak but increases over
time. Noticeable DSX activity extends from solar longitude 173-196$^{\circ}$
with a flux plateau between 186 - 189$^{\circ}$. The peak flux is $(2 \pm 0.05)
\times 10^{-3}$ km$^{-2}$ hr$^{-1}$, equivalent to a ZHR of 20. We estimate a
true differential mass index for the shower of $s = 1.64 \pm 0.06$ at the time
of peak and an average of $1.70 \pm 0.07$ for days surrounding the peak. The
mass of the DSX stream is estimated to be $10^{16}$ g, the same order as 2005
UD, suggesting the stream is too massive to have been created by recent
meteoroid production from 2005 UD. We propose that the DSX and 2005 UD were
created in the same break-up event that created 3200 Phaethon.
| https://export.arxiv.org/pdf/2208.03521 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\appendix
\section{Literature Comparison}\label{app:literature_appendix}
This appendix section contains two tables (Table \ref{literature_table} and Table \ref{literature_table_2}) that compare the radiant and orbital elements for the DSX in the literature with the values calculated in this study.
\begin{table*}
\begin{adjustbox}{angle=90}
\begin{tabular}{ |c|c|c|c|c|c|c|c|c|c| }
\hline
& \textbf{$\lambda_{max}$ (deg)} & \textbf{$\alpha_R$ (deg)} & \textbf{$\delta_R$ (deg)} & \textbf{$V_g$ (km/s)} & \textbf{$a$ (AU)} & \textbf{$e$} & \textbf{$i$ (deg)} & \textbf{$\omega$ (deg)} & \textbf{$\Omega$ (deg)} \\
\hline
\textbf{Weiss (1960)} & 187 & 155 $\pm$ 8 & 0 $\pm$ 10 & - & - & - & - & - & - \\
\hline
\textbf{Nilsson (1964)} & 183.6 & 151.7 $\pm$ 0.9 & -0.1 $\pm$ 1.5 & 32.2 $\pm$ & 0.89 $\pm$ 0.03 & 0.87 $\pm$ 0.01 & 21.8 $\pm$ 2.3 & 213.2 $\pm$ 2.1 & 3.6\\
\hline
\textbf{Sekanina (1976)} & 195 & 156.5 $\pm$ 0.9 & -8.3 $\pm$ 0.8 & 29.7 & 0.936 & 0.816 $\pm$ 0.011 & 31.1 $\pm$ 1.0 & 212.3 $\pm$ 1.0 & 15.1 $\pm$ 0.1 \\
\hline
\textbf{Jopek et al. (1999)} & 183 & 152 & 3 & 32 & - & 0.88 & 19 & 211 & 3 \\
\hline
\textbf{Galligan \& Baggaley (2002)} & 186.1 & 154.5 $\pm$ 2.7 & -1.5 $\pm$ 0.5 & 31.2 $\pm$ 1.6 & 1.04 $\pm$ 0.023 & 0.855 $\pm$ 0.023 & 23.1 $\pm$ 3.9 & 212.5 $\pm$ 3.0 & 6.1 $\pm$ 0.0 \\
\hline
\textbf{Brown et al. (2008)} & 187 & 154.6 & -1.4 & 31.84 & - & - & - & - & - \\
\hline
\textbf{Younger et al. (2009)} & 188.1 & 155.7 & -3.9 & 32.7 & 1.09 & 0.858 & 23.9 & 326.1 & 8.6 \\
\hline
\textbf{SonotaCo (2009)} & 189.2 & 156.3 & -2.9 & 31.2 & - & - & - & - & - \\
\hline
\textbf{Brown et al. (2010)} & 186 & 154.3 & -1 & 31.3 & 1.07 & 0.858 & 22.0 & 212.99 & 6.0 \\
\hline
\textbf{Rudawaska et al. (2015)} & 187.9 & 155.0 $\pm$ 1.5 & -1.4 $\pm$ 1.5 & 31.7 $\pm$ 1.2 & 1.0 & 0.9 & 23.4 & 211.4 & 7.9 \\
\hline
\textbf{Jenniskens et al. (2016)} & 188 & 156.6 & -2.4 & 32.9 & 1.14 & 0.874 & 24.3 & 214.3 & 6.4 \\
\hline
\textbf{Pokorn{\'y} et al. (2017)} & 187 & 155.4 & -1.6 & 31.4 & 1.08 $\pm$ 0.08 & 0.858 $\pm$ 0.022 & 22.2 & 213.6 & 7 \\
\hline
\makecell{\textbf{Bruzzone et al. (2020) } \\ \textbf{(CAMS)}} & 191 & 157.59 & -3.64 & 32.8 & 1.11 $\pm$ 0.02 & 0.878 $\pm$ 0.003 & 27 $\pm 1$ & 211.7 $\pm$ 0.9 & 11 \\
\hline
\makecell{\textbf{Bruzzone et al. (2020)} \\ \textbf{(SAAMER-OS)}} & 187 & 153.93 & -1.65 & 32.1 & 1.055 $\pm$ 0.009 & 0.872 $\pm$ 0.002 & 25.8 $\pm$ 0.5 & 210.8 $\pm$ 0.4 & 7 \\
\hline
\textbf{Kipreos et al. (2022)} & 186 & 153.06 & -0.61 & 30.91 $\pm$ 2.33 & 0.98 $\pm$ 0.13 & 0.85 $\pm$ 0.03 & 22.57 $\pm$ 0.06 & 211.14 $\pm$ 0.05 & 6.36 $\pm$ 0.01 \\
\hline
\end{tabular}
\end{adjustbox}
\caption{Measurements of the Daytime Sextantids meteor shower made by previous groups, along with the calculations made in this study. The DSX measurements included in this table are the solar longitude at the peak of the shower ($\lambda_{max}$), right ascension ($\alpha_R$), declination ($\delta_R$), geocentric velocity ($V_g$), semi-major axis ($a$), eccentricity ($e$), inclination ($i$), argument of perihelion ($\omega$), and longitude of the ascending node ($\Omega$).}
\label{literature_table}
\end{table*}
\begin{table*}
\begin{adjustbox}{angle=90}
\begin{tabular}{ |c|c|c|c|c|c|c|c|c|c| }
\hline
& \textbf{Year(s) of observation} & \textbf{Number of observations} & \textbf{Type of observations} & \textbf{Location} \\
\hline
\textbf{Weiss (1960)} & 1956 - 1956 & - & radar & - \\
\hline
\textbf{Nilsson (1964)} & 1961 & 9 & radar & Adelaide, Australia \\
\hline
\textbf{Sekanina (1976)} & 1968 - 1969 & 10 & radar & Illinois, USA \\
\hline
\textbf{Jopek et al. (1999)} & 1960 - 1961 and 1968 - 1969 & 14 & radar & Adelaide, Australia \\
\hline
\textbf{Galligan \& Baggaley (2002)} & 1995 - 1999 & 410 & radar & Adelaide, Australia \\
\hline
\textbf{Brown et al. (2008)} & 2001 - 2006 & - & radar & Tavistock, Ontario\\
\hline
\textbf{Younger et al. (2009)} & 2006 - 2007 & - & radar & Davis Station, Antarctica and Darwin, Australia\\
\hline
\textbf{SonotaCo (2009)} & 2007 - 2009 & 4 & optical & Japan (SonotaCo Network)\\
\hline
\textbf{Brown et al. (2010)} & 2001 - 2008 & 1292 & radar & Tavistock, Ontario\\
\hline
\textbf{Rudawaska et al. (2015)} & 2001 - 2014 & 14 & optical & Europe (EDMOND database)\\
\hline
\textbf{Jenniskens et al. (2016)} & 2010 - 2013 & 14 & optical & Global (CAMS)\\
\hline
\textbf{Pokorn{\'y} et al. (2017)} & 2012 - 2015 & - & radar & Rio Grande, Argentina \\
\hline
\makecell{\textbf{Bruzzone et al. (2020)} \\ \textbf{(CAMS)}} & 2011 - 2017 & 25 & optical & multiple\\
\hline
\makecell{\textbf{Bruzzone et al. (2020)} \\\textbf{(SAAMER-OS)}} & 2012 - 2019 & 2255 & radar & Rio Grande, Argentina\\
\hline
\textbf{Kipreos et al. (2022)} & 2002 - 2020 & 19,007 & radar & Tavistock, Ontario\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Observational details of the Daytime Sextantids measurements made by previous research groups and this study. The information contained in this table includes the years the observations were taken, the total number of observations, the type of observation, and the location of the radar or camera system. The number of observations for our study is the total number of meteors located in the convex hull for the duration of the Daytime Sextantids meteor shower.}
\label{literature_table_2}
\end{table*}
\section{Convex Hull Results}\label{app:Convex Hull Results}
This appendix contains figures (Figures \ref{ch_1}, \ref{ch_2}, \ref{ch_3}, and \ref{ch_4}) showing the convex hull results for each solar longitude of the DSX shower.
\section{An Alternative, More Robust Convex Hull Method}
\label{alt_convex_hull_method}
An assumption built into the Convex Hull meteor selection method, discussed in the main paper, is that the meteor radiants can be modeled as individual points in the radiant space. In reality, each meteor echo observed by CMOR has an uncertainty associated with its velocity and radiant measurement. Each meteor echo is therefore more realistically modeled by a three-dimensional Gaussian probability distribution in radiant space. This section explores whether this more computationally complex modeling method produces noticeably different results from the method discussed in Section 3.4.1 and whether it is a necessary modification.
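As a rough illustration of the point-based selection that this assumption underpins, the sketch below tags meteors as stream members when their radiant-space points fall inside a convex hull; the hull points and candidate radiants are invented for illustration and do not reproduce the CMOR pipeline:

```python
import numpy as np
from scipy.spatial import Delaunay

# Invented cluster of radiant-space points (lambda - lambda_sun, beta, Vg)
# standing in for an over-dense shower region; all numbers are illustrative.
rng = np.random.default_rng(1)
hull_points = rng.normal([187.0, 0.0, 32.0], [2.0, 2.0, 1.0], size=(50, 3))

# A Delaunay triangulation of the cluster supports point-in-hull tests:
# find_simplex returns -1 for points outside the convex hull.
hull = Delaunay(hull_points)

candidates = np.array([[187.0, 0.0, 32.0],    # near the shower radiant
                       [150.0, 40.0, 60.0]])  # sporadic background
inside = hull.find_simplex(candidates) >= 0
assert inside[0] and not inside[1]
```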
To model the meteor radiants as three-dimensional Gaussian probability distributions, we use three separate one-dimensional Gaussian distributions, one in each radiant space parameter: ($\lambda - \lambda_{\odot}$), $\beta$, and $V_g$. The center of each distribution is the value measured by CMOR, and one standard deviation is given by the per-echo measurement uncertainty estimated using the Monte Carlo approach described in \citet{WerykBrown2012}. We note that this approach ignores any non-diagonal covariance terms; however, radiant covariances have been poorly explored so far \citep{vida2020}.
The main difference in modeling the meteor radiants as three-dimensional Gaussian probability distributions instead of points in the radiant space is that the DSX and average background number density matrices must be calculated differently. The remaining steps in the Convex Hull meteor selection method remain the same.
The extent of the Gaussian probability distribution of a meteor echo in three-dimensional radiant space can be very small, especially in the ecliptic longitude and latitude dimensions. For high-quality meteor echoes, the extent of this distribution is much smaller than the 8$\times$8$\times$8 voxels used in Section 3.4.1 to create the 3D number density matrix. Therefore more voxels are required to capture the scale of the 3D echo distributions. To increase the number of voxels, each 8$\times$8$\times$8 voxel is split into 100$\times$100$\times$100 sub-voxels, meaning that there are 800$\times$800$\times$800 sub-voxels in total.
Each meteor echo is modeled as a 3D Gaussian probability distribution. Instead of counting the number of whole meteors in each voxel, we calculate the probability that the meteor is located in each sub-voxel of radiant space. Each sub-voxel contains a small range of ($\lambda - \lambda_{\odot}$), $\beta$, and $V_g$ values. The Gaussian probability functions are used to determine the probability that a meteor is located within a given sub-voxel, evaluated at the mean of the sub-voxel's ($\lambda - \lambda_{\odot}$), $\beta$, and $V_g$ ranges. A meteor's probability is set to zero if any of the parameter values are more than two standard deviations from the mean of a given distribution. After the probability calculations are completed in all applicable sub-voxels, the sum of the probabilities for a given meteor is normalized to one.
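A minimal sketch of this per-echo weighting, assuming diagonal covariance (the product of three 1D Gaussians) and illustrative grid values; `echo_probability` is a hypothetical helper, not code from the actual pipeline:

```python
import numpy as np

def echo_probability(centers, means, sigmas, clip=2.0):
    """Probability weights of one echo over sub-voxel centres.

    centers: (3, n) sub-voxel centre values in (lambda - lambda_sun, beta, Vg);
    means/sigmas: per-parameter measured value and 1-sigma uncertainty.
    Weights beyond `clip` standard deviations in any parameter are zeroed,
    and the surviving weights are normalised to sum to one.
    """
    z = (centers - np.asarray(means)[:, None]) / np.asarray(sigmas)[:, None]
    w = np.exp(-0.5 * np.sum(z**2, axis=0))
    w[np.any(np.abs(z) > clip, axis=0)] = 0.0
    total = w.sum()
    return w / total if total > 0 else w

# Tiny illustration: a 5x5x5 grid of sub-voxel centres, flattened to (3, 125).
ax = np.linspace(-1.0, 1.0, 5)
grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij")).reshape(3, -1)
w = echo_probability(grid, means=[0.0, 0.0, 0.0], sigmas=[0.4, 0.4, 0.4])
assert np.isclose(w.sum(), 1.0)  # normalised to unit total probability
assert w[0] == 0.0               # corner (-1, -1, -1) lies beyond 2 sigma
```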
In our application to the DSX, there are 512 million sub-voxels in total, so to make this process less computationally expensive, we reject any meteor with a maximum or minimum radiant value (using the CMOR measurement uncertainties) outside the ($\lambda - \lambda_{\odot}$), $\beta$, and $V_g$ radiant cuts. This rejection reduces the number of sub-voxels that need to be evaluated. For the DSX peak day, located at solar longitude 186$^{\circ}$, this rejection reduced the number of meteors from 1342 to 953.
After the 3D number density matrix is created, the 800$\times$800$\times$800 matrix is converted into an 8$\times$8$\times$8 matrix. This conversion is done by adding all values in the set of sub-voxels contained within each larger voxel. Once the number density matrix is recombined into an 8$\times$8$\times$8 size matrix, the rest of the analysis is identical to the process described in Section 3.4.1, except that this sub-voxel method is also used to create the average background density matrix. The complete set of convex hulls for the duration of the DSX shower is located in Appendix \ref{sec:Alt Convex Hull Results}.
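The recombination of the 800$\times$800$\times$800 sub-voxel matrix into the 8$\times$8$\times$8 voxel matrix is a block sum, sketched below at reduced scale (an $80^3$ array stands in for the $800^3$ one, which would occupy roughly 4 GB as double-precision floats):

```python
import numpy as np

def coarsen(fine, factor):
    """Sum each factor x factor x factor block of sub-voxels into one voxel."""
    n = fine.shape[0] // factor
    return fine.reshape(n, factor, n, factor, n, factor).sum(axis=(1, 3, 5))

# Scaled-down illustration: 80^3 sub-voxels -> 8^3 voxels
# (the paper's case is 800^3 -> 8^3, i.e. factor = 100).
rng = np.random.default_rng(42)
fine = rng.random((80, 80, 80))
coarse = coarsen(fine, 10)
assert coarse.shape == (8, 8, 8)
assert np.isclose(coarse.sum(), fine.sum())  # block sums conserve total probability
```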
\subsection{Calculating the DSX Orbital Elements from Radar Data} \label{orbital_element_section}
The convex hull results, calculated in the above sections, identify the set of individual DSX meteors with 95\% confidence for each solar longitude bin. Once the DSX meteor set has been isolated, the mean radiant and orbital elements are calculated using the method described in \citet{Jopek2006}, which computes mean values by least-squares averaging of the heliocentric vectorial elements.
\subsubsection{Comparing Results}
The alternate convex hull method is more rigorous, but much more computationally intensive, so we compare the results to determine whether the more complex method yields significantly different results.
The mean orbital element and radiant values over the duration of the Daytime Sextantids shower are shown in Figures 11 and 12 for the computationally simple convex hull method described in Section 3.4.1. Figures \ref{fig:orbital_elements_alt} and \ref{fig:DSX radiant alt} contain the mean orbital element and radiant results for the alternate method described in Section \ref{alt_convex_hull_method}. Note that the alternate convex hull creation method only detected the shower over a solar longitude range of 175$^{\circ}$ to 196$^{\circ}$, whereas the computationally simple convex hull method described in Section 3.4.1 detected the shower from 173$^{\circ}$ to 196$^{\circ}$.
Comparing the orbital element results in Figure \ref{fig:orbital_elements_alt} and the radiant results in Figures \ref{fig:DSX radiant} and \ref{fig:DSX radiant alt}, we find no significant difference between the radiant and orbital element results of the two methods. Figures \ref{fig:uncertainty orbital elements} and \ref{fig:uncertainty radiant} show the uncertainty in the results for each solar longitude for both convex hull methods. The uncertainties produced by the alternate convex hull method are similar to those of the computationally simple method near the shower's peak but are larger in the wings of the shower. This effect is likely due to the lower number statistics in the alternate convex hull method, which rejects meteors with high uncertainties that the computationally simple method retains.
We have found that while the alternate convex hull method is a more robust method, the computationally simple convex hull method produces results similar enough that it is acceptable for the meteors to be modeled as points in radiant space instead of 3D Gaussian probability distributions.
Figures 11, \ref{fig:orbital_elements_alt}, \ref{fig:DSX radiant}, and \ref{fig:DSX radiant alt} compare our convex hull and wavelet-based results for the variation of orbital elements with solar longitude to those of previous work. Where past work measured orbits for a single solar longitude day of the shower, their results are displayed at the corresponding solar longitude; where past work spanned a range of solar longitudes, results are plotted at the reported DSX peak. A detailed summary of these past results can be found in Appendix \ref{app:literature_appendix}, Table \ref{literature_table} and Table \ref{literature_table_2}.
\section{Alternate Method Convex Hull Results}\label{sec:Alt Convex Hull Results}
\section{Additional Optical DSX Meteor Information}
\label{optical_appendix}
\begin{table*}
\begin{adjustbox}{angle=90}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{Time (UTC)} & \textbf{Shower} & \textbf{$\alpha_g$} & \textbf{$\delta_g$} & \textbf{$V_g$} & \textbf{$z_R$} & \textbf{$H_{\mathrm{b}}$} & \textbf{$\rho_b$} \\
\hline
\textbf{2019-09-28 11:14:28} & DSX & $146.43 \pm 0.445$ & $0.227 \pm 0.078$ & $33.91 \pm 0.129$ & $81.61 \pm 0.362$ & $101.56 \pm 0.310$ & $4.25 \times 10^{-10}$ \\
\hline
\textbf{2019-10-03 11:58:38} & DSX & $155.37 \pm 0.294$ & $-1.44 \pm 0.114$ & $34.00 \pm 0.238$ & $77.64 \pm 0.290$ & $97.09 \pm 0.082$ & $9.80 \times 10^{-10}$ \\
\hline
\textbf{2021-09-28 16:50:33} & DSX & $153.010 \pm 0.047$ & $-2.216 \pm 0.110$ & $35.39 \pm 0.017$ & $83.25 \pm 0.111$ & $99.72 \pm 0.071$ & $6.15 \times 10^{-10}$ \\
\hline
\textbf{2021-09-29 04:35:04} & DSX & $153.77 \pm 0.017$ & $0.62 \pm 0.020$ & $34.51 \pm 0.010$ & $86.74 \pm 0.023$ & $102.35 \pm 0.016$ & $3.72 \times 10^{-10}$ \\
\hline
\textbf{2021-09-30 12:22:06} & DSX & $153.05 \pm 0.115$ & $-1.93 \pm 0.057$ & $36.94 \pm 0.207$ & $79.38 \pm 0.095$ & $101.94 \pm 0.118$ & $3.87 \times 10^{-10}$ \\
\hline
\textbf{2021-10-01 03:48:48} & DSX & $156.29 \pm 0.065$ & $-0.235 \pm 0.125$ & $35.99 \pm 0.043$ & $81.14 \pm 0.084$ & $99.84 \pm 0.033$ & $5.80 \times 10^{-10}$ \\
\hline
\textbf{2021-10-01 04:41:16} & DSX & $155.20 \pm 0.029$ & $-0.603 \pm 0.015$ & $35.00 \pm 0.006$ & $80.68 \pm 0.029$ & $102.88 \pm 0.025$ & $3.25 \times 10^{-10}$ \\
\hline
\textbf{2021-10-01 12:16:03} & DSX & $155.39 \pm 0.039$ & $-1.37 \pm 0.069$ & $35.16 \pm 0.024$ & $79.38 \pm 0.071$ & $102.23 \pm 0.036$ & $3.65 \times 10^{-10}$ \\
\hline
\textbf{2021-10-02 01:41:41} & DSX & $154.83 \pm 0.040$ & $-0.80 \pm 0.352$ & $34.28 \pm 0.078$ & $83.80 \pm 0.240$ & $101.34 \pm 0.054$ & $4.43 \times 10^{-10}$ \\
\hline
\textbf{2021-10-02 11:43:31} & DSX & $152.71 \pm 0.091$ & $0.141 \pm 0.050$ & $33.06 \pm 0.107$ & $79.28 \pm 0.073$ & $98.34 \pm 0.038$ & $7.83 \times 10^{-10}$ \\
\hline
\textbf{2021-10-02 11:55:25} & DSX & $155.32 \pm 0.114$ & $-1.58 \pm 0.313$ & $34.46 \pm 0.104$ & $78.11 \pm 0.304$ & $102.54 \pm 0.165$ & $3.43 \times 10^{-10}$ \\
\hline
\textbf{2021-10-04 05:03:19} & DSX & $156.10 \pm 0.027$ & $-1.72 \pm 0.015$ & $34.12 \pm 0.009$ & $83.82 \pm 0.027$ & $98.64 \pm 0.022$ & $7.25 \times 10^{-10}$ \\
\hline
\textbf{2021-10-02 11:59:52} & DSX & $156.41 \pm 0.019$ & $-1.11 \pm 0.026$ & $35.43 \pm 0.008$ & $77.99 \pm 0.026$ & $103.77 \pm 0.019$ & $2.70 \times 10^{-10}$ \\
\Xhline{5\arrayrulewidth}
\textbf{2020-12-14 02:54:44} & GEM & $109.95 \pm 0.263$ & $32.75 \pm 0.074$ & $36.20 \pm 0.011$ & $77.40 \pm 0.16$ & $101.57 \pm 0.096$ & $3.97 \times 10^{-10}$ \\
\hline
\textbf{2019-12-14 17:50:47} & GEM & $111.53 \pm 0.032$ & $33.45 \pm 0.043$ & $36.30 \pm 0.021$ & $78.16 \pm 0.051$ & $105.31 \pm 0.047$ & $2.11 \times 10^{-10}$ \\
\hline
\textbf{2020-12-13 17:31:10} & GEM & $111.94 \pm 0.046$ & $33.90 \pm 0.024$ & $35.90 \pm 0.023$ & $83.84 \pm 0.038$ & $103.32 \pm 0.027$ & $3.05 \times 10^{-10}$ \\
\hline
\textbf{2020-12-14 02:33:39} & GEM & $111.87 \pm 0.040$ & $33.17 \pm 0.056$ & $35.80 \pm 0.012$ & $80.42 \pm 0.038$ & $102.69 \pm 0.029$ & $3.24 \times 10^{-10}$ \\
\hline
\textbf{2019-12-14 17:06:07} & GEM & $111.85 \pm 0.024$ & $33.97 \pm 0.033$ & $35.76 \pm 0.011$ & $82.90 \pm 0.030$ & $102.33 \pm 0.023$ & $3.68 \times 10^{-10}$ \\
\hline
\textbf{2019-12-14 18:08:00} & GEM & $110.81 \pm 0.892$ & $34.12 \pm 0.282$ & $37.48 \pm 0.641$ & $74.22 \pm 0.811$ & $99.20 \pm 0.281$ & $6.66 \times 10^{-10}$\\
\hline
\textbf{2019-12-15 02:20:12} & GEM & $111.81 \pm 0.173$ & $32.40 \pm 0.296$ & $35.98 \pm 0.030$ & $81.81 \pm 0.278$ & $99.40 \pm 0.108$ & $6.16 \times 10^{-10}$\\
\hline
\textbf{2020-12-12 02:29:53} & GEM & $109.72 \pm 0.196$ & $34.34 \pm 0.058$ & $36.79 \pm 0.183$ & $79.28 \pm 0.187$ & $98.93 \pm 0.062$ & $6.72 \times 10^{-10}$\\
\hline
\textbf{2020-12-14 02:25:50} & GEM & $112.12 \pm 0.044$ & $32.59 \pm 0.087$ & $37.02 \pm 0.027$ & $81.87 \pm 0.077$ & $102.47 \pm 0.057$ & $3.38 \times 10^{-10}$\\
\hline
\textbf{2019-12-14 17:41:47} & GEM & $111.68 \pm 0.047$ & $33.62 \pm 0.061$ & $36.21 \pm 0.005$ & $78.74 \pm 0.065$ & $102.21 \pm 0.040$ & $3.78 \times 10^{-10}$\\
\hline
\textbf{2020-12-11 17:46:30} & GEM & $109.29 \pm 0.082$ & $33.54 \pm 0.040$ & $35.62 \pm 0.228$ & $79.11 \pm 0.082$ & $100.88 \pm 0.037$ & $4.85 \times 10^{-10}$\\
\hline
\textbf{2020-12-13 17:45:54} & GEM & $111.50 \pm 0.073$ & $33.39 \pm 0.041$ & $35.83 \pm 0.222$ & $80.80 \pm 0.054$ & $103.66 \pm 0.022$ & $2.86 \times 10^{-10}$\\
\hline
\textbf{2020-12-14 02:58:56} & GEM & $112.53 \pm 0.184$ & $32.91 \pm 0.057$ & $36.96 \pm 0.056$ & $79.09 \pm 0.157$ & $100.29 \pm 0.075$ & $5.10 \times 10^{-10}$\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Detailed atmospheric trajectory data for the 13 optical Daytime Sextantid meteors analyzed in this paper and the 13 optical Geminid meteors used to compare their relative compositions. $\alpha_g$ (deg), $\delta_g$ (deg), and $V_g$ (km/s) are the geocentric radiant and velocity, $H_b$ (km) is the begin height, $\rho_b$ is the air mass density at the meteor begin point.}
\label{optical_table_extra}
\end{table*}
\begin{table*}
\begin{adjustbox}{angle=90}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Time (UTC)} & \textbf{Shower} & \textbf{$a$} & \textbf{$e$} & \textbf{$i$} & \textbf{$\omega$}\\
\hline
\textbf{2019-09-28 11:14:28} & DSX & $0.95 \pm 0.006$ & $0.87 \pm 0.002$ & $29.97 \pm 1.002$ & $208.28 \pm 0.443$\\
\hline
\textbf{2019-10-03 11:58:37} & DSX & $1.07 \pm 0.008$ & $0.86 \pm 0.004$ & $24.12 \pm 0.678$ & $212.67 \pm 0.499$\\
\hline
\textbf{2021-09-28 16:50:33} & DSX & $1.18 \pm 0.002$ & $0.88 \pm 0.0004$ & $22.98 \pm 0.24$ & $212.90 \pm 0.091$\\
\hline
\textbf{2021-09-29 04:35:04} & DSX & $1.16 \pm 0.001$ & $0.87 \pm 0.0001$ & $23.24 \pm 0.029$ & $214.50 \pm 0.039$\\
\hline
\textbf{2021-09-30 12:22:06} & DSX & $1.26 \pm 0.01$ & $0.89 \pm 0.002$ & $30.87 \pm 0.626$ & $213.26 \pm 0.133$\\
\hline
\textbf{2021-10-01 03:48:48} & DSX & $1.27 \pm 0.003$ & $0.89 \pm 0.001$ & $23.70 \pm 0.193$ & $214.46 \pm 0.161$\\
\hline
\textbf{2021-10-01 04:41:16} & DSX & $1.17 \pm 0.002$ & $0.874 \pm 0.0001$ & $23.95 \pm 0.026$ & $213.96 \pm 0.050$\\
\hline
\textbf{2021-10-01 12:16:03} & DSX & $1.19 \pm 0.005$ & $0.87 \pm 0.0001$ & $24.73 \pm 0.097$ & $214.49 \pm 0.132$\\
\hline
\textbf{2021-10-02 01:41:41} & DSX & $1.10 \pm 0.010$ & $0.865 \pm 0.001$ & $24.51 \pm 0.590$ & $213.09 \pm 0.428$\\
\hline
\textbf{2021-10-02 11:43:31} & DSX & $0.96 \pm 0.004$ & $0.86 \pm 0.001$ & $23.55 \pm 0.220$ & $209.16 \pm 0.105$\\
\hline
\textbf{2021-10-02 11:55:25} & DSX & $1.11 \pm 0.006$ & $0.87 \pm 0.003$ & $24.81 \pm 0.150$ & $213.36 \pm 0.600$\\
\hline
\textbf{2021-10-04 05:03:19} & DSX & $1.06 \pm 0.001$ & $0.86 \pm 0.001$ & $26.13 \pm 0.016$ & $212.50 \pm 0.046$\\
\hline
\textbf{2021-10-02 11:59:52} & DSX & $1.19 \pm 0.002$ & $0.88 \pm 0.001$ & $24.18 \pm 0.042$ & $213.61 \pm 0.047$\\
\Xhline{5\arrayrulewidth}
\textbf{2020-12-14 02:54:44} & GEM & $1.44 \pm 0.016$ & $0.89 \pm 0.0002$ & $21.06 \pm 0.323$ & $321.89 \pm 0.318$\\
\hline
\textbf{2019-12-14 17:50:47} & GEM &$1.37 \pm 0.004$ & $0.89 \pm 0.0002$ & $22.77 \pm 0.070$ & $323.85 \pm 0.086$\\
\hline
\textbf{2020-12-13 17:31:10} & GEM & $1.29 \pm 0.001$ & $0.89 \pm 0.0004$ & $23.44 \pm 0.054$ & $324.82 \pm 0.080$\\
\hline
\textbf{2020-12-14 02:33:39} & GEM & $1.29 \pm 0.002$ & $0.89 \pm 0.0004$ & $22.72 \pm 0.093$ & $324.44 \pm 0.072$\\
\hline
\textbf{2019-12-14 18:08:00} & GEM & $1.57 \pm 0.115$ & $0.91 \pm 0.007$ & $25.16 \pm 0.745$ & $321.86 \pm 1.498$\\
\hline
\textbf{2019-12-14 17:06:07} & GEM & $1.30 \pm 0.002$ & $0.89 \pm 0.001$ & $22.89 \pm 0.067$ & $324.18 \pm 0.045$\\
\hline
\textbf{2019-12-15 02:20:12} & GEM & $1.30 \pm 0.011$ & $0.89 \pm 0.001$ & $21.28 \pm 0.422$ & $324.92 \pm 0.499$\\
\hline
\textbf{2020-12-12 02:29:53} & GEM & $1.39 \pm 0.028$ & $0.90 \pm 0.002$ & $25.84 \pm 0.184$ & $323.77 \pm 0.309$\\
\hline
\textbf{2020-12-14 02:25:50} & GEM & $1.38 \pm 0.003$ & $0.90 \pm 0.001$ & $23.52 \pm 0.112$ & $325.58 \pm 0.147$\\
\hline
\textbf{2019-12-14 17:41:47} & GEM & $1.35 \pm 0.003$ & $0.89 \pm 0.001$ & $23.04 \pm 0.087$ & $323.93 \pm 0.113$\\
\hline
\textbf{2020-12-11 17:46:30} & GEM & $1.28 \pm 0.022$ & $0.89 \pm 0.003$ & $21.72 \pm 0.315$ & $324.58 \pm 0.132$\\
\hline
\textbf{2020-12-13 17:45:54} & GEM & $1.30 \pm 0.006$ & $0.89 \pm 0.001$ & $22.18 \pm 0.079$ & $324.35 \pm 0.106$\\
\hline
\textbf{2020-12-14 02:58:56} & GEM & $1.36 \pm 0.006$ & $0.90 \pm 0.001$ & $24.53 \pm 0.151$ & $325.48 \pm 0.314$\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Detailed orbital data for the 13 optical Daytime Sextantid meteors and the 13 optical Geminid meteors used to compare their relative compositions. $a$ (AU) is the semi-major axis, $e$ is the eccentricity, $i$ (deg) is the inclination, and $\omega$ (deg) is the argument of perihelion of the orbit.}
\label{optical_table_extra_orbits}
\end{table*}
\bibliographystyle{mnras}
\bibliography{bibliography}
\bsp %
\label{lastpage} |
Title:
Numerical Estimations of the Distribution of the Lifetime of Bubbles Emerging from First Order Cosmological Phase Transitions |
Abstract: We present a mathematical framework to produce a numerical estimation of the
distribution of the lifetime of bubbles emerging from first order cosmological
phase transitions. In a preceding work, we implemented the Sound Shell
model to predict the power spectra of gravitational waves arising from the
decay of scalar fields. The model depends on the lifetime distribution of
bubbles before collision, which in turn depends on the transition rate $\beta$
and the speed of the bubble wall $v$. Empirical exponential laws were used to
describe the lifetime distribution and the resultant power spectra. For
detonations, the results show a good agreement with simulations where the
bubbles have nucleated simultaneously with a mean separation distance. However,
for deflagrations, the results show that the amplitude of gravitational waves
is higher at longer wavelengths than for simultaneous nucleation, indicating the
importance of having a more accurate description of the bubble lifetime
distribution.
| https://export.arxiv.org/pdf/2208.10636 |
\title{{\Large Numerical Estimations of the Distribution of the Lifetime of Bubbles Emerging from First Order Cosmological Phase Transitions}\\[3mm] }
\author{Mulham Hijazi$^{\,a}\,$\footnote{E-mail address: {\tt [email protected]}}}
\affiliation{\vspace{2mm}${}^a$Department of Physics and Astronomy,
University of Manchester}
\vspace{10mm}
\section{Introduction}
The standard model of particle physics predicts that massive particles gain their mass through the Higgs mechanism, with their masses proportional to the vacuum expectation value of the scalar field \cite{Goldstone:1962es,Gleiser:1998kk,Bailin:2004zd,Linde:1978px}. Thermal field theory predicts that the potential of the scalar field is affected by the temperature of the Universe. The effective potential is of the form:
\begin{align}
V_\text{eff}(\phi,T)= \frac{D}{2}(T^2-T_0^2)|\phi|^2-\frac{E}{3}T|\phi|^3+\frac{\lambda}{4}|\phi|^4,
\end{align}
where $D$, $T_0$, $E$, and $\lambda$ are constants \cite{Laine:2016hma,Linde:1983px,Vainshtein:1981wh,Linde:1981zj}. We can see that above the critical temperature, $T_c=T_0/\sqrt{1-2E^2/9\lambda D}$, the potential becomes symmetric and has its only minimum at $\phi=0$. Thus, we expect that standard model particles were massless at early times in the history of the Universe, when the fundamental forces of nature were unified.
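For completeness, the quoted $T_c$ follows from requiring that the broken-phase minimum be degenerate with the symmetric one: imposing $\partial V_\text{eff}/\partial\phi=0$ and $V_\text{eff}=0$ at some $\phi_c\neq 0$ gives
\begin{align}
\phi_c=\frac{2E\,T_c}{3\lambda}, \qquad D\left(T_c^2-T_0^2\right)=\frac{2E^2 T_c^2}{9\lambda} \quad\Longrightarrow\quad T_c=\frac{T_0}{\sqrt{1-2E^2/9\lambda D}}.
\end{align}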
However, as the Universe cools below the critical temperature, the potential develops a second minimum, the true vacuum of the theory. Quantum mechanics allows the Universe to tunnel through the potential barrier and reside in the true vacuum, as energetics favour decay to the new, lower ground state.
The mathematical foundations describing the process of vacuum decay were first laid out by Coleman, who introduced the so-called thin-wall solution to the equation of motion, corresponding to a potential in which the difference between the false and true vacuum is small compared to the height of the barrier between them. The tunneling rate per unit volume is given by \cite{Coleman:1977py,Coleman:1978ae,Rajaraman:1982is,Shifman:1994ee,Rubakov:1984bh,Callan:1977pt,Coleman:1980aw}
\begin{align}
\Gamma/V=A e^{-B},
\end{align}
where $A$ is a prefactor, and the exponent $B$ is the difference between the Euclidean action $S_E$ of the bounce and false vacuum solutions. The mechanism by which vacuum decay occurs is the nucleation of bubbles which grow and fill up the entire spacetime continuum, with the interior of these bubbles residing in the true vacuum. The decay of the vacuum leads to cosmological phase transitions as standard model particles gain mass, breaking gauge invariances in the process. Moreover, studying the stability of the vacuum helps us set constraints on physical constants in particle physics models \cite{Branchina:2018xdh, Linde:2007fr,Frampton:1976pb,Weinberg:2012pjx,Hijazi:2019fme}. Signatures of such decays may manifest themselves in the form of resultant gravitational waves, which we aim to detect in the future. Thus, several papers have aimed to predict the shape of the power spectra of these gravitational waves \cite{Hindmarsh:2015qjv,Kamionkowski:1993fg,Jinno:2016vai,Weir:2016tov,Hindmarsh:2017gnf}.
In a previous work, we implemented the sound shell model to predict the shape of the power spectra of the gravitational waves resulting from these cosmological phase transitions \cite{Hindmarsh:2016lnk,Hindmarsh:2019phv}. These gravitational waves are sourced by the explosive growth of bubbles of the true vacuum, governed by the hydrodynamics occurring at the bubble walls. The velocity profiles of the cosmological fluid surrounding the bubbles are classified into detonations, deflagrations, and hybrids \cite{Espinosa:2010hh,Sopena:2010zz,Ignatius:1993qn}.
The shape of the power spectra is affected by the lifetime distribution of these bubbles before they collide. An empirical exponential law was used to describe this distribution, and the results were compared to the power spectra predicted from simulations in which simultaneous nucleation of bubbles with a fixed separation distance was assumed. The results were in good agreement for detonations; for deflagrations, however, the results differed, showing a higher amplitude at longer wavelengths \cite{Hindmarsh:2019phv}, indicating that a more accurate description of the bubble lifetime is needed, although it was argued in another paper that the source of the discrepancy for deflagrations was the reduction of kinetic energy due to the interaction of the sound shells \cite{Cutting:2019zws}.
In this paper, we present a mathematical framework to numerically estimate the distribution of the lifetime of bubbles before their collision. In Section II, we lay out the theoretical background needed to understand the nature of the problem, and then derive mathematical expressions to estimate the lifetime of nucleating bubbles. In Section III, we present our results for a range of values of the transition rate $\beta$ and bubble wall speed $v$, and compare them to bubble lifetime distributions resulting from simulations of randomly generated periodic Universes described by a unit cell with a fixed number of bubbles. Finally, Section IV discusses the results and summarises our conclusions.
\section{Theoretical Background}\label{sec:Theory}
The difficulty in calculating an exact distribution of the lifetime of bubbles lies in the fact that vacuum decay is a probabilistic process, which entails that these bubbles nucleate at random locations and times. Moreover, bubbles only nucleate in the metastable phase which means that the space available for these bubbles to nucleate shrinks over time as more bubbles nucleate and grow and eventually fill up the entire space.
However, an estimate of the number of bubbles nucleated, $N(t)$, and of the fraction of space which resides in the false vacuum, $h(t)$, as a function of time was worked out by Enqvist \textit{et al.}~\cite{Enqvist:1991xw}; we quote their results:
\begin{align}
h(t)=\exp(-e^{\beta(t-t_f)}),\\
N(t)=V\frac{\beta^3}{8\pi v^3} (1-h(t)),
\end{align}
where $\beta$ is the transition rate, $v$ is the bubble wall velocity, and $t_f$ is some arbitrary time chosen such that $h(t_f)=1/e$. The value of $t_f$ is irrelevant to our discussion since we are only interested in time differences between the time of nucleation and the time of collision.
The function $h(t)$ is an exponentially decaying function with the property that $h(\infty)=0$ as bubbles grow to fill the entire spacetime continuum. Hence, the total number of bubbles nucleated at the end of the phase transition is given by
\begin{align}
N_{\text{tot}}\equiv N(\infty)=V\frac{\beta^3}{8\pi v^3}.
\end{align}
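These two expressions are straightforward to evaluate; a minimal sketch, in the units used in Section III (where $V = 1\ L^3$):

```python
import math

def h(t, beta, t_f):
    """Fraction of space still in the false vacuum, h(t) = exp(-e^{beta(t - t_f)})."""
    return math.exp(-math.exp(beta*(t - t_f)))

def N_total(beta, v, V=1.0):
    """Total number of bubbles nucleated in a volume V: N_tot = V beta^3 / (8 pi v^3)."""
    return V*beta**3/(8.0*math.pi*v**3)

beta, v, t_f = 5.0, 0.25, 5.0
# By construction h(t_f) = 1/e, and h -> 0 as t -> infinity.
print(round(N_total(beta, v)))   # 318, the largest bubble count in Table 1
```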
In our analysis we will discretise the time of nucleation of each bubble as
\begin{align}
\label{eq:tn}
t_n\equiv\frac{1}{\beta}\ln\bigg(-\ln\bigg(1-\frac{n}{N_{\text{tot}}}\bigg)\bigg) +t_f,
\end{align}
where $t_n$ is the time of nucleation of the $n$th bubble. We then define the function $R_n(t)$ describing the radius of the $n$th bubble as a function of time as
\begin{align}
R_n(t)=v(t-t_n),
\end{align}
where $t>t_n$. We denote the time at which a bubble, say $b_i$, has its first collision with another bubble, say $b_j$, as $t_*$. For this to occur, two conditions should be met:
\begin{enumerate}
\item Any bubble $b_k$ ($k\neq j$) nucleated at time $t_k <t_*$ should nucleate at a distance from $b_i$ greater than the sum of their radii at $t_*$, otherwise $b_k$ would collide with $b_i$ before $b_j$. This implies that for each bubble $b_k$ there is a corresponding ``forbidden volume'' of the metastable phase within which the bubble could not have nucleated. For bubbles which have nucleated before $b_i$, $t_k<t_i$, the forbidden volume is described by a sphere with a radius defined by the minimum allowed distance between them, $R_i(t_*)+R_k(t_*)$. For bubbles which have nucleated after $b_i$, $t_k>t_i$, the forbidden volume is given by the volume of the sphere enclosed by the minimum allowed distance minus the volume of the bubble $b_i$ at $t_k$, since the volume of $b_i$ at that time resides in the true vacuum.
\item The bubble $b_j$ should nucleate on the surface of the sphere parameterised by $R(t_*)\equiv R_i(t_*)+R_j(t_*)$, with $dR=2vdt_*$.
\end{enumerate}
From that we infer that the probability that $b_j$ is the first bubble to collide with $b_i$, and that the collision occurs at $t_*$ is given by
\begin{align}
\label{eq:prob}
P_{ij}(t_*)dt_*=N_i& \prod_{k=1,k\neq j}^{k=i-1} \bigg(1-\frac{\frac{4\pi}{3} (R_i(t_*)+R_k(t_*))^3}{h(t_k)V}\bigg)\nonumber\\
& \prod_{k=i+1,k\neq j}^{k=m} \bigg(1-\frac{\frac{4\pi}{3} [(R_i(t_*)+R_k(t_*))^3-R_i(t_k)^3]}{h(t_k)V}\bigg)\nonumber\\
&\times \frac{8\pi R^2(t_*)vdt_*}{h(t_j)V} \times \theta[h(t_j)V-\frac{4\pi}{3} R^3(t_*)],
\end{align}
where $m$ is defined such that $t_m = \text{max}\{t_i; t_i<t_*\}$. The first two lines of \eqref{eq:prob} correspond to the product of the probabilities that all bubbles $b_k$ ($k\neq j, t_k<t_*$) have nucleated in their corresponding allowed regions of space. The last line gives the probability that the bubble $b_j$ has nucleated on the surface defined by a sphere enclosed by $R$. The $\theta$ function ensures that the probability of a (first) collision vanishes if the sphere enclosed by the distance between the two bubbles is larger than the volume of the false vacuum at $t_j$. The normalisation constant $N_i$ is fixed by the condition
\begin{align}
\sum_j \int^\infty_{\text{max}\{t_i,t_j\}} P_{ij}(t_*)dt_*=1.
\end{align}
Now we can write down an expression for the average lifetime $\bar{T}_i$ of the bubble $b_i$ as
\begin{align}
\bar{T}_i= \bigg[ \sum_j \int^\infty_{\text{max}\{t_i,t_j\}} t_* P_{ij}(t_*)dt_* \bigg] - t_i.
\end{align}
Finally, after computing the lifetime of each bubble, we can fit a distribution using the histogram of lifetimes of all bubbles. In the next section we will numerically find the lifetime distribution of bubbles corresponding to a range of values for the transition rate $\beta$ and various bubble wall speeds $v$.
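The scale of the quantities involved can be previewed with a brute-force sketch: nucleation sites are placed at random in a unit cube with the discretised times of Eq.~\eqref{eq:tn}, and each bubble's first collision is found directly from pairwise distances. This toy version ignores periodicity and the forbidden-volume constraints of Eq.~\eqref{eq:prob}, so it is only an order-of-magnitude cross-check, not an implementation of the framework:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, v, t_f, V = 5.0, 0.25, 5.0, 1.0
N_tot = int(V * beta**3 / (8.0*np.pi*v**3))       # 318 for these parameters

# Discretised nucleation times; the n = N_tot bubble has t_n -> infinity, so drop it.
n = np.arange(1, N_tot)
t_n = np.log(-np.log(1.0 - n/N_tot))/beta + t_f

# Random nucleation sites (no metastable-phase rejection, no periodicity).
x = rng.random((len(t_n), 3))
d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)

# Bubbles i and j meet when R_i(t) + R_j(t) = d, i.e. t* = (d/v + t_i + t_j)/2,
# clipped so that no collision predates the later of the two nucleations.
t_star = (d/v + t_n[:, None] + t_n[None, :])/2.0
t_star = np.maximum(t_star, np.maximum(t_n[:, None], t_n[None, :]))
np.fill_diagonal(t_star, np.inf)

lifetime = t_star.min(axis=1) - t_n
print(lifetime.mean())   # compare with the corresponding row of Table 1
```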
\section{Results}
In our analysis, we consider a unit volume $V= 1 \ L^3$, and fix the value of $t_f= 5 \ L$ arbitrarily. We run our computations for transition rates ranging between $\beta = 5-10 \ L^{-1}$, bubble wall speeds ranging between $v=0.25-0.5$, and numbers of bubbles in a unit volume within the range $N_{\text{tot}}=54-318$.
We proceed to numerically fit the distribution of bubble lifetimes $\nu_{T}(\bar{T}_i)$ as a function of the average lifetime $\bar{T}_i$ of each bubble. We compute the average lifetime over all bubbles, $\bar{T}$, and the standard deviation $\sigma_{\bar{T}}$. We then define
\begin{align}
\bar{R}_i= v\bar{T}_i,
\end{align}
as the average radius of bubble $b_i$ when it collides for the first time. Consequently we find the average radius at the time of collision for all bubbles $\bar{R}=v\bar{T}$, and the standard deviation $\sigma_{\bar{R}}= v\sigma_{\bar{T}}$.
Furthermore, we define
\begin{align}
R_{\text{uni}} =\frac{1}{2} n^{-1/3}_{\text{tot}} \equiv \frac{1}{2} \bigg(\frac{N_{\text{tot}}}{V}\bigg)^{-1/3}
\end{align}
as the radius of bubbles which have nucleated simultaneously and uniformly at the time of their collision. This expression was used in simulations where gravitational waves were generated from simultaneous nucleation of bubbles \cite{Hindmarsh:2015qjv}.
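For instance, with $N_{\text{tot}}=318$ bubbles in a unit volume this gives:

```python
# R_uni = (1/2) (N_tot/V)^(-1/3); with N_tot = 318 and V = 1 L^3:
R_uni = 0.5 * (318/1.0)**(-1.0/3.0)
print(round(R_uni, 4))   # 0.0732 L
```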
\begin{table}[!ht]
\setlength{\tabcolsep}{10pt} %
\renewcommand{\arraystretch}{1.5} %
\centering
\begin{tabular}{cccccccc}
\hline
$\beta[\text{L}^{-1}]$&$v$&$\lfloor N_{\text{tot}}\rfloor$ &$ \bar{T}[\text{L}]$&$\sigma_{\bar{T}}[\text{L}]$&$\bar{R}[\text{L}]$&$\sigma_{\bar{R}}[\text{L}]$& $R_{\text{uni}}[\text{L}]$\\
\hline
5&0.25&318&0.0994&0.0832&0.0249&0.0208&0.0732\\
5&0.35&116&0.0878&0.0758&0.0307&0.0265&0.1025\\
5&0.45&54&0.0724&0.0680&0.0326&0.0306&0.1318\\
6&0.35&200&0.0799&0.0669&0.0280&0.0234&0.0854\\
7&0.35&318&0.0708&0.0595&0.0248&0.0208&0.0732\\
10&0.5&318&0.0494&0.0417&0.0247&0.0209&0.0732\\
\hline
\end{tabular}
\caption{Numerical estimates of the average lifetime $\bar{T}$, the standard deviation $\sigma_{\bar{T}}$, the corresponding average radius of the bubble at the time of its first collision $\bar{R}$, the standard deviation $\sigma_{\bar{R}}$, and the average radius of bubbles at the time of their collision when they are nucleated simultaneously and uniformly, $R_\text{uni}$, for different input values of the transition rate $\beta$ and bubble wall speed $v$.}
\label{tab:Results}
\end{table}
The numerics are laid out in Table~\ref{tab:Results}, and plots of the lifetime distribution $\nu_{T}(\bar{T}_i)$ are displayed in Figures~\ref{fig:b},~\ref{fig:v} and~\ref{fig:Nb}. We notice that the average lifetime $\bar{T}$ and the standard deviation $\sigma_{\bar{T}}$ decrease as the transition rate $\beta$ increases. They also decrease as the bubble wall speed $v$ increases.

Moreover, the average radius of the bubble at the time of collision, $\bar{R}$, is smaller than the radius at the time of collision of bubbles which have nucleated simultaneously and uniformly, $R_{\text{uni}}$. This was an expected result, since bubbles which have nucleated at later times can only nucleate within small pockets of space that still reside in the false vacuum.
Interestingly, we notice that the values computed for the average radius of the bubble at the time of collision $\bar{R}$, and the standard deviation $\sigma_{\bar{R}}$, corresponding to different input parameters for the transition rate $\beta$ and bubble wall speed $v$ which yield the same total number of bubbles $N_{\text{tot}}$, are roughly the same. This might indicate that the distribution of bubble sizes at the time of collision, $\nu_R(\bar{R}_i)$, at least for some range of parameters, depends only on the number of bubbles $N_{\text{tot}}$, as shown in Figure~\ref{fig:nur}.
In addition, we have stacked up 1000 simulations in which we generated periodic Universes described by a unit cube $L^3$, placing $N_{\text{tot}}=318$ nucleation points randomly within the cube on the condition that all bubbles nucleate in the metastable phase. The time of nucleation of the $n$th bubble, $t_n$, is given by~\eqref{eq:tn}. We fixed the values of the transition rate at $\beta=5 \ L^{-1}$ and the wall speed at $v=0.25$. Then we proceeded to compute the radius of each bubble at the time of its first collision. The average radius of bubbles at the time of their first collision is $\bar{R}=0.0269 \ L$, which is very close to the values shown in Table~\ref{tab:Results} for transitions which yield $\lfloor N_{\text{tot}}\rfloor=318$.
We created a histogram of these radii and plotted it against the histogram obtained from our mathematical framework in Figure~\ref{fig:nur}. The distributions are in good agreement, having roughly the same average and roughly the same shape, although the peak of the distribution in the simulations seems to be at a slightly smaller size, which is likely due to the periodicity of the Universe in the simulations. Furthermore, we plotted the bubble size distribution given in \cite{Hindmarsh:2019phv} for exponential nucleation, described by
\begin{align}
\nu_{\text{exp}}(R)= \frac{\beta}{v}\, e^{-\beta R/v},
\end{align}
for the same values of the physical parameters given in the simulations. We can clearly see that it fails to replicate the distributions resulting from the simulations at larger radii. The average radius of bubbles is also much larger, as the distribution yields $\bar{R} = 0.05 \ L$.
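The mismatch in the averages follows directly from the exponential ansatz, whose mean is $\langle R\rangle = v/\beta$; a one-line check for the simulation parameters:

```python
beta, v = 5.0, 0.25      # parameters used in the stacked simulations
mean_R_exp = v/beta      # mean of nu_exp(R) = (beta/v) exp(-beta R / v)
print(mean_R_exp)        # 0.05 L, versus the 0.0269 L measured in the simulations
```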
\section{Conclusion}
We presented a mathematical framework to compute an estimate of the lifetime distribution of bubbles $\nu_{T}(\bar{T}_i)$ as a function of the transition rate $\beta$ and bubble wall speed $v$. We started by discretising the time of nucleation of each bubble, $t_n$, and then calculating the probability that bubble $b_i$ had its first collision with bubble $b_j$ at time $t_*$. This was done by calculating the probability that every other bubble had nucleated outside the forbidden region of the metastable phase, and that bubble $b_j$ had nucleated on the surface parameterised by the radius $R_i(t_*)+R_j(t_*)$.
After computing the average lifetime of each bubble, $\bar{T}_i$, we created a histogram and fitted the distribution of bubble lifetimes $\nu_{T}(\bar{T}_i)$. We ran our computations using values of the transition rate ranging between $\beta = 5-10 \ L^{-1}$ and bubble wall speeds ranging between $v=0.25-0.5$; consequently, the number of bubbles in a unit volume $L^3$ fell within the range $N_{\text{tot}}=54-318$.
As expected, the average lifetime $\bar{T}$ and the standard deviation $\sigma_{\bar{T}}$ decrease as the transition rate $\beta$ increases. They also decrease as the bubble wall speed $v$ increases. We also note that the average radius of the bubble at the time of collision, $\bar{R}$, is smaller than the radius at the time of collision of bubbles which have nucleated simultaneously and uniformly, $R_{\text{uni}}$. This was also an expected result, since bubbles which have nucleated at later times can only nucleate within small pockets of space that still reside in the false vacuum. This shows that the size of the bubbles at the time of collision is overestimated in the simulations where $R_\text{uni}$ was given as the average radius at first collision \cite{Hindmarsh:2015qjv}.
Interestingly, when we fit the distribution of bubble sizes at the time of collision $\nu_R(\bar{R}_i)$ as a function of the average radius of bubbles at the time of collision $\bar{R}_i=v\bar{T}_i$, we noticed that decays which yielded the same number of bubbles $N_{\text{tot}}$ have produced the same distributions. This indicates that $\nu_R(\bar{R}_i)$ may only depend on the number of bubbles $N_{\text{tot}}\propto \frac{\beta^3}{v^3}$.
The mathematical framework presented is useful for producing bubble lifetime distributions $\nu_{T}(\bar{T}_i)$ for a range of parameters. However, as $\beta$ increases and the number of bubbles $N_{\text{tot}}$ becomes very large, the computations become very time-consuming. We expect, though, that as the number of bubbles $N_{\text{tot}}$ increases, the estimate that the average radius of the bubble at the time of collision $\bar{R}$ is given by the radius at collision of bubbles which have nucleated uniformly and simultaneously, $R_{\text{uni}}\propto N^{-1/3}_{\text{tot}}$, becomes more viable.
We relied on expressions given in Enqvist \textit{et al.}~\cite{Enqvist:1991xw} to describe the times of nucleation $t_n$ and the function $h(t)$, which gives the fraction of space that resides in the metastable phase as a function of time. By comparing the resultant distributions with those obtained from stacking up 1000 simulations of randomly generated periodic Universes with the same number of bubbles per unit volume $n_\text{tot}$, we found that the distributions had similar shapes and roughly the same average values. This shows that the mathematical framework yields a good estimate of the distribution of bubble lifetimes.
We hope to find signatures of such cosmological phase transitions by detecting the resultant gravitational waves in the future. The ESA is planning to launch a laser interferometer into space in the late 2030s under the LISA project \cite{Caprini:2015zlo,Caprini:2019egz}. This will enable us to probe frequencies typical of the cosmological transitions which we expect to have occurred in the early Universe.
\subsection*{Acknowledgments}
\vspace{-3mm}
I would like to thank Mark Hindmarsh and Apostolos Pilaftsis for insightful comments. The work of Mulham Hijazi is supported by UKSACB.
\
|
Title:
Multiple flares caused by mass ejection episodes during the advanced nebular phase of Nova Scuti 2019 |
Abstract: Our photometric and spectroscopic monitoring shows that starting with 2020
June 4, day +217 from optical maximum and well into its advanced nebular stage,
Nova Sct 2019 began displaying a series of nine large amplitude flares (up to
Delta(m)~1.7 mag), characterized by a rapid rise to peak (=<10 hours) and a
fast exponential decline (e-folding time =<50 hours). The time interval
Delta(t) between flares follows an ordered sequence, declining from 8.43 to
4.90 days, which safely allows us to exclude that any other flare occurred without
being recorded by the observations. When the sequence of flares was over by
2020 July 28 (day +271), Nova Sct 2019 slowed its overall decline rate from
Delta(m)=0.0067 mag/day to 0.0027 mag/day. The flares were caused by material
expelled at high velocity (~1000 km/s) from the still burning WD. The cooler
pseudo-photosphere forming at each flare in the expelled material caused
a recombination wave to spread through the original nova ejecta (at ~170 AU
from the WD), quenching emission from [FeX] and [FeVII] and boosting that from
lower ionization species. After each flare, once the small amount of expelled
material had turned optically thin, the original nova ejecta resumed displaying
[FeX] and [FeVII] emission lines, a fact that clearly proves the direct
photo-ionization action exerted on the ejecta by the burning WD. While the
other known flaring novae (V458 Vul, V4745 Sgr, and V5588 Sgr) presented the
flares close to maximum brightness and with increasing Delta(t), Nova Sct 2019
is unique in having displayed them during the advanced nebular stage and with
decreasing Delta(t).
| https://export.arxiv.org/pdf/2208.14733 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
stars: novae, cataclysmic variables
\end{keywords}
\section{Introduction}
The details of the multiple discoveries and designations of Nova Scuti 2019
(NSct19 for short) were given by \citet{green}. The transient was first
discovered by K.~Nishiyama (Japan) on Oct 29.397 UT (HJD 2458785.897)
at unfiltered 9.4 magnitude, resulting in transient designation
TCP~J18395972-1025415 when reported to CBAT, and independent discoveries by
H. Nishimura and S. Kaneko (also from Japan) were soon reported to CBAT by
S. Nakano. On Oct 29.524 UT, via VSNET-alert 23669, P. Schmeer noted the
coincidence of TCP~J18395972-1025415 with the new and unclassified transient
ASASSN-19aad discovered by the ASASSN survey on Oct 29.05 (HJD
2458785.55) at $g$=11.5 mag. Schmeer also noted the positional
coincidence with a faint progenitor source recorded by Pan-STARRS1 at
$i$=20.8 and $z$=20.0 mag (and undetected in $g$ and $r$), and based on the
apparent great outburst amplitude, he suggested the object likely to be a
nova. Confirmation as a nova came soon afterward by
\citet{2019ATel13241....1W} via spectroscopic observations obtained on Oct
29.81 UT with the Liverpool telescope, which revealed several broad emission
lines flanked by P-Cyg absorptions, features that were also noted by
\citet{2019ATel13245....1P} on their spectroscopic observations for Oct
29.54 UT. A description of the early evolution of profiles of emission
lines and associated P-Cyg absorptions was provided by
\citet{2020AN....341..781J}, who recorded a series of six high resolution
spectra covering the first $\sim$two weeks of the outburst. The TNS server
assigned the designation AT\,2019tpb to the transient and the General
Catalog of Variable Stars (GCVS) provided the permanent designation
V659~Sct.
Not much else was reported about NSct19 during the rest of 2019, with the
exception of a pointing with the X--ray/UV {\it Swift} satellite by
\citet{2019ATel13252....1S}, which did not detect the nova in the X--rays and
recorded it at UVM2=14.27 mag in the ultraviolet, which led them to estimate
the reddening as $E_{B-V}$$\sim$0.9.
The interest in NSct19 was briefly renewed the following year by
\citet{2020ATel13815....1W}, who reported detecting [SiVI], [SiVII],
[CaVII] and [FeIX] coronal emission lines in an infrared spectrum of the
nova they recorded on 2020 June 6 with IRTF. This triggered our interest in
the nova and by the following day, 2020 June 19, we began a
$B$$V$$R$$I$ photometric and 3300-8000~\AA\ spectroscopic monitoring that
covered the rest of the observing season for the nova up to its Solar
conjunction in late October 2020. \citet{2020ATel13819....1S} obtained a
low-resolution optical spectrum of NSct19 on 2020 June 19, revealing the
nova to be well into its nebular stage, and confirmed the high excitation
conditions seen by \citet{2020ATel13815....1W} in the infrared by detecting
the coronal [FeX] 6375 \AA\ in emission among a rich assortment of
double-peaked emission lines distributed over a wide range of ionization
conditions (from [OI] to [FeVII]).
In this paper we discuss the results of our monitoring of NSct19 during
2020, which coincide with its advanced nebular decline, focussing in
particular on the surprising appearance of a series of very fast ($\sim$2
days duration) and large amplitude flares (up to $\Delta B$=1.7 mag).
\begin{table}
\centering
\caption{Our BVRI photometry on the Landolt system of Nova
Sct 2019. The complete table is only available in
electronic form as supplementary material; a portion is
shown here to provide guidance on its content. HJD is the
heliocentric JD$-$2450000.}
\label{tab:tab1}
\small
\begin{tabular}{@{}c@{~}c@{~}c@{~}c@{~}c@{~}c@{~}c@{~}c@{~}c@{~}c@{}}
\hline
&&\\
HJD & B & err & V & err & R & err & I & err & ID \\
&&\\
9023.483 &15.799 & 0.024 &14.778 & 0.023 &14.038 &0.014 &13.887 & 0.015 & 1507 \\
9023.491 &15.769 & 0.012 &14.729 & 0.013 &14.033 &0.010 &13.801 & 0.012 & 0310 \\
9024.408 &15.884 & 0.017 &14.908 & 0.015 &14.285 &0.012 &14.133 & 0.024 & 1507 \\
9024.496 &15.924 & 0.014 &14.931 & 0.013 &14.279 &0.010 &14.173 & 0.012 & 0310 \\
&&\\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Log of the spectroscopic observations recorded with Asiago 1.22m + B\&C + 300
ln/mm (3300-8000 \AA, 2.3 \AA~pix$^{-1}$).}
\label{tab:tab2}
\begin{tabular}{c@{~~}c@{~~}ccrc}
\hline
&&\\
\multicolumn{3}{c}{date}& UT & expt & HJD \\
&&& middle & (sec) & (-2450000) \\
&&\\
2020 & Jun & 28 & 22:34 & 3600 & 9029.440 \\
2020 & Jul & 05 & 00:03 & 4800 & 9035.502 \\
2020 & Jul & 06 & 00:02 & 5400 & 9036.502 \\
2020 & Aug & 20 & 21:37 & 1080 & 9082.401 \\
2020 & Aug & 25 & 20:49 & 1080 & 9087.367 \\
2020 & Aug & 27 & 19:39 & 1140 & 9089.319 \\
2020 & Sep & 09 & 19:48 & 1200 & 9102.325 \\
2020 & Sep & 10 & 20:03 & 1800 & 9103.335 \\
2020 & Sep & 11 & 19:52 & 1200 & 9104.328 \\
&&\\
\hline
\end{tabular}
\end{table}
\section{Observations}
We have obtained $B$$V$$R$$I$ optical photometry of NSct19 in the
\citet{2009AJ....137.4186L} photometric system from 2020 June 19 to
September 18 at $\sim$daily cadence, and then three more every $\sim$ten
days to October 16, with ANS Collaboration telescopes ID 0310 and 1507;
when close in time, their data were not combined, in order to provide a mutual
check. Data reduction has involved all the usual steps for bias, dark and
flat with calibration images collected during the same observing nights. We
adopted aperture photometry because the sparse field around NSct19 did not
require a PSF-fitting approach. The transformation from the local to the
Landolt standard system was carried out via nightly colour equations
calibrated on a photometric sequence recorded on the same frames with NSct19
and extracted from the APASS DR8 survey \citep{2014CoSka..43..518H}, ported
to the \citet{2009AJ....137.4186L} system via the transformations calibrated
by \citet{2014AJ....148...81M}. Our photometry of NSct19 is listed in
Table~\ref{tab:tab1}. The quoted errors are the quadratic sum of the
Poissonian error on the variable and the error in the transformation to the
standard system via colour equations.
Low resolution spectroscopy of NSct19 has been obtained with the 1.22m
telescope + B\&C spectrograph operated in Asiago by the Department of
Physics and Astronomy of the University of Padova. The CCD camera is a
ANDOR iDus DU440A with a back-illuminated E2V 42-10 sensor, 2048$\times$512
array of 13.5 $\mu$m pixels. A 300 ln/mm grating blazed at 5000~\AA\
results in 2.3~\AA~pix$^{-1}$ dispersion and 3300-8000~\AA\ spectral
coverage. The slit has always been rotated to the parallactic angle for
optimal flux mapping. All data have been similarly reduced within IRAF,
carefully involving all steps connected with correction for bias, dark and
flat, sky subtraction, wavelength calibration and heliocentric correction.
The spectra have been flux calibrated against observations of the nearby
spectrophotometric standard HR~7032 (2$^\circ$ angular distance) observed
each night immediately before or after NSct19, and the zero-points checked
against the result of simultaneous ANS Collaboration $B$$V$$R$$I$
photometry. A log of the spectroscopic observations is given in
Table~\ref{tab:tab2}.
\begin{table}
\centering
\caption{Summary of basic parameters for Nova Sct 2019.}
\label{tab:tab3}
\small
\begin{tabular}{ll}
\hline
&\\
names & Nova Sct 2019 \\
& V659 Sct \\
& TCP~J18395972-1025415\\
& ASASSN 19aad \\
& AT 2019tpb \\
equatorial &RA = 18:39:59.82 \\
&DEC = $-$10:25:41.9 \\
Galactic &$l$ = 022.352 \\
&$b$ = $-$02.227 \\
outburst: maximum &UT = 2019 Oct 31.0 \\
&$V$ = 8.38 mag \\
&type = FeII \\
\multicolumn{1}{r}{decline} &$t_2$ = 7.0 days \\
& $t_3$ = 13.5 \\
\multicolumn{1}{r}{amplitude} &$\Delta I$$\sim$14.4 mag\\
reddening &$E_{B-V}$ = 1.1 \\
distance (from $t_3$)& 5.3 kpc \\
&\\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Average values for the interstellar KI line we measured on
Jack et al. (2020) spectra of Nova Sct 2019.}
\label{tab:tab4}
\begin{tabular}{c@{~}ccc@{~}cc}
\hline
&&\\
\multicolumn{2}{c}{RV$_\odot$} & FWHM & \multicolumn{2}{c}{equiv. width} & $E_{B-V}$ \\
\multicolumn{2}{c}{(km/s)} & (km/s) & \multicolumn{2}{c}{(\AA)} & (mag) \\
&&\\
$-$8.3 & $\pm$0.2 & 19 & 0.206 & $\pm$0.003 & 0.81 \\
$+$33.3& $\pm$1.2 & 29 & 0.080 & $\pm$0.004 & 0.30 \\
&&\\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{FWHM (corrected for instrumental resolution) and velocity separation
of double peaks for some representative lines in the spectrum of
Figure~\ref{fig:fig1}.}
\label{tab:tab5}
\begin{tabular}{lcc}
\hline
&&\\
line & FWHM & peaks\\
& (km s$^{-1}$) &(km s$^{-1}$) \\
&&\\
HeI 7065 & 1930 & 1090 \\
H$\alpha$ & 1950 & 1280 \\
HeII 4686 & 2200 & 1320 \\
$[$OIII$]$ 4363 & 2200 & 1380 \\
$[$FeVII$]$ 6987& 2210 & 1450 \\
$[$FeX$]$ 6375 & 2450 & 1550 \\
&&\\
$[$OI$]$ 6300 & 1430 & \\
$[$OII$]$ 7325 & 1480 & \\
$[$OIII$]$ 5007 & 1930 & \\
$[$NeIII$]$ 3869& 2550 & \\
&&\\
\hline
\end{tabular}
\end{table}
\section{Basic parameters of Nova Sct 2019}
Observing conditions at the time of discovery were far from ideal, with the
object low on the horizon owing to the fast-approaching Solar conjunction, and the
Moon at a short angular distance. Interpolating the AAVSO lightcurve with a
spline function, the time of maximum in $V$-band is derived as Oct 31.0
($\pm$0.5) UT at $V$=8.38($\pm$0.1), with the $B$-band maximum preceding by
$\sim$half a day and the $R$ and $I$ bands delayed by $\sim$half a day relative
to the $V$-band, as expected from an expanding fireball
\citep[eg.][]{2017MNRAS.469.4341M}. A spectrum taken shortly after maximum
brightness is available in the ARAS database \citep{2019CoSka..49..217T},
and it shows NSct19 belonging to the FeII-class of novae
\citep{1992AJ....104..725W}. ASASSN observed the field of NSct19 for only a
couple of days into the outburst because of the approaching Solar
conjunction, deriving $g$$\geq$17.0, $g$=11.57, and $g$=9.54 for Oct 28.06,
Oct 29.06, and Oct 30.06 UT, respectively.
\citet{1987A&AS...70..125V} list +0.23 and $-$0.02 as the intrinsic
(B-V)$_\circ$ colour for novae at maximum and $t_{2}^{V}$, respectively.
From a spline interpolation of the AAVSO lightcurve we estimate
$B$$-$$V$$\sim$1.56($\pm$0.15) at maximum and $B$$-$$V$$\sim$1.1($\pm$0.10)
at $t_{2}^{V}$, corresponding to $E_{B-V}$=1.3 and 1.1 mag, respectively. VSNET
CCD photometry for Oct 30.07 UT (observer K. Yoshimoto) provides $V$=8.59 and
$B$$-$$V$=1.20, resulting in $E_{B-V}$=1.0 mag. \citet{2020AN....341..781J}
reported about saturated multi-components for the interstellar NaI lines
recorded on their TIGRE high resolution spectra. On our request, D. Jack
kindly forwarded us such spectra, and on them we measured the unsaturated,
multi-component profile of the KI 7699~\AA\ interstellar line, obtaining the
values listed in Table~\ref{tab:tab4}. By adopting the calibration of
\citet{1997A&A...318..269M}, the equivalent widths of the two KI components
translate into a total reddening $E_{B-V}$= 0.81 + 0.30 = 1.11 mag. On the
same TIGRE spectra, we also measured an average 0.2568~\AA\ equivalent width
for the diffuse interstellar band (DIB) at 6614 \AA. Adopting for such DIB
the calibration by \citet{2014ASPC..490..183M}, its equivalent width
translates to $E_{B-V}$=1.13 mag. Averaging over the various estimates, we
derive $E_{B-V}$=1.10 ($\pm$ 0.05) as the reddening affecting NSct19.
\citet{2019A&A...622A.186S} have re-calibrated the standard MMRD relation
(mag at maximum vs. rate of decline) on GAIA DR2 parallaxes. Applying it
to the above $t_{3}^{V}$=13.5($\pm$0.7) days leads to an absolute magnitude
M(V)=$-$8.7 and, by combining with $E_{B-V}$=1.10 for a standard $R_V$=3.1
reddening law \citep{1999PASP..111...63F}, to a distance of 5.3 kpc to
NSct19. At such a distance, the Bayestar2019 3D model of Galactic
extinction by \citet{2019ApJ...887...93G} returns $E_{g-r}$$\geq$0.96.
Table~\ref{tab:tab3} summarizes the basic parameters of NSct19.
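The 5.3 kpc distance follows from the standard extinction-corrected distance modulus, $m - M = 5\log_{10}(d/10\,{\rm pc}) + R_V E_{B-V}$. A minimal numerical sketch; the peak apparent magnitude $m_V \approx 8.33$ below is our back-computed assumption (the value that reproduces 5.3 kpc with the quoted $M(V)$ and reddening), not a number stated in the text:

```python
import math

def distance_pc(m_app, M_abs, ebv, rv=3.1):
    """Distance from m - M = 5*log10(d/10pc) + A_V, with A_V = R_V * E(B-V)."""
    mu = m_app - M_abs - rv * ebv      # extinction-corrected distance modulus
    return 10 ** (mu / 5 + 1)

M_V = -8.7    # MMRD absolute magnitude for t3(V) = 13.5 d (Gaia-recalibrated MMRD)
ebv = 1.10    # adopted reddening
m_V = 8.33    # assumed peak apparent magnitude (back-computed, see above)
d = distance_pc(m_V, M_V, ebv)
print(f"d = {d / 1e3:.1f} kpc")   # -> d = 5.3 kpc
```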
\section{Bright flares during the advanced nebular phase}
When we began our monitoring of NSct19 in June 2020, the nova was already
well into its advanced nebular stage as illustrated by the spectrum in
Figure~\ref{fig:fig1}, where [OIII] 4959, 5007 stand out prominently and
stronger than H$\alpha$. Considering the large intensity of [NII] 5755,
some unresolved emission from [NII] 6548, 6584 probably contributes to the
H$\alpha$ profile. The spectrum in Figure~\ref{fig:fig1} is well
representative of our entire June-October observing period, and well
supports the reports by \citet{2020ATel13815....1W} and
\citet{2020ATel13819....1S} for the presence of coronal emission lines in
the infrared and optical spectra of the nova they recorded in June 2020.
[FeX]~6375 is clearly present in our spectra (for a recent census of novae
showing [FeX] see \citet{2021AJ....161..291R}), as well as a full
assortment of [FeVI] and [FeVII] lines in addition to [ArIII], [ArIV] and
[ArV] transitions. Their profiles range in shape from Gaussian-like to
double-peaked, with Table~\ref{tab:tab5} listing the FWHM (corrected for
instrumental resolution) and separation of peaks for some representative
lines. Our FWHMs are remarkably close to those measured by
\citet{2020AN....341..781J} during the first two weeks of the outburst. No
emission component is visible in our spectra to match the higher velocities
characterizing the P-Cyg absorptions tracked by \citet{2020AN....341..781J}.
The photometric evolution of NSct19 as recorded by our June-October 2020
observations is presented in Figure~\ref{fig:fig2}. It is characterized by
a rather slow decline in brightness ($\Delta V$=0.5~mag in 120 days), as
typical of novae during the advanced nebular stage.
The most striking feature of the NSct19 lightcurve in Figure~\ref{fig:fig2}
is however the presence of a number of {\it flares}, short-lived
brightenings of between 0.4 and 1.7 magnitudes in $V$ and $B$, which are
superimposed onto the otherwise normal and smooth decline. Such events are
extremely rare in novae (see discussion below in sect. 6.2) and mark NSct19
as an object of special interest. These flares should not be confused with the jitters
many slow novae present much earlier in their evolution, during the long
plateau they experience around maximum brightness
\citep[eg.][]{2010AJ....140...34S}, or around the transition from optically
thick to thin conditions \citep[eg.][]{1964gano.book.....P}.
To put the detection of flares into context, in Figure~\ref{fig:fig3} we
have built a more comprehensive lightcurve of NSct19 for 2020, by combining
our $B$-band data with $g$-band measurements collected by ASASSN
\citep{2014ApJ...788...48S, 2017PASP..129j4502K} and ZTF patrol surveys
\citep{2019PASP..131a8003M, 2019PASP..131a8002B}, that we have retrieved
from their respective databases. An offset has been applied to $g$-band
data (+0.7 mag for ASASSN, +0.9 mag for ZTF) to bring them onto the same
scale as the $B$-band data.
The photometric decline of NSct19 during 2020 has been characterized by two
different slopes, as clearly visible in Figure~\ref{fig:fig3}: an initial
(February through July) faster decline at 0.0067 mag~day$^{-1}$, followed by
a slower one at 0.0027 mag~day$^{-1}$. The flares appeared at the end
of the faster-decline portion of the lightcurve, and they ceased as NSct19
settled onto the slower-decline descent. The change in decline speed was
probably governed by some adjustment in the recombination vs.
photo-ionization balance of the ejecta, which may have been driven by
changes in the rate of nuclear burning on the central WD and/or in its
out-flowing wind, and by the possible injection of new material into the
inner circumstellar space as a result of the repeated flares.
The lower panel of Figure~\ref{fig:fig3} zooms in on the time interval covered
by the flares, which are highlighted by the yellow vertical bands. There are
nine of them, and their epochs are listed in Table~\ref{tab:tab6}. The flares seem to
follow a precise temporal sequence, as illustrated by Figure~\ref{fig:fig4}
where we have plotted the time interval between two successive flares. A
tight linear trend is obvious, and the small deviations from it could be
easily accounted for by the limited accuracy to which the time of maxima can
be derived with the available data (sampling time $\sim$0.5 day).
Extrapolating the linear trend beyond the recorded nine flares allows us to
predict the times of occurrence of any further such event preceding or
following those actually observed, and such times are marked with pink
arrows in the lower panel of Figure~\ref{fig:fig3}. The arrows coincide in
time with observations that caught NSct19 at the normal quiescence level,
excluding the possibility that any further flare occurred without being
recorded (at least any other flare obeying the linear trend of
Figure~\ref{fig:fig4}).
\begin{table}
\centering
\caption{Epochs of the nine flares exhibited by Nova Sct 2019.
They correspond to the brightest photometric observation
recorded for the given event, by either us, ASASSN, or ZTF. Given the
sparse sampling, the actual epoch of true peak brightness may differ by up
to $\sim$0.5 day from the listed values.}
\label{tab:tab6}
\begin{tabular}{ccc}
\hline
&&\\
flare & HJD & UT date \\
N. & (-2450000) & (2020) \\
&&\\
1 & 9005.302 & June 4.802 \\
2 & 9013.731 & June 13.231 \\
3 & 9021.440 & June 20.940 \\
4 & 9028.855 & June 28.355 \\
5 & 9036.398 & July 05.898 \\
6 & 9042.828 & July 12.328 \\
7 & 9048.455 & July 17.955 \\
8 & 9053.527 & July 23.027 \\
9 & 9058.422 & July 27.922 \\
&&\\
\hline
\end{tabular}
\end{table}
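The temporal ordering of the flares can be checked directly from the epochs in Table~\ref{tab:tab6}: the inter-flare intervals shrink almost linearly from 8.43 to 4.90 days. A short sketch (plain least-squares fit; the extrapolated epoch of a hypothetical tenth flare is illustrative only):

```python
# Epochs (HJD - 2450000) of the nine flares, from Table 6.
t = [9005.302, 9013.731, 9021.440, 9028.855, 9036.398,
     9042.828, 9048.455, 9053.527, 9058.422]
gaps = [b - a for a, b in zip(t, t[1:])]   # intervals shrink from ~8.4 to ~4.9 d

# Least-squares linear fit gap_n = p + q*n, to extrapolate the sequence.
n = list(range(len(gaps)))
nbar = sum(n) / len(n)
gbar = sum(gaps) / len(gaps)
q = (sum((ni - nbar) * (gi - gbar) for ni, gi in zip(n, gaps))
     / sum((ni - nbar) ** 2 for ni in n))
p = gbar - q * nbar
print(f"interval shrinks by {-q:.2f} d per cycle")
print(f"hypothetical 10th flare near HJD-2450000 = {t[-1] + p + q * len(gaps):.1f}")
```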
While the $e$-folding time for the brightness decline appears to be similar
for all flares, approximately 50~hours, their amplitudes were not always
the same, with flare N.3 peaking at $\Delta B$=1.7 and N.7 and 8 limited to
$\Delta B$=0.4 mag. Any definitive conclusion about flare amplitudes is
hampered by the sampling time of the observations ($\sim$0.5 days) and the
real possibility that the true maximum was missed altogether for some
of them.
The takeaways from this section are: (1) a series of nine fast-evolving and
large-amplitude flares was recorded during the advanced nebular
decline, just prior to a major change in the rate of decline of NSct19; (2)
the flares are arranged in a time sequence that allows us to conclude safely
that all events have been recorded and none has gone unnoticed; and (3) not all
flares attained the same brightness amplitude.
\section{Anatomy of the flares}
The observations we collected allow us to document in detail the photometric
and spectroscopic characteristics of the flares experienced by NSct19. The
flares best covered by our photometry are the 3rd and the 5th, and
Figure~\ref{fig:fig5} provides a zoom on their $B$$V$$R$$I$ light- and
colour-curves: the two flares behaved very similarly, and the apparent
difference in peak brightness is probably an effect of the observational
sampling.
\subsection{Photometric properties}
The rise to maximum brightness was very fast for all flares. Our
sampling interval places a general upper limit of $\la$1 day on it. A more
stringent value can be derived from the evolution of flare N.4 (cf.
Figure~\ref{fig:fig3}). When we observed NSct19 on June 27.940 UT it was
still at the normal quiescent level observed between flares, but shortly
afterward, on June 28.355 UT, ZTF caught the nova one magnitude brighter
and close to peak flare brightness, implying an upper limit to the rise to
maximum brightness of $\leq$10~hours.
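The $\leq$10 hour limit is simple date arithmetic between the last quiescent observation and the first bright ZTF point:

```python
# Flare N.4: quiescent on June 27.940 UT, near peak on June 28.355 UT (ZTF).
rise_h = (28.355 - 27.940) * 24.0
print(f"rise time <= {rise_h:.1f} hours")   # -> rise time <= 10.0 hours
```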
The photometric colour change associated with the flares is rather peculiar, as
illustrated by Figure~\ref{fig:fig5}: at peak of the flare, NSct19 becomes
{\it bluer} in $B$$-$$V$ by 0.5 mag, and {\it redder} by 0.5 mag in
$V$$-$$I$, while $V$$-$$R$ remains rather flat. In other words, the amount
of brightening is larger at both ends of the optical range ($B$ and $I$
bands) than it is in the middle ($V$ and $R$ bands).
Both the continuum and the emission lines contribute to the flux recorded
through the photometric bands, and disentangling the respective roles played
during flares is impossible on purely photometric grounds. Accurately
fluxed spectra are required for this purpose. Luckily, two of our spectra (June
28.94 and July 06.00 UT, cf. Table~\ref{tab:tab2}) were obtained close in
time to the peak of the 4th and 5th flares, while a third (Jul 05.00 UT) was
observed during the brief quiescence in between them (their epochs are
marked by the dot-dashed vertical lines in Figure~\ref{fig:fig3}). These
three spectra are compared in Figure~\ref{fig:fig6}. The spectra at flare
maxima look almost identical, as do the colour- and
light-curves for the 3rd and 5th flares presented in Figure~\ref{fig:fig5}.
Such similarities support the notion that the mechanism driving the flares
has been one and the same throughout the whole series of nine recorded
events.
Given their similarities, we have averaged the two spectra at flare maximum
and subtracted from the result the spectrum of the in-between quiescence. The
resulting difference-spectrum is plotted in the lower panel of
Figure~\ref{fig:fig6}. We have then integrated the flux of the spectra in
Figure~\ref{fig:fig6} through the transmission profile of the $B$$V$$R$$I$
photometric bands as tabulated by Landolt (1992), with the zero-points being
fixed by repeating an identical operation on the spectrophotometric
standards observed along with NSct19. The resulting magnitudes are listed
in Table~\ref{tab:tab7}, where we also report the photometric magnitudes
corresponding to the flux radiated separately in the continuum (fitted with
a bremsstrahlung distribution) and in the emission lines. From
Table~\ref{tab:tab7} it is evident that the variation going from quiescence
to flare peak is larger for the continuum (1.0 mag) than it is for the
emission lines (0.4 mag). A large change affects the {\it 4640-blend},
probably composed of NIII lines pumped by fluorescence from HeII Ly$\alpha$
via OIII 374.432 \AA\ \citep{1947PASP...59..196B, 2007A&A...464..715S}. The
3$\times$ increase in the intensity of the 4640 blend, which is located
close to the peak of the $B$-band transmission profile, seems primarily
responsible for the larger amplitude (0.8 mag) in $B$ compared to $V$ and
$R$ (both 0.4 mag) for the variation due to the emission lines.
As well illustrated by Figure~\ref{fig:fig6} and Table~\ref{tab:tab7}, in
going from quiescence to flare peak the underlying continuum brightens
similarly at all (optical) wavelengths, so its shape remains unchanged. Just
as the fluorescence-pumped NIII 4640 complex may contribute to a larger
brightening of NSct19 in the $B$ band compared to $V$ and $R$, a similar role
could be played for the $I$ band by the OI 8446 line, fluorescence-pumped by HI
Ly$\beta$ \citep{1947PASP...59..196B}, and located close to the peak of the $I$ band
transmission profile. The OI 8446 line is rather strong in novae, frequently second
only to H$\alpha$ in terms of emitted flux \citep[eg.][]{2014MNRAS.440.3402M}.
Unfortunately, we cannot test this hypothesis with our spectra, which do not
extend redward of 8000~\AA.
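The flux integration behind Table~\ref{tab:tab7} is standard synthetic photometry: convolve the spectrum with each band's transmission curve and fix the zero point on spectrophotometric standards. A minimal sketch of the operation; the triangular band profile and the zero point below are illustrative placeholders, not the Landolt (1992) curves and standard-star zero points actually used:

```python
import math

def synthetic_mag(wave, flux, band_wave, band_trans, zero_point=0.0):
    """Magnitude from integrating a spectrum through a band transmission curve."""
    def interp(x, xs, ys):
        # Linear interpolation of the transmission; zero outside the band.
        if x <= xs[0] or x >= xs[-1]:
            return 0.0
        i = max(j for j in range(len(xs)) if xs[j] <= x)
        w = (x - xs[i]) / (xs[i + 1] - xs[i])
        return ys[i] * (1 - w) + ys[i + 1] * w
    t = [interp(w, band_wave, band_trans) for w in wave]
    # Trapezoidal integration of T(lambda)*F(lambda), normalized by the band area.
    num = sum((t[i] * flux[i] + t[i + 1] * flux[i + 1]) * (wave[i + 1] - wave[i]) / 2
              for i in range(len(wave) - 1))
    den = sum((t[i] + t[i + 1]) * (wave[i + 1] - wave[i]) / 2
              for i in range(len(wave) - 1))
    return -2.5 * math.log10(num / den) + zero_point

# Sanity check: doubling the flux must brighten the magnitude by 2.5*log10(2).
wave = list(range(4000, 6001, 10))          # angstrom grid
flux = [1.0e-13 for _ in wave]              # flat test spectrum
bw, bt = [4700, 5000, 5300], [0.0, 1.0, 0.0]  # toy triangular "V-like" band
m1 = synthetic_mag(wave, flux, bw, bt, zero_point=-21.1)
m2 = synthetic_mag(wave, [2 * f for f in flux], bw, bt, zero_point=-21.1)
print(f"{m1 - m2:.3f}")   # -> 0.753
```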
\subsection{Changes in the emission line profiles}
\begin{table}
\centering
\caption{Photometry from flux integration on spectra of Nova Sct 2019
taken at flare peak (Jun 28.9 and July 6.0) and in-between quiescence
(Jul 5.0). The central column refers to the spectra stripped of their emission lines,
and the right column to the spectra with their underlying continuum subtracted
(approximated by fitting a bremsstrahlung distribution).}
\label{tab:tab7}
\begin{tabular}{@{}c@{~}c@{~}c@{~}c@{~~}c@{~}c@{~}c@{~}c@{~~}c@{~}c@{~}c@{}}
\hline
&&\\
\multicolumn{3}{c}{spectrum}&&\multicolumn{3}{c}{continuum}&&\multicolumn{3}{c}{em. lines}\\ \cline{1-3} \cline{5-7} \cline{9-11}
$B$ & $V$ & $R$ && $B$ & $V$ & $R$ && $B$ & $V$ & $R$\\
&&\\
\multicolumn{11}{c}{\it flare peak}\\
& && & & && & & \\
15.17 & 14.47 & 13.74 && 16.05 & 15.13 & 14.39 && 15.84 & 15.33 & 14.64 \\
& && & & && & & \\
\multicolumn{11}{c}{\it quiescence}\\
&&\\
16.06 & 15.18 & 14.46 && 16.97 & 16.20 & 15.39 && 16.68 & 15.72 & 15.06 \\
&&\\
\multicolumn{11}{c}{\it difference spectrum}\\
&&\\
15.81 & 15.31 & 14.67 && 16.70 & 15.73 & 15.04 && 16.44 & 16.56 & 16.03 \\
&&\\
\hline
\end{tabular}
\end{table}
The changes in the profiles of the emission lines of NSct19 in going from
quiescence to flare peak are illustrated in Figures~\ref{fig:fig7} and
\ref{fig:figr}, which zoom in on the spectra presented in Figure~\ref{fig:fig6}.
For all the emission lines, with the exception of [FeVII] and [FeX], the flare
causes the emission profile to develop a blue-peaked component over an
otherwise flat (e.g. [NII] 5755, [ArIII] 7136, HeI 5876) or rounded
top ([OIII] 4959 and 5007, [OII] 7325). The radial velocity of this blue
peak is $-$650 km/s for all lines, as indicated by the vertical lines in
Figure~\ref{fig:fig7}.
An opposite behavior characterizes the highest-ionization emission lines:
as illustrated by Figure~\ref{fig:fig7} for [FeVII] 5720, 6087 and [FeX]
6375, their line profiles in quiescence are double-peaked, with the blue
peak at $-$650 km/s disappearing during a flare.
Going from quiescence to flare peak also causes the appearance of broad
wings, albeit of low intensity, in the permitted lines, with no
counterpart in the nebular ones, as can be deduced from Figure~\ref{fig:fig6}
by comparing the H$\alpha$ and [OIII] lines. Figure~\ref{fig:figr} zooms in
on the H$\alpha$ wings from Figure~\ref{fig:fig6}, fitted with a simple
Gaussian of FWHM$\sim$2300 km/s.
\section{Discussion}
\subsection{Interpreting the flares}
We assume a standard, spherical arrangement for the material ejected by the
nova during the main 2019 outburst, characterized by internal and external
radii and with the WD at the center (cf. Figure~\ref{fig:fig8}). The
presence of persistent [FeX] emission suggests that the WD was still
burning at its surface at the time of the flares, being hot and bright and
thus exerting its photo-ionizing action through the optically thin ejecta
and counteracting their recombination from higher ionization states.
Averaging over the FWHM of the emission lines and the velocity of P-Cyg
absorptions seen at the time of maximum brightness, we may adopt 1000 km/s
as the expansion velocity of the bulk of the ejecta. In the 10 months
elapsed since the outburst in Oct 2019, at the time of the flares, the ejecta
have expanded to a radius of 170 AU, corresponding to a travel light-time of
2.0 days to cross the diameter of the shell. In other words, an external
observer will receive news from the receding side of the ejecta only 2.0
days after being informed about the approaching one.
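The shell geometry is simple kinematics and can be checked to order of magnitude; the 295-day interval below is our assumption for the quoted "10 months" (it is the value that reproduces the 170 AU radius):

```python
AU_KM  = 1.496e8   # km per astronomical unit
C_KM_S = 2.998e5   # speed of light in km/s

v_exp  = 1000.0    # km/s, adopted bulk expansion velocity of the ejecta
t_days = 295.0     # ~10 months from the Oct 2019 outburst to the flares (assumed)

radius_au = v_exp * t_days * 86400.0 / AU_KM
crossing_days = 2.0 * radius_au * AU_KM / C_KM_S / 86400.0
print(f"shell radius ~{radius_au:.0f} AU, light-crossing time ~{crossing_days:.1f} d")
# -> ~170 AU and ~2.0 d, matching the values quoted in the text
```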
We believe that a flare in NSct19 was initiated by a sudden ejection
(spherical, or at least bi-conical along an axis approximately oriented toward
the line of sight) of a limited amount of material from the central WD. The
FWHM=2300~km/s broad wings visible in the H$\alpha$ flare profile of
Figure~\ref{fig:figr} trace the ejected material, whose mass appears to be much
smaller than that ejected during the main outburst, as the $\sim$1:10 ratio
in the H$\alpha$ flux suggests. The material is optically thick when
expelled, and remains so until after the flare peak, which marks the time
when the expanding pseudo-photosphere reaches its maximum radius. The
expanding pseudo-photosphere formed by the optically thick material causes a
drop in the surface temperature of the central photo-ionizing source,
driving a recombination wave through the ejecta.
The effect is more pronounced at the inner radii of
the ejecta, where the higher electron density allows recombination to
proceed at a faster pace. Emission from [FeVII] and [FeX] is quenched because their
recombination is no longer counteracted by photoionization from the central
source, while a surge is observed in the emission from lower-ionization
lines such as Balmer, HeI, [OI], [OII], [OIII], [NII], [ArIII],
etc., populated by recombination from higher ionization states.
Our spectroscopic observations during flares (Figures~\ref{fig:fig6},
\ref{fig:fig7}, and \ref{fig:figr}) were obtained within hours of the
recorded photometric maximum. Within such a short time interval, only light
from the approaching ejecta has been able to reach the observer (the
light-grey $A$ portion in Figure~\ref{fig:fig8}, where the blue portion of
the emission line profiles originates), while the rest of the ejecta (the
dark-grey $B$ portion in Figure~\ref{fig:fig8} that produces the rest of the
line profiles) still appears to the observer as it was {\it before} the onset
of the flare. In the $A$ portion of the ejecta in Figure~\ref{fig:fig8},
no longer exposed to hard radiation from the central WD, recombination
depletes the medium of the highest-ionization species (like [FeVII] and
[FeX]) and, as a consequence, the blue peak in their double-peaked profiles
fades away. At the same time, recombination from higher ionization
levels in the $A$ region increases the density of lower-ionization species
and boosts the blue peak in their double-peaked profiles.
The return to pre-flare conditions is rather quick, the $e-$folding time for
decline in brightness after a flare peak being $\approx$50~hours (cf.
Figure~\ref{fig:fig5}). Unfortunately, we do not have spectra of NSct19
obtained two or three days past the maximum brightness of a flare; we may
however predict that in such spectra the ratio of blue-to-red peaks in the
double-peaked profiles would appear reversed with respect to
Figure~\ref{fig:fig7}: the strongest peak would be the blue one for [FeVII]
and [FeX], and the red one for the other lines.
\subsection{Flaring novae}
Very few novae have presented sequences of quick flares in their outburst
lightcurves like those displayed by NSct19.
Such events should {\it not} be confused with the chaotic ups and downs that
several novae present during their plateaued maxima, as shown by DQ~Her,
HR~Del, V723~Cas, V2540~Oph, or V1405 Cas among many others
\citep[eg.][]{1964gano.book.....P, 2004PASJ...56S.193K,
2010AJ....140...34S}, nor with the single-secondary maxima of V2362~Cyg,
V1493~Aql, or V2491~Cyg \citep[eg.][]{2004AJ....128..405V,
2008A&A...492..145M, 2009ApJ...694L.103H, 2010PASJ...62.1103A,
2011NewA...16..209M}, nor even with the (periodic) oscillation that some
novae present around the time of transition from optically-thick to -thin
conditions, like V603~Aql, V1494~Aql, or LZ~Mus
\citep[eg.][]{1960stat.book..585M, 1995cvs..book.....W, 2000NewAR..44P..65R,
2003ASPC..303..232R, 2003A&A...404..997I}. Also the rapid variability
presented in quiescence by V2487 Oph \citep{2022MNRAS.512.1924S} or by
systems like TV~Col \citep{2022Natur.604..447S} represents an entirely
different phenomenon, driven by strong magnetic fields.
Our definition of a {\it flaring nova} is the following:
\begin{enumerate}
\item the flares appear superimposed on an otherwise smooth and normally
evolving lightcurve of a nova outburst;
\item the flares are isolated and very quick events, coming in sequences;
\item a sequence of flares shows a clearly ordered pattern, in terms
of time intervals and/or energy released;
\item the rise-time to flare peak brightness is rather short ($\leq$1
day), and the exponential decline is characterized by a quick
$e$-folding time, of the order of a few days;
\item the large amplitude of the flares ($\Delta m$$\geq$1~mag) makes them
outstanding features of the lightcurve.
\end{enumerate}
To the best of our knowledge, there are only four novae that satisfy these
criteria: V458~Vul \citep{2015ARep...59..920T}, V4745 Sgr
\citep{2005A&A...429..599C, 2011PASJ...63..159T}, V5588~Sgr
\citep{2015MNRAS.447.1661M}, and NSct19 discussed in this paper. All of
them started as FeII-type novae with slow/modest expansion velocities, with V458~Vul
and V5588~Sgr turning hybrid \citep{1992AJ....104..725W} at later times.
All presented the [FeX] coronal emission line in their spectra, with the
exception of V4745~Sgr for which the spectroscopic monitoring may have
stopped too early to catch the high-ionization phase. Line profiles at the
time of flares suggest that a small amount of material (much smaller than
that expelled during the initial nova eruption) is ejected at
high velocity.
Two main characteristics set NSct19 apart from the other flaring novae:
($a$) in NSct19 the flares were observed to occur at late times, during the
advanced nebular phase, while for the other novae the flares appeared very
early in the outburst, close to maximum brightness, and ($b$) the time
interval between consecutive flares {\it decreases} in NSct19 while it {\it
increases} for the others. There are many other differences that
contribute to making the group of flaring novae rather heterogeneous, such as
the fact that V458~Vul and V5588~Sgr did not develop a nebular spectrum, contrary
to V4745 Sgr and NSct19, or that the photometric colours during a flare evolved in
opposite directions for NSct19 and V5588~Sgr (multi-band lightcurves are not
available for V458~Vul and V4745 Sgr).
A detailed comparison of the properties of the four flaring novae is well
beyond the scope of the present paper, but a comparative study would
certainly be instructive to carry out \citep[eg.][]{2009ApJ...701L.119P},
especially if supported by basic information like orbital periods and
inclinations, WD mass, companion type, and presence and role of magnetic
fields, all rather difficult to obtain in view of the faintness of these
novae in quiescence.
\section{Conclusions}
Observations monitoring the evolution of novae usually become sparse or
even stop when novae enter and progress through the nebular stage, on the
assumption that changes will be mostly slow, gradual, and predictable. Our
observations of NSct19 clearly prove that this is not always the case, with
the catching of rather unexpected phenomena rewarding a persistent
observational effort.
NSct19 displayed flaring of a nature never before detected: between days
+217 and +271 from optical maximum, nine short-lived brightenings of between
0.4 and 1.7 magnitudes in $V$ and $B$ were observed, all rather similar in
their photometric and spectroscopic development. At the time, nuclear
burning was still ongoing at the surface of the WD, the nova was well into
the advanced nebular stage, $\sim$7 mag below maximum brightness, and its
optical and IR spectra displayed forbidden lines of a high ionization degree
(e.g. [ArV], [FeVII], [FeX]). The flares appeared all of a sudden, without
precursor events, and
the sequence neatly stopped after the ninth flare. The time interval
between the flares followed an ordered sequence, declining linearly from
8.43 to 4.90 days, which safely allows us to exclude that any other flare
occurred without being recorded by the observations. The colour and
spectroscopic evolution of the flares indicates that their origin resides in
repeated episodes of mass ejection from the WD.
A few other novae have been noted to show flares, but they appeared very
early in the outburst, close to maximum brightness, and with the time interval
between consecutive flares increasing, while it was instead decreasing for
NSct19. Available observations do not always allow us to constrain the origin
of the flares, but at least for V5588~Sgr they were traced to episodic
mass ejections from the WD, similarly to NSct19.
\section*{Acknowledgements}
We thank the Referee (Stewart Eyres) for valuable suggestions. We also
acknowledge the support of P. Valisa, P. Ochner, and A. Frigo to this
project.
\section{Data availability} \label{sec:data}
The data underlying this article will be shared on reasonable
request to the corresponding author.
\bsp %
\label{lastpage}
Title:
Mechanisms for high spin in black-hole neutron-star binaries and kilonova emission: inheritance and accretion |
Abstract: Black-hole neutron-star binary mergers, whose existence has been confirmed by
gravitational-wave detectors, can lead to an electromagnetic counterpart called
a kilonova if the neutron star is disrupted prior to merger. The observability
of a kilonova depends crucially on the amount of neutron star ejecta, which is
sensitive to the aligned component of the black hole spin. These binaries
likely originate from the evolution of isolated stellar binaries. We explore
the dependence of the ejected mass on two main mechanisms that provide high
black hole spin. When the black hole inherits a high spin from a Wolf-Rayet
star that was born with at least $\sim 10\%$ of its breakup spin under weak
stellar core-envelope coupling, which is relevant for all formation pathways,
the median of the ejected mass is $\gtrsim 10^{-2}$ M$_{\odot}$. Though only
possible for certain formation pathways, similarly large ejected mass results
when the black hole accretes $\gtrsim 20\%$ of its companion's envelope to gain
a high spin, and a more massive stellar progenitor provides smaller ejected
mass compared to when the black hole inherits high spin. Together, these
signatures suggest that a population analysis of black hole masses and spins in
black-hole neutron-star binary mergers may help distinguish between mechanisms
for spin and possible formation pathways. Using a novel kilonova light curve
model we show that current capabilities are unlikely to observe a counterpart,
however future facilities such as the Vera Rubin Observatory will likely detect
counterparts even if the aligned dimensionless spin of the disrupting black
hole is as low as $\sim 0.2$. Our model predicts kilonovae as bright as $M_i
\sim -14.5$ for an aligned black hole spin of $\sim 0.9$.
https://export.arxiv.org/pdf/2208.00973
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
black-hole neutron-star mergers -- gravitational waves -- transients: novae -- gamma-ray bursts -- black hole physics
\end{keywords}
\section{Introduction}
\label{sec:Intro}
Although the majority of gravitational waves observed are sourced by the merger of binary black holes, the third observing run of LIGO/Virgo reported the detection of another class of compact binary: two black-hole neutron-star (BHNS) binary mergers, GW200115 and GW200105 \citep{LIGO2021BHNS}. Four additional candidate events were identified but not confidently detected. Assuming uninformative priors, the masses of GW200115 and GW200105 are $8.9^{+1.2}_{-1.5}$ M$_{\odot}$ and $1.9^{+0.3}_{-0.2}$ M$_{\odot}$, and $5.7^{+1.8}_{-2.1}$ M$_{\odot}$ and $1.5^{+0.7}_{-0.3}$ M$_{\odot}$, respectively, at the 90\% credible level. The spin of the black hole in GW200115 is not tightly constrained but may be misaligned, as it is inferred to have a component below the orbital plane at 88\% probability, while the dimensionless spin magnitude of the black hole in GW200105 is likely $< 0.23$ and its direction is unconstrained. Observations with future ground-based detectors may uncover more BHNS binaries and shed light on their peculiar properties \citep{Brown2021}.
If a neutron star (NS) is tidally disrupted by its black hole (BH) companion rather than directly plunging beyond its event horizon \citep{Foucart2020}, $\gamma$-ray emission in the form of a short gamma-ray burst (GRB) may result from accretion onto the remnant stellar-mass BH \citep{Rosswog05,Lee2007,Paschalidis15}, and radioactive decay in neutron-rich ejecta may produce a roughly isotropic optical/infra-red emission known as a kilonova \citep{Li98,Roberts2011,Metzger2012,Barnes13,Metzger17}. The observability of an electromagnetic counterpart depends crucially on the amount of mass ejected prior to merger. This is sensitive to the binary mass ratio, the compactness of the NS (i.e., its mass and radius), and the aligned spin component of the BH \citep{Foucart2012,Foucart2018,Kruger20} because the radius of the BH event horizon is smaller for higher (prograde) aligned spin. Although optical follow-up was not completely comprehensive, e.g., only $\approx 50\%$ of the sky location probabilities were searched by the \emph{Zwicky} Transient Facility \citep{Anand2021}, no electromagnetic counterparts were observed for either BHNS event detected by LIGO/Virgo, consistent with theoretical expectations from their measured spins and mass ratios \citep{Gompertz2022}.
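The spin dependence noted above enters through the innermost stable circular orbit (ISCO): higher prograde aligned spin pulls the ISCO inward, so the NS can be disrupted outside it and leave ejecta. As illustrative context, a sketch of the standard Bardeen, Press \& Teukolsky (1972) Kerr ISCO formula (not the \citet{Foucart2018} fitting formula itself):

```python
import math

def r_isco_over_M(chi):
    """Kerr ISCO radius in units of GM/c^2; chi > 0 denotes prograde aligned spin."""
    z1 = 1 + (1 - chi**2) ** (1/3) * ((1 + chi) ** (1/3) + (1 - chi) ** (1/3))
    z2 = math.sqrt(3 * chi**2 + z1**2)
    sign = -1 if chi >= 0 else 1       # minus branch for prograde orbits
    return 3 + z2 + sign * math.sqrt((3 - z1) * (3 + z1 + 2 * z2))

for chi in (0.0, 0.2, 0.5, 0.9):
    print(f"chi = {chi:.1f}: r_isco = {r_isco_over_M(chi):.2f} GM/c^2")
# chi = 0 recovers the Schwarzschild value of 6 GM/c^2; increasing prograde
# spin shrinks the ISCO, easing tidal disruption outside it.
```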
Theoretically, BHNSs can form in two broad scenarios: the dynamical channel where the compact binary forms in a dense stellar cluster \citep{Benacquista2013}, and the isolated channel where an isolated stellar binary forms into the compact binary through the various stages of binary evolution \citep{Postnov2014}. Although both channels of formation may explain the origin of the LIGO/Virgo population of presently known binary BHs \citep[for a review, see e.g.][]{Mapelli2020}, the dynamical channel is expected to produce a substantially smaller number of merging BHNS binaries \citep{Clausen2013,Ye2020} compared to what is estimated from current LIGO/Virgo observations, and possibly no counterparts \citep{Sedda2020}. The merger rates of isolated BHNS binaries are highly uncertain due to the uncertainties of stellar binary evolution, galactic star-formation history, and cosmic evolution of the metallicity dependence of star-forming regions \citep[e.g.][]{Dominik2015,Giacobbo2018,Belczynski2020,Broekgaarden2021}. The spins of BHNS binaries remain observationally uncertain \citep{Miller2015}.
Population synthesis studies of merger rates of isolated BHNSs typically find that the vast majority of binaries will not result in observable electromagnetic counterparts \citep[e.g.][]{Zhu2021,Fragione2021}. The fraction of BHNSs that yield significant ejecta can be sensitive to the assumptions that are employed. \citet{Drozda2020} found this fraction to be $\lesssim 20\%$ when the core-envelope coupling of their stellar progenitors is sufficiently weak to provide a high dimensionless BH spin magnitude, i.e., $\chi = 0.9$. A similar fraction of binaries is reported under the ad-hoc assumption that the BH spin magnitude is $\chi = 0.5$ \citep{Broekgaarden2021}. If stellar angular momentum transport is efficient, i.e., core-envelope spin coupling is strong, detection of an electromagnetic counterpart of a BHNS binary merger could imply the BH experienced significant accretion or its progenitor was tidally synchronized \citep{Belczynski2020}.
Although accretion in stable mass transfer is usually considered in such studies, it has not been investigated as a source for significant ejected mass.
Motivated by these previous studies, we explore the dependence of the ejected mass of BHNS binaries in two evolutionary pathways of the model of isolated formation of \citet{Steinle2021}, which parameterized various processes that are pertinent for evolving the binary spin magnitudes and directions. We focus on two mechanisms by which the BH obtains a high spin magnitude: i) inheritance of natal spin via weak core-envelope coupling of the stellar progenitors, and ii) accretion during stable mass transfer. We use the formula of \citet{Foucart2018} to determine whether the NS is tidally disrupted. This allows us to parameterize the ejected mass of the NS with the fraction, $f_{\rm B}$, of the breakup spin of the progenitor WR star, and with the fraction, $f_{\rm a}$, of the donor's envelope that is accreted in stable mass transfer. We do not attempt to compute the merger rate of our BHNS distributions as it would require the use of population synthesis.
This paper is organized as follows: in section \ref{sec:Meth} we detail our model of BHNS formation, NS tidal disruption, and counterpart light curves; in section \ref{sec:Results} we demonstrate the dependence of the ejected mass and the corresponding light curves on the mechanisms for obtaining high BH spin magnitude; and we conclude with a summary and discussion of implications in section \ref{sec:Disc}.
\section{Methodology}
\label{sec:Meth}
\subsection{Black-hole neutron-star binary formation}
\label{subsec:Formation}
A zero-age main sequence (ZAMS) binary star is initialized at the binary separation $a_{\rm ZAMS}$ with metallicity $Z$, and with masses $m_{1, \rm ZAMS}$ of the primary star and $m_{2, \rm ZAMS}$ of the secondary star such that the ZAMS mass ratio is $q_{\rm ZAMS} = m_{2, \rm ZAMS}/m_{1, \rm ZAMS} \leq 1$. For a detailed description of this model, see \citet{Steinle2021}, and for a detailed review of the physics of the isolated channel, see e.g., \citet{Postnov2014}.
Numerous astrophysical processes of isolated binary evolution are parameterized. Roche lobe overflow (RLOF) initiates mass transfer either as a phase of common-envelope evolution (CEE), which drastically shrinks the binary separation, or stable mass transfer (SMT), where the companion gains mass and spin angular momentum. The donor completely loses its envelope in mass transfer and its core emerges as a Wolf-Rayet (WR) star. Four pathways, labeled A1, A2, B1, and B2, are treated in this model. When the primary and secondary stars undergo SMT (CEE) and CEE (SMT), respectively, the binary evolves in Pathway A1 (B1), but if the secondary star undergoes RLOF before the core collapse supernova (SN) of the primary star the binary evolves in Pathway A2 (B2). We only present results for A1 and B1, which are depicted in Figure~\ref{F:Diagram}, because the boundary between pathways A1 and A2 (or equivalently, B1 and B2) defined by the mass ratio is large, i.e., $q_{\rm ZAMS} \approx 1$, for the total masses considered here. Additionally, equal mass binaries are unlikely to form BHNSs as either binary BH or binary NS formation is more likely.
To form BHNS binaries, rather than binary BHs, we modify the model of \citet{Steinle2021}. Most importantly, we examine stars with lower ZAMS masses, i.e., $13 \leq m_{\rm ZAMS}/{\rm M}_{\odot} \leq 25$. These stars may form either NSs or BHs depending on the amount of fallback accretion onto the proto-NS during core collapse.
We use the \texttt{StarTrack} implementation of the rapid energy-expenditure mechanism for the SN explosion, i.e., Eq.'s (10-14) of \citet{Fryer2012}. This provides the mass of the compact object and the fallback parameter $f_{\rm fb}$ which determines the fraction of material that falls onto the collapsing core after it was ejected during the SN. The fraction $f_{\rm fb}$ monotonically increases with increasing initial mass leading to larger compact remnant masses. This more physically motivated prescription produces a smooth transition across the uncertain parameter space between the \citet{Hurley2000} NS model (i.e., their Eq. (92)) and the BH model of \citet{Steinle2021}. We assume a mass boundary of $m = 2.5$ M$_{\odot}$ between NS and BH formation, as in \cite{Fryer2012}. Fallback accretion suppresses the natal kick imparted on the compact object that forms according to $v_{\rm k,fb} = (1 - f_{\rm fb})v_{\rm k}$ \citep{Fryer2012}, where BHs that form from stars with ZAMS masses $\gtrsim 23$ M$_{\odot}$ are assumed to experience complete fallback, i.e., $f_{\rm fb} = 1$, and do not experience a natal kick \citep{Heger2003}. The natal kick velocity magnitude $v_{\rm k}$ is drawn from a Maxwellian distribution with dispersion $\sigma$, and the natal kick velocity direction is spatially isotropic. A smaller value of $\sigma$ is required in Pathway A1 than in B1 to avoid unbinding too many binaries, as the primary SN (SN1) occurs before CEE has decreased the orbital separation. As the remnant masses in our model are insensitive to $Z$, we only consider low metallicity ZAMS stars, i.e. $Z = 2\times10^{-4}$, where the effect of stellar winds on the spins of the stellar progenitors is negligible (see e.g., \citet{Vink2001}).
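The fallback-suppressed kick prescription above can be sketched numerically. The helper below is illustrative only (the function name and fixed seed are ours, not from \texttt{StarTrack}); it exploits the fact that the norm of an isotropic 3D Gaussian is Maxwellian-distributed.

```python
import numpy as np

rng = np.random.default_rng(1)

def natal_kick(sigma, f_fb, n):
    """Draw n fallback-suppressed natal kicks, v_k,fb = (1 - f_fb) v_k,
    with speed v_k Maxwellian (dispersion sigma, km/s) and an isotropic
    direction on the sphere."""
    # Norm of an isotropic 3D Gaussian with per-component dispersion
    # sigma is Maxwellian-distributed.
    v = rng.normal(0.0, sigma, size=(n, 3))
    speed = np.linalg.norm(v, axis=1)
    direction = v / speed[:, None]   # isotropic unit vectors
    return (1.0 - f_fb) * speed, direction

# Complete fallback (f_fb = 1) suppresses the kick entirely.
speeds, dirs = natal_kick(sigma=30.0, f_fb=0.2, n=50000)
```

The mean Maxwellian speed is $\sigma\sqrt{8/\pi} \approx 1.60\,\sigma$, so with $\sigma = 30$ km/s and $f_{\rm fb} = 0.2$ the mean suppressed kick is $\approx 38$ km/s.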
Prior to core collapse, the WR star can experience tides from its companion. The tidal torque is a strong function of the binary separation $a$, i.e., the synchronization timescale $t_{\rm sync} \sim \left(a/{\rm R_{\odot}}\right)^{17/2}\left(m/{\rm M_{\odot}}\right)^{-7.54}$ where $m$ is the WR mass. This implies that tides are effective at producing high spin magnitudes after CEE. Tidal synchronization and alignment would seem to be a natural mechanism for producing significant ejected mass as the ejected mass is a strong function of the aligned component of the BH spin magnitude. However, tides in Pathway A1 are only effective on the secondary WR star as the primary BH forms before CEE occurs, and thus will not produce observable counterparts unless SMT onto the secondary main sequence star is sufficient to cause a mass-ratio reversal. \cite{Broekgaarden2021} find that highly conservative mass transfer may reverse the binary mass ratio to allow a tidally spun-up secondary star to form a highly spinning BH. Consistent with their results, we find this is realizable in Pathway A1 only for highly conservative mass transfer in a narrow region of parameter space, i.e. $q_{\rm ZAMS} > 0.9$, and therefore we do not explore this in detail here. In Pathway A2, tides can affect both WR stars potentially allowing for a high BH spin, but this requires fine-tuning to ensure that the secondary is still not too massive to form a BH, e.g., $q_{\rm ZAMS} \sim 0.95$. If tidal interactions were to produce a highly spinning and aligned WR star such a system may yield significant ejected mass \citep{Hu2022}.
Given the difficulties with tidal spin-up, we focus on two alternative mechanisms that may provide high BH spin: i) the BH inherits a high spin from weak core-envelope coupling of the stellar progenitors (relevant in pathways A1 and B1), and ii) the BH gains a high spin magnitude from accretion during SMT (relevant in Pathway B1).
A high dimensionless BH spin magnitude can be inherited in both pathways via minimal core-envelope coupling if its WR progenitor has a sufficiently large initial spin, $\chi_{\rm 0} = f_{\rm B} \chi_{\rm B}$, which is parameterized by the fraction $f_{\rm B}$ of the dimensionless breakup spin, defined as
\begin{align} \label{E:Break}
\chi_{\rm B} = \frac{c |\mathbf{S}_{\rm B}|}{G m^2} = \frac{c r_{\rm g}^2 R^2 \Omega_{\rm B}}{G m} = r_{\rm g}^2 \left( \frac{c^2R}{Gm} \right)^{1/2},
\end{align}
where $c$ is the speed of light, $G$ is the gravitational constant, $m$ is the mass of the WR star, $R$ is the WR stellar radius (see Eq.~(78) of \citet{Hurley2000}), $r_{\rm g}$ is the WR radius of gyration, and $\Omega_{\rm B}$ is the breakup angular frequency. For WR stars with $r_{\rm g}^2 = 0.075$, $\chi_{\rm B} \sim 15$ for $m \sim 10$ M$_{\odot}$. In the opposite extreme of maximal core-envelope coupling, angular momentum is efficiently transferred from the stellar progenitor's core to its envelope which is lost in mass transfer. This spin-down is modeled isotropically (see Eq.~(6) of \citet{Steinle2021}) and produces a natal WR dimensionless spin $\chi_0 \sim 0.001$ for $Z = 2\times10^{-4}$.
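As a sanity check on the quoted $\chi_{\rm B} \sim 15$, Eq.~(\ref{E:Break}) can be evaluated directly. In this minimal sketch the WR radius of $0.7\,{\rm R}_{\odot}$ is an assumed representative value, not the \citet{Hurley2000} fit itself.

```python
import math

# cgs constants
G = 6.674e-8      # gravitational constant
c = 2.998e10      # speed of light, cm/s
Msun = 1.989e33   # solar mass, g
Rsun = 6.957e10   # solar radius, cm

def chi_breakup(m_msun, r_rsun, rg2=0.075):
    """Dimensionless breakup spin chi_B = rg^2 * sqrt(c^2 R / (G m))."""
    return rg2 * math.sqrt(c**2 * (r_rsun * Rsun) / (G * m_msun * Msun))

chi_B = chi_breakup(10.0, 0.7)   # ~14 for an assumed WR radius of 0.7 Rsun
```

For $m = 10$ M$_{\odot}$ this recovers $\chi_{\rm B} \sim 14$, consistent with the $\chi_{\rm B} \sim 15$ quoted above.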
Accretion during SMT can result in a highly spinning primary BH in Pathway B1 depending on the fraction $f_{\rm a}$ of gas that is accreted. The increase in its mass $m_{\rm BH}$ and dimensionless spin $\chi$ per unit of accreted rest mass are given by,
\begin{subequations}
\begin{align}
\frac{dm_{\rm BH}}{dm} &= E(\chi)~, \label{E:BHAcc1} \\
\frac{d\chi}{dm} &= \frac{L(\chi)}{m_{\rm BH}^2} - \frac{2\chi E(\chi)}{m_{\rm BH}}~, \label{E:BHAcc2}
\end{align}
\end{subequations}
where $E(\chi)$ and $L(\chi)$ are the specific energy and orbital angular momentum of a massive particle at the (prograde) innermost stable circular orbit (ISCO) of the Kerr BH \citep{Bardeen1972}. We allow for super-Eddington accretion as in \citet{Steinle2021} (see their Appendix B.2). Although the secondary star accretes on the main sequence in Pathway A1, this accretion is ineffective at yielding a highly spinning BH: any spin that is gained is either not inherited by the core under minimal core-envelope coupling or is dissipated during mass transfer under maximal core-envelope coupling.
\subsection{The ejected mass of a tidally disrupted neutron star}
\label{subsec:Ejecta}
Near the end of the BHNS binary inspiral, the NS can be tidally disrupted by its BH companion. A simple criterion for whether this produces an observable electromagnetic signal is obtained by comparing the separation, $r_{\rm tid}$, at which tidal disruption occurs with the radius, $R_{\rm ISCO}$, of the innermost stable circular orbit (ISCO) of the BH. Ignoring general relativistic effects, $r_{\rm tid}$ can be approximated by balancing the gravitational acceleration due to the NS, $\sim m_{\rm NS}/R_{\rm NS}^2$, with the tidal acceleration due to the BH, $\sim (m_{\rm BH}/r_{\rm tid}^3)R_{\rm NS}$, yielding $r_{\rm tid} \sim R_{\rm NS}(m_{\rm BH}/m_{\rm NS})^{1/3}$. For a Kerr BH with dimensionless spin $\chi \equiv cS/Gm_{\rm BH}^2$, where $S$ is the magnitude of the spin angular momentum, $R_{\rm ISCO}$ is given by \citet{Bardeen1972} and depends sensitively on $\chi$. Tidal disruption is preceded by mass-shedding of the outer layers of the NS, which begins once the tidal force exerted by the BH overcomes the self-gravity of the NS, at separations large compared to both $r_{\rm tid}$ and $R_{\rm ISCO}$. However, mass-shedding does not guarantee tidal disruption: if the NS plunges into the BH before being disrupted, only a low-mass accretion disk may form and an observable electromagnetic counterpart is very unlikely.
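This back-of-the-envelope comparison can be made concrete. The sketch below pairs the Newtonian $r_{\rm tid}$ estimate with the exact prograde Kerr $R_{\rm ISCO}$, using $GM_{\odot}/c^2 \approx 1.477$ km; the binary parameters in the test are illustrative.

```python
import math

def r_isco_hat(chi):
    """Prograde ISCO radius in units of G m_BH / c^2 (Bardeen et al. 1972)."""
    z1 = 1 + (1 - chi**2)**(1/3) * ((1 + chi)**(1/3) + (1 - chi)**(1/3))
    z2 = math.sqrt(3 * chi**2 + z1**2)
    return 3 + z2 - math.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def may_disrupt(m_bh, m_ns, r_ns_km, chi):
    """Crude observability criterion: r_tid > R_ISCO (masses in Msun)."""
    r_tid = r_ns_km * (m_bh / m_ns)**(1/3)    # Newtonian estimate, km
    r_isco = r_isco_hat(chi) * 1.477 * m_bh   # km
    return r_tid > r_isco
```

For a $4.7 + 1.3$ M$_{\odot}$ binary with $R_{\rm NS} = 12$ km, disruption outside the ISCO requires a high aligned spin: $r_{\rm tid} \approx 18$ km, whereas $R_{\rm ISCO} \approx 42$ km for $\chi = 0$ but only $\approx 16$ km for $\chi = 0.9$.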
The tidal disruption criterion, and the computation of the amount of ejected mass, is more accurately determined by fits to results of numerical relativity simulations. These typically quantify the criterion in terms of the BHNS binary mass ratio $Q \equiv m_{\rm BH}/m_{\rm NS} \geq 1$, $\hat{R}_{\rm ISCO} = R_{\rm ISCO}/m_{\rm BH}$ which depends on the aligned component of the BH spin, and the NS compactness $C_{\rm NS} = Gm_{\rm NS}/(R_{\rm NS}c^2)$. We use the formula of \citet{Foucart2018} to determine whether the NS is tidally disrupted and to compute the corresponding ejected mass,
\begin{align}\label{E:EjectedMass}
\begin{aligned}
m_{\rm ejecta} = \left[ {\rm Max}\left( \alpha\frac{1 - 2C_{\rm NS}}{\eta^{1/3}} - \beta\hat{R}_{\rm ISCO}\frac{C_{\rm NS}}{\eta} + \gamma , 0 \right) \right]^\delta m_{\rm NS}\,
\end{aligned}
\end{align}
where $\eta = Q/(1 + Q)^2$ is the symmetric mass ratio, which is invariant under exchange of the NS and BH labels, and $\alpha = 0.406$, $\beta = 0.139$, $\gamma = 0.255$, and $\delta = 1.761$ are constants derived from fitting the above model to 75 numerical relativity simulations \citep{Foucart2018}. Eq.~(\ref{E:EjectedMass}) is nonzero if and only if the NS is tidally disrupted. The ejected mass is largest for a small BH mass and a high, aligned BH spin, as $\hat{R}_{\rm ISCO}$ is smallest for a (prograde) maximally spinning BH. A larger NS mass or a smaller NS radius results in a larger compactness $C_{\rm NS}$ and a smaller ejected mass.
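A direct transcription of Eq.~(\ref{E:EjectedMass}) makes the spin dependence explicit. The $\hat{R}_{\rm ISCO}(\chi)$ below is the standard prograde Kerr expression, and the binary parameters in the usage line are illustrative.

```python
import math

def r_isco_hat(chi):
    """Prograde Kerr ISCO radius in units of G m_BH / c^2."""
    z1 = 1 + (1 - chi**2)**(1/3) * ((1 + chi)**(1/3) + (1 - chi)**(1/3))
    z2 = math.sqrt(3 * chi**2 + z1**2)
    return 3 + z2 - math.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def m_ejecta(m_bh, m_ns, chi_bh, r_ns_km):
    """Foucart et al. (2018) fit for the ejected mass, in Msun."""
    alpha, beta, gamma, delta = 0.406, 0.139, 0.255, 1.761
    c_ns = 1.477 * m_ns / r_ns_km        # compactness; G Msun / c^2 = 1.477 km
    q = m_bh / m_ns
    eta = q / (1 + q)**2                 # symmetric mass ratio
    x = (alpha * (1 - 2 * c_ns) / eta**(1/3)
         - beta * r_isco_hat(chi_bh) * c_ns / eta + gamma)
    return max(x, 0.0)**delta * m_ns     # zero when the NS is not disrupted

# Higher aligned spin -> smaller ISCO -> more ejected mass.
m_hi = m_ejecta(4.7, 1.3, 0.9, 12.0)   # of order 0.1 Msun
m_lo = m_ejecta(4.7, 1.3, 0.2, 12.0)   # of order 0.01 Msun
```

The fit returns identically zero for a heavy, non-spinning BH (e.g., $m_{\rm BH} = 10$ M$_{\odot}$, $\chi = 0$), reflecting the plunge of the NS inside the ISCO.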
We parameterize the ejected mass of BHNSs by the initial spin of the Wolf-Rayet progenitor star as a fraction of its breakup spin $f_{\rm B}$, and by the fraction of the donor's envelope $f_{\rm a}$ that is accreted in stable mass transfer. We also explore the dependence of $m_{\rm ejecta}$ on the binary component masses, $m_{1,2, \rm ZAMS}$, and the strength of the natal kicks at formation $\sigma$. Significant spin-orbit misalignments suppress $m_{\rm ejecta}$ by diminishing the aligned component of the BH spin. Although the effect of eccentricity is not considered in Eq.~(\ref{E:EjectedMass}), the supernova of the secondary star (SN2) can introduce eccentricity into the binary system. We compute the time to coalescence \citep{Peters1964} of our BHNS binaries with their semi-major axes and eccentricities after SN2, and only compute $m_{\rm ejecta}$ for circularized binaries that merge within the age of the Universe.
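The merger-time cut at the end of this pipeline uses the \citet{Peters1964} result. A minimal sketch for the circular case follows, with the leading-order $(1-e^2)^{7/2}$ eccentricity correction as an approximation to the full Peters integral.

```python
# SI constants
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
MSUN = 1.989e30    # kg
RSUN = 6.957e8     # m
YR = 3.156e7       # s

def t_coalescence(m1_msun, m2_msun, a_rsun, e=0.0):
    """Peters (1964) gravitational-wave coalescence time in years.
    Circular-orbit formula t_c = a^4 / (4 beta), scaled by the
    leading-order eccentricity factor (1 - e^2)^{7/2}."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    a = a_rsun * RSUN
    beta = (64 / 5) * G**3 * m1 * m2 * (m1 + m2) / c**5
    return a**4 / (4 * beta) * (1 - e**2)**3.5 / YR

t_close = t_coalescence(4.7, 1.3, 2.0)    # merges well within a Hubble time
t_wide = t_coalescence(4.7, 1.3, 20.0)    # t scales as a^4: no merger
```

Since $t_c \propto a^4$, widening the post-SN2 orbit by a factor of 10 lengthens the coalescence time by $10^4$, which is why only sufficiently tight binaries survive the age-of-the-Universe cut.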
\subsection{Electromagnetic counterparts}
Having calculated the ejected mass for our BHNS binaries, we can take our analysis a step further by predicting their electromagnetic counterparts. Our BHNS kilonova model is from Gompertz et al. (in prep), where the full details will be presented. Here, we summarise the physics required to convert ejected mass into kilonova light curves. We divide the total ejected mass $m_{\rm ejecta}$ in Eq.~(\ref{E:EjectedMass}) into two post-merger components: unbound dynamical ejecta $m_{\rm dyn}$ \citep{Kruger20} and bound disc mass $m_{\rm disc} = m_{\rm ejecta} - m_{\rm dyn}$. The average velocity of the dynamical ejecta is determined from the fitting function of \citet{Kawaguchi16}, who found that it scales linearly with $Q$ in numerical relativity simulations. We model the dynamical ejecta with a grey opacity of $\kappa_{\rm dyn} = 10$\,cm$^2$\,g$^{-1}$ \citep{Tanaka13,Kasen17,Tanaka20}.
Simulations show that winds will be driven from the surface of the disc by viscous heating and nuclear recombination \citep[e.g.][]{Fernandez13,Fernandez15,Just15,Fernandez20,Fujibayashi20}. We parameterise the mass of this thermal wind as a fraction of the disc mass $m_{\rm therm} = \xi m_{\rm disc}$, where $\xi$ is a function of Q \citep{Raaijmakers21}, and assume a velocity $v_{\rm therm} = 0.034$~c \citep[cf.][]{Fernandez20}. The electron fraction ($Y_e$) of the thermally-driven outflow is expected to be in the range $0.25 \leq Y_e \leq 0.35$ \citep[e.g.][]{Foucart15,Fernandez20,Fujibayashi20}, with $> 50$ per cent of the outflow expected to possess a Lanthanide + Actinide fraction $X_{(La+Ac)} < 10^{-4}$ \citep{Fernandez20}. We incorporate this as a two-zone model with a leading `blue' mass with $\kappa_{\rm blue} = 1$\,cm$^2$\,g$^{-1}$ and a deeper layer of `purple' material with $\kappa_{\rm purple} = 5$\,cm$^2$\,g$^{-1}$ \citep[cf.][]{Tanaka20}. The fraction of blue mass is determined from an observed relationship with the disc mass via fits to Table 2 of \citet{Fernandez20}. Photons from the purple layer of material must diffuse through the blue layer to reach the observer.
\begin{table}
\begin{adjustbox}{max width=0.475\textwidth,center}
\begin{tabular}{c|cccc}
\hline\hline
Component & Mass & Velocity & Grey opacity & Region \\
& ($M_{\odot}$) & ($c$) & (cm$^2$\,g$^{-1}$) & (deg) \\
\hline
Dynamical ejecta & KF20 & $0.25$ & 10 & 80 -- 90 \\
Thermal wind & F18, R21 & $0.034$ & 1, 5 & 30 -- 80 \\
Magnetic wind & $m_{\rm therm}$ & $0.22$ & 10 & 0 -- 30 \\
\hline\hline
\end{tabular}
\end{adjustbox}
\caption{Outflow components for our BHNS kilonova model (Gompertz et al. in prep). KF20: \citet{Kruger20}; F18: \citet{Foucart2018}; R21: \citet{Raaijmakers21}. Region angles are from the poles of the spin axis.}
\label{tab:kilonova}
\end{table}
When magnetic fields are included in full three-dimensional general-relativistic magnetohydrodynamic models \citep[e.g.,][]{Siegel17,Siegel18,Fernandez19b} an additional magnetically-driven outflow is identified in polar regions. The dynamics of this ejecta depends on the magnetic field geometry \citep{Christie19}, but it is expected to have a mass roughly equal to the mass of the thermal outflow (i.e. $m_{\rm mag} = m_{\rm therm}$) and an average velocity of $v_{\rm mag} = 0.22$~c \citep{Fernandez19b}. The magnetic wind is driven from the poles before significant neutrino irradiation can occur, and therefore has $Y_e \sim 0.1$ \citep{Fernandez19b} corresponding to $\kappa_{\rm mag} = 10$\,cm$^2$\,g$^{-1}$. Our model is summarised in Table~\ref{tab:kilonova}.
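The bookkeeping that splits $m_{\rm ejecta}$ into the components of Table~\ref{tab:kilonova} is straightforward. In this sketch the dynamical fraction $f_{\rm dyn}$ and the wind fraction $\xi$ are fixed placeholder numbers standing in for the \citet{Kruger20} and \citet{Raaijmakers21} fits, which in the actual model depend on the binary parameters.

```python
def outflow_components(m_ej, f_dyn=0.3, xi=0.2):
    """Split the total ejected mass (Msun) into the outflow components of
    the kilonova model. f_dyn and xi are illustrative placeholders for the
    Q-dependent fits used in the text."""
    m_dyn = f_dyn * m_ej                  # unbound dynamical ejecta
    m_disc = m_ej - m_dyn                 # bound disc mass
    m_therm = xi * m_disc                 # thermally driven disc wind
    m_mag = m_therm                       # magnetic wind, m_mag = m_therm
    return {"dynamical": m_dyn, "thermal": m_therm, "magnetic": m_mag,
            "disc remainder": m_disc - m_therm - m_mag}

parts = outflow_components(0.1)   # e.g., a 0.1 Msun total ejected mass
```

Each component then radiates with its own velocity and grey opacity from Table~\ref{tab:kilonova}, so this mass split fixes the relative brightness and colour evolution of the light curve.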
The BHNS ejecta model is integrated as a module in {\sc mosfit} \citep{Guillochon18} which converts the $r$-process masses to light curves through semi-analytical models for the heating rate and deposition \citep{Korobkin12,Barnes16,Cowperthwaite17,Villar17,Metzger19}, and treats the propagation of photons in the common diffusion approximation \citep{Arnett82}. We use the same modules to calculate the photospheric radius \citep{Nicholl17} and the effects of viewing angle \citep{Darbha20} as in \citet{Nicholl21}.
\section{Results}
\label{sec:Results}
The ejected mass of the BHNS binary ultimately depends on the zero-age main sequence masses $m_{1,2, \rm ZAMS}$, binary separation $a_{\rm ZAMS}$, and metallicity $Z$, the Maxwellian velocity dispersion $\sigma$ that governs the strength of natal kicks, the breakup spin fraction of the Wolf-Rayet (WR) star $f_{\rm B}$, and the fraction of a donor's envelope that is accreted in stable mass transfer $f_{\rm a}$. In our results, we assume $Z = 0.0002$ which ensures the effect of stellar winds is negligible. Throughout this work, we assume a NS radius of $R_{\rm NS} = 12$ km.
\subsection{Parameter space exploration}
\label{subsec:Param}
The binaries presented in Figures \ref{F:PathwayA1} and \ref{F:PathwayB1_all} are distributions of BHNSs where one free parameter is varied (the horizontal axis) and all others are held constant. These figures depict the 5$^{\rm th}$, 50$^{\rm th}$, and 95$^{\rm th}$ percentiles of the ejected mass $m_{\rm ejecta}$ of the NS (colorbar) and the corresponding aligned component of the dimensionless spin of the BH, $\chi_{\rm BH} = \chi_1\cos\theta_1$ (vertical axis). As we evolve the spin magnitudes and directions until BHNS formation, $\chi_{\rm BH}$ depends principally on the mechanism by which the BH acquires spin (i.e., inheritance or accretion) and on the natal kick velocity dispersion $\sigma$. The isotropically oriented natal kicks produce scatter in the BH misalignments $\cos\theta_1$ that is preferentially peaked near $\cos\theta_1 \approx 1$, since the ZAMS spins are assumed to be aligned with the binary orbital angular momentum.
The first mechanism that we explore to obtain a highly spinning BH is inheritance via weak core-envelope coupling for binaries that evolve in Pathway A1. The WR breakup spin fraction $f_{\rm B}$ determines $\chi_{\rm BH}$ and $m_{\rm ejecta}$ as shown in Fig.~\ref{F:PathwayA1}. In the limit of small inherited spins, i.e., $f_{\rm B} < 0.01$, the aligned BH spin is also very small causing the NS to be captured rather than tidally disrupted, and hence $m_{\rm ejecta} = 0$ by definition. As the BH inherits larger spin, $f_{\rm B} \gtrsim 0.01$, the NS can be tidally disrupted allowing for nonzero $m_{\rm ejecta}$ and hence an observable counterpart. The inherited spin of the BH increases linearly with larger $f_{\rm B}$, as $\chi_1 \propto f_{\rm B}\chi_{\rm B}$ and $m_{\rm 1,ZAMS}$ is held constant (see Eq.~(\ref{E:Break})). Meanwhile, the scatter in the aligned BH spin component $\chi_{\rm BH}$ increases with $f_{\rm B}$ because the distribution of misalignments is biased towards $\cos\theta_1 \sim 1$ with a tail to larger values for this constant value of $\sigma$. For $f_{\rm B} \sim 0.03$, $\chi_{\rm BH}$ becomes sufficiently large to yield significant ejected mass, i.e., $m_{\rm ejecta} \gtrsim 0.01$ M$_{\odot}$.
For $f_{\rm B} \gtrsim 0.05$, the 95$^{\rm th}$ percentile (triangles) of $m_{\rm ejecta}$ is largest because the BH spin magnitude $\chi_1$ is maximal, so that $\chi_{\rm BH}$ saturates at 1, while the $\chi_{\rm BH}$ corresponding to the median (circles) of $m_{\rm ejecta}$ approaches $\approx 0.9$.
The masses of the BH and of the NS are the same for each value of $f_{\rm B}$, and if either were larger then $m_{\rm ejecta}$ would be suppressed. The kick velocity dispersion which provides the scatter in $\chi_{\rm BH}$ is $\sigma = 30$ km/s for these binaries, implying that larger $\sigma$ could decrease the median of $m_{\rm ejecta}$.
The second mechanism we explore is accretion during stable mass transfer for binaries that evolve in Pathway B1 with the assumption of strong core-envelope coupling. Figure~\ref{F:PathwayB1_all} displays the dependence of $\chi_{\rm BH}$ and $m_{\rm ejecta}$ on the accreted fraction $f_{\rm a}$ (top-left panel), the initial mass of the primary star $m_{1, \rm ZAMS}$ (top-right panel), the initial mass of the secondary star $m_{2, \rm ZAMS}$ (bottom-left panel), and the natal kick strength $\sigma$ (bottom-right panel). As the primary undergoes core collapse prior to the loss of the envelope of the secondary star, the primary accretes as a BH.
In the top-left panel, the values of $m_{1, \rm ZAMS}$, $m_{2, \rm ZAMS}$, and $\sigma$ are fixed while $f_{\rm a}$ is varied. The BH spin magnitude $\chi_1$ remains small for a small amount of accretion, $f_{\rm a} \lesssim 0.1$, because the stellar progenitor's spin is dissipated via strong core-envelope coupling during common envelope evolution, and the resultant ejected mass is small. As $f_{\rm a}$ increases, the BH accretes more gas from its companion's envelope, resulting in a larger spin magnitude $\chi_1$ and ejected mass $m_{\rm ejecta}$. However, $\chi_1$ increases nonlinearly with $f_{\rm a}$: the energy $E(\chi)$ of an accreted particle (Eq.~(\ref{E:BHAcc1})) is roughly constant as mass is accreted, while its orbital angular momentum $L(\chi)$ decreases due to the smaller $R_{\rm ISCO}$ that results from the higher BH spin. In the limit of large accretion, $f_{\rm a} \gtrsim 0.4$, the BH spin is high, $\chi_1 > 0.5$, and increases only slightly. The ejected mass is significant for even the 5$^{\rm th}$ percentile of binaries, suggesting that accretion onto the primary BH may be a promising mechanism for producing observable counterparts.
In the top-right panel of Fig.~\ref{F:PathwayB1_all} the values of $f_{\rm a}$, $m_{2, \rm ZAMS}$, and $\sigma$ are fixed while $m_{1, \rm ZAMS}$ is varied, i.e., the initial mass of the BH accretor increases with increasing $m_{1, \rm ZAMS}$. Since the amount of gas that is accreted is held constant here, the spin of the BH generally decreases as the initial mass of the BH increases, as seen in Eq.~(\ref{E:BHAcc2}) where $m_{\rm BH}$ is in the denominator. Simultaneously, the radius $R_{\rm ISCO}$ of the BH is larger for smaller $m_{1, \rm ZAMS}$ due to the small BH mass but also because the BH spin is high, implying that $m_{\rm ejecta}$ is at its largest. The scatter in the aligned BH spin $\chi_{\rm BH}$ generally decreases for increasing $m_{1, \rm ZAMS}$ as the natal kick velocity is suppressed from fallback accretion and a larger orbital velocity. For $m_{1, \rm ZAMS} \gtrsim 22.5$ M$_{\odot}$, fallback completely suppresses the natal kick of the primary, and the misalignment solely originates from the natal kick of the secondary. In the limit of large $m_{1, \rm ZAMS}$, the BH spin is $\chi_1 \sim 0.6$, consistent with the results of \citet{Steinle2021}.
The dependence on $m_{2,\rm ZAMS}$, shown in the bottom-left panel of Fig.~\ref{F:PathwayB1_all} where $f_{\rm a}$, $m_{1,\rm ZAMS}$, and $\sigma$ are fixed, is complicated by the interplay of competing factors. For $m_{2,\rm ZAMS} < 16$ M$_{\odot}$, the mass of the NS is constant. The spin of the accreting BH increases with increasing $m_{2,\rm ZAMS}$ as the envelope of the secondary, and hence the amount of gas available to accrete, increases, implying a larger $m_{\rm ejecta}$. The scatter in $\chi_{\rm BH}$ decreases with increasing $m_{2,\rm ZAMS}$, which increases the orbital velocity prior to the second SN. For $m_{2,\rm ZAMS} \geq 16$ M$_{\odot}$, the mass of the NS at formation is larger, increasing the NS compactness and decreasing $m_{\rm ejecta}$ despite the larger $\chi_{\rm BH}$. This competition between the increase in BH spin from a larger donor envelope in stable mass transfer and the increase in NS compactness is a distinct feature, but is possibly model dependent since the dependence of the NS mass on $m_{2,\rm ZAMS}$ is uncertain.
The dependence on $\sigma$ is shown in the bottom-right panel of Fig.~\ref{F:PathwayB1_all} with fixed $f_{\rm a}$, $m_{1,\rm ZAMS}$, and $m_{2,\rm ZAMS}$. Larger values of $\sigma$ generally produce larger spin-orbit misalignments and smaller $\chi_{\rm BH}$, which suppresses $m_{\rm ejecta}$. The 5$^{\rm th}$ percentile of $\chi_{\rm BH}$ remains roughly constant with increasing $\sigma$ because the distributions of $\cos\theta_1$ are biased toward unity. This dependence is similar for binaries that evolve in A1, except for smaller values of $\sigma$ since common envelope evolution occurs prior to the natal kick of the primary.
Together, these results demonstrate that although we can produce rapidly spinning BHs in the isolated formation channel through the mechanisms of inheritance or accretion, the uncertainties of stellar binary evolution also affect other parameters such as spin-orbit misalignments and the binary mass ratio, which themselves affect the resultant ejected mass. Therefore, a question naturally arises: how would one distinguish between the two possible formation pathways explored here? Answering this question for a single (population of) observed BHNS(s) with statistical rigor would require (hierarchical) Bayesian parameter estimation. Although such an analysis is beyond the scope of this work, we can identify regions of the parameter space that are likely to favor systems with observable electromagnetic counterparts from either pathway.
The two panels of Figure~\ref{F:ContoursFaFb} depict contours of the maximum possible ejected mass, i.e., the 100$^{\rm th}$ percentile of $m_{\rm ejecta}$, under the assumption of weak core-envelope coupling. For binaries that evolve in Pathway A1, shown in the left panel, accretion is not an efficient mechanism for producing significant ejected mass, as it is the secondary star that accretes during stable mass transfer from the primary. Indeed, a small amount of accretion, i.e., $f_{\rm a} \lesssim 0.1$, can result in large $m_{\rm ejecta}$ if the BH inherits a high natal spin, i.e., $f_{\rm B} \gtrsim 0.05$, because the mass of the NS is not too large. However, for larger amounts of accretion onto the secondary main sequence star, the mass (and hence compactness) of the NS that subsequently forms is too large and suppresses $m_{\rm ejecta}$. For $f_{\rm a} \gtrsim 0.25$, the NS is not tidally disrupted, indicated by the grey region. This is even more prominent for sufficiently small BH natal spins, $f_{\rm B} \lesssim 0.05$, as for smaller BH spin even a less massive NS can avoid tidal disruption. In the limit of no accretion, $f_{\rm a} \sim 0$, and negligible inherited BH spin, $f_{\rm B} \lesssim 0.01$, the maximum ejected mass is $m_{\rm ejecta} \approx 0.001$ M$_{\odot}$, consistent with the 95$^{\rm th}$ percentile in Fig.~\ref{F:PathwayA1}. If we instead assume that mass loss in BH formation due to the Kerr limit is isotropic rather than negligible, then more accreted mass can yield larger $m_{\rm ejecta}$ because the primary BH mass is smaller, although this effect is not significant even for $f_{\rm B} \gtrsim 0.05$.
Binaries that evolve in Pathway B1 are shown in the right panel of Fig.~\ref{F:ContoursFaFb}, where the primary accretes as a BH during stable mass transfer from the secondary. For small inherited natal spin and small amounts of accretion, the maximum $m_{\rm ejecta}$ is small due to the small BH spin. As either $f_{\rm a}$ or $f_{\rm B}$ is increased, the BH spin increases and allows for a larger maximum $m_{\rm ejecta}$. Consistent with the 95$^{\rm th}$ percentile in the left panel of Fig.~\ref{F:PathwayB1_all}, an accreted fraction $f_{\rm a} \sim 0.5$ can produce a BH spin and $m_{\rm ejecta}$ as large as those from large inherited spin, i.e., $f_{\rm B} \sim 0.05$ in Fig.~\ref{F:PathwayA1}.
The left (right) panel of Figure~\ref{F:ContoursMassFracs} displays contours of the maximum of $m_{\rm ejecta}$ as functions of $m_{1,\rm ZAMS}$ and $f_{\rm B}$ ($f_{\rm a}$) for binaries that evolve in Pathway A1 (B1) under the assumption of weak (strong) core envelope coupling. In both pathways, the mass of the BH increases as $m_{1,\rm ZAMS}$ increases, providing a larger mass ratio $Q$ which suppresses $m_{\rm ejecta}$. In the limit of small BH spins, i.e., $f_{\rm B} \to 0$ in the left panel and $f_{\rm a} \to 0$ in the right panel, sufficiently large $m_{1,\rm ZAMS}$ disallows tidal disruption of the NS, indicated by the grey region. In Pathway A1, $m_{\rm ejecta}$ increases sharply as $f_{\rm B}$ increases, consistent with Fig.~\ref{F:PathwayA1}, and reaches a maximum of $m_{\rm ejecta} \sim 0.4$ M$_{\odot}$ for $f_{\rm B} \gtrsim 0.05$ due to maximal BH spin.
Comparatively, in Pathway B1 $m_{\rm ejecta}$ increases gradually as $f_{\rm a}$ increases from 0, because the mass of the BH increases from gas accretion. This implies that smaller $m_{1,\rm ZAMS}$ is needed in Pathway B1 than in A1 to obtain very large $m_{\rm ejecta}$. However, if accretion in B1 is highly conservative, i.e., $f_{\rm a} \sim 0.9$, the spin of the BH is near maximal allowing for larger $m_{\rm ejecta}$ at higher values of $m_{1,\rm ZAMS}$.
Comparing the contours in Fig.~\ref{F:ContoursFaFb} between binaries that evolve from these two pathways, the asymmetry in the effect of accretion allows NSs from Pathway B1 (right panel) to be tidally disrupted and produce significant ejected mass essentially over the entire spin parameter space, whereas NSs from Pathway A1 (left panel) are not tidally disrupted over half of this region of the spin parameter space. On the other hand, comparing the contours in Fig.~\ref{F:ContoursMassFracs}, the mass of the BH can suppress the ejected mass in B1 more than in A1 due to increased BH mass from non-conservative accretion. This asymmetry provides signatures to distinguish these two pathways as likely formation possibilities if the values of $f_{\rm a}$, $f_{\rm B}$, and $m_{1,\rm ZAMS}$ can be constrained for a population of BHNS binaries observed from gravitational-wave data. Additionally, a systematically larger mass ratio $Q = m_{\rm BH}/m_{\rm NS}$ could be expected for binaries that evolve from Pathway B1 than in A1 due to accretion by the BH in Pathway B1.
Although the mass of the BH is measured from gravitational-wave detections, it is degenerate with $f_{\rm a}$ and $m_{1,\rm ZAMS}$.
If the ejected masses can be measured from electromagnetic follow-up for a population of BHNS binaries whose BH spins are measured from gravitational-wave data, hierarchical Bayesian parameter estimation could constrain the likely source of the spin of the BH. In such an inference study, one could create an astrophysical model by leveraging the fact that there would be a stronger correlation between the BH mass and $f_{\rm a}$ than between the BH mass and $f_{\rm B}$, as shown by the contours of Fig.~\ref{F:ContoursMassFracs}. Naively, one could expect these correlations to be opposite, however this is highly model dependent as the relationship between $m_{1,\rm ZAMS}$ and $f_{\rm B}$ depends on the strength of core-envelope coupling, of which we simply consider extreme limits, and the mechanisms that drive stellar angular momentum transport, which are uncertain. Subsequently placing the constraints for the population of binaries in the planes of Fig.'s~\ref{F:ContoursFaFb} and \ref{F:ContoursMassFracs} could therefore elucidate the likely source of ejected mass, BH spin, and hence the likely formation pathway.
\subsection{Light curves of electromagnetic counterparts}\label{subsec:lightcurves}
We consider three values of the aligned BH spin component $\chi_{\rm BH} =$ 0.2, 0.56, and 0.9 which correspond to the medians of $\chi_{\rm BH}$ for the BHNSs from Section~\ref{subsec:Param} that evolved under weak core-envelope coupling with $f_{\rm B} =$ 0.01, 0.03, and 0.1, respectively (i.e., see Fig.~\ref{F:PathwayA1}). These BHNSs have the same mass ratio $Q = m_{\rm BH}/m_{\rm NS} = 4.7/1.3 = 3.6$ and $R_{\rm NS} = 12$ km, and yield $m_{\rm ejecta} \sim 10^{-3},~10^{-2}$, and $10^{-1}$ M$_{\odot}$, respectively. Though we did not show this in Subsection~\ref{subsec:Param}, note that higher $Q$ or a more compact NS can result in a lower ejecta mass from Eq.~(\ref{E:EjectedMass}) and therefore a fainter electromagnetic transient. We assume the observer is oriented 30$^{\circ}$ from the pole, at the boundary between the thermal and magnetic disc wind outflows. For $Q = 3.6$, $v_{\rm dyn} = 0.25c$ \citep{Kawaguchi16}.
The resultant light curves for a merger at an assumed distance of 200\,Mpc are shown in Figure~\ref{fig:lightcurves}. Their morphology is driven by the interplay of the three emission components (Table~\ref{tab:kilonova}) whose relative contributions depend on the properties of the input binary. Each component contributes radiation at different temperatures and on different timescales, resulting in time- and frequency-dependent light curve behaviour. The fraction of Lanthanides and Actinides in the ejecta is particularly impactful in this regard; the complex absorption patterns of these more massive elements absorb much of the light at optical frequencies \citep{Barnes13}, resulting in `red' (infra-red bright) emission. This can be broadly understood from the approximated grey opacities in Table~\ref{tab:kilonova}, where emission components with higher values contribute more in the infra-red ($K$-band, orange lines) at later times, while those with lower values produce optical emission ($g$-band, green lines) earlier in the evolution. The $i$-band (blue lines) is intermediate in frequency between the two. The total ejecta mass depends strongly on $\chi_{\rm BH}$ \citep[e.g.][]{Foucart2018}, hence higher spin models result in more massive winds and dynamical outflows that produce brighter and longer-lived emission.
Our model shows that while kilonovae from BHNS mergers at this distance are likely not detectable by the current generation of survey telescopes like the \emph{Zwicky} Transient Facility \citep[ZTF;][]{Bellm19}, the Asteroid Terrestrial Impact Last Alert System \citep[ATLAS;][]{Tonry18} or the Gravitational-wave Optical Transient Observer \citep[GOTO;][]{Dyer20,Steeghs22}, these events are expected to be detectable by the Vera Rubin Observatory \citep{Ivezic19,Andreoni22}. Such a finding is in line with the non-detections of BHNS merger candidates during O3 \citep{Hosseinzadeh19,Lundquist19,Ackley20,Andreoni20,Antier20,Gompertz20,Anand21,Oates21,Paterson21}, though GW-triggered events are likely to be probed to greater depths than untriggered survey observations.
In addition to the kilonova described above, we can estimate the power of any short GRB that is launched. For aligned spins of $\chi_{\rm BH} = 0.2$, $0.56$ and $0.9$, we estimate that $2.0\times10^{-3}$\,$M_{\odot}$, $3.0\times10^{-2}$\,$M_{\odot}$, and $1.4\times10^{-1}$\,$M_{\odot}$ accretes onto the remnant black hole after accounting for disc wind outflows. Assuming an accretion timescale of $0.2$\,s and 1 percent efficiency in converting accretion power to electromagnetic luminosity, this translates into a jet luminosity of $1.8\times 10^{50}$\,erg\,s$^{-1}$, $2.7\times 10^{51}$\,erg\,s$^{-1}$, and $2.4\times 10^{52}$\,erg\,s$^{-1}$, respectively \citep[cf.][]{Ruiz21}. These luminosities are consistent with the observed short GRB population, including the subset that have been suggested to arise from BHNS mergers \citep{Troja08,Gompertz20b}. They are therefore easily detectable even at cosmological distances if they are pointed along our line of sight.
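The quoted jet luminosities follow from $L \simeq \epsilon M_{\rm acc}c^2/t_{\rm acc}$ with $\epsilon = 0.01$ and $t_{\rm acc} = 0.2$\,s. A minimal numerical sketch of this estimate (cgs units; it reproduces the quoted values to order of magnitude, with residual differences reflecting rounding and model-dependent inputs not stated here):

```python
# Order-of-magnitude jet luminosity: L = eps * M_acc * c^2 / t_acc.
# Constants in cgs; accreted masses taken from the text.
M_SUN = 1.989e33        # g
C = 2.998e10            # cm/s
EPS = 0.01              # accretion-to-EM efficiency (1 per cent)
T_ACC = 0.2             # accretion timescale in s

def jet_luminosity(m_acc_msun):
    """Isotropic-equivalent jet luminosity in erg/s."""
    return EPS * m_acc_msun * M_SUN * C**2 / T_ACC

for m in (2.0e-3, 3.0e-2, 1.4e-1):
    print(f"M_acc = {m:.1e} Msun -> L = {jet_luminosity(m):.1e} erg/s")
```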
\section{Conclusions and Discussion}
\label{sec:Disc}
The possibility of observing electromagnetic counterparts, i.e., short GRBs and kilonovae, from the merger of BHNS binaries is an exciting prospect for multi-messenger astronomy. The existence and detectability of these counterparts is sensitive to the properties of the BHNSs that produce them, implying that accurate modeling of populations of BHNS binaries is crucial for understanding the prevalence of counterparts in the Universe. Currently, there are great uncertainties in models of BHNS binary formation and spin evolution.
The most important BHNS properties for producing a large amount of ejecta mass $m_{\rm ejecta}$, and hence detectable counterparts, are the masses of the BH and NS, and the aligned component of the BH spin $\chi_{\rm BH}$. We explored the dependence of these quantities on two key mechanisms by which BHs in BHNS binaries may obtain high spin magnitude. Either the BH inherits spin from its Wolf-Rayet stellar progenitor that evolved under weak core-envelope coupling with a fraction $f_{\rm B}$ of its maximum breakup spin, or the BH gains spin by accreting a fraction $f_{\rm a}$ of its companion's envelope.
Significant $m_{\rm ejecta}$ is possible from:
\begin{enumerate}[leftmargin=*]
\item Inheritance of high BH spin $\chi_1$ via weak core-envelope coupling with $f_{\rm B} \gtrsim 0.03$, where $\chi_{\rm BH}$ scales linearly with $f_{\rm B}$ until the Kerr limit is saturated.
\item Accretion of high BH spin, where the BH spin scales nonlinearly with $f_{\rm a}$. The mass of the BH increases with increasing $f_{\rm a}$, which suppresses $m_{\rm ejecta}$ if the initial mass of the BH is too large.
\item Spin-orbit misalignments that are not too large, where natal kicks are the source of the misalignments.
\end{enumerate}
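The nonlinear spin-up in point (ii) can be illustrated with the classic thin-disc spin-up formula of Bardeen (1970) for an initially non-spinning BH. This is only a sketch: relating the mass growth factor to the paper's $f_{\rm a}$ (the fraction of the companion's envelope accreted) would additionally require the companion's envelope mass, which we leave unspecified here.

```python
import math

def bardeen_spin(m_ratio):
    """Spin of an initially non-spinning BH after growing its mass by the
    factor m_ratio = M/M0 via thin-disc accretion (Bardeen 1970).
    Valid for 1 <= m_ratio <= sqrt(6); chi = 1 beyond that."""
    if m_ratio >= math.sqrt(6.0):
        return 1.0
    return math.sqrt(2.0 / 3.0) / m_ratio * (
        4.0 - math.sqrt(18.0 / m_ratio**2 - 2.0))

# Spin grows nonlinearly with the accreted mass in units of the initial BH mass:
for accreted_frac in (0.0, 0.1, 0.3, 0.5, 1.0):
    print(f"dM/M0 = {accreted_frac:.1f} -> chi = {bardeen_spin(1.0 + accreted_frac):.2f}")
```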
We considered two main formation pathways defined as A1 (B1) when the primary star undergoes stable mass transfer (common envelope evolution) and the secondary undergoes common envelope evolution (stable mass transfer). The effects of $f_{\rm B}$ and $f_{\rm a}$ on the BH spin in these formation pathways are distinguishable via population analysis of BHNS binaries with gravitational-wave observations and electromagnetic follow-up, as the role of accretion differs between the two pathways. Although such a task is not trivial, it is certainly feasible as constraints on $f_{\rm a}$ have already been placed on the LIGO/Virgo binary BH population \citep{Wong2022}. The ZAMS mass of the stellar progenitor of the BH can further distinguish between these pathways, since for a given stellar initial mass function we predict more high-ejecta-mass kilonovae for binaries that evolve in Pathway A1 if typically $f_{\rm B} \gtrsim 0.04$, while kilonovae from binaries that evolve in Pathway B1 can be limited by increased BH mass from (non-conservative) accretion.
Since the number of BHNS mergers expected to be detected by LIGO/Virgo in the near future is unlikely to be large enough to constitute a well-sampled population, valuable information may still be extracted from single events. For example, our results indicate that observations of a kilonova counterpart to a BHNS binary merger with peak brightnesses $M_K \sim -15.5$, $M_i \sim -14.5$, and $M_g \sim -13.2$ in a 30s exposure imply an aligned BH spin $\chi_{\rm BH} \sim 0.9$ for modest mass ratio and NS compactness, though this inference depends on the uncertainties of the kilonova emission and the NS equation of state. This could suggest that the BH either inherited a high spin or gained high spin from accretion.
As core-envelope coupling operates in the stellar progenitors of all stellar-mass BHs, its uncertain strength affects all formation channels of BHNS binaries. When we showed the effect of accretion in Pathway B1, we assumed that core-envelope coupling was strong, which yields negligible natal BH spin magnitudes. Binaries that evolve in Pathway A1 under strong core-envelope coupling will likewise retain negligible BH spins in the absence of other spin-up mechanisms, which disallows significant $m_{\rm ejecta}$.
In reality, the strength of core-envelope coupling likely lies somewhere between the weak and strong limits and depends on the mechanism that drives angular momentum transport within the stellar interior. Population synthesis models typically assume that core-envelope coupling is strong; however, this is uncertain for the high-mass stellar progenitors of BHs. We contend that a better understanding of this process is crucial for predicting $m_{\rm ejecta}$ and the detectability of BHNS binary counterparts.
In our model, disc accretion can produce a highly spinning BH in Pathway B1, where the secondary star undergoes stable mass transfer. It is suspected that this accretion needs to be highly super-Eddington to achieve large BH spin \citep{Zevin2022}. Eddington-limited accretion would greatly suppress $\chi_1$, $m_{\rm ejecta}$, and the observability of any counterpart. However, super-Eddington accretion is not impossible in principle, as the Eddington limit depends on the geometry of the accretion flow and on various kinds of instabilities. Ultra-luminous X-ray binaries are suspected to contain NSs accreting far above the Eddington limit, and some may contain accreting BHs that exceed the Eddington limit \citep{Miller2019,Gao2022}.
We also computed realistic light curves of kilonova counterparts to our BHNS binaries. Although considerable uncertainties remain in the physics of kilonovae, the light curve model utilized here is robust and reflects the current understanding. We showed that the kilonova emission that results from our highly spinning BHNSs is undetectable by ZTF, but will be discovered by future telescopes such as the Vera Rubin Observatory. Binaries that inherit high BH spin with $f_{\rm B} \gtrsim 0.03$ are predicted to produce kilonovae with peak brightness $M_i \sim -14.5$ for a few days of observing time and should be detectable from up to $\sim$500\,Mpc away in a 30s visit by Rubin.
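The $\sim$500\,Mpc figure follows from the distance modulus $m = M + 5\log_{10}(d/10\,\mathrm{pc})$. A quick sketch, assuming a Rubin single-visit $i$-band $5\sigma$ depth of roughly 24\,mag (an approximate figure adopted here, not taken from the text):

```python
import math

def apparent_mag(abs_mag, d_mpc):
    """Apparent magnitude at luminosity distance d_mpc (in Mpc)."""
    return abs_mag + 5.0 * math.log10(d_mpc * 1e6 / 10.0)

RUBIN_I_DEPTH = 24.0  # assumed ~5-sigma depth of a 30 s Rubin i-band visit

m_i = apparent_mag(-14.5, 500.0)   # peak M_i ~ -14.5 from the text
print(f"m_i at 500 Mpc = {m_i:.1f}")
assert m_i <= RUBIN_I_DEPTH + 0.1  # right at the assumed detection limit
```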
Such systems may also produce short GRBs \citep[e.g.][]{Paschalidis15}, and we showed that the expected jet luminosity is consistent with the observed short GRB population. However, drawing direct links between the properties of the binaries and the GRB light curves is not possible due to uncertainties in the jet launch mechanism, the process by which $\gamma$-rays are produced in the jet, and the highly variable circumstellar environment. The isotropic nature of kilonovae also makes them more promising electromagnetic counterparts for gravitational wave detections of BHNSs compared to the strongly beamed short GRBs.
Although the two BHNS mergers found by LIGO/Virgo had unfavourable parameters for producing electromagnetic counterparts, our results show that rapidly rotating BHs in BHNS binaries, significant ejecta masses, and detectable kilonova emission are possible through accretion and inheritance of spin. Comparing the distributions of ejected masses from future electromagnetic searches with the binary parameters inferred by gravitational wave detectors offers a promising means to determine the physical mechanism producing the BH spin in these systems.
\section*{Acknowledgements}
The authors would like to thank Daria Gangardt and Davide Gerosa for insightful comments. N.S. is supported by the Leverhulme Trust Grant No. RPG-2019-350, the European Union's H2020 ERC Starting Grant No. 945155--GWmining, and the Cariplo Foundation Grant No. 2021-0555. B.G. and M.N. are supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No.~948381). M.N. acknowledges a fellowship from the Alan Turing Institute.
\bibliographystyle{mnras}
\bibliography{bibme}{}
|
Title:
Growing the seeds of pebble accretion through planetesimal accretion |
Abstract: We explore the growth of planetary embryos by planetesimal accretion up to
and beyond the point where pebble accretion becomes efficient at the so-called
Hill-transition mass. Both the transition mass and the characteristic mass of
planetesimals formed by the streaming instability increase with increasing
distance from the star. We developed a model for the growth of a large
planetesimal (embryo) embedded in a population of smaller planetesimals formed
in a filament by the streaming instability. The model includes in a
self-consistent way the collisional mass growth of the embryo, the
fragmentation of the planetesimals, the velocity evolution of all involved
bodies, as well as the viscous spreading of the filament. We find that the
embryo accretes all available material in the filament during the lifetime of
the protoplanetary disc only in the inner regions of the disc. In contrast, we
find little or no growth in the outer parts of the disc beyond 5--10 AU.
Overall, our results demonstrate very long timescales for collisional growth of
planetesimals in the regions of the protoplanetary disc where giant planets
form. As such, in order to form giant planets in cold orbits, pebble accretion
must act directly on the largest bodies present in the initial mass-function of
planetesimals with little or no help from mutual collisions.
| https://export.arxiv.org/pdf/2208.01902 |
\title{
Growing the seeds of pebble accretion through planetesimal accretion
}
\author{
Sebastian~Lorek\inst{1},
Anders~Johansen\inst{1,2}
}
\institute{
Centre for Star and Planet Formation,
Globe Institute,
University of Copenhagen,
{\O}ster Voldgade 5–7,
DK-1350 Copenhagen,
Denmark \\
\email{[email protected]}
\and
Lund Observatory,
Department of Astronomy and Theoretical Physics,
Lund University,
Box 43,
221 00 Lund,
Sweden
}
\date{Received ; accepted }
\abstract{
We explore the growth of planetary embryos by planetesimal accretion up to and beyond the point where pebble accretion becomes efficient at the so-called Hill-transition mass. Both the transition mass and the characteristic mass of planetesimals formed by the streaming instability increase with increasing distance from the star. We developed a model for the growth of a large planetesimal (embryo) embedded in a population of smaller planetesimals formed in a filament by the streaming instability. The model includes in a self-consistent way the collisional mass growth of the embryo, the fragmentation of the planetesimals, the velocity evolution of all involved bodies, as well as the viscous spreading of the filament. We find that the embryo accretes all available material in the filament during the lifetime of the protoplanetary disc only in the inner regions of the disc. In contrast, we find little or no growth in the outer parts of the disc beyond 5--10 AU. Overall, our results demonstrate very long timescales for collisional growth of planetesimals in the regions of the protoplanetary disc where giant planets form. As such, in order to form giant planets in cold orbits, pebble accretion must act directly on the largest bodies present in the initial mass-function of planetesimals with little or no help from mutual collisions.
}
\keywords{
Methods: numerical --
Planets and satellites: formation
}
\section{Introduction}
\label{sec:introduction}
The classic picture for the formation of planets requires growth over several orders of magnitude in size. Starting from micrometre-sized dust and ice grains, coagulation produces millimetre-sized pebbles. Collisional and dynamical processes, such as bouncing, fragmentation, and radial drift, limit the maximum particle size \citep{Blum2008,Guettler2010,Zsom2010,Krijt2015} and the formation of larger bodies is effectively prevented. Porosity in combination with an increased stickiness of ice could bypass these barriers \citep{Wada2009,Okuzumi2012,Kataoka2013}. However, it requires that ice is indeed stickier than rocky material \citep{Gundlach2015,Arakawa2021,Schraepler2022}, which might not necessarily be the case \citep{Musiolik2019,Kimura2020}, and that the initial grains are sub-micron in size.
An alternative mechanism that has been extensively studied since its discovery invokes the concentration of pebbles through streaming instability and the subsequent gravitational collapse of dense filament-like structures that converts ${\sim}\mathrm{mm}$-sized pebbles directly to ${\sim}\mathrm{km}$-sized planetesimals \citep[e.g.][]{Youdin2005,Johansen2007,Johansen2014,Simon2016,Schaefer2017,Abod2019}. These planetesimals then would grow to planet-sized bodies through runaway and oligarchic growth.
Runaway growth occurs when a planetesimal that is slightly more massive than the rest of the bodies accretes more efficiently through gravitational focusing. The mass ratio between two bodies then increases with time and the more massive one quickly outgrows the other planetesimals. As the growing body becomes more massive, it starts to gravitationally stir the surrounding planetesimals which reduces gravitational focusing and runaway growth ceases \citep{Ida1993,Kokubo1996,Ormel2010a}. Eventually, a number of planetary embryos form which grow in an oligarchic fashion by accreting the planetesimals in their respective feeding zones until reaching their isolation mass \citep{Kokubo1998,Kokubo2000}. In the final assembly of planets, these bodies grow by collisions and when reaching a threshold mass of ${\sim}10\,M_\oplus$ they start to accrete gas from the surrounding nebula to form the terrestrial and giant planets.
Planetesimal accretion has long been thought to be the main pathway of planet formation. However, accretion is only efficient if planetesimals are small. For small planetesimals, of the order of a few kilometres in size at most, gas drag damps eccentricities and inclinations, boosting the accretion rate. Yet it is uncertain if planetesimals actually formed small or if they were large to begin with, typically around $100\,\mathrm{km}$ in diameter \citep{Morbidelli2009,Weidenschilling2011,Johansen2015}. Evidence for the latter case is seen not only in the size distribution of the asteroid belt \citep{Bottke2005} and the cold classical Kuiper belt objects \citep{Kavelaars2021}, which are most likely the unaltered remnants of the planetesimals that formed in the outer Solar System, but also in the absence of large craters on Pluto, which indicates a lack of bodies ${\lesssim}1$ to $2\,\mathrm{km}$ in diameter \citep{Singer2019}. Furthermore, numerical studies of planetesimal formation through the streaming instability point towards a large initial size \citep{Youdin2005,Johansen2007}. The fragmentation of dense pebble filaments into planetesimals results in an initial mass-function (IMF) of planetesimals that is well described by a power-law with exponential cut-off for bodies exceeding a characteristic mass. The characteristic mass translates to a characteristic size of ${\sim}100\,\mathrm{km}$ at a heliocentric distance of the asteroid belt and the largest bodies that form through this process are roughly the size of Ceres (${\sim}10^{-4}\,M_\oplus$) \citep{Simon2016,Schaefer2017,Abod2019,Li2019}.
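Such an IMF, a power law with an exponential cut-off beyond a characteristic mass, can be written as $\mathrm{d}N/\mathrm{d}m \propto m^{-p}\exp(-m/m_c)$. A sketch with an illustrative slope $p = 1.6$ (close to values reported by Simon et al. 2016) and an illustrative characteristic mass; both numbers are assumptions for demonstration, not fits from this paper:

```python
import math

P_SLOPE = 1.6        # illustrative power-law slope (assumption)
M_CHAR = 1e-7        # illustrative characteristic mass in Earth masses (~100 km body)

def imf(m):
    """Un-normalised streaming-instability IMF dN/dm (sketch)."""
    return m**(-P_SLOPE) * math.exp(-m / M_CHAR)

# The exponential cut-off strongly suppresses bodies well above m_c:
ratio = imf(10 * M_CHAR) / imf(M_CHAR)
print(f"dN/dm at 10 m_c relative to m_c: {ratio:.2e}")
```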
Because of the long timescales for planetesimal accretion, an efficient formation of terrestrial planets and the cores of giant planets within the lifetime of the protoplanetary disc of typically only a few $\mathrm{Myr}$ \citep{Haisch2001} is problematic. \citet{Johansen2019b} explored the conditions for forming the cores of the giant planets through planetesimal accretion. Their model focuses on the growth track of a single migrating protoplanet sweeping through a population of planetesimals. They found that their fiducial model with constraints from the Solar System of
\begin{enumerate*}[label=(\roman*)]
\item a primordial population of planetesimals of a few hundred Earth masses,
\item a characteristic planetesimal size of ${\sim}100\,\mathrm{km}$, and
\item a weakly turbulent protoplanetary disc
\end{enumerate*}
allows protoplanets to grow to only ${\sim}0.1\,M_\oplus$ within the disc lifetime of $3\,\mathrm{Myr}$. Allowing for a massive disc of planetesimals of ${\sim}1000\,M_\oplus$ produces close-in giant planets, but fails to form cold giant planets, such as Jupiter or Saturn in the Solar System, unless the planetesimal size and turbulence strength are reduced at the same time. Their conclusion is that unless ignoring all three constraints, planetesimal accretion is insufficient to grow the cores of giant planets.
In contrast, fast growth can be achieved by the accretion of pebbles that are ubiquitous in the protoplanetary disc. Processes like the streaming instability that explain planetesimal formation would convert between ${\sim}10\,\%$ and ${\sim}80\,\%$ of the pebble mass trapped in filaments to planetesimals \citep{Abod2019}. The remnant pebbles as well as newly forming pebbles in the outer disc, where growth timescales are longer, would then provide a mass reservoir for further growth through pebble accretion. Pebble accretion becomes efficient for sufficiently large embryos above the so-called transition mass \citep{Lambrechts2012}, at which the growth mode changes from slow Bondi accretion to fast Hill accretion. The transition mass evaluates to ${\sim}2{\times}10^{-3}\,M_\oplus$ at $1\,\mathrm{AU}$ and ${\sim}6{\times}10^{-3}\,M_\oplus$ at $10\,\mathrm{AU}$ \citep{Ormel2010,Lambrechts2012}. Such a body then accretes a large amount of pebbles within a short timescale and can form the terrestrial planets and the cores of the giant planets, consistent with the lifetime of protoplanetary discs \citep{Lambrechts2012,Lambrechts2014,Johansen2017,Johansen2021}. However, a body of ${\sim}10^{-3}$ to $10^{-2}\,M_\oplus$ needs to form in the first place, and planetesimal accretion could be the process that forms it.
In this paper, we investigate if and under which conditions planetesimal accretion would lead to the formation of pebble-accreting embryos. Most simplified models developed to describe the growth of planets start with a narrow annulus of uniformly distributed planetesimals and an embryo in the centre. The growth of the embryo is followed until all planetesimals from the feeding zone are accreted \citep[e.g.][]{Thommes2003,Chambers2006a,Fortier2013}. It is commonly assumed that there is only one planetesimal size, that the embryo mass follows from the transition from runaway to oligarchic growth, and that the feeding zone of the embryo is always populated with planetesimals. In our model, we want to deviate in some aspects from this approach. While we also model the growth of an embryo embedded in a population of planetesimals, we employ a different initial situation where we
\begin{enumerate*}[label=(\roman*)]
\item limit the available mass at a certain location by assuming that it is given by the mass budget of a streaming instability filament and
\item use the streaming instability IMF to derive the characteristic planetesimal size and the size of the embryo.
\end{enumerate*}
We furthermore deviate from the approach of \citet{Johansen2019b}, by
\begin{enumerate*}[label=(\roman*)]
\item ignoring migration,
\item including fragmentation,
\item having a self-consistent treatment of the growth rates and the eccentricity and inclination evolution of embryos, planetesimals, and fragments.
\end{enumerate*}
This way, we study here the growth from planetesimals to planetary embryos, while \citet{Johansen2019b} focused on the later growth stages where migration is important.
\citet{Liu2019} investigate the growth from planetesimals to embryos and beyond through planetesimal and pebble accretion at the water snowline by means of $N$-body simulations. In their work, they test different initial conditions for the planetesimal population of
\begin{enumerate*}[label=(\roman*)]
\item a mono-dispersed population of planetesimals,
\item a poly-dispersed population with IMF from streaming instability simulations, and
\item a two-component population emerging from runaway growth of planetesimals.
\end{enumerate*}
They find that a mono-dispersed population of planetesimals of size $400\,\mathrm{km}$ fails to form planets because growth timescales are too long due to the rapid excitation of eccentricities and inclinations of the planetesimals. In the other two cases, however, the largest body that forms, either because of the IMF or as a result of runaway growth of $100\,\mathrm{km}$-sized planetesimals, grows to several Earth masses firstly by planetesimal accretion and later through pebble accretion when the embryo reaches a mass of ${\sim}10^{-3}$ to $10^{-2}\,M_\oplus$. Our work is complementary to their study because we explore the planetesimal accretion phase at various locations, whereas \citet{Liu2019} focused on a single site, the snowline at $2.7\,\mathrm{AU}$.
The paper outline is as follows. In Sect.~\ref{sec:methods}, we give an outline of our model and our assumptions. In Sect.~\ref{sec:results}, we present the results of planetesimal accretion around a solar-like star, which is our fiducial model. In Sect.~\ref{sec:discussion}, we explore and discuss parameter variations of the fiducial model. Finally, in Sect.~\ref{sec:conclusions}, we summarise and conclude the study.
\section{Methods}
\label{sec:methods}
\subsection{Basic outline}
We use a semi-analytic model to follow the growth of a planetary embryo at a fixed distance $r$ from the central star \citep[e.g.][]{Chambers2006a}. Three different types of bodies are considered. These are
\begin{enumerate*}[label=(\roman*)]
\item an embryo,
\item a population of planetesimals, and
\item a population of fragments.
\end{enumerate*}
The embryo is treated as a single body with mass $M_\mathrm{em}$, radius $R_\mathrm{em}$, eccentricity $e_\mathrm{em}$ and inclination $i_\mathrm{em}$, and surface density $\Sigma_\mathrm{em}$. For the surface density of the embryo, we simply assume that the mass of the embryo is distributed uniformly in an annulus of area $A_\mathrm{em}$ centred at $r$ with a width of $b r_\mathrm{h}$,
\begin{equation}
\Sigma_\mathrm{em}=\frac{M_\mathrm{em}}{2\pi r b r_\mathrm{h}}=\frac{(3M_\odot)^{1/3}}{2\pi r^2 b}M_\mathrm{em}^{2/3},
\label{eq:embryosurfacedensity}
\end{equation}
where $r_\mathrm{h}{=}r\left(M_\mathrm{em}/(3M_\odot)\right)^{1/3}$ is the Hill radius \citep{Chambers2006a}. The value $b{=}10$ corresponds to the typical spacing of isolated embryos which has been shown in $N$-body simulations to be ${\sim}10$ Hill radii \citep{Kokubo1998}.
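The two forms of Eq.~\ref{eq:embryosurfacedensity} are equivalent; a quick numerical check (SI units, illustrative embryo mass and distance), which also verifies the factor $2/3$ that appears when the surface density is differentiated with respect to time later in the text:

```python
import math

M_SUN = 1.989e30           # kg
AU = 1.496e11              # m
B = 10.0                   # embryo spacing in Hill radii

def sigma_em(m_em, r):
    """Embryo surface density: mass spread over an annulus of width b*r_h."""
    r_hill = r * (m_em / (3.0 * M_SUN))**(1.0 / 3.0)
    return m_em / (2.0 * math.pi * r * B * r_hill)

def sigma_em_compact(m_em, r):
    """Equivalent closed form: (3 M_sun)^(1/3) m^(2/3) / (2 pi r^2 b)."""
    return (3.0 * M_SUN)**(1.0 / 3.0) * m_em**(2.0 / 3.0) / (2.0 * math.pi * r**2 * B)

m, r = 6e21, 2.5 * AU      # ~Ceres-mass embryo at 2.5 AU (illustrative)
assert math.isclose(sigma_em(m, r), sigma_em_compact(m, r), rel_tol=1e-12)

# d(Sigma_em)/dM_em = (2/3) Sigma_em / M_em, checked by central finite difference:
eps = 1e17
dsig_dm = (sigma_em(m + eps, r) - sigma_em(m - eps, r)) / (2.0 * eps)
assert math.isclose(dsig_dm, (2.0 / 3.0) * sigma_em(m, r) / m, rel_tol=1e-6)
```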
A single planetesimal has mass $m_\mathrm{p}$, radius $R_\mathrm{p}$, and the population has a surface density $\Sigma_\mathrm{p}$. We take the root-mean-square eccentricity and inclination to describe the orbits of the planetesimals. The fragments are treated in the same way as the planetesimals with a mass $m_\mathrm{fr}$ and radius $R_\mathrm{fr}$ for a single fragment, and surface density $\Sigma_\mathrm{fr}$, root-mean-square eccentricity and inclination for the population.
\subsection{Mass and surface density evolution, fragmentation}
The embryo grows by accreting planetesimals and fragments. The accretion rate of the embryo can be written as
\begin{equation}
\dot{M}_\mathrm{em}^{(k)}=h_{\mathrm{em},k}^2r^2\Omega_\mathrm{K}\Sigma_{k}P_\mathrm{col},
\end{equation}
where $k$ stands for either planetesimals or fragments. Furthermore, we have the reduced mutual Hill radius $h_{\mathrm{em},k}{=}\left(\left(M_\mathrm{em}+m_k\right)/\left(3M_\odot\right)\right)^{1/3}$, the Keplerian frequency $\Omega_\mathrm{K}$, and a dimensionless collision rate $P_\mathrm{col}$ that is a function of the sizes of the colliding bodies, and their mutual eccentricities and inclinations \citep{Inaba2001,Chambers2006a}.
We are now able to formulate the evolution equations for the surface densities. For the embryo surface density, we differentiate Eq.~\ref{eq:embryosurfacedensity} with respect to time and get
\begin{equation}
\dot{\Sigma}_\mathrm{em}^{(k)}=\frac{(3M_\odot)^{1/3}}{3\pi b r^2 M_\mathrm{em}^{1/3}}\dot{M}_\mathrm{em}^{(k)}=\frac{2}{3}\frac{\Sigma_\mathrm{em}}{M_\mathrm{em}}\dot{M}_\mathrm{em}^{(k)},
\label{eq:surfacedensityem}
\end{equation}
for the change of surface density due to the accretion of bodies from population $k$.
To derive the evolution of the planetesimal and fragment surface densities, we need to determine the mass of fragments produced in a collision between planetesimals. To do so, we first calculate the number of collisions per unit time between planetesimals
\begin{equation}
\dot{N}_\mathrm{p}=h_{\mathrm{p},\mathrm{p}}^2r^2\Omega_\mathrm{K}N_{\mathrm{s},\mathrm{p}}^2A_\mathrm{p}P_\mathrm{col},
\label{eq:collisionratep}
\end{equation}
where $N_{\mathrm{s},\mathrm{p}}{=}\Sigma_\mathrm{p}/m_\mathrm{p}$ is the surface number density of planetesimals and $P_\mathrm{col}$ is the collision rate between planetesimals \citep{Inaba2001}. Initially, there are no fragments. When planetesimals are excited to high enough eccentricity, such that $ev_\mathrm{K}{\approx}v_\mathrm{esc}$, collisions become disruptive and fragments are produced. Typically this results in a collisional cascade with a size distribution of fragments; here, however, we use a typical fragment size of $0.5\,\mathrm{km}$ to represent the fragment population. The fragmentation model of \citet{Kobayashi2010} allows us to determine the total mass of fragments $\Delta M_\mathrm{fr}$ that is produced in a disruptive collision between two planetesimals. The value of $\Delta M_\mathrm{fr}$ depends on the ratio of the impact energy to the material-dependent critical disruption energy of the planetesimals. By multiplying the collision rate of planetesimals with $\Delta M_\mathrm{fr}$ we get the mass production rate of fragments
\begin{equation}
\dot{M}_\mathrm{fr}^{+}=\dot{N}_\mathrm{p}\Delta M_\mathrm{fr}.
\label{eq:productionratefr}
\end{equation}
Our approach is to keep the total mass of solids constant, that is we have the condition
\begin{equation}
M_\mathrm{em}+M_\mathrm{p}+M_\mathrm{fr}=\mathrm{const.},
\label{eq:totalmass}
\end{equation}
where the total mass of planetesimals $M_\mathrm{p}$ is given by $\Sigma_\mathrm{p}A_\mathrm{p}$, with $A_\mathrm{p}{=}2\pi r\Delta r$ the area of the annulus of width $\Delta r$ that the planetesimals occupy; and likewise for the fragments. The initial width of the annulus is $\eta r$ (see below) for both planetesimals and fragments, but the annuli widen diffusively with time because of the excitation of eccentricities and inclinations due to viscous stirring \citep{Ohtsuki2003,Tanaka2003}.
With the assumption of constant total mass, we can now formulate the evolution equations for the surface densities given the mass accretion rate of the embryo $\dot{M}_\mathrm{em}^{(k)}$ and the production rate of fragments by taking the time derivative of Eq.~\ref{eq:totalmass}
which gives
\begin{equation}
\dot{M}_\mathrm{em}+\dot{M}_\mathrm{p}+\dot{M}_\mathrm{fr}=0.
\end{equation}
Substituting the total mass changes with surface densities and areas, we get that the surface density of planetesimals reduces due to accretion of planetesimals by the embryo and due to fragmentation
\begin{equation}
\dot{\Sigma}_\mathrm{p}=-\frac{3A_\mathrm{em}}{2A_\mathrm{p}}\dot{\Sigma}_\mathrm{em}^{(\mathrm{p})}-\frac{\dot{M}_\mathrm{fr}^{+}}{A_\mathrm{p}}.
\label{eq:surfacedensityp}
\end{equation}
The area ratio appears because of the mass conservation condition. Likewise, the surface density of fragments evolves according to
\begin{equation}
\dot{\Sigma}_\mathrm{fr}=-\frac{3A_\mathrm{em}}{2A_\mathrm{fr}}\dot{\Sigma}_\mathrm{em}^{(\mathrm{fr})}+\frac{\dot{M}_\mathrm{fr}^{+}}{A_\mathrm{fr}}.
\label{eq:surfacedensityfr}
\end{equation}
The set of Eqs.~\ref{eq:surfacedensityem}, \ref{eq:collisionratep}, \ref{eq:productionratefr}, \ref{eq:surfacedensityp}, and \ref{eq:surfacedensityfr} fully describes the mass growth of the embryo and the conversion of planetesimals into fragments.
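These coupled equations exchange mass among the three reservoirs while conserving the total (Eq.~\ref{eq:totalmass}). A minimal forward-Euler sketch of this bookkeeping, with arbitrary placeholder rate coefficients (the actual model uses the \citet{Inaba2001} collision rates and the \citet{Kobayashi2010} fragment yields):

```python
# Toy integration of the embryo/planetesimal/fragment mass exchange.
# Rate coefficients are placeholders chosen only to illustrate the
# bookkeeping; units are Earth masses and years (illustrative).
M_em, M_p, M_fr = 1e-4, 1e-2, 0.0
M_tot0 = M_em + M_p + M_fr

dt, n_steps = 10.0, 10_000
for _ in range(n_steps):
    acc_p = 1e-3 * M_em**(2.0 / 3.0) * M_p    # embryo <- planetesimals
    acc_fr = 3e-3 * M_em**(2.0 / 3.0) * M_fr  # embryo <- fragments (damped, accreted faster)
    frag = 5e-4 * M_p**2                      # planetesimal collisions -> fragments
    M_em += (acc_p + acc_fr) * dt
    M_p += -(acc_p + frag) * dt
    M_fr += (frag - acc_fr) * dt

assert abs(M_em + M_p + M_fr - M_tot0) < 1e-9 * M_tot0  # total solid mass conserved
assert M_em > 1e-4 and M_p < 1e-2                       # embryo grew at the planetesimals' expense
print(f"final masses: em={M_em:.3e}, p={M_p:.3e}, fr={M_fr:.3e} (Earth masses)")
```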
We emphasise that the embryo does not grow to the isolation mass in the classical sense by accreting all the material in the expanding feeding zone. Instead, growth is limited by the available mass contained in the annulus of width $\Delta r$. We consider this approach suited for studying the growth of an embryo in an isolated filament formed through streaming instability where a fixed amount of mass is converted to planetesimals in a confined narrow ring. Furthermore, while the embryo grows by the accretion of planetesimals and fragments, the mass distribution of planetesimals and fragments does not evolve over time owing to our choice of representing those populations by bodies of a characteristic mass.
\subsection{Velocity evolution}
The velocity distributions, that is the eccentricities and inclinations, of the bodies evolve through viscous stirring and dynamical friction. To take this into account, we use the rate equations from \citet{Ohtsuki2002} for the root-mean-square eccentricities and inclinations. We include viscous stirring and dynamical friction between all populations, with the exception that the single embryo does not interact with itself. Gas drag damps the orbits of the bodies, and we include this damping for the embryo, the planetesimals, and the fragments \citep{Adachi1976,Inaba2001}. We do not include turbulent stirring of planetesimals by the disc gas.
\subsection{Protoplanetary disc}
We use the self-similar solution for a viscously evolving $\alpha$-disc \citep{Shakura1973,LyndenBell1974}. The disc is heated by stellar irradiation with a temperature profile of
\begin{equation}
T=150\,\mathrm{K}\left(\frac{M_\star}{M_\odot}\right)^{-1/7}\left(\frac{L_\star}{L_\odot}\right)^{2/7}\left(\frac{r}{\mathrm{AU}}\right)^{-3/7}
\end{equation}
\citep{Ida2016}. The viscously heated part of the disc would initially extend to ${\sim}5\,\mathrm{AU}$ for our fiducial parameter choices \citep{Ida2016}. However, we neglect viscous heating here
\begin{enumerate*}[label=(\roman*)]
\item because recent work indicates that irradiation rather than viscous heating might be the relevant heat source for protoplanetary discs \citep{Mori2019,Mori2021}, and
\item because we verified by running a model with a viscous temperature profile where it applies that the choice of temperature profile has negligible impact on the planetesimal accretion process studied here. The filament mass (see Sect.~\ref{sec:filamentmass}) and the initial radii of planetesimals and embryos (see Sect.~\ref{sec:initialmassesofplanetesimalsandembryo}) vary only by a factor of order unity, which does not affect the results.
\end{enumerate*}
The viscosity in the $\alpha$-disc model is $\nu{=}\alpha c_\mathrm{s}^2\Omega_\mathrm{K}^{-1}$, where we use $\alpha{=}10^{-2}$. The value of $\alpha$ determines the viscous evolution timescale of the disc and the accretion of the gas onto the star. The value is consistent with what is determined from observations of protoplanetary discs \citep{Hartmann1998}. For the temperature profile used here, the viscosity is a power-law in radial distance, $\nu{\propto}r^{\gamma}$, with an exponent of $\gamma{=}15/14$.
The surface density profile of the self-similar solution for a power-law viscosity is
\begin{equation}
\Sigma_\mathrm{gas}=\frac{\dot{M}_{\star,0}}{3\pi\nu_1}\left(\frac{r}{r_1}\right)^{-\gamma}\tau^{-(5/2-\gamma)/(2-\gamma)}\exp{\left[-\frac{1}{\tau}\left(\frac{r}{r_1}\right)^{2-\gamma}\right]},
\end{equation}
where $t_\mathrm{vis}=r_1^2/\left(3(2-\gamma)^2\nu_1\right)$ is the characteristic time for the viscous evolution, $\tau{=}\left(t/t_\mathrm{vis}\right)+1$, $r_1$ is the characteristic radius of the disc, and $\nu_1{=}\nu(r_1)$ is the viscosity at distance $r_1$. For the initial mass accretion rate, we use $\dot{M}_{\star,0}{=}10^{-7}\,M_\odot\,\mathrm{yr}^{-1}$ at $0.5$ Myr, which corresponds to a typical class-II object \citep{Hartmann1998,Hartmann2016}. The total disc mass is set to be $10\,\%$ of the stellar mass and by integrating the surface density at $t{=}0$ from the inner edge of the disc, which we set to $0.1\,\mathrm{AU}$, to infinity, we can determine the characteristic disc radius, which is ${\sim}72$ AU in our fiducial case.
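As a numerical illustration of the disc model (a sketch, not the simulation code), the temperature and surface density profiles can be evaluated directly. The mean molecular weight $\mu{=}2.34$ and the cgs evaluation are assumptions on our part, and the exponent of $\tau$ is written with the decaying sign of the standard self-similar solution:

```python
import numpy as np

# Physical constants (cgs)
G = 6.674e-8; k_B = 1.381e-16; m_H = 1.673e-24
M_sun = 1.989e33; AU = 1.496e13; yr = 3.156e7

# Fiducial parameters from the text; mu = 2.34 is an assumed mean molecular weight
M_star = 1.0 * M_sun
alpha, gamma, mu = 1e-2, 15.0 / 14.0, 2.34
r1 = 72.0 * AU                     # characteristic disc radius (fiducial value)
Mdot0 = 1e-7 * M_sun / yr          # initial accretion rate at 0.5 Myr

def temperature(r):
    """Irradiation temperature profile for M_star = M_sun, L_star = L_sun."""
    return 150.0 * (r / AU) ** (-3.0 / 7.0)

def viscosity(r):
    """Alpha viscosity nu = alpha * c_s^2 / Omega_K."""
    cs2 = k_B * temperature(r) / (mu * m_H)
    omega = np.sqrt(G * M_star / r ** 3)
    return alpha * cs2 / omega

def sigma_gas(r, t):
    """Self-similar surface density for nu ~ r^gamma; note the decaying
    power of tau (standard Lynden-Bell & Pringle solution)."""
    nu1 = viscosity(r1)
    t_vis = r1 ** 2 / (3.0 * (2.0 - gamma) ** 2 * nu1)
    tau = t / t_vis + 1.0
    return (Mdot0 / (3.0 * np.pi * nu1) * (r / r1) ** (-gamma)
            * tau ** (-(5.0 / 2.0 - gamma) / (2.0 - gamma))
            * np.exp(-(r / r1) ** (2.0 - gamma) / tau))
```

With these fiducial numbers, integrating $2\pi r\,\Sigma_\mathrm{gas}$ at $t{=}0$ recovers a disc mass of ${\sim}0.1\,M_\odot$, consistent with the characteristic radius of ${\sim}72$ AU quoted above.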
\subsection{Formation time of planetesimals and embryo}
Our model starts at time $t_0$, when planetesimals are expected to have formed. To estimate $t_0$, we assume that the planetesimal accretion phase takes place in the class-II phase of the disc, ${\sim}0.5\,\mathrm{Myr}$ after star formation \citep{Evans2009,Williams2011}.
To form planetesimals, dust first grows by coagulation to pebbles. The growth timescale $t_\mathrm{grow}{=}R/\dot{R}$ of the dust is
\begin{equation}
t_\mathrm{grow}=\frac{2}{\sqrt{\pi}\epsilon_\mathrm{g} Z \Omega_\mathrm{K}},
\label{eq:growthtimescale}
\end{equation}
where $\epsilon_\mathrm{g}{\approx}0.5$ is a coagulation efficiency and $Z$ is the solid-to-gas ratio of the dust in the disc \citep{Birnstiel2012,Lambrechts2014}. The time for grains of radius $R_0$ to grow to pebbles of radius $R_\mathrm{peb}$ is found to be
\begin{equation}
\Delta t\approx t_\mathrm{grow}\ln\left(\frac{R_\mathrm{peb}}{R_0}\right)
\label{eq:pebblegrowthtime}
\end{equation}
\citep{Lambrechts2014}. Dust growth is limited by radial drift and fragmentation \citep{Birnstiel2012}, and we use the minimum of both for the pebbles. The fragmentation-limited Stokes number is
\begin{equation}
\tau_{\mathrm{s},\mathrm{frag}}=\frac{v_\mathrm{frag}^2}{2\alpha_\mathrm{t}c_\mathrm{s}^2},
\end{equation}
where we set the collision velocity of similar-sized pebbles driven by turbulence \citep{Ormel2007} equal to the fragmentation threshold velocity $v_\mathrm{frag}$. The value of $v_\mathrm{frag}$ ranges from ${\sim}1\,\mathrm{m}\,\mathrm{s}^{-1}$ for silicate pebbles \citep{Blum2008} to ${\sim}10\,\mathrm{m}\,\mathrm{s}^{-1}$ for icy pebbles \citep{Gundlach2015}. However, more recent studies have shown that ice might not be as sticky as previously thought \citep[e.g.][]{Musiolik2019,Kimura2020} and that the tensile strength of ice aggregates is comparable to the tensile strength of silicates \citep{Gundlach2018}. We therefore use a common fragmentation threshold velocity of $v_\mathrm{frag}{=}1\,\mathrm{m}\,\mathrm{s}^{-1}$ for fragmentation-limited growth in our study. The turbulent collision velocity depends on the midplane turbulence $\alpha_\mathrm{t}$, which is different from the gas $\alpha$ that drives the viscous evolution of the protoplanetary disc. The value of $\alpha_\mathrm{t}$ obtained from observations of the dust component of protoplanetary discs ranges from ${\sim}10^{-5}$ to a few times $10^{-4}$ \citep{Pinte2016,Villenave2022}. We decided to use a value of $\alpha_\mathrm{t}{=}10^{-4}$ in our study.
In the drift-limited case, the Stokes number is
\begin{equation}
\tau_{\mathrm{s},\mathrm{drift}}=\frac{3\sqrt{\pi}}{4}\frac{\epsilon_\mathrm{g}Z}{\eta},
\end{equation}
which is obtained from setting the growth timescale of the pebbles (Eq.~\ref{eq:growthtimescale}) equal to the drift timescale $t_\mathrm{drift}{=}r/v_r$ with $v_r{=}2\eta v_\mathrm{K}\tau_\mathrm{s}$ being the radial velocity of the pebbles.
The pebbles are subsequently concentrated by the streaming instability and form planetesimals through the gravitational collapse of dense particle filaments. Depending on the size of the pebbles that form, the streaming instability takes about ten (for big pebbles, $\tau_\mathrm{s}{\approx}0.3$) to a few thousand (for small pebbles, $\tau_\mathrm{s}{\approx}10^{-3}{-}10^{-2}$) local orbital periods to create dense filaments that subsequently fragment gravitationally into planetesimals \citep{Yang2014,Yang2017,Li2018,Li2019}. We calculate the pebble growth time at distance $r$ according to Eq.~\ref{eq:pebblegrowthtime} and the streaming instability timescale as $t_\mathrm{SI}{\sim}500\,2\pi\Omega_\mathrm{K}^{-1}$ and add both to the $0.5\,\mathrm{Myr}$ to obtain the initial time $t_0$ for our simulations.
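The estimate of $t_0$ can be condensed into a short routine. The sketch below (an illustration, not the production code) takes the local sound speed, $\eta$, and gas surface density as given; the disc solid-to-gas ratio $Z{=}0.01$ and the Epstein conversion from Stokes number to pebble radius, $R_\mathrm{peb}{=}2\tau_\mathrm{s}\Sigma_\mathrm{gas}/(\pi\rho_\bullet)$, are assumptions not spelled out above:

```python
import numpy as np

G = 6.674e-8; M_sun = 1.989e33; AU = 1.496e13; yr = 3.156e7

# eps_g, v_frag, alpha_t, R0 from the text; Z = 0.01 is the assumed canonical
# disc solid-to-gas ratio; rho_solid matches Table 1
eps_g, Z, R0 = 0.5, 0.01, 1e-5        # coagulation efficiency, dust-to-gas ratio, 0.1 um in cm
v_frag, alpha_t = 1e2, 1e-4           # 1 m/s in cm/s, midplane turbulence
rho_solid = 2.0                        # g cm^-3

def t0_estimate(r, cs, eta, sigma_g, M_star=M_sun):
    """t0 = 0.5 Myr + pebble growth time + streaming-instability timescale.
    cs, eta, sigma_g are local disc quantities (assumed to be supplied)."""
    omega = np.sqrt(G * M_star / r ** 3)
    # Stokes number: minimum of the fragmentation- and drift-limited values
    tau_frag = v_frag ** 2 / (2.0 * alpha_t * cs ** 2)
    tau_drift = 3.0 * np.sqrt(np.pi) / 4.0 * eps_g * Z / eta
    tau_s = min(tau_frag, tau_drift)
    # Pebble radius from the Epstein drag law (an assumption; the text does not
    # spell out the Stokes-number-to-size conversion)
    R_peb = 2.0 * tau_s * sigma_g / (np.pi * rho_solid)
    t_grow = 2.0 / (np.sqrt(np.pi) * eps_g * Z * omega)
    dt_peb = t_grow * np.log(R_peb / R0)
    t_SI = 500.0 * 2.0 * np.pi / omega
    return 0.5e6 * yr + dt_peb + t_SI
```

With plausible inner-disc values at $1\,\mathrm{AU}$, the growth and streaming-instability delays amount to only ${\sim}10^{3}$ yr, while in the outer disc they approach the ${\sim}1\,\mathrm{Myr}$ quoted in Sect.~\ref{sec:results}.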
\subsection{Filament mass}
\label{sec:filamentmass}
The initial conditions of the planetesimal population are derived in the framework of planetesimal formation through the streaming instability. We assume that the streaming instability forms a dense filament of pebbles which fragments into planetesimals. The typical radial width of a filament is ${\sim}\eta r$, where $\eta$ is related to the pressure gradient of the disc gas and $r$ is the distance from the star \citep{Yang2014,Liu2019,Gerbig2020}. It can be thought of as the scale over which the Keplerian flow adjusts to the gas flow. We set the solid-to-gas ratio in the filament to $Z_\mathrm{fil}{=}0.1$ and the mass contained in one filament is therefore $M_\mathrm{fil}{=}2\pi r \Sigma_\mathrm{gas} Z_\mathrm{fil} \eta r$ \citep{Liu2019}. Because not all pebbles are converted into planetesimals, we introduce a planetesimal formation efficiency $p_\mathrm{eff}$. The total mass of planetesimals is then $p_\mathrm{eff}\,M_\mathrm{fil}$ \citep{Liu2019}. For an optimistic upper limit on embryo growth, we assume $p_\mathrm{eff}{=}1$ throughout the study. Figure~\ref{fig:filament} shows the mass of the filaments as a function of distance for different stellar masses.
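For a rough sense of scale, the filament mass can be evaluated with assumed local disc values (the numbers below are illustrative, not taken from the figures):

```python
import numpy as np

AU = 1.496e13            # cm
M_earth = 5.972e27       # g

def filament_mass(r_AU, sigma_gas, eta, Z_fil=0.1):
    """Pebble mass of one streaming-instability filament of radial width eta*r:
    M_fil = 2*pi*r * Sigma_gas * Z_fil * eta*r."""
    r = r_AU * AU
    return 2.0 * np.pi * r ** 2 * sigma_gas * Z_fil * eta

# Example with assumed local disc values at 1 AU
M_fil = filament_mass(1.0, sigma_gas=1000.0, eta=2e-3) / M_earth
```

At $1\,\mathrm{AU}$ with $\Sigma_\mathrm{gas}{=}1000\,\mathrm{g\,cm^{-2}}$ and $\eta{=}2{\times}10^{-3}$, this gives $M_\mathrm{fil}{\sim}0.05\,M_\oplus$.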
\subsection{Initial masses of planetesimals and embryo}
\label{sec:initialmassesofplanetesimalsandembryo}
The initial sizes of planetesimals and embryos follow from the initial mass-function found in streaming instability simulations. The IMF is a power-law with an exponential cut-off above a characteristic mass,
\begin{equation}
\frac{N_>(m)}{N_\mathrm{tot}}=\left(\frac{m}{m_\mathrm{min}}\right)^{-p}\exp\left[\left(\frac{m_\mathrm{min}}{m_\mathrm{p}}\right)^{q}-\left(\frac{m}{m_\mathrm{p}}\right)^{q}\right]
\label{eq:streaminginstabilityIMF}
\end{equation}
\citep{Schaefer2017}. The slope of the power-law is $p{\approx}0.6$ and the steepness of the cut-off is $q{\approx}0.4$ \citep{Simon2016,Schaefer2017}. The IMF is top-heavy, which means that most of the mass is in the large bodies of characteristic mass. We use the characteristic mass above which the IMF drops exponentially as a proxy for the planetesimal mass, which is
\begin{equation}
m_\mathrm{p}\approx5\times10^{-5}\,M_\oplus\,\left(\frac{Z_\mathrm{fil}}{0.02}\right)^{1/2}\left(\frac{\gamma}{\pi^{-1}}\right)^{3/2}\left(\frac{h}{0.05}\right)^3\left(\frac{M_\star}{M_\odot}\right),
\label{eq:characteristicmass}
\end{equation}
where $\gamma{=}4\pi G\rho_\mathrm{g}\Omega_\mathrm{K}^{-2}$ and $h$ is the aspect ratio of the disc \citep{Liu2020}. To determine the mass of the embryo, we calculate the mass of the single most massive body that forms from the IMF. To do so, we set $N_>(m){=}1$ in Eq.~\ref{eq:streaminginstabilityIMF} and solve for $m$. The value of $N_\mathrm{tot}$ is found by noting that the total number of bodies will be determined by the smallest bodies. We set the minimum mass to $m_\mathrm{min}=10^{-3}m_\mathrm{p}$ and calculate $N_\mathrm{tot}{=}M_\mathrm{fil}/m_\mathrm{min}$. Figure~\ref{fig:initialsizes} shows the initial mass and size of the embryo and the planetesimals as a function of distance for different stellar masses.
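Solving $N_>(m){=}1$ for the embryo mass has no closed form, but a bisection in $\log m$ suffices because $N_>$ decreases monotonically with $m$. The sketch below (with illustrative values of $M_\mathrm{fil}$ and $m_\mathrm{p}$) reproduces the embryo-to-planetesimal mass ratio of ${\sim}10^{2}$ to $10^{3}$ quoted in Sect.~\ref{sec:results}:

```python
import numpy as np

def n_larger(m, m_min, m_p, N_tot, p=0.6, q=0.4):
    """Cumulative number of bodies with mass > m from the SI initial mass function."""
    return N_tot * (m / m_min) ** (-p) * np.exp((m_min / m_p) ** q - (m / m_p) ** q)

def most_massive_body(M_fil, m_p, p=0.6, q=0.4):
    """Embryo mass: solve N_>(m) = 1 by bisection in log m.
    N_> is strictly decreasing, so the bracketing is safe."""
    m_min = 1e-3 * m_p
    N_tot = M_fil / m_min
    lo, hi = np.log(m_min), np.log(1e6 * m_p)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if n_larger(np.exp(mid), m_min, m_p, N_tot, p, q) > 1.0:
            lo = mid
        else:
            hi = mid
    return np.exp(0.5 * (lo + hi))
```

For example, $M_\mathrm{fil}{=}0.05\,M_\oplus$ and $m_\mathrm{p}{=}5{\times}10^{-5}\,M_\oplus$ yield an embryo of roughly $10^{2}\,m_\mathrm{p}$.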
\subsection{Diffusion of planetesimals and fragments}
We include the diffusive widening of the planetesimal and fragment rings due to viscous stirring \citep{Ohtsuki2003,Tanaka2003}. This reduces the surface densities with time which impacts both the accretion and the stirring rates. We assume that the initial width of $\eta r$ increases with time as $\sqrt{2 D t}$, where $D$ is the diffusion coefficient and $t$ is the time, as it is characteristic for a random walk \citep{Liu2019}. The diffusion coefficient is related to the viscous stirring rates of eccentricity and inclination \citep{Ohtsuki2003,Tanaka2003}.
\section{Results}
\label{sec:results}
\begin{table}
\caption{Fiducial simulation parameters}
\label{tab:simparameters}
\begin{tabular}{lccl}
\hline
parameter & symbol & value & unit \\
\hline
stellar mass & $M_\star$ & $1$ & $M_\odot$ \\
stellar luminosity & $L_\star$ & $1$ & $L_\odot$ \\
disc mass & $M_\mathrm{disc}$ & $0.1$ & $M_\star$ \\
fragment radius & $R_\mathrm{fr}$ & $0.5$ & $\mathrm{km}$ \\
solid bulk density & $\rho_\bullet$ & 2 & $\mathrm{g}\,\mathrm{cm}^{-3}$ \\
mass accretion rate ($0.5\,\mathrm{Myr}$) & $\dot{M}_{\star,0}$ & $10^{-7}$ & $M_\odot\,\mathrm{yr}^{-1}$ \\
viscous parameter & $\alpha$ & $10^{-2}$ & \\
midplane turbulence & $\alpha_\mathrm{t}$ & $10^{-4}$ & \\
filament solid-to-gas ratio & $Z_\mathrm{fil}$ & $0.1$ & \\
grain size & $R_0$ & $0.1$ & $\mu\mathrm{m}$ \\
\hline
\end{tabular}
\end{table}
In the fiducial model, we use a solar-mass central star with a viscously evolving disc heated by stellar irradiation. We place filaments at various distances ranging from $0.1$ to $100\,\mathrm{AU}$ and simulate the growth of the embryo for $10\,\mathrm{Myr}$. Table~\ref{tab:simparameters} summarises the model parameters. We later vary the stellar mass, the solid-to-gas ratio of the filament, and other parameters to explore how they affect the growth of the embryo.
\subsection{Growth of the embryo}
Figure~\ref{fig:massmap} shows how the embryo mass evolves with time as a function of distance. The time snapshots are relative to the initial time $t_0$ of the simulation, which ranges from ${\sim}0.5$ Myr in the inner disc to ${\sim}1.5$ Myr in the outer disc. The planetesimal size increases with distance and the initial mass of the embryo is typically a factor $10^{2}$ to $10^{3}$ higher than the planetesimal mass, as shown in Fig.~\ref{fig:initialsizes}. Inside ${\sim}2\,\mathrm{AU}$, the growth timescale is short and embryos reach their final mass by accreting all the available mass in the filament within $10\,\mathrm{Myr}$. Farther out, growth slows down and all but ceases for ${\gtrsim}20\,\mathrm{AU}$.
The reasons for the rapid growth inside ${\sim}2\,\mathrm{AU}$ are the high surface density of planetesimals, which results in a short growth timescale, and the excitation of the eccentricities of the planetesimals, which results in fragmentation and the boost of growth through accretion of fragments. This is visible in Fig.~\ref{fig:timemap}, which shows the time evolution of the surface densities of the planetesimals and the fragments and the eccentricity evolution of the planetesimals in the middle and bottom panels for various distances. The eccentricity at which the collision speed exceeds the escape speed of the planetesimal is approximately given by
\begin{equation}
e_\mathrm{esc}\approx 5\times10^{-3}\left(\frac{\rho_\bullet}{2\,\mathrm{g}\,\mathrm{cm}^{-3}}\right)^{1/2}\left(\frac{R_\mathrm{p}}{100\,\mathrm{km}}\right)\left(\frac{r}{\mathrm{AU}}\right)^{1/2},
\end{equation}
where we set the random speed of the planetesimals ${\sim}ev_\mathrm{K}$ equal to their escape speed. The bottom panel of Fig.~\ref{fig:timemap} shows that within ${\sim}2\,\mathrm{AU}$, the embryo excites the planetesimals above this threshold on short timescales (${\lesssim}10^{4}\,\mathrm{yr}$). As a consequence, the embryo efficiently accretes the small fragments (which we assume here to have a constant radius of $0.5\,\mathrm{km}$). However, in the outer disc, this effect is negligible because planetesimal eccentricities are not excited enough to result in fragmentation. For example, at $5\,\mathrm{AU}$ and for $200\,\mathrm{km}$ planetesimals (see Fig.~\ref{fig:initialsizes}), this requires eccentricities ${\gtrsim}2{\times}10^{-2}$, which are reached only after ${\sim}1\,\mathrm{Myr}$ (after $t_0$) or later. Therefore, fragmentation, if it occurs at all, sets in late, and embryo growth is not boosted by fragment accretion as is the case in the inner disc. At even larger distances, fragmentation plays no role because the stirring of planetesimals by the embryo and by self-stirring is not sufficient to reach $e_\mathrm{esc}$. Therefore, in the outer disc, the long accretion timescale limits the growth.
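The quoted coefficient can be checked directly by equating $e\,v_\mathrm{K}$ with the escape speed $\sqrt{2Gm_\mathrm{p}/R_\mathrm{p}}$. The short sketch below (an illustration, not the model code) reproduces it to within a factor of order unity and recovers the stated scalings in $R_\mathrm{p}$ and $r$ exactly:

```python
import numpy as np

G = 6.674e-8; M_sun = 1.989e33; AU = 1.496e13

def e_esc(R_p_km, r_AU, rho=2.0, M_star=M_sun):
    """Eccentricity at which the random speed e*v_K equals the escape speed
    of a planetesimal of radius R_p and bulk density rho."""
    R_p = R_p_km * 1e5                         # km -> cm
    m_p = 4.0 / 3.0 * np.pi * rho * R_p ** 3   # planetesimal mass
    v_esc = np.sqrt(2.0 * G * m_p / R_p)
    v_K = np.sqrt(G * M_star / (r_AU * AU))
    return v_esc / v_K
```

Since $v_\mathrm{esc}\propto R_\mathrm{p}$ and $v_\mathrm{K}\propto r^{-1/2}$, doubling the planetesimal radius or quadrupling the distance each doubles $e_\mathrm{esc}$.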
\subsection{Eccentricity evolution}
Figure~\ref{fig:eccentricitymap} shows the eccentricities of embryos, planetesimals, and fragments as a function of distance for different times. Initially, embryos and planetesimals have the same eccentricity and inclination such that $e{=}\eta/2$ and $i/e{=}1/2$, because we assumed that just after formation there has not been enough time for dynamical friction to result in a mass-dependent eccentricity. The initial eccentricity of the fragments is set such that their random speed is $10\,\%$ of the escape speed of a planetesimal.
\subsubsection{Planetesimals}
In the inner disc, the eccentricity of the planetesimals is determined by the equilibrium of viscous stirring by the embryo and gas drag, because the gas density is sufficiently high. The damping timescale for gas drag is
\begin{equation}
T_\mathrm{drag}=\frac{1}{e}\frac{2m_\mathrm{p}}{C_\mathrm{D}\pi R_\mathrm{p}^2\rho_\mathrm{gas}r\Omega_\mathrm{K}},
\label{eq:dampingtimescale}
\end{equation}
where $m_\mathrm{p}$ and $R_\mathrm{p}$ are mass and radius of the planetesimal, $\rho_\mathrm{gas}$ is the gas density, and $C_\mathrm{D}$ is the drag coefficient \citep{Adachi1976,Inaba2001}. The timescale on which viscous stirring of planetesimals by an embryo of mass $M_\mathrm{em}$ and surface density $\Sigma_\mathrm{em}$ excites eccentricities is given by
\begin{equation}
T_\mathrm{vis}=\frac{1}{40}\left(\frac{\Omega_\mathrm{K}^2 r^3}{G M_\mathrm{em}}\right)^2 \frac{M_\mathrm{em}e^4}{\Sigma_\mathrm{em}r^2\Omega_\mathrm{K}}
\label{eq:stirringtimescale}
\end{equation}
\citep{Ida1993}. When viscous stirring by the embryo and gas drag on the planetesimal are in equilibrium, the eccentricity can be calculated by setting the damping timescale equal to the stirring timescale,
\begin{equation}
e_\mathrm{eq}=1.7\frac{m_\mathrm{p}^{1/15}M_\mathrm{em}^{1/3}\rho_\bullet^{2/15}}{b^{1/5}C_\mathrm{D}^{1/5}\rho_\mathrm{gas}^{1/5}M_\star^{1/3}r^{1/5}}
\end{equation}
\citep{Thommes2003}. The drag coefficient can be assumed to be $2$, which is valid for planetesimal-sized bodies. The typical spacing of embryos $b$ is of the order of $10$ Hill radii \citep{Kokubo1998} and enters via the embryo surface density $\Sigma_\mathrm{em}{=}M_\mathrm{em}/(2\pi r b r_\mathrm{h})$. From Fig.~\ref{fig:initialsizes}, we can see that the mass of planetesimals and embryos scales with distance as approximately $m_\mathrm{p}{\propto}r^{3/2}$. Because of the distance dependency of $m_\mathrm{p}$ and $\rho_\mathrm{gas}$, $T_\mathrm{drag}{\propto}r^{47/14}$ (for our viscous $\alpha$-disc) increases strongly with distance, and the equilibrium eccentricity increases with distance approximately as $e_\mathrm{eq}{\propto}r^{61/70}$, which is the slope of $e$ with $r$ that we see in Fig.~\ref{fig:eccentricitymap} inside of ${\sim}0.8\,\mathrm{AU}$.
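The power-law indices in $e_\mathrm{eq}$ can be recovered by balancing the two timescales: with $T_\mathrm{drag}\propto e^{-1}$ and $T_\mathrm{vis}\propto e^{4}$, equating Eqs.~\ref{eq:dampingtimescale} and \ref{eq:stirringtimescale} gives
\begin{equation}
e_\mathrm{eq}^{5}\propto\frac{m_\mathrm{p}}{C_\mathrm{D}R_\mathrm{p}^{2}\rho_\mathrm{gas}}\,\frac{G^{2}M_\mathrm{em}\Sigma_\mathrm{em}}{\Omega_\mathrm{K}^{4}r^{5}}\propto\frac{m_\mathrm{p}^{1/3}\rho_\bullet^{2/3}}{C_\mathrm{D}\rho_\mathrm{gas}}\,\frac{M_\mathrm{em}^{5/3}}{M_\star^{5/3}\,b\,r},
\end{equation}
where the second step uses $R_\mathrm{p}\propto(m_\mathrm{p}/\rho_\bullet)^{1/3}$, $\Sigma_\mathrm{em}=M_\mathrm{em}/(2\pi r b r_\mathrm{h})$ with $r_\mathrm{h}\propto r(M_\mathrm{em}/M_\star)^{1/3}$, and $\Omega_\mathrm{K}^{2}=GM_\star/r^{3}$; taking the fifth root reproduces every exponent in the expression above.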
In the outer disc, damping by gas drag becomes inefficient because of the lower gas density and because of the large planetesimals. Therefore, viscous stirring by the embryo is the process that determines the planetesimal eccentricity. The viscous stirring of planetesimals by the embryo is expressed as
\begin{equation}
\frac{\mathrm{d}e^2}{\mathrm{d}t}=\frac{e^2}{T_\mathrm{vis}}
\label{eq:stirringrate}
\end{equation}
\citep{Ida1993}. Inserting Eq.~\ref{eq:stirringtimescale}, we can integrate Eq.~\ref{eq:stirringrate}, which gives
\begin{equation}
e(t)=e_0\left(1 + \frac{2\left(t-t_0\right)}{T_{{\mathrm{vis},0}}}\right)^{1/4},
\label{eq:ecctime}
\end{equation}
where $T_{\mathrm{vis},0}$ is the viscous stirring timescale for initial planetesimal eccentricity $e_0$. In Fig.~\ref{fig:timemap}, we can see that $e{\propto}t^{1/4}$ for large distances, that is outside of ${\sim}2\,\mathrm{AU}$. Evaluating the scaling with distance, we find that $e{\propto}r^{1/4}$, when taking the $r$ dependency of the embryo mass (initial mass because there is some growth up to ${\sim}20\,\mathrm{AU}$) and of the initial eccentricity (${\propto}r^{4/7}$ because $\eta{\propto}(c_\mathrm{s}/v_\mathrm{K})^2$) into account. However, Fig.~\ref{fig:eccentricitymap} shows a flatter scaling with $r$. The reason is that, even though the damping timescale is too long for equilibrium eccentricities to be reached and viscous stirring therefore governs the eccentricity evolution, gas drag still damps the eccentricities.
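The integration leading to Eq.~\ref{eq:ecctime} is brief: since $T_\mathrm{vis}\propto e^{4}$ (Eq.~\ref{eq:stirringtimescale}), one can write $T_\mathrm{vis}=T_{\mathrm{vis},0}\left(e/e_{0}\right)^{4}$, so that
\begin{equation}
\frac{\mathrm{d}e^{2}}{\mathrm{d}t}=\frac{e_{0}^{4}}{T_{\mathrm{vis},0}}\,\frac{1}{e^{2}}
\quad\Longrightarrow\quad
e^{4}=e_{0}^{4}\left(1+\frac{2\left(t-t_{0}\right)}{T_{\mathrm{vis},0}}\right),
\end{equation}
which is Eq.~\ref{eq:ecctime} after taking the fourth root.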
\subsubsection{Fragments}
The eccentricities of the fragments within ${\sim}1\,\mathrm{AU}$ are given by the equilibrium eccentricity. Because fragments are smaller in size, they are more strongly damped by the gas and hence acquire lower eccentricities. The ratio of fragment size to planetesimal size is ${\sim}10^{-2}$ close to the star, which translates into a ratio of the equilibrium eccentricities of ${\sim}0.4$ (since $e_\mathrm{eq}{\propto}m_\mathrm{p}^{1/15}$), as shown in Fig.~\ref{fig:eccentricitymap}. In the outer disc, eccentricities are excited by viscous stirring and damped by gas drag, where equilibrium values might be reached at late times.
\subsubsection{Embryo}
The eccentricity of the embryo is more complex as seen in Fig.~\ref{fig:eccentricitymap} because it is determined through the interplay of viscous stirring through the planetesimals, dynamical friction from planetesimals and fragments, and damping through gas drag. The mass growth further complicates the picture and simple scaling arguments as provided for planetesimals and fragments no longer suffice. However, qualitatively, the embryo keeps a low eccentricity (${\lesssim}10^{-3}$) throughout the simulation. Close to the star, the high gas and planetesimal surface density in combination with the fast growth circularises the orbit. In the outer disc, where no growth occurs, the embryo eccentricity remains close to the initial value, experiencing some damping through gas drag and dynamical friction.
\section{Discussion}
\label{sec:discussion}
\subsection{Varying the stellar mass}
The stellar mass affects the mass accretion rate $\dot{M}_\star$, the luminosity $L_\star$, as well as the density and temperature structure of the disc. Here, we investigate the growth of embryos around different stellar masses, ranging from $0.1\,M_\odot$ to $1\,M_\odot$. The mass accretion rates of low-mass stars are also lower. \citet{Manara2012} provide a fit for the mass accretion rate as a function of stellar age and mass. For our chosen initial time of $0.5\,\mathrm{Myr}$, we find $\dot{M}_\star{\propto}M_\star$. \citet{Hartmann2016} find that the mass accretion rate correlates with stellar mass as $\dot{M}_\star{\propto}M_\star^{2.1}$. The linear scaling of \citet{Manara2012}, with $\dot{M}_\star{\propto}M_\star$, results in smaller and more rapidly evolving discs than the quadratic scaling. We run simulations with both relations to scale the initial mass accretion rate of $10^{-7}\,M_\odot\,\mathrm{yr}^{-1}$ for $M_\star{=}1\,M_\odot$ to lower stellar masses. The luminosity scales with mass as $L_\star{\propto}M_\star^{1{-}2}$ for stellar ages ${\lesssim}10$ Myr \citep{Liu2020} and hence we set the slope of the $L_\star{-}M_\star$-relation to an intermediate value of $1.5$.
Figure~\ref{fig:finalsizes} shows the final embryo mass at $10\,\mathrm{Myr}$ as a function of distance for different stellar masses for $\dot{M}_\star{\propto}M_\star^{2.1}$. We find that the maximum distance out to which embryos accrete all available mass of the filament scales with stellar mass, ranging from ${\sim}0.3\,\mathrm{AU}$ for $M_\star{=}0.1\,M_\odot$ to ${\sim}1\,\mathrm{AU}$ for $M_\star{=}1\,M_\odot$. Farther out, accretion becomes less efficient and ceases for distances ${\gtrsim}10\,\mathrm{AU}$ for $M_\star{=}0.1\,M_\odot$ and ${\gtrsim}30\,\mathrm{AU}$ for $M_\star{=}1\,M_\odot$. In comparison to the quadratic scaling, the final embryo masses for $\dot{M}_\star{\propto}M_\star$ are shown in Fig.~\ref{fig:finalsizes_linear}. The general finding is the same as for Fig.~\ref{fig:finalsizes}; however, the embryos are more massive for all stellar masses ${<}1\,M_\odot$. The reason for this is that $\Sigma_\mathrm{gas}{\propto}\dot{M}_\star$. For the shallower scaling, the discs around the lower mass stars have higher surface densities and hence the masses of the filaments are higher, which consequently leads to higher masses of embryos and more mass available for the embryos to accrete.
\subsection{Varying the filament solid-to-gas ratio}
Figure~\ref{fig:finalsizes_Zfil} shows the outcome of our model for different values of the filament solid-to-gas ratio. The value of $Z_\mathrm{fil}$ sets how much pebble mass is available to be turned into planetesimals (together with the planetesimal formation efficiency, which we set to $100\,\%$ in our model). We vary $Z_\mathrm{fil}$ from the canonical value of $Z_\mathrm{fil}{=}0.01$ to $Z_\mathrm{fil}{=}0.5$, which means that the mass in planetesimals would be $50\,\%$ of the local gas mass at distance $r$. We find that varying the value of $Z_\mathrm{fil}$ does not change the general picture of efficient growth for distances ${\lesssim}1$ to $2\,\mathrm{AU}$. The final embryo masses vary according to the value of $Z_\mathrm{fil}$ simply because the filaments are more massive.
\subsection{Reducing the initial embryo mass}
In the fiducial run and variations thereof, we used the single most massive body from the streaming instability IMF as the embryo. In this case, the embryo is typically a factor $10^3$ more massive than the planetesimals (see Fig.~\ref{fig:initialsizes}). We run a model where we reduced the embryo mass by a factor of $10$ while keeping the total mass of the filament fixed. The mass ratio of embryo to planetesimals is hence ${\sim}10$ to $100$. Figure~\ref{fig:finalsizes_me01} shows that reducing the initial mass of the embryo does not change the final outcome for ${\lesssim}3\,\mathrm{AU}$, where the embryo accretes nearly all the available mass. Outside ${\sim}3\,\mathrm{AU}$, however, accretion efficiency decreases and for ${\gtrsim}20\,\mathrm{AU}$, the embryo does not grow significantly.
\subsection{Fragmentation, eccentricities, and diffusive widening}
Figure~\ref{fig:finalsizes_params} compares the final masses of embryos for models where we set the fragment eccentricity and inclination to zero, disabled fragmentation, disabled diffusive widening of the planetesimal and fragment rings, or extended the simulation from $10\,\mathrm{Myr}$ to $1\,\mathrm{Gyr}$.
Disabling fragmentation results in longer growth timescales. The final mass of the embryo after $10\,\mathrm{Myr}$, however, is not strongly affected. Inside of ${\sim}1\,\mathrm{AU}$, we find the same final mass while between ${\sim}1$ and ${\sim}5\,\mathrm{AU}$, the final mass is lower by less than a factor of $2$ at most. Outside ${\sim}5\,\mathrm{AU}$, we find the same final mass as in the fiducial case because fragmentation does not play a role.
The eccentricity of the fragments affects the growth behaviour more strongly. Fixing the fragments on orbits with zero eccentricity and inclination (that is, assuming that gas drag is very efficient) allows the embryo to accrete fragments at a constant rate in the low-velocity regime \citep{Inaba2001,Chambers2006a}, while in the fiducial case the accretion rate decreases as fragments are excited by viscous stirring through the embryo and the planetesimals. As a consequence, we find that the embryos are more massive than in the fiducial case out to distances of ${\sim}20\,\mathrm{AU}$. We also run a model where we set the eccentricity and inclination of the embryo to zero. In this case, we did not find any significant deviation from the fiducial run. We conclude that a circular and planar embryo orbit as used in other studies \citep[e.g.][]{Chambers2006a} is a valid approximation because the eccentricity of the embryo is ${\sim}10^{-3}{\ll}e_\mathrm{p},e_\mathrm{fr}$ because of dynamical friction and gas drag (Fig.~\ref{fig:eccentricitymap}).
Lastly, we look at the diffusive widening of the planetesimal and fragment rings. Disabling diffusion has a strong impact on embryo growth. Within $10\,\mathrm{Myr}$ and out to ${\sim}5\,\mathrm{AU}$, the embryo accretes all the filament mass, which allows growth up to ${\sim}0.2\,M_\oplus$. On timescales much longer than $10\,\mathrm{Myr}$, embryos would grow up to ${\sim}1\,M_\oplus$ out to ${\sim}20\,\mathrm{AU}$. This is seen in Fig.~\ref{fig:finalsizes_params}, where we show the final mass after $1\,\mathrm{Gyr}$ for the fiducial and the no-diffusion case for comparison. Without diffusion, embryos grow up to ${\sim}1\,M_\oplus$, whereas with diffusion the mass is at most ${\sim}0.1\,M_\oplus$ even after $1\,\mathrm{Gyr}$. The reason for the strongly enhanced growth without diffusive widening is that the surface densities of planetesimals and fragments then decrease only through accretion. However, the increase of eccentricities and inclinations through viscous stirring causes the bodies to occupy a larger volume, additionally reducing the surface density and hence the accretion rate of the embryo, which is proportional to the surface density of the accreted bodies.
\subsection{Implications for pebble accretion}
The growth of embryos by planetesimal accretion in the filaments formed through streaming instability is efficient only in the inner part of the protoplanetary disc. In the inner disc, the collision timescale is short enough and fragmentation of planetesimals is efficient enough for an embryo to accrete all the available material. At larger distances, and especially outside ${\sim}5$ to $10\,\mathrm{AU}$, planetesimal accretion is highly inefficient, even though the available material in the filament increases. The larger sizes of the planetesimals, the excitation of planetesimal eccentricities, and the lack of fragmentation prevent embryos from growing massive within the lifetime of the disc of $10\,\mathrm{Myr}$.
The inefficient growth by planetesimal accretion does not necessarily imply that planets cannot form at all. We did not consider pebble accretion in our model because we focused on the accretion of the filament material turned into planetesimals. However, embryos might still be able to reach masses for which pebble accretion becomes a highly efficient growth process. Pebble accretion becomes important when the friction time of the pebbles is shorter than the time in which they would pass by the embryo \citep{Ormel2017}. This condition leads to the onset mass for pebble accretion
\begin{align}
M_\mathrm{on}&=\frac{1}{4}\tau_\mathrm{s}\eta^3M_\star \nonumber \\
&\approx 4.871\times10^{-7}\,M_\oplus\,\left(\frac{\tau_\mathrm{s}}{0.01}\right)\left(\frac{M_\star}{M_\odot}\right)^{-17/7}\left(\frac{L_\star}{L_\odot}\right)^{6/7}\left(\frac{r}{\mathrm{AU}}\right)^{12/7}
\label{eq:onsetmass}
\end{align}
\citep{Visser2016,Ormel2017}. Above the so-called transition mass, pebble accretion becomes very efficient. The transition mass marks the change from drift-driven (Bondi) accretion to shear-driven (Hill) accretion of pebbles \citep{Lambrechts2012,Johansen2017,Ormel2017}. That means that in the latter case pebbles from the entire Hill sphere are accreted by the embryo. The transition mass can be found by equating the Bondi radius $r_\mathrm{B}{=}GM_\mathrm{em}/(\eta v_\mathrm{K})^2$ and the Hill radius and reads
\begin{align}
M_\mathrm{tr}&=\frac{1}{\sqrt{3}}\eta^3M_\star \nonumber \\
&\approx1.125{\times}10^{-4}\,M_\oplus\,\left(\frac{M_\star}{M_\odot}\right)^{-17/7}\left(\frac{L_\star}{L_\odot}\right)^{6/7}\left(\frac{r}{\mathrm{AU}}\right)^{12/7}
\label{eq:transitionmass}
\end{align}
\citep{Ormel2017}. Pebble accretion stops when the embryo reaches the pebble isolation mass. At this mass, the embryo carves a gap in the gas disc that creates a pressure bump outside its orbit which stops pebbles from drifting inward and being accreted by the embryo. The pebble isolation mass is
\begin{align}
M_\mathrm{iso}&=25\,M_\oplus \nonumber \\
&\times\left(\frac{H/r}{0.05}\right)^3\left(0.34\left(\frac{-3}{\log_{10}\alpha_\mathrm{t}}\right)^4+0.66\right)\left(1-\frac{\frac{\partial\ln P}{\partial\ln r}+2.5}{6}\right),
\label{eq:isolationmass}
\end{align}
which is derived from fits to hydrodynamic simulations of pebble accretion \citep{Bitsch2018}.
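For orientation, the three characteristic masses can be evaluated together. The sketch below codes the evaluated scalings of Eqs.~\ref{eq:onsetmass}--\ref{eq:isolationmass}; the default pressure gradient $\partial\ln P/\partial\ln r{=}-2.75$ is an assumed example value, not taken from the text:

```python
import numpy as np

def onset_mass(r_AU, tau_s=0.01, M_star=1.0, L_star=1.0):
    """Pebble-accretion onset mass in Earth masses (evaluated scaling)."""
    return (4.871e-7 * (tau_s / 0.01) * M_star ** (-17.0 / 7.0)
            * L_star ** (6.0 / 7.0) * r_AU ** (12.0 / 7.0))

def transition_mass(r_AU, M_star=1.0, L_star=1.0):
    """Bondi-to-Hill transition mass in Earth masses (evaluated scaling)."""
    return (1.125e-4 * M_star ** (-17.0 / 7.0)
            * L_star ** (6.0 / 7.0) * r_AU ** (12.0 / 7.0))

def isolation_mass(h, alpha_t=1e-4, dlnP_dlnr=-2.75):
    """Pebble isolation mass in Earth masses (Bitsch et al. 2018 fit);
    the default pressure gradient is an assumed example value."""
    return (25.0 * (h / 0.05) ** 3
            * (0.34 * (-3.0 / np.log10(alpha_t)) ** 4 + 0.66)
            * (1.0 - (dlnP_dlnr + 2.5) / 6.0))
```

Because the onset and transition masses share the same $r$, $M_\star$, and $L_\star$ scalings, their ratio is fixed at $M_\mathrm{tr}/M_\mathrm{on}=4/(\sqrt{3}\,\tau_\mathrm{s})\approx230$ for $\tau_\mathrm{s}{=}0.01$, independent of distance.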
We now compare the embryo masses to the characteristic masses for pebble accretion given above. Figure~\ref{fig:allfinal_10Myr} shows a map where we highlight the different regimes of pebble accretion. For an embryo to accrete pebbles efficiently, its mass needs to be above the transition mass. Below the transition mass and above the onset mass, embryos would still be able to accrete pebbles, but on the less efficient Bondi branch. We see from Fig.~\ref{fig:allfinal_10Myr} that the initial embryo mass is below the transition mass for all stellar masses, even though the difference is small for $M_\star{=}1\,M_\odot$. For $M_\star{=}0.1\,M_\odot$, the initial embryo mass is even below the onset mass for distances ${\gtrsim}2\,\mathrm{AU}$. For $M_\star{=}0.3\,M_\odot$, the initial embryo mass is below the onset mass outside of ${\sim}50\,\mathrm{AU}$. We also see from Fig.~\ref{fig:allfinal_10Myr} that the maximum mass an embryo can reach through planetesimal accretion in a filament (the filament mass $M_\mathrm{fil}$) is above the transition mass (except for $M_\star{=}0.1\,M_\odot$ outside ${\sim}20\,\mathrm{AU}$), but well below the pebble isolation mass, which means that there would be enough mass in planetesimals available for embryos to grow into the pebble-accreting regime. However, we find that embryos growing through accretion of planetesimals would reach the transition mass only out to a distance of ${\sim}20\,\mathrm{AU}$ for a solar-like central star within the lifetime of a protoplanetary disc. For stars of lower mass, this distance shifts significantly inwards, down to ${\lesssim}1\,\mathrm{AU}$ for $M_\star{=}0.1\,M_\odot$. Therefore, we conclude that planetesimal accretion might be a channel for forming the seeds for pebble accretion out to ${\sim}20\,\mathrm{AU}$. Farther out, where planetesimal accretion becomes negligible, pebble accretion, even though on the slow Bondi branch, would be the only growth channel.
Our result is in agreement with \citet{Liu2019}, who investigated the growth of planetesimals to planets at a single site, namely the water snowline at $2.7\,\mathrm{AU}$, through planetesimal and pebble accretion using $N$-body simulations. In their work as well, embryos grow to masses of $10^{-3}$ to $10^{-2}\,M_\oplus$ through planetesimal accretion, after which pebble accretion takes over. The final embryo masses in our model at $2.7\,\mathrm{AU}$ (Fig.~\ref{fig:massmap} and the bottom right panel of Fig.~\ref{fig:allfinal_10Myr}) are comparable.
\subsection{Limitations of the model}
Our model describes the growth of an embryo at a fixed location. We neglected the migration of the embryo, the planetesimals, and the fragments for several reasons:
\begin{enumerate*}[label=(\roman*)]
\item for the embryo masses considered here (${\lesssim}0.1\,M_\oplus$), the migration timescales are ${\sim}\mathrm{Myr}$ and longer \citep{Tanaka2002,Cresswell2008,Ida2020},
\item apart from having a more complicated model, radial drift of planetesimals and fragments would only reduce the accretion efficiency due to an additional drain of available material, and
\item gas drag induced radial drift peaks for metre-sized bodies but planetesimals, fragments, and embryos have sizes of kilometre to several hundreds of kilometres resulting in slow radial drift.
\end{enumerate*}
Therefore, having non-migrating bodies provides an upper limit on the final masses; any migration would result in bodies of lower mass.
In our study, we did not assume that streaming instability forms filaments at special locations in the disc, such as snowlines, where a pressure bump would naturally lead to an increased solid-to-gas ratio due to a pile-up of pebbles that would trigger the streaming instability \citep{Drazkowska2017,Schoonenberg2018}. Instead, we look at what would happen if filaments occur at any location \citep{Carrera2017,Lenz2019}. A consequence of this would be a significant reservoir of planetesimals that are not accreted. This reservoir can nevertheless interact with the planets that might form by pebble accretion, leading to scattering and to populating of the Kuiper belt, scattered disc, and Oort cloud, thus providing the bodies for comets and Kuiper belt objects \citep{Brasser2013}. The fact that outside of $10\,\mathrm{AU}$ neither significant growth nor fragmentation occurs might imply that the size distribution of the planetesimals also remains largely unchanged. The cold-classical Kuiper belt might be a remnant of this. In our model, the total mass of planetesimals in the whole disc is ${\sim}333\,M_\oplus$. The mass contained in the asteroid belt region between $2\,\mathrm{AU}$ and $4\,\mathrm{AU}$ is ${\sim}10\,M_\oplus$, and in the primordial disc region between $15\,\mathrm{AU}$ and $30\,\mathrm{AU}$ it is ${\sim}50\,M_\oplus$, both of which are orders of magnitude larger than the current mass in asteroids and in the Kuiper belt, estimated to be $4{\times}10^{-4}\,M_\oplus$ and ${\sim}10^{-2}\,M_\oplus$, respectively \citep{DeMeo2013,Fraser2014}. Therefore, efficient depletion becomes necessary, such as the giant planet instability in the Nice model, which could have been responsible for sculpting the outer Solar System and depleting the asteroid belt by scattering of planetesimals and ejection of planetesimals from the Solar System \citep{Gomes2005,Tsiganis2005,Morbidelli2010,Brasser2013}. 
On the other hand, we assumed that $100\,\%$ of the filament mass is converted to planetesimals, which gives an upper limit on the available mass. The planetesimal formation efficiencies in streaming instability simulations are not well constrained and can vary significantly, from ${\lesssim}10\,\%$ to as high as ${\sim}80\,\%$ \citep{Abod2019}. Converting fewer pebbles to planetesimals would reduce the available mass significantly and, additionally, lifting the assumption that filaments form throughout the disc would reduce the amount of planetesimals even further.
We furthermore neglect interactions between filaments, and the possibility that the large embryos in the outer disc, whose Hill radii might exceed the typical spacing of the filaments, could accrete from neighbouring filaments. However, the relevance of this is likely low: even though there is a huge planetesimal mass reservoir of several Earth masses, the embryos accrete almost no planetesimals.
\section{Conclusion}
\label{sec:conclusions}
In this paper, we modelled the planetesimal accretion phase that follows the birth of planetesimals. To this end, we developed a model for the growth of a large planetesimal (embryo) embedded in a population of smaller planetesimals of characteristic size. The model included the mass growth of the embryo, the fragmentation of planetesimals, and the velocity evolution of all involved bodies in a self-consistent fashion. We represented the planetesimal size distribution at birth with bodies of characteristic masses, the planetesimals and the embryo. Our growth model hence described oligarchic-like growth. Fragmentation assumed a representative fragment size. We found that embryos accrete the available material efficiently only in the inner disc, where a combination of high planetesimal surface density and fragmentation ensures short growth timescales for the embryo. On the other hand, we found little to no growth in the outer parts of the disc beyond ${\sim}5$ to $10\,\mathrm{AU}$ on a $10\,\mathrm{Myr}$ timescale. The embryos typically reached masses in the range ${\sim}10^{-3}$ to $10^{-1}\,M_\oplus$. When we compare the embryo masses to the transition mass for pebble accretion, we find that embryos would be able to grow into the pebble-accreting regime through planetesimal accretion out to ${\sim}20\,\mathrm{AU}$. Pebble accretion on the less efficient Bondi branch might help embryos to reach the transition mass also beyond ${\sim}20\,\mathrm{AU}$.
\begin{acknowledgements}
We thank the anonymous referee for a constructive feedback that contributed in improving the quality of our work. A.J. is supported by the Swedish Research Council (Project Grant 2018-04867), the Danish National Research Foundation (DNRF Chair grant DNRF159), and the Knut and
Alice Wallenberg Foundation (Wallenberg Academy Fellow Grant 2017.0287).
A.J. further thanks the European Research Council (ERC Consolidator Grant 724
687-PLANETESYS), the Göran Gustafsson Foundation for Research in Natural Sciences and Medicine, and the Wallenberg Foundation (Wallenberg Scholar
KAW 2019.0442) for research support.
\end{acknowledgements}
\bibliographystyle{bibtex/aa} %
\bibliography{../references/ref} %
\begin{appendix}
\end{appendix}
|
Title:
QCD in the cores of neutron stars |
Abstract: I discuss why state-of-the-art perturbative QCD calculations of the equation
of state at large chemical potential that are reliable at asymptotically high
densities constrain the same equation of state at neutron-star densities. I
describe how these theoretical calculations affect the EOS at lower density. I
argue that the ab-initio calculations in QCD offer significant information
about the equation of state of the neutron-star matter, which is complementary
to the current astrophysical observations.
| https://export.arxiv.org/pdf/2208.03086 |
\title{QCD in the cores of neutron stars%
\thanks{Presented at Quark Matter 2022}%
}
\author{Oleg Komoltsev,
\address{Faculty of Science and Technology, University of Stavanger, 4036 Stavanger, Norway}
}
\section{Introduction}
The equation of state (EOS) of dense matter at zero temperature is a necessary input for neutron-star (NS) physics. Theoretical calculations of the EOS can be done only in the two opposite (low- and high-density) limits. In the low-density limit the matter can be described within chiral effective field theory (CET) \cite{Tews:2012fj,Drischler:2017wtt}. Those calculations are reliable up to around the nuclear saturation density $n_s = 0.16/\textrm{fm}^3$. On the other hand, we can access the EOS using perturbative Quantum Chromodynamics (pQCD) at asymptotically high densities, above $\sim 40n_s$ \cite{Gorda:2018gpy,Gorda:2021znl}. Central densities of maximally massive neutron stars are around $4-8 n_s$, which is not reachable within CET or pQCD. Therefore, we have no tools to compute the EOS of the cores of NSs from first principles.
However, we can gain empirical access to the cores of NSs using recent astrophysical observations. The most important probes of NS physics are the discovery of massive NSs \cite{Demorest:2010bx, Antoniadis:2013pzd,Fonseca:2021wxt}, mass--radius measurements \cite{Miller:2021qha,Riley:2021pdl}, and gravitational-wave and multi-messenger astronomy \cite{TheLIGOScientific:2017qsa,GBM:2017lvd}. Utilizing all constraints coming from astrophysical observations as well as first-principle calculations dramatically narrows down the range of possible EOSs, which allows us to use the densest objects in the Universe to independently test various beyond-standard-model scenarios and/or general relativity.
The majority of EOS studies extrapolate the CET EOS up to NS densities of 5-10$n_s$ and condition it with the observational inputs. The results differ from those of works that include the high-density limit and interpolate across the intervening two orders of magnitude in density. The qualitative difference is a softening of the EOS around $\epsilon \sim$750 MeV/fm$^{3}$, which can be interpreted as quark-matter cores inside the most massive NSs \cite{Annala:2019puf}.
In this work I answer why and how the pQCD input offers significant information about the EOS at NS densities. I find that the pQCD input propagates non-trivial constraints all the way down to 2.2$n_s$ using solely thermodynamic stability, consistency, and causality \cite{Komoltsev:2021jzg}. In addition, the complementarity of the pQCD input to the astrophysical observations was studied in \cite{Gorda:2022jvk}. I show that pQCD is responsible for the softening of the EOS at NS densities. Therefore, it is essential to include the pQCD input in any inference study of the EOS.
\section{Setup}
All technical details as well as analytical formulas are presented in \cite{Komoltsev:2021jzg}. In this section I describe the conditions I use, in particular stability, consistency, and causality, and the resulting propagation of the pQCD input down to lower densities. Let us start with the baryon density $n$ as a function of the chemical potential $\mu$, as shown in fig.\ref{Fig:n_mu}. The goal is to find all possible lines that connect the endpoint of the CET results (dark blue line in the bottom left corner) with the first point of the pQCD calculations (purple line in the upper right corner) using three conditions.
The first condition is thermodynamic stability, which implies concavity of the grand canonical potential, $\partial^2_{\mu} \Omega(\mu) \leq 0$. At zero temperature $\Omega (\mu) = - p(\mu)$, which implies that the number density is a monotonically increasing function of the chemical potential, $\partial_{\mu} n(\mu) \geq 0$.
The second condition is causality -- the sound speed cannot exceed the speed of light $c^2_s \leq 1$. This provides constraints on the first derivative of the number density with respect to the chemical potential
\begin{equation}
c^{-2}_s = \frac{\mu}{n}\frac{\partial n}{\partial \mu} \leq 1.
\end{equation}
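This form follows from standard zero-temperature thermodynamics: the Gibbs--Duhem relation gives $\mathrm{d}p = n\,\mathrm{d}\mu$, while the Euler relation $\epsilon + p = \mu n$ gives $\mathrm{d}\epsilon = \mu\,\mathrm{d}n$, so that
\begin{equation}
c^2_s = \frac{\mathrm{d}p}{\mathrm{d}\epsilon} = \frac{n}{\mu}\frac{\partial \mu}{\partial n}.
\end{equation}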
For each point on the $\mu - n$ plane we can calculate the minimal slope allowed by causality, which is represented by the arrows in fig.\ref{Fig:n_mu}. This excludes the upper (lower) region of the plane, because any point in the area above (below) the orange line $c^2_s=1$ cannot be connected to pQCD (CET) in a causal way.
The third condition is thermodynamic consistency. In addition to $n$ and $\mu$, we need to match the pressure $p$ at the low- and high-density limits. The pressure is given by the integral of the number density
\begin{equation}
\int^{\mu_{\rm QCD}}_{\mu_{\rm CET}} n(\mu) d\mu = p_{\rm QCD} - p_{\rm CET} = \Delta p.
\end{equation}
This implies that the area under the curve for any EOS is fixed by our input parameters. For each arbitrary point $(\mu_0,n_0)$ we can construct the EOS that maximizes/minimizes the area under the curve, $\Delta p_{max/min}(\mu_0,n_0)$, shown as a green/blue dashed line in fig.\ref{Fig:n_mu}. If $\Delta p_{max}(\mu_0,n_0) < \Delta p$, then any EOS that goes through the point $(\mu_0, n_0)$ does not have enough area under the curve. This discards the region in the lower right corner of fig.\ref{Fig:n_mu}, below the red line labelled ``integral constraints''. If $\Delta p_{min}(\mu_0,n_0) > \Delta p$, then any EOS that goes through the point $(\mu_0, n_0)$ has too much area under the curve. This excludes the area in the upper left corner, above the red line. The integral constraints can be obtained without any assumption about the interpolation function, in a completely general and analytical way.
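As a concrete illustration (with schematic endpoint numbers, not the actual CET/pQCD values), the extremal areas $\Delta p_{max/min}(\mu_0,n_0)$ can be written in closed form by noting that $c^2_s=1$ corresponds to $n\propto\mu$ and that vertical jumps in $n$ (first-order transitions) are causal:

```python
def dp_extrema(mu0, n0, mu_l, n_l, mu_h, n_h):
    """Extremal values of the integral of n dmu over causal, monotonic
    n(mu) passing through (mu0, n0).

    Causality (c_s^2 <= 1) means dn/dmu >= n/mu, so the flattest allowed
    segment is the straight line n proportional to mu; vertical jumps
    in n are allowed.
    """
    # maximal area: approach (mu0, n0) along its c_s^2 = 1 line, then
    # jump up to the c_s^2 = 1 line that ends at (mu_h, n_h)
    dp_max = 0.5 * (n0 / mu0) * (mu0**2 - mu_l**2) \
        + 0.5 * (n_h / mu_h) * (mu_h**2 - mu0**2)
    # minimal area: follow the c_s^2 = 1 line from (mu_l, n_l), jump up
    # to n0 at mu0, follow that line, and jump again at mu_h
    dp_min = 0.5 * (n_l / mu_l) * (mu0**2 - mu_l**2) \
        + 0.5 * (n0 / mu0) * (mu_h**2 - mu0**2)
    return dp_min, dp_max

# schematic endpoints (mu in GeV, n in fm^-3) and pressure difference
mu_l, n_l = 0.97, 0.16      # CET-like endpoint
mu_h, n_h = 2.60, 6.50      # pQCD-like endpoint
dp = 2.0                    # assumed p_QCD - p_CET

lo, hi = dp_extrema(1.5, 1.0, mu_l, n_l, mu_h, n_h)
allowed = lo <= dp <= hi    # does (mu0, n0) survive the integral constraints?
```

A trial point is excluded exactly when $\Delta p$ falls outside $[\Delta p_{min}, \Delta p_{max}]$, reproducing the red integral-constraint boundaries without any interpolation ansatz.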
We can map the allowed region from the $\mu-n$ to the $\epsilon-p$ plane. The result of such a mapping is shown in fig.\ref{Fig:e_p}. The green envelope corresponds to the white area in fig.\ref{Fig:n_mu}, restricted by the causality and integral constraints. The shapes of the allowed region with and without the pQCD input are shown for fixed number densities $n$ = 2, 3, 5, and 10$n_s$. This explicitly shows how the pQCD input propagates information down to lower densities, starting from 2.2$n_s$. Strikingly, at 5$n_s$ it excludes 75\% of the otherwise allowed area.
Using the new constraints, we can check the consistency of publicly available EOSs. Results for all zero-temperature EOSs in $\beta$-equilibrium from the public CompOSE database \cite{Typel:2013rza, Oertel:2016bki} are shown in fig.\ref{fig:1b}. Almost all of the EOSs become inconsistent with the pQCD input at some density within the provided range.
\section{Bayesian inference of EOS}
With the construction described above we can propagate information from ab-initio QCD calculations down to NS densities, where we already have constraints from astrophysical observations. To understand whether the new constraints from pQCD go beyond the constraints coming from the NS measurements, we construct a Bayesian-inference framework. This was done in \cite{Gorda:2021znl}, where we generated a large ensemble of EOSs using Gaussian-process regression. We anchor the ensemble to the CET calculations and extrapolate it up to 10$n_s$, where we impose the pQCD input as the blue shape from fig.\ref{Fig:e_p}. We condition the ensemble sequentially with the astrophysical observations. With this setup we can switch the pQCD input on and off in order to study its effect on the posterior after imposing the astrophysical observations.
The results are presented in fig.\ref{Fig:gp}. The reduction of the pressure (green arrow in the right plot), which is caused by the QCD input, happens before the density reaches its maximal central value. In other words, the QCD input predicts a softening of the EOS that happens inside the most massive neutron stars.
\section{Conclusion}
In this work, I show how QCD calculations at asymptotically high densities propagate information down to lower densities using solely thermodynamic consistency, stability, and causality. This information offers significant constraints on the EOS at NS densities, complementary to the current astrophysical observations. In addition, I show that the QCD input predicts a softening of the EOS in the most massive NSs. An easy-to-use python script to check the consistency of an EOS with the pQCD input is available on \href{https://github.com/OKomoltsev/QCD-likelihood-function}{Github} \cite{OlegGithub}.
In order to achieve an accurate determination of the EOS, it is crucial to utilize all available controlled measurements and theoretical calculations. This strategy either helps us to understand the matter of the densest objects in the Universe or reveals a discrepancy between different inputs, which would allow us to use NSs as a tool for fundamental discoveries.
\bibliographystyle{IEEEtran}
\bibliography{main.bib}
|
Title:
Time-resolved polarizations of gamma-ray burst prompt emission with observed energy spectra |
Abstract: Time-resolved polarizations carry more physical information about the source
of gamma-ray bursts (GRBs) than the time-integrated ones. Therefore, they give
more strict constrains on the models of GRB prompt phase. Both time-resolved
and time-integrated polarizations are considered here. The model we use is the
synchrotron emission in a large-scale ordered aligned magnetic field.
Time-resolved polarizations of GRB prompt phase are derived with the
corresponding time-resolved energy spectra. We found the time-integrated PDs
calculated with two methods are similar. So it is convenient to estimate the
time-integrated PD by the time-integrated energy spectrum. Most of the
time-resolved PDs calculated in this paper will increase with time. The trend
could match the observed time-resolved PD curve of GRB 170114A, but contrary to
the predictions of a decaying PD of both the magnetized internal shock and
magnetic reconnection models. PAs calculated in this paper, in general, are
roughly constant with time. The predicted PAs here cannot match the
violent PA changes observed in GRB 100826A and GRB 170114A. Therefore, more
accurate time-resolved polarization observations are needed to test models and
to diagnose the true physical process of GRB prompt phase.
| https://export.arxiv.org/pdf/2208.04681 |
\title{Time-resolved polarizations of gamma-ray burst prompt emission with observed energy spectra}
\author{Rui-Rui Wu$^{1}$, Qing-Wen Tang$^{2}$, and Mi-Xiang Lan$^{1}$}
\affil{$^{1}$Center for Theoretical Physics and College of Physics, Jilin University, Changchun, 130012, China; [email protected] \\
$^{2}$Department of Physics, School of Physics and Materials Science, Nanchang University, Nanchang 330031, China \\}
\keywords{Gamma-ray bursts (629); magnetic fields (994);}
\section{Introduction}
The origins of the cosmological gamma-ray bursts (GRBs) remain mysterious. Although the light curves show rich diversity, both the time-integrated and time-resolved energy spectra of these violent explosions can typically be described by an empirical Band function \citep{Band1993}. The Band function is a smoothly connected broken power law, with low- and high-energy spectral indices $\alpha$ and $\beta$ linked at the peak energy $E_p$. Polarization is determined by the asymmetry of the system. In GRBs, such asymmetry can originate from the magnetic field \citep{Sari1999, GK2003, Toma2009, Lan2019}, the jet structure \citep{Rossi2004, Wu2005, Lan2018}, and the observational geometry \citep{Waxman2003}. \cite{Toma2009} considered the time-integrated polarizations of the GRB prompt phase with a large-scale toroidal magnetic field. They used the energy spectra, i.e., the Band function, to construct the time-integrated Stokes parameters.
Time-integrated polarization observations of the GRB prompt phase show rich diversity. The observed polarization degrees (PDs) are around $10\%$ for POLAR's detections \citep{Zhang2019, Kole2020}, while they are concentrated above $50\%$ for AstroSat's measurements \citep{Chattopadhyay2019, Chand2019, Rupta2022}. \cite{GL2022} interpreted the time-integrated observational data of the GRB prompt phase using the observed time-integrated energy spectra. They found that the polarizations of synchrotron emission in a large-scale ordered magnetic field could match most of the data from the Gamma-ray Burst Polarimeter (GAP) and POLAR, while the observed data of AstroSat were obviously higher than the theoretically predicted values of the same model.
Time-integrated polarizations, compared with the time-resolved ones, erase much of the evolution information about the source. With time-resolved data, for example, one could diagnose why the observed time-integrated PD values are lower than the theoretically predicted ones \citep{GL2022}. There are only a few time-resolved polarization observations of the GRB prompt phase so far \citep{Yonetoku2011, Zhang2019, Burgess2019}. An abrupt ${\sim}90^{\circ}$ polarization angle (PA) change can happen either between two pulses, as in GRB 100826A \citep{Yonetoku2011}, or between two time bins within one pulse, as in GRB 170114A \citep{Zhang2019}. \cite{Burgess2019} reanalyzed the data of GRB 170114A and divided the prompt phase into 9 time bins. They found that, even with large errors, the PD of the burst increases and the PA rotates with observational time.
Of the three popular models of the GRB prompt phase, the emission mechanism of both the internal shock and the magnetic reconnection models is synchrotron emission. The internal shock model involves weakly magnetized or unmagnetized shells with different velocities; these shells collide with each other, leading to the formation of internal shocks \citep{PX1994,RM1994,Fan2004}. If the colliding shells are highly magnetized, the collisions will result in avalanches of magnetic reconnection processes \citep{Zhang2011}. Time-resolved polarization predictions of both the magnetized internal shock and the magnetic reconnection models were considered recently \citep{Lan2021,Lan2020}, and a PD decaying with time is predicted for both models.
In this paper, time-resolved polarizations of twenty GRBs simultaneously detected by $Fermi$ and polarization detectors (i.e., GAP, POLAR, and AstroSat) are calculated based on our time-resolved spectral parameters. We describe the model and our numerical results in Section 2. GRB 170114A, with time-resolved polarization observations, is analyzed in detail in Section 3. The time-integrated PDs of these twenty GRBs are calculated with two methods in Section 4. Our conclusions and discussion are presented in Section 5.
\section{The Model and The Numerical Results}
Because the typical time-resolved energy spectra of the GRB prompt phase, like the time-integrated ones, can also be described by the Band function \citep{Band1993}, we can construct time-resolved Stokes parameters with such Band spectra. Here, as in \cite{GL2022}, we set the Lorentz factor and the jet half-opening angle of all the bursts to $\gamma=100$ and $\theta_j=0.1$ rad. The redshifts of the bursts are assumed to be 1 if there are no redshift reports (see a collection in \cite{GL2022} and references therein). The magnetic field, in which the accelerated electrons radiate the synchrotron emission, is assumed to be large-scale aligned, and its direction is set to $\delta=\pi/6$.
Of the thirty GRBs analyzed in \cite{GL2022}, twenty were simultaneously detected by $Fermi$. We analyze the public $Fermi$ data to obtain the parameters of the time-resolved energy spectra of these twenty GRBs. Our data processing is as follows. The rmfit software package, version 432, is employed to perform the spectral analysis. Observational data sets are downloaded from the official website of the $Fermi$ Gamma-ray Burst Monitor (GBM)~\footnote{https://heasarc.gsfc.nasa.gov/W3Browse/fermi/fermigbrst.html}. We select the two NaI detectors (8 keV to 1000 keV) and the one BGO detector (200 keV to 40 MeV) closest to the GRB position. The time-tagged events (TTE) in these three detectors, with 2 $\mu$s precision, are used in the data reduction. We select one interval before and one after the burst as the background intervals, then employ a first-order polynomial model to fit the background. We then perform the time-averaged and time-resolved spectral fitting using chi-square statistics. For the time-resolved spectra, the GBM $T_{90}$ interval in the 50-300 keV energy band is selected, as presented in the latest $Fermi$ GBM catalog~\citep{vonKienlin2020}. For the time-resolved spectral fitting, we derive the time bins by requiring a signal-to-noise ratio between 12 and 100 for each GRB. A typical Band function is employed for all spectra~\citep{Band1993}. All spectral parameters and the photon flux of each spectrum are calculated in the energy band between 8 keV and 40 MeV.
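The signal-to-noise binning step can be sketched in a few lines. The function below is an illustrative version (not the rmfit implementation): it groups native light-curve bins until each group reaches a target significance, with the significance estimated simply as excess counts over the square root of the accumulated background.

```python
import math

def snr_time_bins(counts, bkg_rate, dt, snr_min=12.0):
    """Group light-curve bins until each group reaches snr_min.

    Illustrative signal-to-noise binning: `counts` is a list of total
    counts per native time bin of width `dt`, `bkg_rate` an assumed
    constant background rate. SNR here is excess / sqrt(background).
    """
    bins, start, excess, bkg = [], 0, 0.0, 0.0
    for i, c in enumerate(counts):
        bkg += bkg_rate * dt
        excess += c - bkg_rate * dt
        if excess > 0 and excess / math.sqrt(bkg) >= snr_min:
            bins.append((start * dt, (i + 1) * dt))   # close this SNR bin
            start, excess, bkg = i + 1, 0.0, 0.0
    return bins
```

Bright intervals thus yield many short bins, while faint tails never close a bin, which is the behaviour enforced by the 12-100 SNR window above.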
The time-resolved Stokes parameters are then calculated with these time-resolved spectral parameters. The integration energy ranges of the Stokes parameters for GRBs with polarization detected by GAP, POLAR, and AstroSat are $50-300$, $50-500$, and $10-100$ keV, respectively. With these time-resolved, energy-integrated Stokes parameters, we get the final time-resolved PDs and PAs for each GRB. We have selected five of the total twenty GRBs as representatives to analyze their polarization evolution in detail. The results for the other fifteen GRBs are presented in Appendix A.
For the polarization model used here \citep{Toma2009, GL2022}, the final energy-integrated polarizations are determined by the spectral parameters ($E_p$, $\alpha$, and $\beta$) and the observational angle ($\theta_V$). The spectral indices ($\tilde{\alpha}$) are positively correlated with the local PD $\pi_0$ \footnote{$\pi_0=(\tilde{\alpha}+1)/(\tilde{\alpha}+5/3)$, with $\tilde{\alpha}=\alpha$ for the low-energy photons below $E_p$ and $\tilde{\alpha}=\beta$ for the high-energy photons above $E_p$.}, and hence positively correlated with the final energy-integrated PD. The spectral index of the low-energy photons ($\alpha$) is usually smaller than that of the high-energy photons ($\beta$), so the local PD $\pi_0$ is smaller for low-energy photons than for high-energy photons. The contribution from the low-energy photons, with their lower local PD $\pi_0$, is higher for a spectrum with a larger $E_p$. Therefore, a larger $E_p$ leads to a lower energy-integrated PD: PD and $E_p$ are negatively correlated.
Because the PD is positively correlated with the spectral indices and negatively correlated with the peak energy $E_p$, in our calculation, as in \cite{GL2022}, we use the upper limits of the spectral indices and the lower limit of $E_p$ to derive the upper limits of the PD and PA. The lower limits of the spectral indices and the upper limit of $E_p$ are used to calculate the lower limits of the PD and PA, and the typical values of the spectral indices and $E_p$ are used to derive the typical values of the PD and PA.
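The monotonic dependence of the local PD on the spectral index is easy to verify numerically; the sketch below evaluates $\pi_0$ from the footnote above for illustrative index values (not fitted Band parameters).

```python
def local_pd(index):
    """Local synchrotron PD, pi_0 = (a + 1) / (a + 5/3), for index a.

    pi_0 crosses zero at index = -1, which is why time bins whose
    low-energy index is below -1 are excluded later in the text.
    """
    return (index + 1.0) / (index + 5.0 / 3.0)

# illustrative indices only: a smaller low-energy index and a larger
# high-energy index, following the alpha < beta ordering discussed above
alpha_tilde, beta_tilde = -0.2, 1.2
pi_low = local_pd(alpha_tilde)    # local PD below E_p (smaller)
pi_high = local_pd(beta_tilde)    # local PD above E_p (larger)
```

A larger $E_p$ shifts more of the detector band below $E_p$, weighting the smaller `pi_low` more heavily, which is the negative PD-$E_p$ correlation used throughout.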
The $E_p$ evolution pattern of GRB 100826A is the intensity-tracking mode. In Fig. 1, the PDs increase with observational time $t$ for both on-axis and off-axis observations. The PD shows a negative correlation with the peak energy $E_p$: when $E_p$ increases, the contribution from the low-energy part with its lower local PD also increases, and the energy-integrated PD decreases. The PAs stay roughly constant with time for various observational angles, so the model cannot produce the violent $90^\circ$ PA change between the two pulses (0-50 and 50-100 s) observed in GRB 100826A \citep{Yonetoku2011}.
The $E_p$ evolution pattern of GRB 160802A is the hard-to-soft mode, while its low- and high-energy spectral indices are positively correlated, as shown in Fig. 2. The general trends of the PD curves are determined by $E_p$ (i.e., the PD increases as $E_p$ decays). There are small spikes in the PD curves at the early stage, which follow the evolution of the spectral indices. If the spectral index $\tilde{\alpha}$ increases, the local PD $\pi_0$ will increase, and then the energy-integrated PD will also increase, and vice versa. The polarization of GRB 160802A was detected by AstroSat with a detection energy range of 10-100 keV. At the early stage, the time-resolved $E_p$ values are all larger than the upper limit of the energy range of the polarization detector, so the energy-integrated Stokes parameters detected by AstroSat come entirely from the low-energy photons below $E_p$. Therefore, the PDs are positively correlated with the low-energy spectral index $\alpha$ at the early stage.
Fig. 3 shows the polarization evolution of GRB 170114A, with a hard-to-soft mode of $E_p$. At the early stage, the PDs are clearly positively correlated with the spectral indices for $\theta_V=0$ and $\theta_V=\theta_j+1/\gamma$; the evolution of the PD is not obvious for $\theta_V=\theta_j+2/\gamma$. At the late stage, $E_p$ decreases and the spectral indices increase, resulting in increasing PDs for all three observational angles. The PAs of the burst stay roughly constant for various observational angles. Therefore, the polarization model used here cannot predict the observed PA evolution of the burst, but can reproduce the observed increasing PD curve \citep{Burgess2019}. We analyze the polarization evolution of this burst in detail in Section 3.
$\alpha$, $\beta$, and $E_p$ in GRB 160325A show co-evolution, as presented in Fig. 4. In the early stage, the evolution of the PDs tracks the trends of the spectral indices and $E_p$ for all calculated observational angles. Because the changes of $E_p$ are small at the beginning, their influence on the PD values is tiny; the evolution trends of the PD curves are mainly determined by the spectral indices. The polarization properties of the last point cannot be obtained because the upper limit of the low-energy spectral index of this point is still smaller than $-1$ \footnote{If the spectral index $\tilde{\alpha}$ is smaller than $-1$, the local PD $\pi_0$ will be smaller than 0.}.
Fig. 5 shows the polarization evolution of GRB 161218B. In general, the evolution of the PDs shows a negative correlation with the peak energy $E_p$. The variation range of $E_p$ is relatively large compared with that of GRB 160325A shown in Fig. 4, and the evolution of the PDs is mainly determined by $E_p$. At the beginning, $E_p$ is about 350 keV, close to the 500 keV upper limit of POLAR's energy range, and changes slowly. The main contribution to the Stokes parameters then comes from the low-energy photons, so the polarization properties are mainly determined by the low-energy spectral index $\alpha$. Therefore, the PD curves are positively correlated with the low-energy spectral index at the early stage.
\section{Interpreting the time-resolved polarization data of GRB 170114A}
The polarization data of this burst are taken from \cite{Burgess2019}. Two sets of time-resolved spectra are considered here: one is obtained by analyzing the $Fermi$ data, the other is taken from \cite{Burgess2019}. We use both sets of spectra to calculate the time-resolved polarizations of the burst. Our results are shown in Fig. 6. The observational angles of the two fits are $\theta_V=\theta_j+1/\gamma=0.11$ rad. The values and trends of the two fits are similar, and both fit the observed PD data equally well.
Because the polarization direction remains unchanged under a shift of $n\pi$ in PA ($n$ is an integer), we set the observed PAs in the range $[-\pi/2, \pi/2]$ by adding or subtracting $n\pi$ from the values given in \cite{Burgess2019}. A roughly $-\pi/2$ PA change happens between the second and third points, and a roughly $\pi/2$ PA change happens between the fifth and sixth points. The model used in this paper only predicts a roughly constant PA and cannot reproduce such violent PA variations.
The minimum time-resolved PD of our best fit is above $20\%$, larger than the observed time-integrated PD of $4\%$ \citep{Zhang2019} or $10\%$ \citep{Kole2020}. Since the predicted PA of the burst is roughly constant, the predicted time-integrated PD will also be larger than $20\%$, which is inconsistent with the observations. For this burst, the low observed time-integrated PD, compared with the high time-resolved PDs, may be mainly due to the two abrupt $90^\circ$ PA changes. Therefore, a more detailed model is needed to reproduce the observed PA evolution of GRB 170114A.
\section{Time-integrated PDs of twenty bursts}
Since most of the observed polarization properties of GRB prompt emission are time-integrated values, we also derive time-integrated polarizations from the time-resolved ones. Here, we sum the time-resolved Stokes parameters to get the time-integrated Stokes parameters. The time-integrated polarization properties can then be obtained for the twenty bursts with both time-resolved spectra (from the $Fermi$ data) and polarization observations. We compare these results with both the time-integrated polarizations obtained from the time-integrated spectral parameters and the observed values.
The time-integrated spectra used here are obtained by analyzing the public $Fermi$ data and are slightly different from those used in \cite{GL2022}. Time-integrated polarizations of the twenty GRBs are recalculated with both the time-resolved and the time-integrated spectra reanalyzed in this paper. The results are shown separately in Figs. 7-9 for GRBs with polarizations detected by GAP, POLAR, and AstroSat, respectively. For each burst, the time-integrated PDs calculated with the two methods mentioned above are similar. Therefore, the time-integrated PD of a single burst can be estimated from its time-integrated energy spectrum.
Of the three GRBs detected by GAP \citep{Yonetoku2011,Yonetoku2012}, two have observed PDs larger than the theoretical ones. Because the energy spectra used here and in \cite{GL2022} are slightly different, the predicted PD of GRB 110301A is slightly smaller than the observed value in this paper, whereas it matches the observed value in \cite{GL2022}. Of the nine GRBs detected by AstroSat, setting aside the three with upper-limit observations, the calculated PDs of the remaining six are all smaller than the observed values, in accordance with the results of \cite{GL2022}.
For POLAR's detections, three of the eight bursts considered here (GRB 161218B, GRB 170114A, and GRB 170207A) have calculated PDs larger than the observed values. In \cite{GL2022}, the four POLAR bursts with predicted PDs larger than the observed values are GRB 170101A, GRB 170127C, GRB 170114A, and GRB 170207A. Thus two bursts (GRB 170114A and GRB 170207A) have predicted PDs larger than the observed values in both calculations, while for GRB 161218B, GRB 170101A, and GRB 170127C, calculations with different spectral parameters lead to different results that nonetheless remain consistent with the predictions of the synchrotron-emission model. The spectral parameters used in the polarization calculations therefore affect the final results, and accurate measurements of the spectral parameters are very important for polarization calculations.
\section{Conclusions and Discussion}
In this paper, we mainly discuss the time-resolved polarizations of GRB prompt phases with the corresponding time-resolved energy spectra. The time-integrated polarizations are then derived from these time-resolved ones and compared with the polarizations calculated from the time-integrated energy spectra. The main advantage of the model used in this paper \citep{Toma2009, GL2022} is that it treats the evolution of the spectral parameters exactly, while its main drawback is that the equal arrival time surface (EATS) effect is not included, so the evolution of the physical quantities with radius from the central engine cannot be considered.
Time-integrated PDs are similar for the two calculation methods used in this paper, so it is convenient to estimate the time-integrated PDs from the time-integrated energy spectra. The spectral parameters are essential for the calculated PDs. The time-integrated spectral parameters used here are different from those we used in \cite{GL2022}. For example, the predicted PD of GRB 161218B, detected by POLAR, is larger than the observed value here, while it matches the observed one in \cite{GL2022}. Therefore, accurate observations of the energy spectra are very important. The PDs predicted here are upper limits because a large-scale ordered aligned magnetic field is assumed in the emitting region. If the observed PDs were larger than the predicted values, the polarization model would be challenged. The conclusions for the time-integrated polarization here are the same as in \cite{GL2022}: the polarization observations of AstroSat challenge the synchrotron-emission model, while POLAR's detections are consistent with the synchrotron-emission model in a large-scale ordered magnetic field.
If the source is axisymmetric, the Stokes parameter U will be zero, and the PA of such a system can only remain constant or change abruptly by $90^\circ$. For example, a conical jet with a toroidal magnetic field is such a system. To obtain a gradual evolution of the PA, the axial symmetry must be broken. A large-scale ordered aligned magnetic field can provide such asymmetry. However, although the magnetic field is assumed to be large-scale aligned, the polarization model here can only predict roughly constant PAs and cannot reproduce a large-amplitude evolving PA as observed in GRB 100826A \citep{Yonetoku2011} and GRB 170114A \citep{Burgess2019}. More detailed models that follow the evolution of the emitting source are needed.
The calculated PDs are positively correlated with the spectral indices and negatively correlated with the peak energy $E_p$. Most of the PD curves calculated in this paper increase with time. This trend matches that of the only burst with a time-resolved polarization observation, GRB 170114A \citep{Burgess2019}. An increasing PD with time is contrary to the predictions of the magnetized internal shock \citep{Lan2021} and magnetic reconnection \citep{Lan2020} models. The main reason for the difference is whether or not the EATS effect is considered. In both the magnetized internal shock and the magnetic reconnection models, the EATS effect is included, and the decaying PD with time is mainly due to the decrease of the $\tilde{f}$ parameter \footnote{$\tilde{f}$ is defined to be the flux ratio between the contributions within and outside the local $1/\gamma$ cone.} \citep{Lan2020}. In contrast, in the model used here, which does not include the EATS effect, an increasing PD with time is mainly caused by the decreasing $E_p$. It is remarkable that the PD observations of GRB 170114A can be roughly matched by the model prediction in this paper. Therefore, to test models and to determine the physical processes in the emitting sources, more and more accurate time-resolved polarization observations are needed.
\acknowledgments
This paper is dedicated to the 70th anniversary of the physics of Jilin University.
This work is supported by the National Natural Science Foundation of China (grant Nos. 11903014, 11903017, 12065017).
\bibliography{ms_arxiv}
\appendix
\section{Other 15 GRBs with time-resolved spectra and polarization Observations}
For the third point of GRB 160623A, the typical value and lower limit of the low-energy spectral index are both smaller than $-1$, leading to a negative local PD $\pi_0$. In addition, the lower limit of $E_p$ is smaller than 0, which is unphysical, so we cannot predict the polarization properties of this point. For the second and fourth points of GRB 170127C, the upper limits of the low-energy spectral index $\alpha$ are both smaller than $-1$; therefore, the polarizations of these two points cannot be derived.
|
Title:
Thermal Testing for Cryogenic CMB Instrument Optical Design |
Abstract: Observations of the Cosmic Microwave Background rely on cryogenic
instrumentation with cold detectors, readout, and optics providing the low
noise performance and instrumental stability required to make more sensitive
measurements. It is therefore critical to optimize all aspects of the cryogenic
design to achieve the necessary performance, with low temperature components
and acceptable system cooling requirements. In particular, we will focus on our
use of thermal filters and cold optics, which reduce the thermal load passed
along to the cryogenic stages. To test their performance, we have made a series
of in situ measurements while integrating the third receiver for the BICEP
Array telescope. In addition to characterizing the behavior of this receiver,
these measurements continue to refine the models that are being used to inform
design choices being made for future instruments.
| https://export.arxiv.org/pdf/2208.02755 |
\keywords{Cosmic Microwave Background, Polarization, Instrumentation, Cryogenics, Thermal Testing, BICEP Array}
\section{Background}
\label{sec:intro}
Measurements of the Cosmic Microwave Background (CMB) play a critical role in our study of the early universe. Models of cosmic inflation predict that primordial gravitational waves will have imprinted a faint signal in the B-mode spectrum of the CMB. To measure this polarization signal, we require instrumentation that can provide high throughput (through large optical field of view and detector count) as well as detectors with high sensitivity to the CMB signal. By using cryogenic instrumentation, we are able to achieve low noise performance and stability of the instrument response. In particular, the cryogenic optics design provides the dual benefit of reducing the thermal loading, allowing the colder components of the instrument to be more effectively cooled, and of reducing the portion of that loading that is in the detector band, which increases the detector sensitivity to the CMB signal.
In order to achieve these sensitive measurements, the BICEP Array (BA) telescope consists of four receivers that will operate at the South Pole, each with its own cryogenic system \cite{Crumrine_2018}. The first of these receivers has been deployed, demonstrating the necessary cryogenic performance for science operations \cite{Moncelsi_2020}. The cryostat design (Fig. \ref{fig:design}) consists of concentric cylinders at nominal temperatures of 50K and 4K, inside of a vacuum jacket, cooled by a Cryomech PT415 pulse tube. A He-3 sorption fridge supported from the 4K stage provides the cooling power for stages at 2K, 350 mK, and 250 mK, with the instrument's superconducting detectors on the coldest stage. Conductive loads between each of these stages are minimized through the use of low thermal conductivity materials in the support structure (carbon fiber, G-10, and thin Titanium) as well as in the readout wiring (manganin and NbTi), while the quantity of readout wiring is reduced through the use of multiplexed readout \cite{Crumrine_2018}. Convective loads are minimized through vacuum inside the cryostat, and low radiative loads between the walls of the thermal stages are achieved through multilayer insulation \cite{Crumrine_thesis}.
Our design calculations indicate that the largest component of the thermal loading in the cryostat arises from thermal radiation that passes through the optics \cite{Crumrine_2018}. The filtering materials chosen in this design serve as low pass filters, so the higher frequency infrared radiation is absorbed while the lower frequency CMB signal is allowed to pass through. Each optical component will then radiate its own blackbody spectrum, with a lower total load commensurate with its lower source temperature. The BA optics design (Fig. \ref{fig:design}) starts with an HDPE window on the vacuum jacket, followed by a stack of 12 $1/8$ inch thick Zotefoams Plastazote HD-30 filters. Due to the low thermal conductivity of the foam, the heat flow into the individual foam layers is dominated by radiation from the adjacent layers. As the stack comes into radiative equilibrium, the temperature decreases through the stack so that the net radiation passed on to the 50K stage is reduced. An alumina filter provides another stage of thermal filtering on the 50K stage, above the HDPE lenses on the 4K stage. Also on the 4K stage (either between or below the lenses, depending on the particular receiver's optical design), a nylon filter provides another stage of thermal filtering before reaching the final filtering on the subKelvin stages. To fully understand the effectiveness of this filtering scheme, we have developed a thermal model for the optics, which has been refined and tested through direct measurements in BICEP Array receivers.
\section{Thermal Modeling}
\label{sec:model}
To model the thermal load through the optics, we consider the radiative balance equation for each element (depicted in Fig. \ref{fig:balance}):
\begin{equation}
P_{\text{trans}, i-1} + P_{\text{rad}, i-1} + P_{\text{rad}, i+1} - P_{\text{trans}, i} = P_{\text{cond}, i} + 2 \times P_{\text{rad}, i}
\end{equation}
$P_{\text{trans}, i}$ indicates the power transmitted through the $i$th element of the optics, $P_{\text{rad}, i}$ is the power radiated by that stage, and $P_{\text{cond}, i} $ is the heat conducted out through the support at the edge of the element. In the case of all elements other than the stack of foam filters, we can say that $P_{\text{rad}, i-1} \gg P_{\text{rad}, i+1}$ since the temperature of the $i-1$ layer is significantly colder than the $i+1$ layer. Therefore our model follows:
\begin{equation}
P_{\text{trans}, i-1} + P_{\text{rad}, i-1} - P_{\text{trans}, i} = P_{\text{cond}, i} + 2 \times P_{\text{rad}, i}
\end{equation}
At each stage, we model the (cylindrically symmetric) elements as discrete rings with known radius and thickness and a variable temperature (Fig. \ref{fig:shells}). We can then calculate the energy balance for each ring using the total power absorbed and emitted in the portion of the disk interior to that ring, and the power conducted through the outer edge of the ring. The absorbed power is taken from the incident loads, with the fractions absorbed and transmitted calculated from the transmission spectrum of the incident radiation and frequency dependent transmission data for the particular material. To calculate the average optical depth for the transmission coefficient, we use the diffuse approximation to describe the transmission between layers, as they are very close together. The model then solves for the temperature gradient necessary to achieve thermal equilibrium: the conducted power is calculated from the material conductance and the modeled temperature difference, with the edge temperatures constrained by model inputs (taken from direct measurement in the cryostat), while the radiated power comes from the component transmittance and temperatures. Once a temperature gradient that minimizes the residuals in the energy balance for a given layer is found, an effective temperature is calculated to provide the radiated power for the next layer in the stack.
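As an illustration of solving the balance in Eq. (2) for a single element, the following sketch bisects for the temperature at which the absorbed, conducted, and (two-sided) radiated powers balance. The function and parameter names are ours, and a single gray-body emissivity stands in for the full frequency-dependent treatment described above:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def element_temperature(P_absorbed, area, emissivity, G_edge, T_edge, T_hi=2000.0):
    """Bisect for the element temperature T satisfying
    P_absorbed = G_edge*(T - T_edge) + 2*emissivity*SIGMA*area*T**4,
    i.e. absorbed power = conducted power + radiation from both faces."""
    def residual(T):
        return P_absorbed - G_edge * (T - T_edge) - 2.0 * emissivity * SIGMA * area * T**4
    lo, hi = T_edge, T_hi
    for _ in range(200):  # residual decreases monotonically in T, so bisection works
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As a sanity check, with no edge conduction and unit emissivity, an element absorbing $2\sigma A T^4$ equilibrates at temperature $T$, as expected for two radiating faces.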
For the multilayer foam filter stack, the load is modeled separately via the method discussed in Choi et al.\cite{Choi_2013}. Because the foam material has a low thermal conductivity, the conduction out the sides of the filters can be neglected, allowing the filter temperatures to be modeled solely through radiative equilibrium. However, the warm stages are sufficiently close together in temperature that $P_{\text{rad}, i+1}$ is significant for the load on each filter, so the model described above does not directly apply. Instead, the stack is modeled as a system of equations that evaluates the temperatures of all foam filters at once. The radiative model is computed entirely through the balance of absorption and reflection, assuming no significant load is transmitted through the filters. Room temperature measurements have shown that the transmittance is very near 0 at the wavelengths dominating the thermal radiation, while the loads are relatively small at the low frequencies where the transmission is large for the polyethylene based HD-30 foam that we use, as was the case for the polystyrene foam in the Choi et al. paper (Fig. \ref{fig:hd30fts}) \cite{Wandui_thesis, Choi_2013}.
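The coupled foam-stack equations can be sketched as a simple relaxation solve. Here each layer is idealized as opaque with equal emissivity on both faces, so each interior layer satisfies $2T_i^4 = T_{i-1}^4 + T_{i+1}^4$ in equilibrium; the boundary temperatures and layer count below are illustrative, not the BA model values:

```python
def foam_stack_temperatures(T_warm, T_cold, n_layers, n_iter=20000):
    """Relaxation solve for n_layers opaque gray filters in radiative
    equilibrium between a warm boundary (window side) and a cold boundary
    (50K-stage side). Side conduction is neglected, as appropriate for the
    low conductivity foam. Returns the list of layer temperatures [K]."""
    Tw4, Tc4 = T_warm**4, T_cold**4
    t4 = [0.5 * (Tw4 + Tc4)] * n_layers  # initial guess for each T_i^4
    for _ in range(n_iter):
        for i in range(n_layers):
            left = Tw4 if i == 0 else t4[i - 1]
            right = Tc4 if i == n_layers - 1 else t4[i + 1]
            t4[i] = 0.5 * (left + right)  # enforce 2*T_i^4 = T_{i-1}^4 + T_{i+1}^4
    return [x**0.25 for x in t4]
```

In this idealized radiation-shield limit, the equilibrium $T_i^4$ profile is linear through the stack and the net radiative power passed to the cold side is reduced by a factor of $n_{\rm layers}+1$, which is the qualitative behavior exploited by the 12-filter stack.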
The results of this model depend strongly on the parameters for transmittance and conductance that are put into the model for each of the materials. The model has references for alumina \cite{Inoue_2014}, HDPE \cite{Goldsmith}, fused quartz \cite{Inoue_2014}, teflon \cite{Goldsmith, NIST}, nylon \cite{NIST}, and Silicon \cite{Touloukian_1970, Afsar_1990}. Where possible, cryogenic values have been used; however these parameters are not available for all materials at all temperatures. Optical testing efforts at the Harvard labs aim to provide cryogenic references for more of these components, so that the reliability of this model may be improved.
\section{Thermal Testing}
In order to test how the performance of as-built cryogenic receivers compares to the thermal model, we have implemented cryogenic macrobolometers (Fig. \ref{fig:macrobolo}) that can be installed on different stages of the receiver to measure the incoming radiative load. These macrobolometers work by taking an absorber varying in size from 2-10 cm$^2$ and supporting it with a section of thermally resistive G10/FR4, so that the absorbed thermal load produces a measurable temperature difference between the two sides of the thermal resistance. To ensure that the absorbing region has an emissivity of 1, the absorbing surface is covered with Bock Black \cite{BockThesis}, while the rest of the device is wrapped in a low-emissivity aluminized mylar tape to minimize other hidden contributions to the absorber area. The conductance of the thermally resistive portion is measured in situ by applying additional heat to the absorber side of the device and measuring how the temperature difference scales with the load. While fitting to those data, we assume a functional form for the temperature dependence of the conductivity that matches standard references \cite{NIST, Runyan_2008}. Cryogenic measurements of our supply of G10/FR4 material without the macrobolometer absorbers have been shown to effectively match the reference in the 0.27 - 4.2 K range (Fig. \ref{fig:g10fit}).
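With a fitted power-law conductivity, the absorbed load follows from integrating $k(T)$ across the resistive link. A minimal sketch (the function name, power-law form, and parameter values are illustrative; the real fit uses the reference G10/FR4 curves):

```python
def inferred_load(T_hot, T_cold, a_over_l, k0, n):
    """Power flowing through a resistive link with conductivity
    k(T) = k0 * T**n [W m^-1 K^-1] and cross-section/length ratio
    a_over_l [m], between the absorber side (T_hot) and bath side (T_cold):
    P = (A/L) * integral of k(T) dT from T_cold to T_hot."""
    return a_over_l * k0 * (T_hot**(n + 1) - T_cold**(n + 1)) / (n + 1)
```

For a constant conductivity ($n = 0$) this reduces to the familiar $P = (A/L)\,k_0\,(T_{\rm hot} - T_{\rm cold})$.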
To confirm that these macrobolometers can deliver physically reasonable measurements, we have installed two of these in an enclosure with the absorbing areas facing one another, such that the load incident on one device is dominated by the load radiated by the other, which could be controlled via a heater (Fig. \ref{fig:enclosure} Left). Since we are measuring the temperature of the radiating region of the source macrobolometer, this allows us to provide a known load, scaled by the view factors between the two devices. The results of this test are as expected, with the predicted load falling within the measurement range, taking into account the uncertainties arising from the precision of the thermometer calibration (Fig. \ref{fig:enclosure} Right).
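The predicted load in this back-to-back test is gray-body radiative exchange scaled by the geometric view factor; a minimal sketch, assuming emissivity near 1 for the blackened absorbers (names are ours):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def predicted_load(T_source, T_detector, area, view_factor, emissivity=1.0):
    """Net radiative power on the detector absorber from the facing source
    absorber, scaled by the geometric view factor between the two devices."""
    return emissivity * SIGMA * area * view_factor * (T_source**4 - T_detector**4)
```

Raising the source temperature with the heater then sweeps the predicted load, which can be compared point by point with the macrobolometer measurement.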
These macrobolometers have been deployed in the BA3 receiver as it is being commissioned for its eventual deployment as part of BICEP Array. By installing several of these above the 50K alumina filter, we are able to evaluate whether the performance of the foam filter stack is consistent with our expectations. In fact, what we have measured is that the radiative load is significantly higher than expected, and is higher near the edge of the filter than it is closer to the center (Fig. \ref{fig:50kload}). The thermal model asserts that there will be a uniform load from the foam filter stack, based on the assumption that there is negligible conducted heat from the supports to the filter, so each surface should be isothermal. This radial trend suggests that the support rings at the edge of the foam filters are conducting measurable heat from the vacuum jacket at room temperature, to provide the unmodeled load. To test this conclusion, we have modified the support structure with the goal of mitigating this effect and reducing the overall load on our 50K stage.
Similar macrobolometers have been installed on the focal plane to measure the load coming from the 4K optics. These devices have a larger absorbing area and a narrower resistive region than the 50K devices, in order to be sensitive to the smaller flux at that stage. Using the BA2 receiver, which is equipped with the 4K optics, we have measured a load of $10.2\pm2.8$ $\mu$W/m$^2$. The predicted load from the thermal model for this cryostat configuration is 11.2 $\mu$W/m$^2$, which falls within the uncertainties of our measurement and provides confidence in our model at that level.
\section{Conclusions}
Astronomical measurements that require cryogenic instrumentation to achieve the necessary sensitivity demand careful consideration of the thermal design to ensure that the cryostat achieves the required thermal performance. We have developed a model of the thermal radiation that passes through the optics of our receivers, which has been used to evaluate the performance of the BICEP Array telescope. To verify the results of this model, we have also deployed macrobolometers to directly measure the radiation environment inside the cryostat. This has allowed us to measure effects that are inconsistent with the assumptions of the model, motivating practical modifications to the design that can lead to future improvements in performance.
\section{Acknowledgments}
The BICEP/Keck project (including BICEP2, BICEP3, and BICEP Array) has been made possible through a series of grants from the National Science Foundation including 0742818, 0742592, 1044978, 1110087, 1145172, 1145143, 1145248, 1639040, 1638957, 1638978, 1638970, 1726917, 1313010, 1313062, 1313158, 1313287, 0960243, 1836010, 1056465, \& 1255358 and by the Keck Foundation. The development of antenna-coupled detector technology was supported by the JPL Research and Technology Development Fund and NASA Grants 06-ARPA206-0040, 10-SAT10-0017, 12-SAT12-0031, 14-SAT14-0009, 16-SAT16-0002, \& 18-SAT18-0017. The development and testing of focal planes were supported by the Gordon and Betty Moore Foundation at Caltech. Readout electronics were supported by a Canada Foundation for Innovation grant to UBC. The computations in this paper were run on the Odyssey cluster supported by the FAS Science Division Research Computing Group at Harvard University. The analysis effort at Stanford and SLAC was partially supported by the Department of Energy, Contract DE-AC02-76SF00515. We thank the staff of the U.S. Antarctic Program and in particular the South Pole Station without whose help this research would not have been possible. Tireless administrative support was provided by Kathy Deniston, Sheri Stoll, Irene Coyle, Amy Dierker, Donna Hernandez, and Julie Shih.
\bibliography{goldfinger_spie_2022.bib}
\bibliographystyle{spiebib} %
|
Title:
Benchmarking MESA isochrones against the Hyades single star sequence |
Abstract: Based on GAIA EDR3, we revisit and update our sample of bonafide single stars
in the Hyades open cluster. The small observational uncertainties in parallax
and photometry of EDR3 result in a tightly defined stellar sequence, which is
ideal for the testing and calibration of theoretical stellar evolutionary
tracks and isochrones. We benchmark the solar-scaled MESA evolutionary models
against the single star sequence. We find that the non-rotating MESA models for
[Fe/H] = +0.25 provide a good fit for stars with masses above 0.85, and very
low mass stars below 0.25 M$_\odot$. For stars with masses between 0.25 and
0.85 M$_\odot$ the models systematically under predict the observed stellar
luminosity. One potential limitation of the models for partially convective
stars more massive than 0.35 M$_\odot$ is the prescription of (superadiabatic)
convection with the mixing-length theory parameter $\alpha_{\rm ML}$ tuned to
match the Solar model. Below 0.35 M$_\odot$, the increased scatter in the
stellar sequence might be a manifestation of the convective kissing
instability, which is driven by variations in the $^3$He nuclear energy
production rate due to instabilities at the convective core to envelope
boundary. For a Hyades-like stellar population, the application of solar-scaled
models to subsolar mass stars could result in a significant underestimate of
the age, or an overestimate of the metallicity. We suggest that future grids of
solar-scaled evolutionary stellar models could be complemented by Hyades-scaled
models in the mass range 0.25 to 0.85 M$_\odot$.
| https://export.arxiv.org/pdf/2208.04969 |
\label{firstpage}
\pagerange{\pageref{firstpage}--\pageref{lastpage}}
\begin{keywords}
open clusters and associations: individual: Hyades -- convection -- stars: evolution -- stars: fundamental parameters -- stars: interiors -- Hertzsprung-Russell and colour-magnitude diagrams
\end{keywords}
\section{Introduction}
Stellar clusters and individual binary stars constitute the most important astrophysical calibration sources.
At an average distance of 45\,pc, the Hyades open cluster is the closest (populous) stellar cluster to the Sun. The Hyades have super-solar metallicity. \cite{Kopytova2016} derived [Fe/H]=+0.14 for the best-fitting BT-Settl2010+PISA and DARTMOUTH isochrones \citep{Allard2013,Dotter2008,Tognelli2011,DaRio2012,Tognelli2012}, while \cite{Gossage2018} derived [Fe/H]=+0.10 to +0.12 using MESA isochrones \citep{Paxton2011,Paxton2013,Paxton2015,Dotter2016,Choi2016,Paxton2018} in near infrared (NIR).
At an age of $\approx 635 \pm 135$\,Myr, the Hyades open cluster comprises main sequence and post-main sequence stars with initial masses in the range $\approx$0.1 to 3.6\,M$_\odot$ \citep[e.~g.][]{Perryman1998,deBruijne2001,Krumholz2019}, which serve as benchmarks for models of stellar evolution. Colour-absolute magnitude diagrams (CMD) \citep{Castellani2001,Roeser11} revealed discrepancies between stellar models and observations for sub-solar mass stars. \cite{Castellani2001} attributed this to limitations in the description of the efficiency of superadiabatic convection in the outer layers of partially convective stars. \cite{Kopytova2016} showed that the incorporation of updated input physics \citep[equation of state, opacities, etc.,][]{DeglInnocenti2008} results in an improved match between theoretical isochrones and 2MASS photometric measurements in the mass range 0.6 to 0.8\,M$_\odot$. Below 0.6\,M$_\odot$, the close-to-vertical (i.e.\ constant colour) stellar sequence in NIR CMDs makes isochronal fitting rather insensitive to stellar luminosity, and hence metallicity or age.
In \cite{Brandner2022} we used MESA and BHAC2015 \citep{Baraffe2015} isochrones in the {\it GAIA} photometric system to determine the age of the nearby exoplanet host star GJ~367. Both sets of isochrones suggested a young age in the range of $\approx$30 to 60\,Myr for the star, which is considerably younger than the age suggested by gyrochronology, and by its space motion and galactic dynamics models. This and the unprecedented photometric and parallax accuracy of {\it GAIA} EDR3 observations prompted us to benchmark the solar-scaled MESA isochrones against the single star sequence of the Hyades open cluster.
The structure of the paper is as follows. In section 2 we present the updated sequence of bonafide single stars in the Hyades open cluster. In section 3 we summarize literature age and metallicity estimates of the Hyades based on photometric data. In section 4 we benchmark the MESA isochrones against the single star sequence in the {\it GAIA} photometric system. In section 5 we discuss potential shortcomings of grids of stellar models, and suggest ways forward.
\section{The Hyades single star sequence}
\begin{table*}
\caption{Single and multiple candidate members of the Hyades cluster, classified according to {\it GAIA} EDR3, sorted by RA. The full table is available online.} %
\label{SingleStar} %
\centering %
\begin{tabular}{r c c c c c c c c c c} %
GAIA EDR3 ID& dpgeo & lo$\_$dpgeo & hi$\_$dpgeo & G & $\sigma_{\rm G}$& BP & $\sigma_{\rm BP}$ & RP & $\sigma_{\rm RP}$ & flag \\
& [pc] & [pc] & [pc] & [mag]&[mag]& [mag]&[mag]&[mag]&[mag]& \\
\hline %
395696646953688448& 59.612 & 59.565 & 59.655 &10.7791& 0.0009& 11.4353& 0.0004& 10.0031& 0.0002& 1\\
386277165192421120& 30.935 & 30.908 & 30.962 &14.7670& 0.0089& 16.6299& 0.0030& 13.4805& 0.0009& 2\\
393017579491591168& 64.815 & 64.423 & 65.115 &17.0372& 0.0073& 19.2764& 0.0380& 15.6597& 0.0031& 1\\
420637590762193792& 83.610 & 82.524 & 84.646 &18.5022& 0.0007& 20.9101& 0.0994& 17.0266& 0.0068& 1\\
385502112574538624& 42.821 & 42.780 & 42.859 &14.3729& 0.0005& 16.1299& 0.0041& 13.1075& 0.0015& 1\\
2860677398591440768& 39.751 & 39.395 & 40.054 &11.7972& 0.0017& 12.8881& 0.0017& 10.6002& 0.0022& 4\\
\end{tabular}
\begin{quote}
flag: 1 - bonafide single, 2 - likely binary or multiple, 3 - white dwarf, 4 - peculiar {\it GAIA} EDR3 BP-G vs.\ G-RP colours; median, low (lo, 16th quantile), and high (hi, 84th quantile) values of the photogeometric distance posterior dpgeo are from \cite{Bailer2021}
\end{quote}
\end{table*}
\cite{Kopytova2016} defined a fiducial observational sequence of single stars suitable for testing of stellar evolutionary and atmospheric models. The stars were selected from a sample of 724 probable members of the Hyades open cluster established by \cite{Roeser11} based on their proper motion according to the PPMXL catalog \citep{Roeser2010}. Using literature data and high-angular resolution Lucky Imaging observations with AstraLux Norte \citep{Hormuth2008}, the stars were screened for stellar binarity and photometric blends to derive a sample of single stars. This single star sample was selected quite conservatively. Considering the intrinsic 2$''$ angular resolution of the 2MASS Point Source Catalog \citep{Cutri2003}, and in order to minimize the effect of photometric blends, \cite{Kopytova2016} flagged all occurrences of another source within $\approx$4$''$ as potential binary companions, and excluded them from the single star sample.
{\it GAIA} EDR3 facilitates a refinement of the single star sequence from \cite{Kopytova2016}. \cite{GAIA_Smart2021A} published a {\it GAIA} Catalogue of Nearby Stars (GCNS) listing 920 candidate members of the Hyades. In order to reject photometric outliers caused, e.g., by blends in the {\it GAIA} BP and RP bands, we first applied a colour cut-off: $-0.2$\,mag $\le$ BP - G $\le$ 3.2\,mag and $-0.3$\,mag $\le$ G - RP $\le$ 1.7\,mag. This rejected 30 sources, resulting in a sample of 890 candidate members listed in Table \ref{SingleStar}. As a second step we fitted a 4th order polynomial to the data in a G - RP vs.\ BP - G two-colour diagram. Application of an iterative sigma-clipping resulted in a sample of 783 candidate members with good photometric quality data. As a third step, we used the Renormalized Unit Weight Error (RUWE, see \cite{gaia_edr3lite,Lindegren2021}) to distinguish between bonafide single stars and likely unresolved binary and multiple systems. RUWE values around 1.0 indicate that the {\it GAIA} astrometric observations are well fitted by the single-star model. A significantly larger RUWE value indicates that the single-star model does not provide a good fit to the astrometric solution due to, e.g., the non-single nature of the source.
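The polynomial fit with iterative sigma clipping used in the second and third steps can be sketched as follows (the clipping threshold and implementation details below are our own illustrative choices, not necessarily those used for the actual sample):

```python
import numpy as np

def sigma_clip_polyfit(x, y, order=4, nsigma=3.0, max_iter=10):
    """Iteratively fit a polynomial y(x) and reject points deviating by more
    than nsigma standard deviations, as for cleaning the G - RP vs. BP - G
    two-colour diagram. Returns (coefficients, boolean keep mask)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    keep = np.ones(x.size, dtype=bool)
    coeffs = np.polyfit(x, y, order)
    for _ in range(max_iter):
        coeffs = np.polyfit(x[keep], y[keep], order)
        resid = y - np.polyval(coeffs, x)
        sigma = np.std(resid[keep])
        new_keep = np.abs(resid) < nsigma * sigma
        if np.array_equal(new_keep, keep):
            break  # clipping has converged
        keep = new_keep
    return coeffs, keep
```

A single strongly discrepant point (e.g. a blended source) inflates the first-pass scatter but is rejected after the first iteration, after which the fit settles onto the tight sequence.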
Next we computed absolute G$_{\rm abs}$ magnitudes based on the apparent G magnitudes and the EDR3 photogeometric distances according to \cite{Bailer2021}\footnote{For 18 stars, including five of the Hyades white dwarfs (GAIA EDR3 ID 45980377978968064, 3313606340183243136, 3313714023603261568, 3306722607119077120, 3294248609046258048), we substituted the missing photogeometric distance by the geometric distance.}. In order to identify and flag photometric binaries (i.e.\ sources falling on the binary sequence in the CMD, but with separations too close to be identified by the RUWE selection), we fitted an 8th order polynomial to the single star main-sequence, and applied an iterative sigma clipping. This resulted in 616 sources classified as bonafide single stars, 156 as likely binary or multiple systems, and 11 as white dwarfs.
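The absolute magnitudes follow from the standard distance modulus (neglecting extinction, which is small for the nearby Hyades); as a minimal sketch:

```python
import math

def absolute_magnitude(apparent_mag, distance_pc):
    """G_abs = G - 5*log10(d / 10 pc), with d the (photo)geometric distance."""
    return apparent_mag - 5.0 * math.log10(distance_pc / 10.0)
```

For the first star in Table 1 (G = 10.7791 mag at 59.612 pc) this gives G$_{\rm abs}$ $\approx$ 6.90 mag.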
Figure \ref{CMDfull} shows the colour-absolute magnitude diagram of the Hyades open cluster, covering the main sequence and part of the post-main sequence. Blue dots mark the candidate members of the Hyades from the GCNS sample. Red crosses mark bonafide single stars, with observational uncertainties derived from uncertainties in {\it GAIA} photometry and parallax indicated. The median uncertainty in BP-RP colour amounts to 4.1\,mmag, and to 3.3\,mmag in G$_{\rm abs}$.
In particular for BP-RP $\le 2.3$\,mag, the single stars form a very tight sequence, which is clearly distinct from the scatter of apparently overluminous Hyades members located on the binary sequence. For redder (and intrinsically fainter) stars, there is a larger scatter in the bonafide single star sequence.
As Figure \ref{CMDvlm} highlights, the single star sequence becomes successively incomplete for BP-RP $\ge 3.2$\,mag. The sample of GCNS Hyades candidate members extends to BP-RP = 4.7\,mag. The reddest and lowest mass object included in the single star sample is LSPM J0354+2316 ({\it GAIA} EDR3 65638443294980224), which is of spectral type M8 \citep{Bardalez2014}, and has a mass of $\approx$0.1\,M$_\odot$ \citep{Goldmann2013}.
\section{Age and metallicity of the Hyades}
There is a vast literature on abundance estimates and the calibration of astrophysical parameters for stars in the Hyades cluster (see, e.g., \cite{Perryman1998,Tognelli2021}, and references therein). Age estimates for the Hyades are in general derived from the main sequence turn-off and isochrone fitting. Abundance estimates rely both on spectral analysis and isochrone fitting.
\begin{table*}
\caption{Compilation of abundance, $\alpha_{\rm ML}$, and age estimates for the Hyades based on isochrone fitting} %
\label{hyades_metal} %
\centering %
\begin{tabular}{l c c c c c c c c l} %
[Fe/H] & X & Y & Z& $\Delta$Y/$\Delta$Z& $\alpha_{\rm ML}$ & mass range & age &PD$^1$&reference \\
& & & & & & [M$_\odot$] & [Myr] & & \\ \hline
$+0.14${\raisebox{0.5ex}{\tiny$^{+0.05}_{-0.05}$}} &0.716 & $0.260${\raisebox{0.5ex}{\tiny$^{+0.020}_{-0.020}$}} & $0.024${\raisebox{0.5ex}{\tiny$^{+0.003}_{-0.003}$}}& &1.64 &[0.8,1.6] & $625\pm 50$ &BD&\cite{Perryman1998}\\
+0.14 &0.691 &0.285 &0.024 & &1.68 &[0.5,0.9] &$638 \pm 13$ &TY&\cite{deBruijne2001}\\
+0.14 &0.716 &0.260 &0.024 & &1.64 &[0.9,1.6] &$638 \pm 13$ &TY&\cite{deBruijne2001}\\
+0.14 &0.708 &0.273 &0.019 & &1.68 &[1.6,2.4] &$631$ &TY&\cite{deBruijne2001}\\
$+0.14$ &0.700 &0.283 &0.0175 &2 &1.74 &[0.13,2.30] &$726 \pm 50$ &2M&\cite{Kopytova2016}\\
$+0.24${\raisebox{0.5ex}{\tiny$^{+0.02}_{-0.02}$}}& & & & &1.82 &[0.5,2.4] &$726 \pm 50$ &TY&\cite{Gossage2018}\\
$+0.10${\raisebox{0.5ex}{\tiny$^{+0.02}_{-0.02}$}}& & & & &1.82&[0.5,2.4] &741{\raisebox{0.5ex}{\tiny$^{+36}_{-14}$}} &2M&\cite{Gossage2018}\\
$+0.169${\raisebox{0.5ex}{\tiny$^{+0.025}_{-0.025}$}} &0.6947 &0.2867 &0.01863 &$2.03\pm0.33$ &$2.01\pm0.05$ &[0.83,1.35]&500&G2& \cite{Tognelli2021}\\ \hline
\end{tabular}
\begin{quote}
$^1$ key to photometric data set (PD): 2M - based on 2MASS photometry \citep{Cutri2003}; BD - based on BDA \citep{Mermilliod1995}; G2 - based on {\it GAIA} DR2 \citep{GAIA2016,GAIA2018}; TY - based on {\it TYCHO} photometry \citep{Hog2000}
\end{quote}
\end{table*}
Table \ref{hyades_metal} summarizes some of the canonical estimates, including the helium-to-metal enrichment ratio $\Delta$Y/$\Delta$Z, based on isochrone fitting. The differences in the parameter estimates can in part be explained by variations in the observational methods and data sets and their intrinsic uncertainties, in part by differences and advances in the modelling of stellar interiors and atmospheres, and in part by advances in the analysis of the solar elemental abundances (see, e.g., \cite{Asplund2009}). The majority of the estimates focused on post-main sequence stars and main sequence stars of spectral type K and earlier (more massive than 0.5\,M$_\odot$). The sole exception is the study by \cite{Kopytova2016}, which includes stars with masses as low as 0.13\,M$_\odot$. Some of the studies also consider variations of the mixing length (ML) parameter $\alpha_{\rm ML}$ or stellar rotation \citep{Gossage2018,Tognelli2021}. The latter effect appears to be most noticeable in the colours and luminosity of post-main sequence stars.
Common to all studies is the derived (or assumed) super-solar metallicity of the Hyades, with [Fe/H] estimates in the range $+0.10$ to $+0.24$. Age estimates for the Hyades cover the range 500 to 770\,Myr. The majority of the studies also indicate a higher than solar He abundance for the Hyades (see Table \ref{hyades_metal}).
\section{Benchmarking isochrones}
\begin{table*}
\caption{Model stellar parameters at inflection points between MESA isochrone (age = 710 Myr, [Fe/H]=+0.25, v/v$_{\rm crit}$ = 0) and Hyades single star sequence} %
\label{StellarColourRegions} %
\centering %
\begin{tabular}{c c c c c c c c} %
G$_{\rm abs}$$^1$& BP - RP$^1$& B - V & log T$_{\rm eff}$ & log g & log L & Mass & note \\
[mag]& [mag]& [mag]& [K]& [cm/s$^2$]& [L$_\odot$] & [M$_\odot$] & \\ \hline
$<$6.7&$<$1.2 &$<$1.03 &$>$3.676 &$<$4.61 &$>$-0.594 &$>$0.85 &good fit\\
6.7 to 11.2 &1.2 to 2.6 &1.03 to 1.30 &3.676 to 3.519 &4.61 to 4.90 &-0.594 to -1.864 &0.85 to 0.35&model underluminous\\
11.2 to 13.5&2.6 to 3.1 &1.30 to 1.36 &3.519 to 3.485 &4.90 to 4.97 &-1.864 to -2.229 &0.35 to 0.25& transition region \\
$>$13.5&$>$3.1 &$>$1.36 &$<$3.485 &$>$4.97 &$<$-2.229 &$\le$0.25 &good fit\\
\hline %
\end{tabular}
\begin{quote}
$^1$ G$_{\rm abs}$ and BP-RP refer to the observed stellar sequence, while the other quantities are according to the MESA isochrone for the corresponding BP-RP colour.
\end{quote}
\end{table*}
The solar scaled MESA isochrones and stellar tracks\footnote{We use the \detokenize{MIST_v1.2_vvcrit0.0_UBVRIplus} packaged model grid, dated 2020-12-04, which includes updated synthetic photometry for {\it GAIA} EDR3 based on \cite{Riello2021}. The grid steps are 0.25 dex in the range -2.00 $\le $ [Fe/H] $\le$+0.50, and 0.05 dex in the range 5.0 $\le \log_{10}$(age [yr]) $\le$ 10.3} use a Ledoux plus mixing length theory prescription of convection, with $\alpha_{\rm ML} = 1.82$ tuned to fit the Sun \citep{Choi2016}.
In Figure \ref{CMDfull} we overlay two MESA isochrones for [Fe/H]=+0.25, and no rotation (v/v$_{\rm crit}$ = 0) on the colour-absolute magnitude diagram of the Hyades. As presented in \cite{Gossage2018}, the best fitting MESA isochrones in the optical yield systematically higher metallicities than the best fitting MESA isochrones in the NIR for the Hyades, Praesepe, and Pleiades clusters.\footnote{\cite{Gossage2018} attribute this to the small number of stars in their optical samples ($<$40 stars for the Hyades, according to their Figure 7), and suggest that their optical samples do not provide meaningful constraints on the metallicity of either of these clusters.}
In the pre-computed grid of solar-scaled MESA isochrones, [Fe/H]=+0.25 is closest to [Fe/H]=+0.24$\pm$0.01 as deduced by \cite{Gossage2018} from the best fitting MESA isochrone in the {\it TYCHO} B$_{\rm T}$ ,V$_{\rm T}$ photometric system. The choice of non-rotating stellar models is based on the dearth of rapid rotators among single stars with masses $\ge$0.3\,M$_\odot$ in the Hyades \citep{Douglas2016}. They find that stars with masses of $\approx$0.4\,M$_\odot$ from the sample defined by \cite{Kopytova2016} have typical rotational periods of $\approx$20\,days.
We find that the isochrone for $\log_{10}$(age [yr]) = 8.85 ($\approx$710\,Myr) provides a better fit for stars with masses $>$1.35\,M$_\odot$ than the next younger (630\,Myr) or older (795\,Myr) isochrones. This age is in good agreement with the isochronal age determinations by \cite{Kopytova2016} and \cite{Gossage2018}. \cite{Tognelli2021} only considered stars with masses $<$1.5\,M$_\odot$ for the age determination, and were thus less sensitive to the rapid evolution of stellar luminosity near the upper end of the main sequence.
For stars between 0.25 and 0.85\,M$_\odot$, the 710\,Myr isochrone tends to underpredict the stellar luminosity. In the colour range BP-RP = 1.2 to 2.6\,mag, the 55\,Myr isochrone (dash-dotted line) provides a good fit to the observed sequence, but it overpredicts the stellar luminosity for BP-RP $\ge$2.6\,mag.
In Figure \ref{CMDfull_metal} we overlay three isochrones for an age of 710\,Myr, and for [Fe/H] = 0.00, +0.25, and +0.50. The highest metallicity isochrone (dash-dotted line) provides a good fit in the colour range BP-RP = 1.2 to 2.6\,mag, but overpredicts the stellar luminosity for bluer and redder stars.
Table \ref{StellarColourRegions} lists the corresponding B-V colour, effective temperature, $\log$ g, $\log$ L, and stellar mass according to the MESA isochrone (age = 710\,Myr, [Fe/H] = +0.25, and v/v$_{\rm crit}$ = 0) at the boundaries of the four BP-RP colour regions marked by the vertical dotted lines in Figures \ref{CMDfull} and \ref{CMDfull_metal}. None of the single-age, single-metallicity isochrones is capable of fitting the entire single star sequence.
\section{Discussion}
In Figures \ref{CMDfull} and \ref{CMDfull_metal} we have marked the BP-RP colour regions where the 710\,Myr, [Fe/H] = +0.25 isochrone provides a good fit to the observed sequence, and where it significantly deviates from the observed sequence by predicting fainter (underluminous) stars.
The good fit of the MESA isochrones for solar-type stars with masses above 0.85\,M$_\odot$, and for very low-mass stars with masses between 0.1 and 0.25\,M$_\odot$ is highly encouraging, and speaks for the maturity of stellar model grids.
Potential problem areas in modelling CMDs of the Hyades have been noticed in the literature. In general, the challenges considered for the Hyades stellar sequence are related to stellar rotation, elemental abundances ([Fe/H] and $\Delta {\rm Y}/\Delta {\rm Z}$) and nuclear energy production rates, and the description of convection. \cite{Gossage2018} discuss the effect of rotation and variations in $\alpha_{\rm ML}$ for stars above 1.2\,M$_\odot$ in the Hyades. They conclude that a larger $\alpha_{\rm ML} = 2.0$ would provide a better fit to the giant stars in the Hyades. For Hyades members with masses less than 0.85\,M$_\odot$, \cite{Castellani2001} noted a discrepancy between observed and theoretical optical colour and brightness, in particular in the mass range where superadiabatic convection dominates the outer convective zone of a star. They suggest considering $\alpha_{\rm ML}$ as a free parameter in this mass range, which could be tuned to a (lower) value to better describe the efficiency of superadiabatic convection. Based in part on the ideas initially explored by \cite{Stevenson1979} and \cite{MacDonald2014}, \cite{Ireland2018} study the effects of rotation and magnetic fields on convection, and investigate how they could be parameterized by a depth-dependent $\alpha_{\rm ML}$.
The good match of the synthetic photometry of the MESA isochrone to the observed GAIA data for BP-RP $>$3.1\,mag suggests that there is no generic issue in the ATLAS12 and SYNTHE conversion \citep{Choi2016} of luminosity and temperature to synthetic photometry for cool stellar photospheres. \cite{Choi2016} use different sets of opacity tables for the mass ranges 0.1 to 0.3\,M$_\odot$, 0.3 to 0.6\,M$_\odot$, and $>$0.6\,M$_\odot$ as boundary conditions for the atmospheres. The discrepancy for 1.2\,mag$<$BP-RP$<$3.1\,mag (T$_{\rm eff}$ = 3050 to 4750\,K, m = 0.25 to 0.85\,M$_\odot$) might thus warrant a review of the opacity tables and their transitions in this parameter range. \cite{Choi2016} also point out systematic differences in the evolutionary track of a 0.3\,M$_\odot$ star between the MIST and Lyon \citep{Baraffe1998,Baraffe2003,Baraffe2015} models on the one side, and the PARSEC \citep{Giradi2002,Marigo2008,Bressan2012} models on the other side. They attribute this difference to the modified temperature-Rosseland mean optical depth (T-$\tau$) relation \citep{Chen2014} employed for low-mass stars by the PARSEC models.
The colour range around BP-RP $\approx$2.80 to 2.95\,mag (G$_{\rm abs} \approx 10.5$ to 11\,mag) stands out in the CMD as the observed stellar sequence shows a larger scatter than for stars with bluer colours or BP-RP $>$ 3.1\,mag (G$_{\rm abs} > 11.0$\,mag, Figures \ref{CMDfull} and \ref{CMDvlm}). According to the MESA tracks, this corresponds to stellar masses just below 0.35\,M$_\odot$, which roughly coincides with the fully convective boundary. As discussed by \cite{Baraffe2018}, for stars in this mass range the energy production rate of the proton-proton I branch ($^3$He + $^3$He $\longrightarrow$ $^4$He + 2 p) is crucial for a proper description of the stellar luminosity. Depending on the precise stellar properties and the initial He abundance, $^3$He in Hyades members in this mass range might not yet have reached its equilibrium abundance. An overabundance in $^3$He resulting in an enhanced energy production rate could explain the observed overluminosity of stars compared to the 710\,Myr isochrone. The increased scatter in the stellar magnitude-colour sequence could suggest the presence of an instability in the stellar luminosity for this particular mass and age range. \cite{vanSaders2012} identified a $^3$He-driven instability for stars near the fully convective boundary, which they referred to as {\it convective kissing instability}. \cite{Baraffe2018} confirmed the existence of this instability using a different evolutionary code. A caveat is that the observed increase in the scatter of G$_{\rm abs}$ by $\approx$0.2\,mag (20\%) is about a factor 2 to 4 larger than the variations in luminosity according to the models by \cite{vanSaders2012}. For stellar interior models of solar metallicity, the convective kissing instability seems to be restricted to a relatively narrow mass range of 0.34 to 0.37\,M$_\odot$ \citep{vanSaders2012,Baraffe2018}.
In the Hyades, a stellar mass of $\approx$0.30\,M$_\odot$ marks the boundary between very low mass stars with rotational periods ranging from a fraction of a day to 5 days, and more massive single stars with rotational periods in the range of 10 to 20\,days \citep{Douglas2016,Douglas2019}. Fast rotation and the associated stellar activity have been associated with radius inflation \citep{Somers2015a,Somers2017}. As discussed by \cite{Feiden2014} and \cite{Feiden2016}, a significant radius inflation requires strong interior magnetic fields in the range of 10\,MG, which might be difficult to maintain over the age of the Hyades. \cite{Douglas2016} discuss that poloidal fields in Hyades members in the mass range 0.3 to 0.6\,M$_\odot$ resulted in effective magnetic braking, and strongly reduced stellar activity, while fully convective stars of lower mass primarily rely on their (weak) stellar winds for shedding angular momentum. We suggest that a future study could focus on the rotation periods and activity levels of the stars in the mass range 0.25 to 0.35\,M$_\odot$, and look for, e.g., correlations with their luminosity or photometric variability.
As of mid 2022, \cite{Choi2016} had more than 1200 citations, with the MESA models being used to assess ages, metallicity, radii, effective temperatures, luminosity, surface gravity, etc.\ for individual stars and (complex) stellar populations.
The primary science of more than 130 of these articles is on exoplanets, where in general planetary properties are derived relative to the astrophysical properties of the host star. Biased stellar properties could thus directly bias the deduced properties of exoplanets.
The example of the Hyades single star sequence highlights the potential perils in the analysis of low-mass (late-type) stellar populations. Even in the presence of {\it GAIA} high-precision parallax and photometric information, missing supplemental information could result in biased conclusions on absolute stellar ages, metallicity, or the intrinsic spread of these properties. A `blind' isochronal analysis of the Hyades sample in the BP-RP colour range 1.2 to 2.6\,mag might underestimate the true age by more than a factor of 10 (55 vs 710\,Myr, see Figure \ref{CMDfull}), or result in a significant overestimate of its metallicity ([Fe/H] = +0.50 vs. +0.25, see Figure \ref{CMDfull_metal}).
The updated single star sequence of the Hyades cluster with its accurate {\it GAIA} EDR3 distance and photometric measurements could serve as a reference to tune modelling parameters like, e.g., $\alpha_{\rm ML}$ to the efficiency of super-adiabatic convection, or to tune astrophysical parameters like, e.g., $\Delta {\rm Y}/\Delta {\rm Z}$ (and in particular $^3$He abundances) to reflect actual energy production rates.
We suggest that future grids of solar-scaled evolutionary models should be tested against the Hyades single star sequence presented in Table \ref{SingleStar}, or against comparable data sets for the Pleiades or Praesepe open clusters.
\section*{Acknowledgements}
We thank H.-W.\ Rix for the initial discussion, which prompted this research. We thank the anonymous referee for constructive comments, which helped to improve the paper.
This work has made use of data from the European Space Agency (ESA) mission
{\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}
Data Processing and Analysis Consortium (DPAC,
\url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC
has been provided by national institutions, in particular the institutions
participating in the {\it Gaia} Multilateral Agreement.
\section*{Data availability}
The data underlying this article are available in the article and in its online supplementary material. The online version of Table \ref{SingleStar} includes coordinates, which makes objects discoverable in VizieR.
\bibliographystyle{mnras}
\bibliography{lit} %
\bsp %
\label{lastpage} |
Title:
DarkMix: Mixture Models for the Detection and Characterization of Dark Matter Halos |
Abstract: Dark matter simulations require statistical techniques to properly identify
and classify their halos and structures. Nonparametric solutions provide
catalogs of these structures but lack the additional learning of a model-based
algorithm and might misclassify particles in merging situations. With mixture
models, we can simultaneously fit multiple density profiles to the halos that
are found in a dark matter simulation. In this work, we use the Einasto profile
(Einasto 1965, 1968, 1969) to model the halos found in a sample of the Bolshoi
simulation (Klypin et al. 2011), and we obtain their location, size, shape and
mass. Our code is implemented in the R statistical software environment and can
be accessed on this https URL
| https://export.arxiv.org/pdf/2208.04194 |
\title{DarkMix: Mixture Models for the Detection and Characterization of Dark Matter Halos}
\correspondingauthor{Llu\'is Hurtado-Gil}
\email{[email protected], [email protected]}
\author[0000-0001-9674-1345]{Llu\'is Hurtado-Gil}
\affiliation{eDreams ODIGEO \\
C/ Bail\`en 67-69,\\
08009 Barcelona, Spain.}
\affiliation{Observatori Astron\`omic \\
Universitat de Val\`encia \\
C/ Catedr\`atic Jos\'e Beltr\'an 2 \\
E-46980, Paterna, Spain}
\nocollaboration{5}
\author[0000-0002-0631-7514]{Michael A. Kuhn}
\affiliation{California Institute of Technology \\
Pasadena, CA 91125, USA}
\author[0000-0003-0791-7885]{Pablo Arnalte-Mur}
\affiliation{Observatori Astron\`omic \\
Universitat de Val\`encia \\
C/ Catedr\`atic Jos\'e Beltr\'an 2 \\
E-46980, Paterna, Spain}
\affiliation{Departament d'Astronomia i Astrof\'isica \\
Universitat de Val\`encia \\
E-46100, Burjassot, Spain}
\author[0000-0002-5077-6734]{Eric D. Feigelson}
\affiliation{Department of Astronomy \& Astrophysics, \\
Penn State University \\
University Park, PA 16802, USA}
\affiliation{Center for Astrostatistics, \\
Penn State University,
University Park, PA 16802, USA}
\author[0000-0002-9937-0532]{Vicent Mart\'inez}
\affiliation{Observatori Astron\`omic \\
Universitat de Val\`encia \\
C/ Catedr\`atic Jos\'e Beltr\'an 2 \\
E-46980, Paterna, Spain}
\affiliation{Departament d'Astronomia i Astrof\'isica \\
Universitat de Val\`encia \\
E-46100, Burjassot, Spain}
\affiliation{Unidad Asociada Observatorio Astron\'omico (IFCA-UV) \\
E-46980, Paterna, Spain}
\keywords{Dark matter distribution (356), Galaxy dark matter halos (1880), Spatial point processes (1915), Mixture model (1932)}
\vfill
\section{Introduction}
\label{sec:intro}
\subsection{Dark Matter Halos}
In 1952, Neyman \& Scott proposed the first statistical model of the large-scale galaxy distribution \citep{1952ApJ...116..144N, 1953PNAS...39..737N, 1954PNAS...40..873N}. This model interpreted luminous matter to be distributed in the universe according to a stochastic process, where a discrete distribution of galaxies is aggregated into overdense clusters that are themselves distributed in space as a Poisson process. Building this model requires three main components: the distribution of galaxies within each cluster, the size distribution of the clusters, and a description of the clustering of clusters. While their original formulation was too simple, this framework is still a valid approach for the baryonic distribution in the universe, and it can be extended to the dark matter distribution \citep{2002PhR...372....1C}. Here, dark matter is a continuous field of particles that collapses from overdensities into strongly clustered structures, which are called halos. The identification and morphology of these halos is the main topic of the present work.
Analytic models and numeric simulations show how the initial dark matter field, which is made of particles, evolves from an initially smooth state to a highly clustered final condition, which results in a complex cosmic web of knots (the halos), filaments, sheets and voids \citep{1985ApJS...58....1B}. Simulations show that the halo mass \citep{1999MNRAS.310.1147M, 1996ApJ...462..563N}, abundance and spatial distribution \citep{1997ApJ...484..523C, 2001MNRAS.321..372J} are highly dependent on the initial conditions. The final structure within a halo can be reasonably assumed to be in virial equilibrium. The nature and evolution of the galaxies that have formed inside halos is strongly dependent on the parent halo's properties \citep{1999MNRAS.303..188K, 1999MNRAS.310.1087S, 2001MNRAS.327.1041B, 2000MNRAS.319..209C}, with more galaxies formed in more massive and clustered halos. However, the galaxy distribution is biased toward stronger clustering conditions \citep{2003MNRAS.344..847M, 2011ApJ...736...59Z, 2016ApJ...818..174H}.
The spherical collapse model is a classic approximation for initial conditions leading to dark matter halos \citep{1972ApJ...176....1G, 1984ApJ...281....9F, 1985ApJS...58....1B}. Here, a dark matter overdensity collapses from a tophat density perturbation into a halo that, depending on the overdensity mass and density, virializes when a certain size is reached. The final density of the halo is much higher than the prediction of a linear model, because the evolution of the halo clustering is governed by nonlinear processes, where the densest regions are populated by the most massive halos \citep{1980PhRvD..22.1882B, 1984ApJ...284L...9K} and the dark matter follows a log-normal distribution \citep{1991MNRAS.248....1C, 2017A&A...601A..40H, 2017MNRAS.466.1444C}.
Regarding the shape of the collapsed halos, \cite{1985ApJS...58....1B} and \cite{1984ApJ...281....9F} suggest that the density profile around the center depends on the initial density distribution of the parent overdense region. Although more massive halos arise from denser peaks in the initial fluctuation field \citep{1984ApJ...284L...9K, 1985ApJ...297...16H}, these dense peaks are also less centrally concentrated \citep{1980PhRvD..22.1882B}. Massive virialized halos are thereby less centrally concentrated than less massive halos \citep{1996ApJ...462..563N}.
Several halo density profiles have been proposed, including the \citet{1990ApJ...356..359H}, Navarro-Frenk-White \citep{1996ApJ...462..563N, 1997ApJ...490..493N}, and \citet{1968PTarO..36..414E} profiles. In sections~\ref{sec:NFW} and~\ref{sec:einasto}, we introduce the latter two profiles and justify our final selection of the Einasto profile.
\subsection{Finding Structure with Mixture Models and Other Methods}
The growth in volume and detail of astronomical observations and simulations requires automated tools to characterize the properties and evolution of halo structures. These tools should be robust and objective, and should not depend on heuristic choices or subjective judgment.
Most of the widely-used clustering algorithms for galaxy clustering analysis are nonparametric and are based on dissimilarity distances, a metric used to decide if two particles are sufficiently close to belong to the same cluster. Astronomers commonly use the Friends-of-Friends algorithm, which is known in statistics as single-linkage hierarchical agglomerative clustering \citep{gower1969minimum}. This method is highly performant and scalable, which is crucial for large volume data sets, such as dark matter numerical simulations \citep{2020arXiv200311468W}. However, this method is prone to `chaining' of unrelated groupings \citep{everitt2011cluster}. The resulting clusters depend strongly on the choice of density or size threshold value. Other nonparametric clustering procedures---such as Ward's hierarchical clustering, the $k$-means algorithm, DBSCAN, kernel density estimation bump hunting, and their many extensions---also have heuristic thresholds or stopping rules, with a corresponding loss of objectivity and robustness. These procedures also have the limitation of giving `hard' classifications, where data points are strictly classified into one cluster or another without regard to the reliability of this decision. Therefore, the characterization of merging or blended clusters is limited.
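To illustrate the single-linkage behaviour described above, here is a naive $O(N^2)$ friends-of-friends grouping based on union-find. This is a didactic sketch only (production halo finders rely on tree or grid acceleration), and all names are ours.

```python
import numpy as np

def friends_of_friends(points, linking_length):
    """Naive friends-of-friends: particles closer than the linking length
    end up in the same group (single-linkage agglomerative clustering)."""
    n = len(points)
    parent = list(range(n))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        dist = np.linalg.norm(points - points[i], axis=1)
        for j in np.nonzero(dist < linking_length)[0]:
            ri, rj = find(i), find(int(j))
            if ri != rj:
                parent[rj] = ri
    return np.array([find(i) for i in range(n)])

pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                [5.0, 5.0, 5.0], [5.1, 5.0, 5.0]])
labels = friends_of_friends(pts, linking_length=0.5)
# two well-separated pairs -> two groups
```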
In contrast, parametric methods assume a particular shape to the structures in the population. They also have the advantage that modeling can be based on maximum likelihood estimation or (if prior information is available) Bayesian inference, without requiring the addition of heuristic thresholds or stopping rules. These methods are typically based on probability density functions. This allows us to perform statistical tests (e.g., significance tests on the parameters) and goodness of fit validations (e.g., the coefficient of determination) \citep{rao1973linear}. Furthermore, they give `soft' probabilities for each point belonging to each cluster. Hard membership classifications can be decided afterwards using heuristic decision rules.
These benefits motivate us to use finite mixture models to detect and characterize dark matter structures \citep{peel2000finite, mclachlan2000mixtures, everitt2005finite, fruhwirth2006finite, mclachlan2007algorithm, everitt2011cluster}. This technique has been widely used in astronomy and astrophysics with considerable success \citep{2014ApJ...787..107K, fruhwirth2019handbook, KuhnFeigelson19}.
We understand the dark matter distribution to be structured in halos, which can be described by a parametric surface-density distribution. The halos and other structures are described by the mixture model components, which are summed to create the surface density function. Mixture models are widely used in parametric modeling of point processes, usually with Gaussian function components \citep{fraley2002model}. As previously explained, we will instead use an astrophysically motivated function, such as the Einasto profile.
The mixture model is then estimated following a three step method: first, the number of components is chosen by the user; second, the properties of these components are obtained through maximum likelihood parameter estimation (MLE); and third, the component assignments for dark matter particles are determined using posterior probabilities from the fitted models \citep{everitt2011cluster}.
Mixture models are dependent on three main choices: the chosen number of components, the convergence criteria towards the surface density distribution, and the selected profile. The first is assessed using model selection criteria, such as the Bayesian and Akaike Information Criteria \citep{schwarz1978estimating, akaike1998information}. The model goodness-of-fit will be measured using well-known statistics, such as the coefficient of determination, or residual maps between the input data and model predictions \citep{2014ApJ...787..107K, 2017MNRAS.472.2808D}. We will justify the selection of the Einasto profile in section~\ref{sec:einasto}. Once the model is estimated, we may want to give a final particle classification in the clusters. Mixture models allow for a probability based classification, but as we explain in section~\ref{sec:mem} we recommend the application of a heuristic threshold for background particles. This threshold should be based on the merging conditions of our data.
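The two information criteria take the usual forms $\mathrm{BIC} = -2\ln L + p\ln N$ and $\mathrm{AIC} = -2\ln L + 2p$; the sketch below uses hypothetical log-likelihoods and parameter counts, purely for illustration:

```python
import math

def bic(loglike, n_params, n_points):
    """Bayesian Information Criterion (lower is better)."""
    return -2.0 * loglike + n_params * math.log(n_points)

def aic(loglike, n_params):
    """Akaike Information Criterion (lower is better)."""
    return -2.0 * loglike + 2.0 * n_params

# Hypothetical fits with k = 2 and k = 3 halo components, assuming each
# halo contributes 6 free parameters plus one background weight
bic_k2 = bic(-10500.0, 2 * 6 + 1, 5000)
bic_k3 = bic(-10450.0, 3 * 6 + 1, 5000)
preferred_k = 3 if bic_k3 < bic_k2 else 2
```

For large $N$, the BIC penalty $p\ln N$ exceeds the AIC penalty $2p$, so BIC favours models with fewer components.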
Aside from the mixture model alternatives, several algorithms are widely used by the community to detect structures in dark matter N-body simulations and classify their particles. We mention the Bound Density Maximum algorithm \citep{1997astro.ph.12217K, 2013AN....334..691R} in section~\ref{bdm}, which is based on density maxima and spherical halos. Another alternative is the ROCKSTAR algorithm \citep{2013ApJ...762..109B}, `based on adaptive hierarchical refinement of friends-of-friends groups in six phase-space dimensions and one time dimension'. With this method, the algorithm can provide an effective particle classification and can even detect small subhalos in merging conditions. The halo finder VELOCIraptor \citep{2019PASA...36...21E} has also been proven to be able to `identify sub-halos deep within the host that have negligible density contrasts to their parent halo'. Both methods are meant to be used on large N-body simulations, providing a full catalog of halos, sub-halos and even tidal features.
In contrast, our method is a parametric based approach. Although this limits its use to small-sized samples of particles, it provides a parametric fitting of the halos density profile. In this work, we will focus on the advantages of obtaining such a profile-based description.
This paper is organized as follows. Section~\ref{sec:mm} presents the finite mixture modelling technique, the Einasto profile and different applications for the estimated results. Section~\ref{sec:mle} illustrates the MLE calculation and section~\ref{sec:ra} describes the tools that we used to validate the best-fit model. In section~\ref{sec:over} we summarize the steps of our code \texttt{darkmix} and we apply it in section~\ref{sec:val} to a set of generated realizations of a simulated dark matter distribution with Einasto profile and fit them with our software to validate it. Section~\ref{sec:data} presents a data set from the Bolshoi simulation, and section~\ref{sec:res} shows the analysis and results of our mixture modeling and validation. In section~\ref{sec:con}, we summarize our main conclusions and outline future work.
\section{Finite Mixture Models for Dark Matter Halos}
\label{sec:mm}
Finite mixture densities are a family of probability density distributions that combine multiple components into a single probability function. Each component has a probability density function (e.g., the Einasto profile), and the final mixture model is the weighted sum of $c$ components. All components are continuous and can be evaluated at any location of the data space, occupied or not by a particle. Mixture models admit as many different components as desired, as long as they can be defined as probability distributions. In this work we will consider two kinds of components: $k$ halos, and a single background component containing all of the particles that are not associated with a halo. The total number of components in the mixture model is $c = k + 1$.
Given a data sample of $N$ dark matter particles from a simulation, we define $\mathbf{X}$ as a three column matrix containing the coordinates of our particles. Each component will be modeled by a profile function $\rho_j(\mathbf{r}_i, \vec{\theta}_j)$, $j=1,\dotsc,c$, where $\vec{\theta}_j$ is the parameter vector for component $j$ and $\Theta$ is the matrix of the $c$ vectors $\vec{\theta}_j$. The sum of these components is weighted by the mixing proportions $\vec{w} = \{w_j\}$, $j=1,\dotsc,c$, which are non-negative. In point process statistics, the model is defined over a window $W$ containing the sample of $N$ points $\mathbf{X} = \{\mathbf{r}_i\}$, where $\mathbf{r}_i = (x_i, y_i, z_i)$.
Together, the finite mixture model $\Sigma$ can be written as
\begin{equation}\label{fun:mm}
\Sigma(\mathbf{X} | \vec{w}, \Theta) = \sum_{j=1}^{c} w_j \cdot \rho_j(\mathbf{X} | \vec{\theta}_j) = \sum_{j=1}^{c} \sum_{i=1}^N w_j \cdot \rho_j(\mathbf{r}_i | \vec{\theta}_j)
\end{equation}
Our implementation of the mixture model for the dark matter distribution follows the strategy of \citet{2014ApJ...787..107K}, and we therefore adopt the notation of that paper.
The functions $\rho_j$ are the profile functions of our dark matter structures; their weighted sum is the surface density function $\Sigma$, the probability density function that we use as a model and fit to our data. These functions typically create a multimodal distribution that matches the clusters present in our data. Therefore, individual dark matter particles are used to obtain the overall density distribution of the structure that they belong to. No learning can be obtained from the internal distribution (i.e., the relative positions of the particles inside their structure). The total finite mixture model $\Sigma$ is the weighted sum of these components and models the dark matter density distribution as generated by the N-body simulation.
The profile of a mixture model component can be as irregular as we are able to model it. However, since this method is generally used for cluster classification, we will model two kinds of components---the halos and the background.
We introduce the NFW and the Einasto profiles in the next section. However, as we justify below, we will only make use of the latter. We will also introduce the final version of our mixture model before deriving several quantities and functions of interest.
\subsection{The NFW Profile}\label{sec:NFW}
The NFW profile \citep{1996ApJ...462..563N, 1997ApJ...490..493N} has the following shape
\begin{equation}
\rho(r|M) = \frac{\rho_s(M)}{\left[ c(M) \frac{r}{r_{\rm vir}(M)} \right] \left[ 1 + c(M) \frac{r}{r_{\rm vir}(M)} \right]^2} \; ,
\end{equation}
truncated at the virial radius $r = r_{\rm vir}(M)$.
This profile has a logarithmic slope of $-1$ at small scales ($r \ll r_{\rm vir}/c$) and of $-3$ at large scales ($r \gg r_{\rm vir}/c$).
Here, $\rho_s(M)$ is a normalization factor that is fixed from the condition that the total mass integrated to $r_{\rm vir}(M)$ must be equal to $M$ and $c(M)$ is the concentration parameter. We note that the NFW profile or power-law profiles cannot be used in this context because they do not have a finite integral. Instead, we will use the Einasto profile for our study.
\subsection{The Einasto Profile}\label{sec:einasto}
The Einasto profile \citep{1965TrAlm...5...87E, 1968PTarO..36..414E, 1969Afz.....5..137E} for three-dimensional density profiles is similar to the S\'ersic profile used for two-dimensional galaxy brightness profiles \citep{1963BAAA....6...41S,1968adga.book.....S}. Assuming spherical symmetry, the Einasto profile describes the density $\rho$ of halo $j$ as a function of distance from the halo center $\mathbf{r_{0,j}}$. The S\'ersic index $n$ is a free parameter that measures the shape of the halo density profile: larger values of $n$ create centrally concentrated profiles. As we will see in section~\ref{sec:res}, for large values of this parameter (above $n \sim 8$) the profile degenerates into a power-law-like function. The other free parameter of the profile is the Einasto radius $r_e$, which defines a volume containing half of the total mass and can be used to characterize the size of the halo.
Given these parameters, we define $d_n$ as a function of $n$ chosen such that $\rho_e$ is the density at the radius $r_e$. The factor $d_n$ can be obtained by solving
\begin{equation}
\Gamma(3n) = 2\gamma(3n,d_n)
\end{equation}
\noindent where $\Gamma$ is the complete gamma function and $\gamma$ is the lower incomplete gamma function\footnote{The gamma functions are extensions of the factorial function to complex numbers.}.
We are now ready to define the Einasto profile as
\begin{equation}
\rho(r) = \rho_e \exp{\Big(-d_n \Big[ (r/r_e)^{1/n} - 1\Big]\Big)}
\end{equation}
\noindent where $r = ||\mathbf{r-r_{0}}||$ is the distance between a chosen location $\mathbf{r}$ and the center of the halo $\mathbf{r_0}$. As seen in eq.~\ref{fun:mm}, this profile is multiplied by the mixture coefficient $p_j$, which makes it impossible to estimate $p_j$ and $\rho_e$ separately. The results that we provide for $p_j$ in Tables~\ref{dens} and~\ref{tab:6k} should therefore be read as $p_j \cdot \rho_e$.
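As an illustrative numeric sketch (in Python rather than the R of \texttt{darkmix}), the factor $d_n$ can be evaluated directly: the condition $\Gamma(3n) = 2\gamma(3n, d_n)$ is equivalent to $P(3n, d_n) = 1/2$ for the regularized lower incomplete gamma function $P$, i.e., $d_n$ is the median of a Gamma$(3n)$ distribution.

```python
import numpy as np
from scipy.special import gammaincinv

def einasto_dn(n):
    # Gamma(3n) = 2*gamma(3n, d_n)  <=>  P(3n, d_n) = 1/2,
    # so d_n is the median of a Gamma(3n) distribution
    return gammaincinv(3.0 * n, 0.5)

def einasto_density(r, rho_e, r_e, n):
    # rho(r) = rho_e * exp(-d_n * ((r/r_e)^(1/n) - 1)); by construction
    # the density at the half-mass radius r_e equals rho_e
    d = einasto_dn(n)
    return rho_e * np.exp(-d * ((np.asarray(r) / r_e) ** (1.0 / n) - 1.0))
```

By construction, `einasto_density(r_e, rho_e, r_e, n)` returns `rho_e` for any $n$.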
Before moving on to the next section, we define the profile of the background component as a constant density, with value 1 at all locations of the window $W$. Since the background profile is multiplied by its own mixture coefficient $p_b$, we expect the latter to be close to the mean density of the data set. In a large data set whose conditions are comparable to those of the universe, we would expect it to be close to the mean density of the universe. This is equivalent to defining the background as a homogeneous Poisson point process, with profile function
\begin{equation}
\rho_b(\mathbf{r}) = 1
\end{equation}
\subsection{Dark Matter Halo Mixture Model}
The mixture model for our problem is a weighted sum of the background component ($\rho_b$) plus the $k$ halo components ($\rho$), hence $c = k + 1$ components (see eq.~\ref{mmsigma}). The parameters for each halo profile are the three-vector representing the halo center $\mathbf{r_0}$, the size parameter $r_e$, and the shape parameter $n$. These parameters are collected into a five-dimensional parameter vector $\vec{\theta} = (\mathbf{r_0},r_e,n)$.
The contribution of each component to the final density distribution is uneven. The mixture proportions $\vec{w}=\{w_j\}$, $j=1,\dotsc,c$ are used as weights to normalize each halo's contribution to the mixture model. Based on equation (\ref{fun:mm}), the resulting model is
\begin{equation}\label{mmsigma}
\Sigma(\mathbf{r} | \vec{w},\Theta) = \frac{N}{M} \Big(w_{b} \cdot \rho_b(\mathbf{r}) + \sum_{j=1}^{k} w_j \cdot \rho(\mathbf{r}-\mathbf{r_{0,j}} | r_{e,j},n_j) \Big)
\end{equation}
\noindent where the weights $w_j$ give the mixture proportions $p_j = N\cdot w_j/M$ in equation~(\ref{fun:mm}) and $M$ is the total mass given by
\begin{equation} \label{mmmass}
M = \int_W \Big(w_{b} + \sum_{j=1}^{k} w_j \cdot \rho(\mathbf{r}-\mathbf{r_{0,j}} | r_{e,j},n_j) \Big) d\mathbf{r}
\end{equation}
The term $N/M$ works as a normalizing constant so that the integral of the model $\Sigma$ is always the number of particles $N$. Note that the function $\rho_b(\mathbf{r})$ does not appear explicitly because it is identically 1. The function $\Sigma$ is our probability density function for a mixture model problem, as in equation~\ref{fun:mm}. Since the density of the universe is finite, we can model the densities of the components relative to each other. In equation~\ref{mmsigma}, the mixture proportions are defined so that $\sum_{i=1}^c p_i = N$, and by definition we can set $w_1 = 1$.
This normalization of the model by the total number of objects $N$ has other advantages. The integral of $\Sigma(\mathbf{r})$ in a region $A \subset W$ gives the number of model objects in $A$. The integration of a chosen model component over the entire volume $W$ gives the number of model objects belonging to that component. The statistical model can thus be understood as an inhomogeneous Poisson process with intensity $\Sigma(\mathbf{r})$.
Notice that the quantity $\rho_e$ in the Einasto profile is absorbed into the mixture proportion, since both are multiplicative factors of the profile.
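The mixture density of equation~\ref{mmsigma} can be sketched numerically as follows (illustrative Python, not the \texttt{darkmix} R implementation). For a halo lying well inside $W$, the mass integral of the Einasto profile (with $\rho_e$ absorbed into the weight) is well approximated by its integral over all space, $\int \rho \, 4\pi r^2 \, dr = 4\pi e^{d_n} n r_e^3 \, \Gamma(3n)/d_n^{3n}$:

```python
import numpy as np
from scipy.special import gamma, gammaincinv

def einasto(r, r_e, n):
    d = gammaincinv(3 * n, 0.5)
    return np.exp(-d * ((r / r_e) ** (1.0 / n) - 1.0))

def halo_mass(r_e, n):
    # integral of the Einasto profile over all space (rho_e absorbed in w_j)
    d = gammaincinv(3 * n, 0.5)
    return 4 * np.pi * np.exp(d) * n * r_e**3 * gamma(3 * n) / d ** (3 * n)

def mixture_density(pts, n_pts, w_b, box_volume, halos):
    # halos: list of (center, r_e, n, w_j); background profile is constant 1
    mass = w_b * box_volume + sum(w * halo_mass(r_e, n)
                                  for _, r_e, n, w in halos)
    sigma = np.full(len(pts), w_b, dtype=float)
    for center, r_e, n, w in halos:
        r = np.linalg.norm(pts - np.asarray(center), axis=1)
        sigma += w * einasto(r, r_e, n)
    return n_pts / mass * sigma     # integrates to n_pts over the window
```

Integrating `mixture_density` over the window on a fine grid recovers the total number of particles, as required by the $N/M$ normalization.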
\subsection{Halo Identification and Characterization}
\label{sec:su}
Once the mixture model is fitted (section~\ref{sec:mle}), analysis of each Einasto-shaped component representing a halo can follow. Due to the additive nature of a mixture model, it is easy to determine the dominant component at every location. Here, we apply a simple decision rule to assign individual data points to a halo: given a particle, we calculate its probability of belonging to each component and then randomize its membership according to these probabilities.
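This decision rule can be sketched as follows (hypothetical Python; the per-component probabilities are assumed to have already been evaluated from the fitted model at each particle's position):

```python
import numpy as np

def assign_membership(prob, rng):
    # prob: (n_particles, c) matrix of membership probabilities per component;
    # draw one component index per particle according to its own row
    p = prob / prob.sum(axis=1, keepdims=True)   # normalize each row
    u = rng.random((len(p), 1))
    return (u > np.cumsum(p, axis=1)).sum(axis=1)
```

A particle is thus assigned to component $j$ with probability proportional to $w_j \rho_j$ evaluated at its position, rather than deterministically to the dominant component.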
Each model halo component can be compared to the empirical halo profile in the context of the total mixture model of overlapping halos. Component $j$ has a three-vector $\mathbf{r_{0,j}}$ giving its location, and three additional parameters $w_j$, $r_{e,j}$ and $n_j$. We define the empirical halo profile as the number of real particles per volume (the particle density) in concentric shells centered at $\mathbf{r_{0,j}}$ with volumes $S_j(r) = V_j(r + \text{d}r) - V_j(r)$. The empirical halo profile is
\begin{equation}\label{pro:emp}
\hat{\delta}_{j}(r) = \frac{1}{S_j(r)}(n_j(r+\text{d}r)-n_j(r))
\end{equation}
\noindent where $n_j(r)$ is the number of particles in a sphere of radius $r$ centered at $\mathbf{r_{0,j}}$. Note that this has the same units (number of particles per unit volume) as the mixture model in equation~\ref{mmsigma}, thanks to the factor $N/M$. The estimated density profile of the mixture model centered at $\mathbf{r_{0,j}}$ is
\begin{equation}\label{pro:fit}
\hat{P}(r | \mathbf{r_{0,j}}) = \int_{S_j(r)} \Sigma(\mathbf{r} | \vec{p}, \Theta) d\mathbf{r}.
\end{equation}
The integral of $\Sigma$ over any volume $V$ gives the estimated number of particles. Since the profile $\hat{P}(r)$ is evaluated at the concentric shells $S_j(r)$ with thickness $\text{d}r$, it should be understood as a density profile over radius $r$. Component $j$ in the previous integral can be isolated from the total model profile as
\begin{equation}\label{pro:com}
\hat{\rho}(r | \mathbf{r_{0,j}}) = \int_{S_j(r)} p_j\rho(\mathbf{r - r_{0,j}} | r_{e,j}, n_j) d\mathbf{r}.
\end{equation}
We can now compare the observed profile of $\hat{\delta}_{j}$ with the full model estimator $\hat{P}(r | \mathbf{r_{0,j}})$ and with the isolated estimated profile $\hat{\rho}(r | \mathbf{r_{0,j}})$, which may show important departures from the total profile. For short distances, $\hat{\rho}$ and $\hat{P}$ should have similar values. However, for distances at which our component $j$ starts to increasingly overlap with other components, these functions will start to diverge as other structures contribute more to $\hat{P}$.
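A minimal sketch of the empirical profile of equation~\ref{pro:emp} (illustrative Python; the shell edges and halo center are assumed given):

```python
import numpy as np

def empirical_profile(pts, center, edges):
    # particle counts in concentric shells around `center`, divided by the
    # shell volumes S_j(r): an estimate of the empirical density profile
    r = np.linalg.norm(pts - np.asarray(center), axis=1)
    counts, _ = np.histogram(r, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return counts / shell_vol
```

For particles distributed uniformly around the center, the profile is flat at the mean density; around a fitted halo it can be compared shell by shell with $\hat{P}$ and $\hat{\rho}$.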
Parametric models offer the possibility of easily generating new samples following the model distribution. For a mixture model, this is done by inverse transform sampling of the estimated number of objects per component, $N_j$:
\begin{equation}\label{npart}
N_j = \int_W p_j\rho(\mathbf{r - r_{0,j}} | r_{e,j}, n_j) d\mathbf{r}.
\end{equation}
For the interested user, software implementations of these functions are written in the R language and included in our repository.
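Drawing radii from a single Einasto component reduces to inverting its enclosed-mass fraction. From the profile, $M({<}r)/M_{\rm tot} = P\!\left(3n,\, d_n (r/r_e)^{1/n}\right)$ with $P$ the regularized lower incomplete gamma function, so inverse transform sampling can be sketched as (illustrative Python, not the \texttt{darkmix} R implementation):

```python
import numpy as np
from scipy.special import gammaincinv

def sample_einasto_radii(size, r_e, n, rng):
    # invert u = P(3n, d_n (r/r_e)^(1/n)) for uniform u in [0, 1):
    # r = r_e * (gammaincinv(3n, u) / d_n)^n
    d = gammaincinv(3 * n, 0.5)   # d_n: half the mass lies within r_e
    u = rng.random(size)
    return r_e * (gammaincinv(3 * n, u) / d) ** n

# combining the radii with isotropic random directions gives 3-D positions
rng = np.random.default_rng(2)
r = sample_einasto_radii(40000, 2.0, 1.5, rng)
```

Since $d_n$ is defined so that half of the mass lies within $r_e$, the median of the sampled radii converges to $r_e$, which provides a quick sanity check.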
\section{Model Fitting} \label{sec:fitting}
\subsection{Estimation of Model Parameters and Model Selection}\label{sec:mle}
The optimal mixture model for a data set is calculated by maximum likelihood estimation (MLE) for the log-likelihood
\begin{equation}\label{loglikn}
\log {L(\vec{p},\Theta | \mathbf{X})} = \sum_{i=1}^N \log {\Sigma(\mathbf{r}_i | \vec{p},\Theta)} - \int_W \Sigma(\mathbf{r'} | \vec{p}, \Theta) d\mathbf{r}'
\end{equation}
\noindent where $\mathbf{X} = \{\mathbf{r}_i\}$ contains the point process distributed in the window $W$ with surface density distribution $\Sigma$. Note that the right-hand term is the integral of the model over the window for the parameters $\Theta$. The MLE and Bayesian best-fit model parameters $\Theta$ are calculated for a chosen number of components $c$. Model selection among models of different complexities is based on minimizing two commonly used penalized likelihood measures, the Bayesian Information Criterion (BIC) and the Akaike Information Criterion (AIC) \citep{schwarz1978estimating, akaike1998information},
\begin{align}
\text{BIC}(k) & = -2 \log L + 6(c-1) \log N \label{eq:bic}\\
\text{AIC}(k) & = -2 \log L + 12(c-1) \label{eq:aic}
\end{align}
\noindent where $6(c-1)$ is the number of parameters in the $c-1$ halo components plus the background component and $\log L$ is the log-likelihood for the best-fit parameters.
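Both criteria are trivial to compute from the maximized log-likelihood; a minimal sketch matching equations~\ref{eq:bic} and~\ref{eq:aic}:

```python
import numpy as np

def bic(log_l, c, n_pts):
    # 6(c-1) free parameters: 5 Einasto parameters plus a weight per halo
    return -2.0 * log_l + 6 * (c - 1) * np.log(n_pts)

def aic(log_l, c):
    return -2.0 * log_l + 12 * (c - 1)
```

Models of increasing $c$ are compared by evaluating both criteria and retaining the minimum.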
The relative strengths of the AIC and BIC are widely debated, although both are founded on powerful theorems \citep{lahiri2001model, konishi2008information, burnham2002model, kass1995reference, everitt2011cluster}. The BIC has a well-accepted scale for relative model merit: one model is strongly (very strongly) favored over another when $\Delta(\text{BIC}) > 6$ ($>10$) \citep{kass1995bayes}. For the mixture model problem, \citet{2014ApJ...787..107K} found that the AIC was more sensitive to the presence of sparse clusters alongside rich clusters because its penalty for complexity is weaker when $N$ is large.
\subsection{Goodness-of-Fit} \label{sec:ra}
A best-fit model with optimum complexity selected with a likelihood-based criterion is not guaranteed to be a good fit to the data. A complex clustered spatial distribution cannot be effectively fitted with a mixture model of a few halos. It is therefore necessary to assess the overall quality of the fit for the entire pattern through a study of the residuals, to identify departures of the model from the real data density distribution. Several such tests are outlined here. Studies involving residual analysis of astronomical spatial point processes for goodness-of-fit evaluation of maximum likelihood mixture models include \citet{2014ApJ...787..107K} and \citet{2017MNRAS.472.2808D}.
The residual analysis used here is described in \citet{RSSB:RSSB519, baddeley2015spatial} as `raw residuals'. Raw residuals are defined as the difference between the actual number of points in a region $A$ and our estimate for the same region. For our mixture model,
\begin{equation}\label{res}
R(A) = n(\mathbf{X} \cap A) - \int_A \Sigma(u | \vec{p}, \Theta) \text{d}u
\end{equation}
\noindent where $n(\mathbf{X} \cap A)$ is the number of data points in the region $A$. For well-fitted models, the sum of the raw residuals should approach zero when integrated over $W$, and the residual map should approach a random spatial distribution with no correlations between the values of the residuals and the locations of the data points (spatial white noise). Residuals should have low amplitude when compared with the surface density function based on the data.
The calculation of the residuals is made using a quadrature or grid of dummy points $\mathbf{Q} = \{u_i\} \subset W$, $i=1,\dotsc,T$ \citep{RSSB:RSSB519}. Each point defines a small cell where the residuals are calculated. These residuals create a sparse distribution that is 1 when the cell contains a data point $\mathbf{r}$ and $-\Sigma(u | \vec{p}, \Theta)$ at empty locations in $W$, which is negative and typically close to zero.
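A one-dimensional illustration of the raw residuals of equation~\ref{res} over a grid of cells (hypothetical Python; the model intensity is passed in as a function and integrated per cell by the midpoint rule):

```python
import numpy as np

def raw_residuals(pts, intensity, edges):
    # counts per cell minus the integrated model intensity per cell
    counts, _ = np.histogram(pts, bins=edges)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return counts - intensity(mids) * np.diff(edges)
```

For a well-fitted model, the residuals sum to approximately zero over $W$ and show no spatial correlation with the data.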
To effectively visualise the spatial distribution of the residuals, they have to be smoothed. With grid $\mathbf{Q}$ dense enough to approximate an integral, we can obtain the smoothed residual map $s(u)$ with
\begin{equation}\label{eq:su}
s(u) = \int_W \kappa_{\omega}(u-v) \text{d}R(v) = \sum_{i=1}^N \kappa_{\omega} (u - \mathbf{r}_i) - \int_W \kappa_{\omega}(u-v) \, \Sigma(v | \vec{p}, \Theta) \text{d} v
\end{equation}
\noindent where $\kappa_{\omega}$ is a kernel function and $R(v)$ is the raw residual in cell $v$.
It is also useful to define the relative residuals $e(u)$: the residual map normalized by the model intensity. The smoothed model intensity $\Sigma^{\dag}_{\omega}$ is the right-hand term of eq.~\ref{eq:su}. Hence the definitions of these two functions are
\begin{equation}\label{model}
\Sigma^{\dag}_{\omega}(u | \vec{p}, \Theta) = \int_{W} \kappa_{\omega}(u-v) \Sigma(v | \vec{p}, \Theta) \text{d}v
\end{equation}
\begin{equation}\label{relerr}
e(u) = s(u)/\Sigma^{\dag}_{\omega}(u | \vec{p}, \Theta)
\end{equation}
As we will see in the following sections, this function can be used to detect data structures that have not been modeled by any model component. For any structure that is properly mapped by the model, $s(u)$ will be small and the values of $e(u)$ in its region will also be small. However, if a structure in the data is not included in the model, then $\Sigma^{\dag}_{\omega}(u | \vec{p}, \Theta)$ will be close to zero at locations near that structure; the error in $s(u)$ is then amplified in $e(u)$, and unfitted structures can be easily detected.
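The smoothed and relative residual maps can be sketched on a pixel grid with a Gaussian kernel (illustrative Python using \texttt{scipy.ndimage}; \texttt{counts} and \texttt{model} are assumed to be the per-cell data counts and integrated model intensity):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual_maps(counts, model, bandwidth_px):
    # s(u): kernel-smoothed raw residuals; e(u) = s(u) / smoothed model
    s = gaussian_filter(counts - model, bandwidth_px)
    model_smooth = gaussian_filter(model, bandwidth_px)
    return s, s / model_smooth
```

An unmodeled clump then stands out in $e(u)$ even where its absolute residual in $s(u)$ is modest, because the local model intensity in the denominator is small.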
The kernel function that is used in this work is the Gaussian filter, which is commonly used in cosmology \citep{2002sgd..book.....M}, where $\omega$ is the smoothing radius or bandwidth. The appearance of the smoothing strongly depends on the choice of this quantity. The bandwidth can be selected using cross-validation or other techniques and we select it heuristically to give informative residual maps.
Finally, we use the coefficient of determination \citep{rao1973linear} to assess the global goodness of fit. Using the expected proportionality between the model density ($\Sigma_{\omega}^{\dag}$) and the data density ($\Sigma_{\omega}^{*}$) functions, we estimate a simple linear regression between them and use the resulting $R^2$ coefficient as a measure of goodness of fit \citep{2017MNRAS.472.2808D}.
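This goodness-of-fit measure amounts to regressing the smoothed data density on the smoothed model density and reporting $R^2$ (minimal sketch):

```python
import numpy as np

def r_squared(model_dens, data_dens):
    # R^2 of the simple linear regression data ~ slope * model + intercept
    x, y = np.ravel(model_dens), np.ravel(data_dens)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()
```

Values close to 1 indicate that the model density tracks the data density proportionally across the window.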
The data $\mathbf{X}$ can be similarly smoothed with the same kernel. This function is not used in the fitting or model validation process, since the smoothing loses detail of the data structure. However, we include it in our code repository for completeness and visualization purposes (as in the top left-hand panel of Fig.~\ref{data_model_dens}):
\begin{equation}\label{ker}
\Sigma^*_{\omega}(u) = \sum_{i=1}^N \kappa_{\omega}(u-\mathbf{r}_i) \qquad \mathbf{r}_i \in W \\
\end{equation}
\subsection{Available Software Packages}
It is not our aim to compare the performance of our code with other available packages; our solution is adapted to the particular scenario of dark matter halos with Einasto profiles, while most public solutions use GMMs or only accept two-dimensional data. However, we find it useful to provide a brief summary of the available solutions.
Substantial software devoted to mixture model analysis is available in the public-domain R statistical software environment, such as the CRAN packages {\it mixtools}, {\it mclust}, and {\it EMCluster} \citep{mixtools2009, mclust16, Chen2015EMClusterpackage}. The Python machine learning library \textit{scikit-learn} \citep{scikit-learn} includes Gaussian mixture model (GMM) algorithms. Another option is \textit{EMMIX}, written in Fortran \citep{RePEc:jss:jstsof:v:004:i02}, where the Expectation-Maximization (EM) algorithm is used to find the MLE \citep{krishnan1997algorithm,mclachlan2008algorithm}. However, estimation of best fits in complex data sets can be difficult due to multiple local maxima of the likelihood. A more robust procedure makes use of the stochastic EM algorithm \citep{celeux1992classification}, where randomization of the steps seeks to avoid trapping in the first local maximum found. Even with this technique, it is advisable to repeatedly run estimation algorithms from different starting values to avoid convergence to a non-optimal local maximum. If the same best-fit solution is reached every time, then we can be more confident of having found the absolute maximum.
\subsection{Software Repository}
Our solution necessarily departs from the previous packages. As explained, while most mixture model software assumes Gaussian shapes and uses the EM algorithm for parameter estimation, we adopt different algorithms. Mixture model solutions with non-Gaussian functions such as the Einasto profile are not so easily achieved with the EM algorithm, due to the reduced curvature of the likelihood function.
Following \citet{2014ApJ...787..107K}, who calculate two-dimensional mixture models with isothermal ellipsoid shapes, we maximize the log-likelihood function using the Nelder-Mead simplex algorithm \citep{nelder1965simplex} as implemented in the function \textit{optim} within the CRAN package \textit{stats} \citep{team2015r}. Testing with different initial values is recommended to avoid trapping in local maxima. Since the models have high dimensionality, a strategy of systematically freezing and thawing parameters during estimation can be useful. Once the Nelder-Mead algorithm has finished a first attempt at estimation, the results can often be improved by freezing some of the parameters and repeating the calculation for the remaining free parameters. This technique is valuable in mixture models because the parameters of distant halos tend to be uncorrelated. It can also be helpful to obtain good estimates of the halo centers $\mathbf{r_0}$ before attempting to fit the rest of the parameters. Confidence intervals on MLE parameters can be estimated from the Fisher Information Matrix.
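As a toy illustration of this estimation strategy (Python with \texttt{scipy.optimize}, not the \texttt{darkmix} R code), the parameters of a single spherical Einasto halo can be recovered by maximizing the log-likelihood of its normalized radial density, $f(r) = d_n^{3n}\, r^2 \exp[-d_n (r/r_e)^{1/n}] / [n\, r_e^3\, \Gamma(3n)]$, with the Nelder-Mead simplex:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaincinv, gammaln

def neg_log_like(theta, r):
    # log-parametrization keeps r_e and n positive during the simplex search
    r_e, n = np.exp(theta)
    d = gammaincinv(3 * n, 0.5)
    log_f = (2 * np.log(r) - d * (r / r_e) ** (1.0 / n)
             + 3 * n * np.log(d) - np.log(n)
             - 3 * np.log(r_e) - gammaln(3 * n))
    return -log_f.sum()

# draw radii from a known halo, then recover (r_e, n) by MLE
rng = np.random.default_rng(5)
true_re, true_n = 2.0, 1.5
d = gammaincinv(3 * true_n, 0.5)
radii = true_re * (gammaincinv(3 * true_n, rng.random(5000)) / d) ** true_n

fit = minimize(neg_log_like, x0=[0.0, 0.0], args=(radii,),
               method="Nelder-Mead")
re_hat, n_hat = np.exp(fit.x)
```

In the full problem the simplex explores all components jointly, which is why freezing and thawing parameter subsets becomes useful.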
MLE can present additional problems that depend on the profile functions $\rho_i(\mathbf{X},\Theta)$. It is possible for singularities to exist for certain values of $\mathbf{X}$ or $\Theta$ where the likelihood becomes infinite. This might happen when the number of parameters to be estimated is high compared with the sample size. This problem can be addressed with a Bayesian approach, where the posterior distribution of the parameters is mapped instead of maximizing the log-likelihood function. As priors, we use Gaussian functions with large variances for the centers of the halos, and log-normal distributions for the size and shape parameters $r_e$ and $n$. These calculations are performed using Markov chain Monte Carlo (MCMC). However, even when aided by MLE results previously obtained with the Nelder-Mead algorithm, sufficient MCMC evaluations of high-dimensional parametric models can be computationally expensive.
The software that we have used for our Bayesian calculations is the CRAN package \textit{LaplacesDemon} \citep{LAP1,LAP2,LAP3,LAP4}, which provides more than 40 different MCMC algorithms. These algorithms make different decisions regarding the next combination of parameters $\Theta$ to be tested, to efficiently map the normalized log-likelihood function of equation~\ref{loglikn}. Adaptive MCMC algorithms, which use the previous evaluations to choose the next $\Theta$, are often more efficient in finding the overall distribution of the function, but we must always finish with a long run of a non-adaptive algorithm to ensure convergence. In this work, we make use of the \textit{Adaptive Metropolis-Within-Gibbs} (AMWG; \citealt{doi:10.1198/jcgs.2009.06134}) and \textit{twalk} \citep{christen2010general} algorithms. These calculations start from a preliminary MLE solution obtained with the Nelder-Mead algorithm.
Our software implementation of this mixture model decomposition of the dark matter distribution is named {\it DarkMix} and is written in the R statistical software environment, which has many tools relating to spatial point processes, mixture models, likelihood calculations, model assessment, and graphics. Our work extends that developed by \citet{2014ApJ...787..107K} to identify star clusters in two dimensions. We present a new library to model three-dimensional data sets \citep{darkmix_zenodo}, which is publicly available on Github\footnote{https://github.com/LluisHGil/darkmix \label{github.url}}.
Its use is explained in sections~\ref{sec:over} and~\ref{sec:res}.
Additional documentation for the code can be found online\footnote{https://darkmix.readthedocs.io/}.
\section{\texttt{Darkmix} Overview}\label{sec:over}
Implementing the equations from section~\ref{sec:fitting}, the \texttt{darkmix} code has the following capabilities:
\begin{enumerate}
\item Creation of data and model objects compatible with the R library \texttt{spatstat}, a powerful library for working with point processes whose capabilities can be combined with our code.
\item Parametric estimation with two available methods: the Nelder-Mead simplex algorithm \citep{nelder1965simplex} and MCMC.
\item Model selection (through AIC and BIC calculations) and goodness-of-fit assessment (through $R^2$ and residual plots).
\item Generation of model outputs, which comprise the soft classification of the particles, the extraction of individual profiles for components, the generation of model realizations and the visualization of the components' sizes.
\end{enumerate}
\section{Model Validation}\label{sec:val}
Before we introduce our real case data set in section~\ref{sec:data} and estimate a mixture model, we must first validate the performance of our algorithm.
To do so, we estimate the mixture model in three different configurations: 3, 9 and 16 dark matter halos with background particles in a cube of side 25 length units. For each configuration, we generate 20 realizations at three different particle densities: $0.25$, $0.5$ and $0.75$ particles per volume unit.
The halos are modeled using the Einasto profile and the background component is uniformly distributed, as assumed by our model. The true parameters defining the location, size and shape of the halos, together with the number of particles, are generated randomly (see Tables~\ref{tab:val9} and~\ref{tab:val16}). We then obtain the mixture coefficients using eq.~\ref{mmmass}, knowing that $w_1 = 1$. With three halos, the configuration is a simple set of components around the center of the volume, and its fitting presented no problem. We use the more complex cases of 9 and 16 halos to study merging halos and halos located near the boundary of the volume window.
\begin{deluxetable}{lrrrrrrr}
\caption{True parameters for model validation: 9 halos} \label{tab:val9}
\tabletypesize{\small}
\tablehead{
\colhead{k} & \colhead{$x_0$} &\colhead{$y_0$} &\colhead{$z_0$} &\colhead{$r_e$} &\colhead{$n$} &\colhead{$\log w$} &\colhead{$N$} }
\startdata
$1$ & $2.9$ & $21.0$ & $21.7$ & $1.1$ & $2.4$ & $0.0$ & $256$ \\
$2$ & $8.2$ & $6.5$ & $18.6$ & $0.9$ & $1.4$ & $0.68$ & $544$ \\
$3$ & $8.7$ & $14.9$ & $16.0$ & $1.4$ & $1.7$ & $-0.86$ & $66$ \\
$4$ & $10.1$ & $16.2$ & $4.4$ & $1.3$ & $1.9$ & $-0.64$ & $92$ \\
$5$ & $16.1$ & $7.7$ & $5.9$ & $2.0$ & $1.7$ & $-0.42$ & $518$ \\
$6$ & $16.4$ & $22.8$ & $19.5$ & $2.2$ & $2.9$ & $-0.65$ & $454$ \\
$7$ & $18.4$ & $16.3$ & $22.3$ & $1.4$ & $1.9$ & $-0.08$ & $403$ \\
$8$ & $20.3$ & $6.1$ & $13.9$ & $0.7$ & $1.7$ & $1.09$ & $717$ \\
$9$ & $21.7$ & $14.9$ & $7.9$ & $1.1$ & $2.3$ & $0.20$ & $415$ \\
\enddata
\tablecomments{These parameters have been randomly generated in a cube of side 25 to describe a configuration of 9 Einasto halos (here sorted by $x_0$) with density 0.25 particles per volume unit. The background contains 442 particles, which gives $\log w_b = -2.4$. The populations $N$ for densities 0.5 and 0.75 can be obtained by multiplying the values in the table by 2 and 3.}
\end{deluxetable}
\begin{deluxetable}{lrrrrrrr}
\caption{True parameters for model validation: 16 halos} \label{tab:val16}
\tabletypesize{\small}
\tablehead{
\colhead{k} & \colhead{$x_0$} &\colhead{$y_0$} &\colhead{$z_0$} &\colhead{$r_e$} &\colhead{$n$} &\colhead{$\log w$} &\colhead{$N$} }
\startdata
$1$ & $1.3$ & $6.2$ & $5.2$ & $1.3$ & $1.9$ & $0.0$ & $204$ \\
$2$ & $2.9$ & $21.0$ & $21.7$ & $1.1$ & $2.4$ & $-0.21$ & $83$ \\
$3$ & $5.4$ & $15.7$ & $5.2$ & $1.1$ & $1.8$ & $0.76$ & $729$ \\
$4$ & $5.7$ & $20.0$ & $6.9$ & $2.0$ & $2.1$ & $-0.57$ & $212$ \\
$5$ & $7.9$ & $15.4$ & $12.8$ & $1.0$ & $1.4$ & $0.33$ & $179$ \\
$6$ & $8.2$ & $6.5$ & $18.6$ & $0.9$ & $1.4$ & $0.25$ & $108$ \\
$7$ & $8.7$ & $14.9$ & $16.0$ & $1.4$ & $1.7$ & $-0.56$ & $69$ \\
$8$ & $9.2$ & $21.3$ & $19.4$ & $0.9$ & $1.6$ & $0.29$ & $130$ \\
$9$ & $10.1$ & $16.2$ & $4.4$ & $1.3$ & $1.9$ & $-0.58$ & $55$ \\
$10$ & $15.5$ & $10.6$ & $5.7$ & $1.0$ & $1.8$ & $0.71$ & $481$ \\
$11$ & $16.1$ & $7.7$ & $5.9$ & $2.0$ & $1.7$ & $-0.4$ & $293$ \\
$12$ & $16.4$ & $22.8$ & $19.5$ & $2.2$ & $2.9$ & $-0.84$ & $154$ \\
$13$ & $18.4$ & $16.3$ & $22.3$ & $1.4$ & $1.9$ & $-0.5$ & $83$ \\
$14$ & $20.3$ & $6.1$ & $13.9$ & $0.7$ & $1.7$ & $0.91$ & $255$ \\
$15$ & $21.7$ & $14.9$ & $7.9$ & $1.1$ & $2.3$ & $0.22$ & $230$ \\
$16$ & $23.8$ & $15.5$ & $16.8$ & $1.2$ & $2.3$ & $0.55$ & $571$ \\
\enddata
\tablecomments{These parameters have been randomly generated in a cube of side 25 to describe a configuration of 16 Einasto halos (here sorted by $x_0$). Components 2, 6, 7, 9, 11, 12, 13, 15, and 16 are copied from the 9-halo configuration (see Table~\ref{tab:val9}). The background contains 68 particles, which gives $\log w_b = -2.85$. The populations $N$ for densities 0.5 and 0.75 can be obtained by multiplying the values in the table by 2 and 3.}
\end{deluxetable}
Once these realizations are generated, we estimate the best-fit parameters using our \texttt{darkmix} algorithm. As explained in section~\ref{sec:over}, this algorithm starts from approximate initial values, which are then optimized using the Nelder-Mead algorithm.
The three-halo case is easy to model and the true parameters lie inside the confidence intervals of our best fit. The nine-halo samples are more challenging, and we present several results and plots to show the performance of our model, which we consider acceptable. With 16 halos, the model is no longer able to find and correctly model some of the extra halos because they are too small and faint to be distinguished from the background.
As we can see in Fig.~\ref{plot:val}, the parameters for a population of nine components are generally estimated correctly. The centers of the halos are recovered. This allows for correct estimations of the radius $r_e$ and the S\'ersic index $n$; in only two cases does one of these parameters fall outside the 1-sigma error bars. The mixture coefficients tend to be slightly overestimated, but a correction of these values does not greatly improve the maximum likelihood. Consequently, the estimation of the component populations is rather accurate, which allows for a correct classification of the particles. Even in the case of 16 halos, when several components are incorrectly estimated, the halo populations are close to the true values.
An additional validation analysis was performed with the nine-halo configuration and low density: the same realization of particles (one of the 20 samples generated for the previous analysis) was fitted using mixture models with different numbers of components, from 6 to 12 halos (see Fig.~\ref{plot:aic9}). The results are satisfactory: for $k \leq 8$, the model finds the real halos and tries to estimate the Einasto profile. However, it is not until we input the right value of $k = 9$ halos that the halos are not only found but correctly estimated, as seen in Fig.~\ref{plot:val}. When $k \geq 10$, the model tries to fit spurious over-densities in the background population, which creates halos with large radius and S\'ersic index. With such values, the mass of the halo is concentrated around a few close particles, which adds a minimal contribution to the maximum likelihood of the model. This contribution is negligible, and the AIC and BIC clearly penalize such models.
We conclude that our model is fully validated for configurations of at least nine halos with a background component and densities equal to or greater than 0.25 particles per volume unit. In Fig.~\ref{plot:maps}, we provide a four-panel plot with the data and model densities, plus the raw and relative residuals. The plot shows the close agreement between data and model. Only two halos (top left-hand and bottom left-hand panels) seem to be overestimated (blue in the raw residuals) or underestimated (red in the raw residuals). The relative residuals show a pattern uncorrelated with the data, which means that no component is excluded from the model.
The case of 16 halos is still interesting, and valid conclusions can be obtained for the richer halos, although the faintest halos might bias the results. All of the results and related data files can be found in our GitHub repository.
\section{Data}\label{sec:data}
We perform this analysis for a case-study data set from the Bolshoi simulation \citep{2011ApJ...740..102K}. This cosmological N-body simulation offers the necessary conditions and a particle resolution high enough to apply our methodology to a sample of scientific interest. The MultiDark Database \citep{2013AN....334..691R} hosts two 8.6 billion particle cosmological N-body simulations: the already mentioned Bolshoi simulation \citep{2011ApJ...740..102K} and the MultiDark Run1 simulation (MDR1, or BigBolshoi) \citep{2012MNRAS.423.3018P}. The Bolshoi simulation can be used to study both the large-scale structure of the universe and the properties of dark matter halos. In this work we focus on the latter, which agrees with the assumptions made for our model.
The MultiDark Database allows us to use a SQL (Structured Query Language) query interface to extract the desired sample. In this paper, data have been extracted from the Bolshoi simulation. The simulation has been performed in a volume of $(250 \, h^{-1}\,\text{Mpc})^3$, with a mass resolution of $1.35 \times 10^8 \, h^{-1}$ M$_{\odot}$ and a force resolution of $1 \, h^{-1}$ kpc in physical (proper) units \citep{2011ApJ...740..102K}. The cosmological parameters used for this simulation are $\Omega_{m} = 0.27$, $\Omega_{b} = 0.0469$, $\Omega_{\Lambda} = 0.73$, $\sigma_8 = 0.82$, spectral index $n_s = 0.95$, and $H_{0} = 100 \, h$ km $\text{s}^{-1} \text{Mpc}^{-1}$ with $h=0.70$. The snapshot that we have used is at redshift $z=0$.
Since this work is a case study, we select a small region of interest from the table \texttt{Bolshoi.Particles416} at \texttt{https://www.cosmosim.org/}. We are interested in a volume $W$ containing an interesting structure with halos of different sizes and merging cases. With this purpose, we select a flat cuboid with a square face to facilitate the two-dimensional examination. This sample contains three halos that are among the 100 most massive, plus several other halos of smaller size. The final sample contains 2081 particles in a volume of $4375 \, h^{-3}$ Mpc$^3$ (defined by a box of $25 \, h^{-1}$ Mpc $\times 25 \, h^{-1}$ Mpc $\times 7 \, h^{-1}$ Mpc). We remark that we also tested the method on other, sparser samples; however, the method does not succeed when the data sparsity is high. Evaluating the minimum structure density needed for the data to be properly described by our model is outside the scope of this work, but this sample can serve as a reference. The Bolshoi simulation also provides a catalog of halos that have been categorized with the BDM algorithm \citep{1997astro.ph.12217K, 2013AN....334..691R}. In section~\ref{bdm}, we make a comparison between these halos and our findings. It is worth noting that neither the BDM catalog of halos, nor any other catalog, has been used in this work apart from in the appendix.
An image of the selected sample can be seen in Figure~\ref{md3d}; notice the abundant structure and the variations in the shape and size of its clusters.
\section{DarkMix Application to the Bolshoi Simulation}
\label{sec:res}
\subsection{The Fitting Procedure}
This section serves as an example of how our code can be used to estimate a model for the data set presented in section~\ref{sec:data}. All of the calculations and results presented in this section have been obtained with the \texttt{darkmix} code, and a full walk-through can be found in the code documentation\footnote{https://darkmix.readthedocs.io/en/latest/darkmix\_steps.html}.
The data can be easily loaded into a \texttt{spatstat} point process object, while the model and the parameters are defined as arrays of functions and values. The code contains functions for Einasto profiles and the background component (a constant function), but the user can easily define additional functions to model another structure of interest. The integration grid object is used to calculate the model mass in eq.~\ref{mmmass}. After testing different grid sizes, the authors recommend using a relation of 2:1 (i.e., we divide the sides of length 25 of our window into 50 parts). Thinner grids do not produce a different result; we only recommend a thinner grid for plotting purposes (in this work, we use 128 grid points per side of 25 units).
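To illustrate the grid bookkeeping, the sketch below (in Python rather than the package's R, with a synthetic two-dimensional Gaussian density standing in for the mixture model) integrates a density over a grid of cell centres and compares the recommended 2:1 grid with a finer one:

```python
import numpy as np

# Illustrative sketch, not the darkmix code: midpoint-rule integration
# of a model density over a cell grid.  The density here is a synthetic
# Gaussian blob centred in a 25 x 25 window.

def model_mass(cells_per_side):
    side = 25.0
    step = side / cells_per_side
    axis = (np.arange(cells_per_side) + 0.5) * step      # cell centres
    x, y = np.meshgrid(axis, axis)
    dens = np.exp(-((x - 12.5) ** 2 + (y - 12.5) ** 2) / (2 * 2.0 ** 2))
    return dens.sum() * step ** 2                        # density * cell area

coarse = model_mass(50)    # the recommended 2:1 relation (50 cells per side of 25)
fine = model_mass(128)     # finer grid, useful only for plotting
```

For a smooth density the two estimates agree to well below a tenth of a percent, which is why the coarser grid is sufficient for fitting.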
Once these objects are created, we can proceed to estimate the parameters. It is advisable to start with an initial guess of the number of halos and their centers. We recommend using the function \texttt{centers}, which outputs a list of the $k$ densest locations in the data set, used as the initial guesses for the halo components.
The remaining parameters are defaulted to $1$ for the radius $r_e$ and $3$ for the S\'ersic index $n$. The mixing coefficients are in logarithmic form: $1$ for the halos weights and $-2$ for the background component.
We start with a $c = 11$ component model and the Nelder-Mead simplex algorithm via the R function \texttt{optim}, which maximizes the likelihood function of our mixture model. However, the model needs several iterations to achieve our best fit. To improve the estimation, additional functions have been created to freeze some parameters while estimating the rest. While the algorithm spends little time finding the centers of the halos, most of the computation is devoted to estimating the three Einasto model parameters. The radius of a component is generally independent of the center of a distant component, and reducing the number of parameters per optimization can greatly improve the fit. Each function fixes one of the parameter types: centers, radii, S\'ersic indices and mixture coefficients. It is advisable to repeat this procedure until convergence, ending with a call to \texttt{optim} with no frozen parameters for a final fit. For the data set used in this work, the full procedure takes around 30 minutes on a commonly available laptop.
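The freeze-and-refit cycle can be sketched as follows (a minimal Python analogue of the R workflow, using `scipy.optimize.minimize` with Nelder-Mead; the toy quadratic objective and the parameter values are hypothetical stand-ins for the mixture likelihood):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch, not the darkmix R code: block-coordinate
# maximum-likelihood fitting, freezing all parameter groups except one
# per optimization pass.

def neg_log_lik(theta):
    # Hypothetical stand-in for the negative log-likelihood; the
    # "target" values are arbitrary (center x, y, radius r_e, index n).
    target = np.array([33.5, 178.9, 2.1, 14.1])
    return np.sum((theta - target) ** 2)

def fit_block(theta, free_idx):
    """Optimize only the parameters in free_idx, keeping the rest frozen."""
    theta = theta.copy()

    def objective(sub):
        theta[free_idx] = sub
        return neg_log_lik(theta)

    res = minimize(objective, theta[free_idx], method="Nelder-Mead")
    theta[free_idx] = res.x
    return theta

theta = np.array([30.0, 180.0, 1.0, 3.0])   # initial guess
for _ in range(5):                          # repeat until convergence
    theta = fit_block(theta, [0, 1])        # centers only
    theta = fit_block(theta, [2])           # radius only
    theta = fit_block(theta, [3])           # Sersic index only
theta = fit_block(theta, [0, 1, 2, 3])      # final pass, nothing frozen
```

Each pass optimizes a small subspace, so the simplex converges quickly; the final unfrozen pass polishes the joint fit.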
The results for our data set in Figure~\ref{md3d} for an assumed $c=11$ number of components are shown in Table~\ref{dens}. The expected number of particles per component can be obtained directly from the mixture model: the component's mass is integrated independently and normalized to the total number of particles. The last column of Table \ref{dens} shows the expected number of particles per component.
Function \texttt{optim} and the other R routines designed to estimate the maximum likelihood provide the Hessian matrix at the best-fit set of parameters. This matrix can be used to obtain confidence intervals for the parameters. However, the numerical approximation of the Hessian matrix obtained in our problem was ill-defined and produced negative variances. Given the impossibility of calculating the confidence intervals this way, we decided to run an MCMC routine on a simpler model and estimate the intervals from the posterior. We explain this in more detail in \S\ref{simpler.sec}.
The mixture model should be fitted for several values of $c$, and the AIC and BIC (eq.~\ref{eq:aic} and~\ref{eq:bic}) functions should be calculated for model selection. Figure~\ref{fig:aic} shows the log-likelihood and information criteria values from fits of the Bolshoi simulation data set for models with $c=4$ to $15$ components.
As expected, the log-likelihood increases monotonically with the number of components, while both the BIC and the AIC reach a minimum value for $c=11$. Consequently, this is the preferred model according to both criteria.
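The selection rule can be sketched as follows (Python, with hypothetical log-likelihood values and a toy parameter count; the actual numbers come from the fits reported in this section):

```python
import numpy as np

# Illustrative sketch of AIC/BIC model selection.  The log-likelihood
# values below are made up; in this toy bookkeeping each halo
# contributes 6 parameters (x0, y0, z0, r_e, n, w) plus one for the
# background, so p = 6*(c - 1) + 1.

n_particles = 2081
loglik = {9: -7400.0, 10: -7360.0, 11: -7330.0, 12: -7326.0, 13: -7323.0}

def aic(ll, p):
    return 2 * p - 2 * ll

def bic(ll, p, n):
    return p * np.log(n) - 2 * ll

scores = {c: (aic(ll, 6 * (c - 1) + 1), bic(ll, 6 * (c - 1) + 1, n_particles))
          for c, ll in loglik.items()}
best_aic = min(scores, key=lambda c: scores[c][0])
best_bic = min(scores, key=lambda c: scores[c][1])
```

With these illustrative numbers both criteria reach their minimum at $c=11$: the likelihood keeps improving beyond that, but not enough to offset the complexity penalty, which is steeper for the BIC.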
We can interpret the evidence in favour of this model according to both criteria using the scale introduced by \citet{kass1995bayes}.
In the case of the BIC, the $c=11$ model is very strongly favoured, as $\Delta(\mathrm{BIC}) > 10$ for all other models.
However, when considering the AIC, the models with $c = 12, 13$ cannot be discarded. This is a consequence of the AIC's lower penalty for increased model complexity (see eqs. \ref{eq:bic}, \ref{eq:aic}).
Following the BIC result, and choosing the most parsimonious among the accepted models by the AIC, we stick to the model with $c=11$.
As we discuss in \S\ref{simpler.sec}, it is possible for scientific considerations to prefer a non-optimal model, such as $k = 6$ rather than $k=10$. Generally, the most populous halos will appear in all models and increasing $k$ will identify small, sparser halos. However, it is also possible that small changes in $k$ may lead to major changes in the structure of the best-fit model. In particular, spatial mixture models can identify a large diffuse halo that encompasses or overlaps smaller, denser halos. This will occur when the clustering of points has a complicated hierarchical structure, rather than exhibiting distinct halo structures. Since the model for each value of $k$ is a maximum likelihood fit, they are all statistically valid. Whether to use the BIC and AIC for model selection or another choice of $k$ that gives a more parsimonious or more complete model of the particle distribution is a scientific decision.
\begin{deluxetable}{lrrrrcrrrrrrr}
\caption{Maximum Likelihood Fit to the 10 Halo Model} \label{dens}
\tabletypesize{\small}
\tablehead{
& \multicolumn{4}{c}{Initial estimates} && \multicolumn{6}{c}{Best fit parameters} &\\
\cline{2-5} \cline{7-13}
\colhead{k} & \colhead{x} &\colhead{y} &\colhead{z} &\colhead{$\rho$} && \colhead{$x_0$} &\colhead{$y_0$} &\colhead{$z_0$} &\colhead{$r_e$} &\colhead{$n$} &\colhead{$\log w$} &\colhead{N} }
\startdata
1 & $33.5$ & $178.5$ & $99.5$ & $0.078$ && $33.5$ & $178.9$ & $99.7$ & $2.1$ & $14.1$ & $0.00$ & $688$ \\
2 & $36.5$ & $192.5$ & $98.5$ & $0.060$ && $36.7$ & $192.4$ & $98.8$ & $0.9$ & $2.9$ & $0.95$ & $354$ \\
3 & $39.5$ & $174.5$ & $97.5$ & $0.023$ && $39.5$ & $174.3$ & $97.5$ & $1.7$ & $28.5$ & $-0.37$ & $138$ \\
4 & $25.5$ & $189.5$ & $98.5$ & $0.018$ && $26.0$ & $189.4$ & $98.9$ & $2.0$ & $26.2$ & $-0.14$ & $449$ \\
5 & $20.5$ & $192.5$ &$100.5$ & $0.015$ && $20.4$ & $192.8$ & $100.9$ & $1.6$ & $28.3$ & $-0.53$ & $94$ \\
6 & $38.5$ & $193.5$ & $96.5$ & $0.011$ && $38.7$ & $193.9$ & $96.3$ & $1.4$ & $4.0$ & $-0.06$ & $119$ \\
7 & $37.5$ & $175.5$ & $99.5$ & $0.005$ && $37.4$ & $175.4$ & $99.5$ & $8.7$ & $9.2$ & $-2.75$ & $47$ \\
8 & $33.5$ & $172.5$ & $100.5$ & $0.005$ && $33.1$ & $190.4$ & $99.4$ & $2.0$ & $26.9$ & $-1.43$ & $23$ \\
9 & $32.5$ & $185.5$ & $99.5$ & $0.005$ && $32.8$ & $185.5$ & $100.0$ & $3.8$ & $30.0$ & $-1.92$ & $48$ \\
10& $21.5$ & $190.5$ & $98.5$ & $0.004$ && $25.1$ & $181.9$ & $95.3$ & $4.1$ & $25.3$ & $-1.91$ & $48$ \\
Bk & \nodata & \nodata & \nodata & \nodata && \nodata & \nodata & \nodata & \nodata & \nodata & $0.13$ & $73$
\enddata
\tablecomments{Input point process shown in Figure~\ref{md3d}. $k$ identifies the halo component. Initial $\mathbf{r_0}$ values from kernel density estimator in order of decreasing maximum density $\rho$. Best fit parameters give the halo center, Einasto parameters $r_e$ and $n$, mixing coefficient, and number of dark matter particles.}
\end{deluxetable}
\subsection{Particle Membership in Halos}\label{sec:mem}
Once we have our estimated mixture model, we may want to classify the particles of the data set into its different components. This comes naturally with a soft classifier such as a mixture model: given a particle, we evaluate each model component at the particle location and normalize the obtained quantities to one. The resulting values can be understood as membership probabilities and used in a multinomial draw to assign each particle to one of the $c$ components. If the model is correct, then the number of particles assigned per component should match, on average, that shown in Table~\ref{dens}.
We recommend adapting this criterion to one's scientific interests. Halos with heavy tails might populate areas with particles far from the real halo boundaries, hence overestimating the halo population. In contrast, under merging circumstances, several halos might compete for the same particle and the model will assign low probabilities to each of them. Assuming equal-size halos, the border between two merging halos will be populated by particles with a 0.5 probability of belonging to each halo, 0.33 for three halos, etc. The multinomial criterion might end up assigning these particles to the background instead, especially when the number of merging halos is high. To compensate for this, we provide two modifications to the multinomial criterion. First, the background component does not compete with the halos: a particle is directly assigned to the background whenever the background probability is higher than that of any halo. Second, the user can input a threshold value such that a particle is assigned directly to the background component if all probabilities are below this value.
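The modified multinomial rule can be sketched as follows (an illustrative Python version, not the darkmix implementation; the component densities passed in are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of the soft-classification rule: densities[j] is the value of
# model component j at the particle's location, with the background
# component last.

def assign(densities, threshold=0.3, rng=rng):
    p = np.asarray(densities, dtype=float)
    p = p / p.sum()                      # membership probabilities
    halo_p, bkg_p = p[:-1], p[-1]
    # Rule 1: the background does not compete with the halos.
    if bkg_p > halo_p.max():
        return len(p) - 1                # background index
    # Rule 2: undecided (e.g. merging) particles go to the background.
    if halo_p.max() < threshold:
        return len(p) - 1
    # Otherwise draw from the multinomial over the halo components.
    return int(rng.choice(len(halo_p), p=halo_p / halo_p.sum()))

# A particle dominated by a single halo component:
label = assign([0.004, 0.942, 0.000, 0.007, 0.022, 0.010])
```

With a threshold of 0.3, two merging halos at roughly 0.45 probability each still compete in the multinomial draw, while three halos at roughly 0.3 each push the particle to the background, matching the rule of thumb above.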
In our case-study data set, we detect cases of merging halos (see Figure~\ref{mdpg}). Following the rule given above, we recommend a threshold of 0.3, which is sufficient to assign to the background all particles in an undecided situation.
In Table~\ref{memb.tbl}, we provide an example of this classification procedure for the first five particles in our data set, showing the probability of each particle (rows) belonging to each component (columns). A final column gives the identifying number of the halo component for each particle according to the chosen decision rule. This table is used to plot Figure~\ref{mdpg}, with each component given in a different color.
\begin{deluxetable}{crrrrrrrrrrrrrrrc}
\caption{Halo Membership Probabilities for Bolshoi Dark Matter Particles} \label{memb.tbl}
\tablehead{
\colhead{Part} & \multicolumn{3}{c}{Location} && \multicolumn{10}{c}{Halo Component} & \colhead{Bkgd} & \colhead{Memb} \\ \cline{2-4} \cline{6-15}
& \colhead{x} & \colhead{y} & \colhead{z} &&\colhead{1} & \colhead{2} & \colhead{3} & \colhead{4} & \colhead{5} & \colhead{6} & \colhead{7} &
\colhead{8} & \colhead{9} & \colhead{10} && }
\startdata
1 & 39.125 & 192.997 & 95.635 && 0.001 & 0.020 & 0.000 & 0.001 & 0.000 & 0.969 & 0.000 & 0.000 & 0.000 & 0.000 & 0.007 & 6 \\
2 & 35.139 & 191.504 & 98.906 && 0.004 & 0.942 & 0.000 & 0.007 & 0.000 & 0.010 & 0.000 & 0.022 & 0.003 & 0.000 & 0.010 & 2 \\
3 & 34.817 & 191.624 & 98.213 && 0.006 & 0.896 & 0.000 & 0.013 & 0.001 & 0.021 & 0.000 & 0.038 & 0.004 & 0.002 & 0.018 & 2 \\
4 & 32.199 & 189.673 & 99.982 && 0.028 & 0.029 & 0.001 & 0.112 & 0.003 & 0.004 & 0.002 & 0.730 & 0.044 & 0.002 & 0.042 & 8 \\
5 & 31.930 & 189.982 & 99.307 && 0.024 & 0.029 & 0.001 & 0.127 & 0.003 & 0.005 & 0.001 & 0.732 & 0.032 & 0.001 & 0.021 & 8 \\
\enddata
\tablecomments{This only shows a portion of the membership table. The full table is available in \url{https://github.com/LluisHGil/darkmix/blob/master/Output/membership.txt}.}
\end{deluxetable}
\subsection{Goodness-of-Fit and Residual Analysis}
Once our best fit parameters are estimated, they can be displayed and analyzed for astrophysical properties. Some halo components show degenerate values of $r_e$ and $n$. As can be seen in Figure~\ref{mdpg}, each component is numbered according to Table~\ref{dens} and the circles have radius $r_e$. Halos 7 to 10 have extremely large radii. From Table~\ref{dens}, we also note that these components have mixing coefficients $>20$ times lower than the other components---they have low central densities with large radii and few particles. This is a sign that these components do not follow an Einasto profile and cannot be correctly described by our model. In addition, the S\'ersic index $n$ is also unreliable---while values around 3 are expected in astrophysics \citep{2006AJ....132.2685M}, much greater values are found instead. In \S\ref{simpler.sec}, we provide confidence intervals for the parameters; a covariance matrix can be used to evaluate the reliability of the coefficients.
An Einasto profile with a high $n$ mimics a power law, with a strong concentration of points around the center and weak tails. Optimally, we would use a different function to model this kind of structure, but we can choose between including degenerate halos or neglecting them and incorporating their particles into the background.
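The behaviour of high-$n$ components can be illustrated with a short sketch of the Einasto profile (Python; $d_n \approx 3n - 1/3 + 0.0079/n$ is the standard approximation, and the normalization $\rho_e$ is set to 1 here):

```python
import numpy as np

# Einasto density profile: rho(r) = rho_e * exp(-d_n * ((r/r_e)**(1/n) - 1)),
# with the usual approximation d_n ~ 3n - 1/3 + 0.0079/n.  By construction
# rho(r_e) = rho_e (set to 1 in this sketch).

def einasto(r, r_e, n):
    d_n = 3.0 * n - 1.0 / 3.0 + 0.0079 / n
    return np.exp(-d_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.array([0.1, 1.0, 10.0])
moderate = einasto(r, r_e=1.0, n=3.0)    # well-behaved halo
extreme = einasto(r, r_e=1.0, n=30.0)    # degenerate, power-law-like fit
```

For $n \sim 30$ the density at $r \ll r_e$ is far larger, relative to its value at $r_e$, than for $n \sim 3$: the mass concentrates around a few close particles, which is the behaviour flagged above for components 7 to 10.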
For our best fit model of 11 components, we have a coefficient of determination $R^2 = 0.92$. In this calculation, the kernel density estimator of the data (eq.~\ref{ker}) is compared to the model density field (eq.~\ref{model}). Even if these fields are expressed in different units, they must follow a linear relation if the fit is perfect. A linear regression between the two densities across the spatial tiles is fitted, and $R^2$ is reported as a measure of goodness-of-fit.
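A minimal version of this $R^2$ computation (Python, with synthetic per-tile density values standing in for the kernel density estimate and the model field) might look like:

```python
import numpy as np

# Illustrative sketch: fit a linear regression between two density
# fields evaluated on the same tiles and report R^2.  The fields here
# are synthetic stand-ins, one a noisy rescaling of the other.

rng = np.random.default_rng(1)
model_density = rng.gamma(2.0, 1.0, size=2500)            # one value per tile
kde_density = 0.8 * model_density + rng.normal(0, 0.1, 2500)

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)                # linear regression
    resid = y - (slope * x + intercept)
    return 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

r2 = r_squared(model_density, kde_density)
```

Because the regression absorbs the unit conversion (slope and intercept), $R^2$ measures only how linearly the two fields track each other, which is the intended goodness-of-fit summary.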
These two fields, $\Sigma_{\omega}^*$ and $\Sigma_{\omega}^{\dag}$, can be seen in the left-hand panel of Fig.~\ref{data_model_dens}. We can see a clear agreement in the distribution, size and range of densities of the halos. The main differences arise from asymmetries in the real structure, which are difficult to capture with spherical halos.
We can see the probability density distribution of the smoothed raw residuals in Fig.~\ref{res_dens}. The residuals are skewed ($g_1 = -5.4$) towards low values away from the halo peaks and exhibit heavy tails at high values near halo peaks. The reason for the heavy tails is evident in the lower left-hand panel, which shows the absolute residuals ($s(\mathbf{r})$) from equation~\ref{eq:su}. While the errors have mean zero by construction, a strong red-blue pattern around one of the biggest halos is responsible for the largest residuals. This suggests that its shape is poorly fitted by an Einasto profile. It is difficult to assess whether these errors are within the expected shot-noise errors of the model or whether we should consider a change in the model, such as a refined parameter fit or a different profile. However, the bottom right-hand panel of Fig.~\ref{data_model_dens}, which maps the relative raw residuals ($e(\mathbf{r})$), shows that these model errors are small compared to those of the background, which is covered by red areas wherever data are present. This implies that, in relative terms, the background has stronger fitting problems than the halo components.
This panel is particularly effective at revealing structures that are not included in the model. The most prominent structure missing in the model is at the bottom where, due to truncation by the window edge, an Einasto profile of a sparse cluster did not fit well. Finally, in Fig.~\ref{mdpg} we present a randomized classification of the particles based on our model. The plot shows how a soft classifier mixes the particles of different components, especially under merging conditions.
Another interesting visualization is the profile extraction. With eqs.~\ref{pro:fit} and~\ref{pro:com} we can plot the one-dimensional profile of the whole mixture model and also that of any individual halo component. In Fig.~\ref{mdpc}, we see the case for components 2 and 4 in our classification. The Einasto profiles (red curves) match the empirical profile (black dots) out to $\sim 2$~Mpc, beyond which additional halos appear. The full model (green curves) follows the empirical profile remarkably well, with peaks associated with other clusters. With this method, mixture models can be used to disentangle the real profiles of merging halos, recovering the distribution of these structures where the empirical profile alone does not allow it.
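The empirical profile (the black dots) is obtained by counting particles in spherical shells around a halo centre; a sketch of this step (Python, with a synthetic Gaussian particle cloud standing in for a halo) is:

```python
import numpy as np

# Illustrative sketch of empirical profile extraction: particle counts
# in spherical shells divided by shell volume give the density profile.
# The particle cloud and centre here are synthetic.

rng = np.random.default_rng(2)
center = np.array([36.7, 192.4, 98.8])
particles = center + rng.normal(0.0, 1.0, size=(2000, 3))

def radial_profile(points, center, edges):
    r = np.linalg.norm(points - center, axis=1)
    counts, _ = np.histogram(r, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    return counts / shell_vol            # particles per unit volume

edges = np.linspace(0.0, 3.0, 13)
profile = radial_profile(particles, center, edges)
```

For an isolated halo the profile declines monotonically to within shot noise; a secondary bump at larger radii signals a neighbouring component, which is exactly what the full mixture model disentangles.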
\subsection{Comparison with BDM Calculations}\label{bdm}
The tables provided by the MultiDark Database \citep{2013AN....334..691R}, including the Bolshoi simulation used in this work, include a catalog of halos that have been detected with the Bound Density Maximum (BDM) algorithm \citep{1997astro.ph.12217K, 2013AN....334..691R}. This algorithm detects local density maxima and defines a spherical halo, removing unbound particles. It also allows for the detection of subhalos, which are smaller structures inside parent halos.
Our work makes use of a low-density data sample, which does not permit the detection of small structures such as the less massive halos. Therefore, we compare our detected halos with the most massive BDM structures. In table \texttt{Bolshoi.BDMV} from the MultiDark Database, and inside the region of our data set, 26 halos can be found with more than 30,000 particles. Since we have 10 halo components in our best fit model, we select the 10 BDM halos lying closest to our centers. The resulting list can be found in the GitHub repository, file \texttt{bdm\_halo.txt}. In Figure~\ref{bdm_plot} we compare both sets of halos. As can be seen, the selected BDM halos lie very close to our halo centers and contain numbers of particles that correlate strongly with our estimates (see Table~\ref{dens}, column $N$). Since we are using just a sample of particles to illustrate the methods, our numbers are only a fraction of the real number of particles.
\subsection{A Simpler, Bayesian Model} \label{simpler.sec}
In Fig.~\ref{fig:aic} we saw that the AIC and BIC show a plateau after $c=6$; Fig.~\ref{mdpg} shows that the 7$^{th}$ component has a radius $r_e$ that extends around other halos and does not follow an Einasto profile; and components $8-10$ are sparse. It may thus be scientifically useful to examine a simpler model with 7 rather than 11 components.
This simplified model still has 36 parameters and was estimated using an MCMC sampling approach with maximum a posteriori best-fit parameters, where 3,600,000 iterations were needed to map the distributions satisfactorily. Although not included in darkmix.R or shown here, we recommend that the user examine graphical and scalar (e.g., the Gelman-Rubin statistic) diagnostics of convergence of the MCMC chains; the CRAN package \textit{CODA} is widely used for this purpose. The long calculation time can be a considerable handicap, and the MLE procedure given earlier may be operationally more feasible for large data sets. The advantage of a Bayesian approach is to map non-Gaussianities in the posterior distributions and curved relationships in bivariate parameter confidence intervals. However, for many science applications these capabilities are not needed and the MLE approach would be preferred.
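For reference, the Gelman-Rubin potential scale reduction factor can be sketched as follows (a simplified, split-free Python version applied to synthetic chains; in practice CODA computes this from the model's actual MCMC output):

```python
import numpy as np

# Illustrative sketch of the Gelman-Rubin diagnostic on synthetic chains.

def gelman_rubin(chains):
    """chains: array of shape (m, n) -- m chains of length n each."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    b = n * chain_means.var(ddof=1)            # between-chain variance
    w = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    var_plus = (n - 1) / n * w + b / n
    return np.sqrt(var_plus / w)               # potential scale reduction

rng = np.random.default_rng(3)
converged = rng.normal(0.9, 0.1, size=(4, 2000))   # e.g. draws of r_e
stuck = converged + np.arange(4)[:, None]          # artificially offset chains
```

An $\hat{R}$ close to 1 indicates that the chains have mixed; values well above $\sim 1.1$, as for the offset chains here, indicate that more iterations (or a better sampler) are needed.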
\begin{deluxetable}{crrrrrrr}
\tablecaption{Bayesian Fit to a Simpler c=7 Model} \label{tab:6k}
\tablehead{
\colhead{$k$} & \colhead{$x_0$} & \colhead{$y_0$} & \colhead{$z_0$} & \colhead{$r_e$} & \colhead{$n$} & \colhead{$\log w$} & \colhead{$N$} }
\startdata
1 & $33.46 \pm 0.01$ & $178.80 \pm 0.02$ & $99.70 \pm 0.02$ & $1.05 \pm 0.04$ & $2.4 \pm 0.2$ & $1.5 \pm 0.9$ ~~~~ & $599 \pm 22$\\
2 & $36.67 \pm 0.02$ & $192.33 \pm 0.01$ & $98.75 \pm 0.02$ & $0.77 \pm 0.05$ & $3.0 \pm 0.3$ & $2.0 \pm 1.0$ ~~~~ & $413 \pm 19$\\
3 & $26.00 \pm 0.03$ & $189.35 \pm 0.03$ & $98.87 \pm 0.02$ & $1.11 \pm 0.07$ & $2.5 \pm 0.3$ & $0.8 \pm 0.4$ ~~~~ & $376 \pm 19$\\
4 & $38.60 \pm 0.04$ & $193.88 \pm 0.03$ & $96.18 \pm 0.03$ & $0.75 \pm 0.08$ & $2.2 \pm 0.4$ & $0.7 \pm 0.5$ ~~~~ & $108 \pm 11$\\
5 & $39.47 \pm 0.02$ & $174.22 \pm 0.02$ & $97.44 \pm 0.03$ & $0.90 \pm 0.10$ & $3.2 \pm 0.6$ & $0.4 \pm 0.3$ ~~~~& $127 \pm 12$\\
6 & $20.40 \pm 0.02$ & $192.72 \pm 0.02$ &$100.81\pm 0.02$ & $0.56 \pm 0.08$ & $3.2 \pm 0.9$ & $1.0$~~~~~~~~~& $74 \pm 8$\\
Bk & \nodata & \nodata & \nodata & \nodata & \nodata & $0.14 \pm 0.004$ & $384 \pm 26$ \\
\enddata
\tablecomments{Values are the mean and standard variation of the posterior distribution. Component 6 has been used as the first component in the model, and therefore $w_6 = 1$. }
\end{deluxetable}
In Table~\ref{tab:6k} we summarize the maximum a posteriori parameter estimates with error bars.
Compared to the $c=11$ mixture model, the removal of components 7 to 10 has an impact on the estimation of the other parameters. The $R^2$ coefficient is 0.936, similar to that of the more elaborate model. We show in Figure~\ref{fig:6k} the absolute and the relative residuals ($s(\mathbf{r})$ and $e(\mathbf{r})$). As can be seen, the structures that were previously modeled by components 7 and 8 now appear in red colors: the absence of model components leaves these structures underestimated. However, their intensity is similar to that of some other regions in the data set, and a visual inspection alone might not be enough to detect them. We use the relative residuals in the bottom panel of Figure~\ref{fig:6k} to detect their presence.
\section{Conclusions}\label{sec:con}
We have demonstrated the use of finite mixture models to detect and characterize a sample of simulated dark matter halos. The maximum-likelihood solution to a parametric model of the particle distribution produces a probability density function for the data. Two results emerge from the model: a list of halos with their properties (Table~\ref{dens}), and the probability that each particle is a member of each cluster (Table~\ref{memb.tbl}). This is a `soft classification' measure, and additional decision rules are needed to assign each particle to a single component. A variety of graphical and statistical measures of goodness-of-fit are provided. This mixture model approach has several important differences from other conventional clustering techniques.
\begin{enumerate}
\item The user specifies a radial profile function based on previous astronomical experience or astrophysical insight. Here, we chose an Einasto profile, which is the three-dimensional analog of the S\'ersic profile and a generalization of the de Vaucouleurs profile for elliptical galaxies. This permits an estimation of parameters with a physical interpretation, such as the size, shape and particle abundance of a halo. In addition, this estimation is done directly on the three-dimensional data set, without the need to integrate the one-dimensional profile and the loss of information that it involves.
\item No threshold densities, maximum distance scales, minimum membership population, or other arbitrary parameters are needed, as in nonparametric clustering procedures such as the `friends-of-friends' algorithm (single-linkage hierarchical clustering) or DBSCAN. A unique MLE for each value of $k$ is calculated without any free parameters. Model complexity (i.e., the value of $k$) is chosen based on penalized likelihood information criteria combined with scientific judgment. The location, size and population of each halo are not subject to arbitrary parameter choices but are outcomes of a likelihood calculation based on an astrophysically reasonable profile function. Only the assignment of particle membership involves a tunable threshold, chosen according to the number of merging halos; when two or three halos overlap, a value of 0.3 is advised.
\item As a soft classification method, we have a measure of the reliability that a given dark matter particle is a member of a given halo. Researchers who are interested in merging scenarios might use these results in various ways to obtain more informative descriptions of the interactions. For example, sparse satellite halos can be identified and their merging history into large protogalaxies can be traced. The ability of the mixture model to identify clustered ensembles of halos (see Figure~\ref{mdpg}) may be useful. Mixture modeling of time sequences of the dark matter distribution can reveal, in a statistically rigorous fashion, when and where merging occurs. Alternatively, decision rules can be constructed to identify the most isolated dark matter particles to understand the evolution of particles that have not merged into equilibrated halos.
\item The parametric mixture model would be challenged in situations where halos with a wide range of populations $N$ are present. If, for example, the data set here was dominated by a large halo with $N \sim 10,000$ particles, then the likelihood may not be sufficiently improved by the addition of a small halo with $N \sim 100$. However, most nonparametric clustering techniques have similar difficulties. Methods such as hierarchical density-based clustering seek to address this difficulty \citep{Campello13}.
\end{enumerate}
Mixture models have been discouraged for modeling the galaxy large-scale structure distribution because it is dominated by interconnected curved filamentary structures rather than centrally condensed clusters \citep{KuhnFeigelson19}. However, dark matter collapses into distinct equilibrated halos early in the evolution of cosmic structures and is more amenable to mixture analysis. The biggest handicap of these models is the computational cost for large data sets and large $k$. This method is best applied to the understanding of dark matter evolution within small boxes, rather than to the analysis of a full multi-billion particle simulation. There is also the possibility that convergence will be difficult, either for the optimization of the MLE or for the MCMC chains of the Bayesian calculation. Other algorithms, such as MultiNest \citep{skilling04, ferozskilling13} or PolyChord \citep{handley15a,handley15b}, that are specially designed to estimate high-dimensional multimodal likelihood functions, might be tried. Finally, we reiterate that all of the R functions used in this work can be downloaded from our GitHub repository \texttt{https://github.com/LluisHGil/darkmix}.
\section*{Acknowledgements}
This work has been funded by the project PID2019-109592GB-I00/AEI/10.13039/501100011033 from the Spanish Ministerio de Ciencia e Innovaci\'on - Agencia Estatal de Investigaci\'on, by the Project of excellence Prometeo/2020/085 from the Conselleria d'Innovaci\'o, Universitats, Ci\`encia i Societat Digital de la Generalitat Valenciana, and by the Acci\'on Especial UV-INV-AE19-1199364 from the Vicerrectorado de Investigaci\'on de la Universitat de Val\`encia.
The CosmoSim database used in this paper is a service by the Leibniz-Institute for Astrophysics Potsdam (AIP). The MultiDark database was developed in cooperation with the Spanish MultiDark Consolider Project CSD2009-00064. The Bolshoi and MultiDark simulations have been performed within the Bolshoi project of the University of California High-Performance AstroComputing Center (UC-HiPACC) and were run at the NASA Ames Research Center. The MultiDark-Planck (MDPL) and the BigMD simulation suite have been performed in the Supermuc supercomputer at LRZ using time granted by PRACE. E.D.F. thanks Penn State's Center for Astrostatistics for an environment where cross-disciplinary research can be effectively pursued.
\bibliography{Mixture_models_accepted}
\bibliographystyle{aasjournal}
Title:
Energy wrinkles and phase-space folds of the last major merger
Abstract: Relying on the dramatic increase in the number of stars with full 6D
phase-space information provided by the Gaia Data Release 3, we discover
unambiguous signatures of phase-mixing in the stellar halo around the Sun. We
show that for the stars likely belonging to the last massive merger, the
(v_r,r) distribution contains a series of long and thin chevron-like
overdensities. These phase-space sub-structures are predicted to emerge
following the dissolution of a satellite, when its tidal debris is given time
to wind up, thin out and fold. Additionally, the observed energy and angular
momentum (E, L_z) distribution appears more prograde at high energies, possibly
revealing the original orbital angular momentum of the in-falling galaxy. The
energy distribution of the debris is strongly asymmetric with a peak at low E
-- which, we surmise, may be evidence of the dwarf's rapid sinking -- and
riddled with wrinkles and bumps. If these small-scale energy inhomogeneities
have been seeded during or immediately after the interaction with the Milky
Way, and are not due to the spatial restriction of our study, then making use
of the (v_r,r) chevrons to constrain the time of the merger becomes cumbersome.
Nonetheless, we demonstrate that similar phase-space and (E,L_z) sub-structures
are present in numerical simulations of galaxy interactions, both in bespoke
N-body runs and in cosmological hydrodynamical zoom-in suites. The remnant
traces of the progenitor's disruption and the signatures of the on-going
phase-mixing discovered here will not only help to constrain the properties of
our Galaxy's most important interaction, but also can be used as a novel tool
to map out the Milky Way's current gravitational potential and its
perturbations.
| https://export.arxiv.org/pdf/2208.11135 |
\label{firstpage}
\begin{keywords}
stars: kinematics and dynamics -- Galaxy: evolution -- Galaxy: formation -- Galaxy: abundances -- Galaxy: stellar content -- Galaxy: structure
\end{keywords}
\section{Introduction}
The most striking global feature of the Milky Way's stellar halo is the radial density break around $20-30$ kpc from the Galactic Centre \citep[see e.g.][]{Watkins2009,Deason2011,Sesar2011}. This dramatic steepening of the halo star counts led \citet{Deason2013} to conclude that the accretion history of the Galaxy had been dominated by a single, ancient and massive merger. This hypothesis was tested with the arrival of the {\it Gaia} data \citep[][]{Gaia}, which revealed the prevalence in the inner halo of the Milky Way of relatively metal-rich stars on highly eccentric orbits, attributed to a dwarf galaxy with a total mass of order $10^{11} M_{\odot}$ accreted some 8-11 Gyr ago \citep[][]{Belokurov2018,Helmi2018}. This merger event, known today as {\it Gaia} Sausage/Enceladus (GS/E), has by now been mapped out in the chemical and kinematic space \citep[e.g.][]{Haywood2018,Mackereth2019,Necib2019,Lancaster2019,Das2020,Feuillet2021,Carrilo2022}. Taking advantage of the unprecedented quality of {\it Gaia}'s astrometry, the stellar halo break has been shown to be created by the apocentric pile-up of GS/E stars turning around on their orbits \citep[][]{Deason2018}.
The global 3D shape of the Milky Way's inner halo has been charted with the RR Lyrae stars and shown to be triaxial with a major axis lying in the plane of the Galactic disk more or less aligned with the crossing of the orbital plane of the Magellanic Clouds \citep[][]{Iorio2018}. Subsequently, using the {\it Gaia} Data Release 2 proper motions, \citet{Iorio2019} demonstrated that the bulk of the RR Lyrae inside 30 kpc of the Galactic Centre move on similar, highly radial orbits and are likely all part of the same accretion event, namely the GS/E. Viewed projected onto the $x-z$ plane, the intermediate axis of the inner stellar halo is slightly tilted out of the Galactic disk plane (see Figure 3 of \citealt{Iorio2019}). The Sun's position is only $\sim20^{\circ}$ off the intermediate axis, but this is enough to see the close side of the debris cloud spanning a larger projection on the sky compared to the one further away, in effect similar to the sky projection of the Galactic bar. The outer density contours show clear deviations from the simple ellipsoidal shape corresponding to the previously known debris ``clouds" \citep[][]{Simion2019,Balbinot2021}.
There have been several early signs that at least in the Solar neighbourhood, the GS/E tidal debris dominate the accreted portion of the stellar halo \citep[e.g.][]{Brook2003,Meza2005,Nissen2010}. Today, the dominance of the GS/E debris locally and throughout the inner MW halo has been well established. For example, most recently, \citet{Myeong2022} demonstrated that the GS/E is the only significant accreted component amongst local stars on eccentric orbits. Curiously, three other halo components they find are all of in-situ nature, i.e. born in the MW proper. These include {\it Splash}, i.e. heated high-$\alpha$ disk of the Galaxy \citep[][]{Bonaca2017,Gallart2019,Dimatteo2019,Belokurov2020}, {\it Aurora}, i.e. the pre-disk quasi-spheroidal early Galaxy \citep[][]{Aurora,Conroy2022}, as well as {\it Eos}, a new in-situ halo component linked to the low-$\alpha$ disk \citep[see][]{Myeong2022}.
The census of the most significant substructures in the Solar neighbourhood reported by \citet{Myeong2022} can be interpreted with the detailed analysis of a massive satellite's in-fall. \citet{Amorisco2017} and \citet{Vasiliev2022} demonstrate that if the satellite-host mass ratio is sufficiently high, for a certain range of central densities of the two galaxies, the interaction proceeds in an unexpected and previously poorly understood way. Instead of sinking in the host's potential with ever-increasing circularity as predicted by Chandrasekhar's Dynamical Friction (DF) prescription \citep[][]{Chandra1943,Jorge2004}, the satellite's orbital eccentricity rapidly ramps up, causing it to stall, drop to the centre of the host and fall apart in an accelerated, explosive fashion. \citet{Vasiliev2022} show that the satellite's orbital radialization is caused by a complex mix of distortion and reflex motion of the host as well as the satellite's self-friction. These factors, not considered in the classical DF picture, help the satellite to sink and disrupt faster, thus inundating the Solar neighbourhood with its debris. The satellite's and the host's properties need to be just right for the radialization to happen efficiently. If, for example, the central density is too low, the disruption happens too quickly, before the satellite arrives at the heart of the Milky Way. Dialling the densities up will reduce the satellite's mass loss and therefore will lessen the self-friction. It will also subdue the reflex motion and consequently will slow down the radialization or even completely reverse it. Profound radialization can thus be considered a giveaway that the satellite survived more or less intact (at least its stellar component) until arriving close to the Solar neighbourhood.
For a progenitor whose orbit does not evolve significantly during the disruption, the tidal debris behaviour is best summarised in the space of integrals of motion \citep[][]{Johnston1998, HelmiWhite1999}. Viewed either in the action space \citep[][]{Eyre2011} or in the energy and angular momentum space, stars stripped from the in-falling galaxy form characteristic bow-tie shapes \citep[see e.g. Figure~1 in][]{Gibbons2014}. The opposite ends of the tie are the leading and the trailing debris, with correspondingly lower and higher energy compared to the progenitor. While these shapes get somewhat smeared by both the host's evolution over time and the measurement errors, they remain recognisable and can be used to decipher the accretion history of the Galaxy \citep[see][]{Helmi2000}. The situation is much more complex for a rapidly sinking satellite. In this case, how fast the satellite plunges in the host's potential and the rate at which it loses stellar mass and angular momentum will now control the appearance of its tidal debris. While for a stationary orbit, the debris unbound at each stripping episode contributes to the same footprint in e.g. $(E,L_z)$ space, the bow-tie shape of a plunging satellite is constantly distorted and dragged to lower energies. This leads to the formation of a twisted and stretched (along $E$) column of debris \citep[see e.g.][]{Koppelman2020,Amarante2022,Khoperskov2022}. It is therefore a generic expectation that the energy distribution of the tidal debris of a satellite on a rapidly decaying orbit should contain multiple bumps and wrinkles corresponding to the individual stripping episodes along the satellite's journey down the potential well. Moreover, each such episode should produce at least two energy pile-ups, corresponding to the leading and trailing debris.
Once packs of stripped stars are deposited into the host's gravitational potential, in principle, their distribution in the space of integrals of motion barely changes, but their phase-space density constantly evolves. Small differences in stars' orbital frequencies eventually translate into orbital phase offsets that accumulate with time in the process known as phase-mixing. As the stellar debris cloud spreads over the Milky Way, the increase of its spatial extent is balanced by the thinning out of the velocity distribution, keeping the phase-space density constant in accordance with Liouville's theorem \citep[see][]{HelmiWhite1999}. As the debris cloud continues to stretch in the phase-space, it eventually folds onto itself, leading to the formation of a winding pattern, which resembles a spiral for orbits close to circular. Such a phase-space spiral was uncovered recently in the disk stars around the Sun and is believed to be produced by a relatively recent perturbation of the Galactic disk by a massive body \citep[see][]{Antoja2018}. For an eccentric orbit, as viewed for example in the phase-space spanned by the Galactocentric spherical polars $v_r$ and $r$, the folds of the debris cloud appear as nested chevrons, but topologically are nonetheless a spiral, albeit severely distorted \citep[e.g. Figure~6 in][]{Quinn1984}. In the early stages of phase-mixing, the chevrons expand radially, while creating sharp, caustic-like density enhancements around their apocentric radii known as shells \citep[][]{Sanderson2013}. Each stripping episode creates its own set of connected folds. As folds seeded at different times stretch, thin out and expand, they tend to run into each other and overlap \citep[see][]{DongPaez2022}, creating something akin to super-chevrons.
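The emergence of such $(v_r, r)$ folds can be illustrated with a minimal toy experiment, not the paper's setup: test particles released as a compact clump on an eccentric orbit in a point-mass potential shear apart because their slightly different energies imply slightly different radial periods. All units and parameter values below are arbitrary illustrative choices.

```python
import numpy as np

# Toy phase-mixing sketch (arbitrary units, point-mass potential): a compact
# clump of test particles with a small velocity spread shears into nested
# (v_r, r) folds because the orbital period grows with energy.

GM = 1.0

def acc(x):
    """Keplerian acceleration for positions x of shape (n, 2)."""
    r = np.linalg.norm(x, axis=-1, keepdims=True)
    return -GM * x / r**3

def leapfrog(x, v, dt, nstep):
    """Kick-drift-kick leapfrog; energy error is bounded and O(dt^2)."""
    a = acc(x)
    for _ in range(nstep):
        v = v + 0.5 * dt * a
        x = x + dt * v
        a = acc(x)
        v = v + 0.5 * dt * a
    return x, v

rng = np.random.default_rng(0)
n = 200
x = np.zeros((n, 2)); x[:, 0] = 1.0                         # released together at r = 1
v = np.zeros((n, 2)); v[:, 1] = rng.uniform(0.45, 0.55, n)  # small tangential spread

E0 = 0.5 * (v**2).sum(1) - GM / np.linalg.norm(x, axis=1)
x, v = leapfrog(x, v, dt=1e-3, nstep=40000)                 # ~15 radial periods
E1 = 0.5 * (v**2).sum(1) - GM / np.linalg.norm(x, axis=1)

r = np.linalg.norm(x, axis=1)
vr = (x * v).sum(1) / r          # radial velocity
# the clump now spans its full radial range; plotting (vr, r) shows the winding
```

Plotting `vr` against `r` after progressively longer integrations shows the clump stretching, turning over at the apocentres and folding into chevrons.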
Thanks to phase-mixing, the coarse-grained phase-space substructure, i.e. lumps left behind by each episode of tidal stripping, is broken down into finer folded filaments and is, eventually, erased. At later stages, the phase-space density appears more uniform but is in fact made up of tightly packed series of folds. These may be difficult or impossible to tell apart when the entirety of the tidal debris is analysed. However, if a small spatial region, e.g. the Solar neighbourhood, is isolated, individual folds can be revealed \citep[see][]{McMillan2008,Gomez2010}. This is because, given the same (or similar) time of release from the parent, only stars with certain orbital frequencies will have completed the right number of orbits around the Milky Way to enter the region around the Sun. Therefore, spatially localized views of the tidal debris in either phase-space or any space spanned by integrals of motion are always expected to be lumpy. The size of the clumps and the spacing between them can in fact be used to deduce the time of accretion \citep[see][]{McMillan2008,Gomez2010}. Alternatively, the gravitational potential can be constrained by the requirement to bring the integrals of motion clumps into a sharp focus \citep[][]{DongPaez2022}.
In this paper, we report the first observational evidence for a sequence of folds in the phase-space of the Milky Way's stellar halo. After introducing our input dataset in Section~\ref{sec:data}, we explore two complementary views on the phase-space distribution of halo stars: energy vs. the $z$-component of angular momentum (Section~\ref{sec:energy}) and Galactocentric distance vs. radial velocity (Section~\ref{sec:phase-space}). We then examine the dependence of the phase-space sub-structure on the stellar metallicity to reveal the presence of the in-situ halo signal. We compare the {\it Gaia} DR3 observations to numerical simulations in Section~\ref{sec:sims}, and summarize our findings and their interpretation in Section~\ref{sec:conc}.
\section{Data and sample selection}
\label{sec:data}
We use data from the Radial Velocity Spectrograph (RVS) sample \citep[][]{gdr3_rvs} of the Data Release 3 from the {\it Gaia} space observatory \citep[][]{Gaia}. The astrometric solutions for these stars were provided previously \citep[][]{Lindegren2021} as part of the {\it Gaia} Early DR3 \citep[][]{gaia_edr3}. We use geometric distances as estimated by \citet{BJ2021}; switching instead to the inverse of the parallax does not change our results noticeably. We apply only basic selection cuts, i.e. we require that stars have a relative parallax precision better than 10\% ($\varpi/\sigma_{\varpi}>10$), and that they lie not too far from the Sun, at $D<15$ kpc, leaving $\sim$25.9 million objects out of the original $\sim$33.7 million with non-zero line-of-sight velocities. Additionally, we remove stars projected within 1.5 degrees of the centres of known globular clusters within 5 kpc of the Sun. Our final sample size is $\sim$25 million stars with full 6-D phase-space information. Converting the observed heliocentric stellar coordinates into the Galactocentric left-handed reference frame, we assume that the Sun is at $X=R_{\odot}=8$ kpc from the Galactic Centre \citep[although a slightly larger value was recently reported by][]{GRAVITY2022} and lies in the Galactic plane, at $Z_{\odot}=0$. Following \citet{Drimmel2022}, we assume that the Sun's velocity is $v_{\odot}=\{-9.3, 251.5, 8.59\}$ km s$^{-1}$. We have checked that changing any of the Galactic parameters within their systematic uncertainties does not change any of the conclusions reported below.
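The selection cuts and the positional part of the heliocentric-to-Galactocentric conversion can be sketched as follows. The exact axis orientation of the left-handed frame is an assumption here (Sun placed at $(X, Y, Z) = (8, 0, 0)$ kpc), and a full pipeline would also rotate the proper motions and line-of-sight velocities into this frame and add the Solar velocity.

```python
import numpy as np

R_SUN = 8.0                              # kpc, assumed Sun-Galactic Centre distance
V_SUN = np.array([-9.3, 251.5, 8.59])    # km/s, Drimmel et al. (2022)

def galactic_to_galactocentric(l_deg, b_deg, d_kpc):
    """Heliocentric Galactic (l, b, D) -> Galactocentric (X, Y, Z) in a
    left-handed frame with the Sun at (R_SUN, 0, 0) and Z_sun = 0.
    The axis orientation is one common convention, assumed here."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    X = R_SUN - d_kpc * np.cos(b) * np.cos(l)   # X decreases towards l = 0
    Y = d_kpc * np.cos(b) * np.sin(l)
    Z = d_kpc * np.sin(b)
    return X, Y, Z

def quality_cut(parallax_mas, parallax_err_mas, d_kpc):
    """Basic cuts from the text: varpi/sigma_varpi > 10 and D < 15 kpc."""
    return (parallax_mas / parallax_err_mas > 10.0) & (d_kpc < 15.0)
```

For example, a star at $(l, b) = (0, 0)$ and $D = 8$ kpc lands at the Galactic Centre, $(X, Y, Z) = (0, 0, 0)$.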
\section{Energy wrinkles}
\label{sec:energy}
Figure~\ref{fig:elz} presents the behaviour of the local {\it Gaia} DR3 RVS stars in the space spanned by the total energy $E$ and the vertical component of the angular momentum $L_z$. Total stellar energies $E$ are computed using a three-component (bulge, disk and DM halo) Galaxy potential similar to the \texttt{MWPotential2014} of \citet{Bovy2015} but with the DM halo's mass of $M_{\rm vir}=10^{12} M_{\odot}$ instead of $0.8\times 10^{12} M_{\odot}$. We use a slightly higher concentration of the DM halo ($c=19.5$) to match the circular velocity at the Solar radius $v_{\rm circ}=235$ km s$^{-1}$. Energy is in units of $10^5\,\mathrm{km}^2\,\mathrm{s}^{-2}$. The top left panel of the Figure shows the logarithm of the stellar density distribution with a prominent vertical structure around $L_z=0$ corresponding to the GS/E tidal debris. A noticeable retrograde clump at $L_z<-10^3$ and $E>-1$ is the Sequoia structure \citep[][]{Myeong2019,Matsuno2019}. Note that adopting a different gravitational potential for the Galaxy may shift the patterns discussed below up and down slightly but is not going to change our overall conclusions.
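A sketch of the energy and $L_z$ computation, assuming a Hernquist bulge and a Miyamoto-Nagai disk as stand-ins for the exact components of the \texttt{MWPotential2014}-like model; the virial radius and the disk and bulge parameters below are illustrative choices, tuned together with the quoted halo ($M_{\rm vir}=10^{12}\,M_{\odot}$, $c=19.5$) to give $v_{\rm circ}\approx235$ km s$^{-1}$ at the Solar radius.

```python
import numpy as np

G = 4.30092e-6    # gravitational constant, kpc (km/s)^2 / Msun

# Quoted halo: M_vir = 1e12 Msun, c = 19.5; the virial radius and the
# disk/bulge parameters are illustrative assumptions, tuned so that
# v_circ(8 kpc) ~ 235 km/s.
M_VIR, C_NFW, R_VIR = 1.0e12, 19.5, 260.0     # NFW halo
M_DISK, A_D, B_D = 6.8e10, 3.0, 0.28          # Miyamoto-Nagai disk
M_BULGE, A_B = 0.5e10, 0.5                    # Hernquist bulge

def phi_total(R, z):
    """Total potential in (km/s)^2 at cylindrical (R, z) in kpc."""
    r = np.sqrt(R**2 + z**2)
    rs = R_VIR / C_NFW
    gc = np.log(1.0 + C_NFW) - C_NFW / (1.0 + C_NFW)
    phi_halo = -G * M_VIR / gc * np.log(1.0 + r / rs) / r
    phi_disk = -G * M_DISK / np.sqrt(R**2 + (A_D + np.sqrt(z**2 + B_D**2))**2)
    phi_bulge = -G * M_BULGE / (r + A_B)
    return phi_halo + phi_disk + phi_bulge

def energy_lz(x, y, z, vx, vy, vz):
    """E (units of 1e5 km^2 s^-2) and L_z (kpc km/s). The sign convention
    L_z = x*vy - y*vx making disk rotation prograde is an assumption."""
    E = 0.5 * (vx**2 + vy**2 + vz**2) + phi_total(np.hypot(x, y), z)
    return E / 1e5, x * vy - y * vx

def v_circ(R, dR=1e-4):
    """Circular velocity from the numerical radial derivative of phi."""
    dphi = (phi_total(R + dR, 0.0) - phi_total(R - dR, 0.0)) / (2.0 * dR)
    return np.sqrt(R * dphi)
```

For a circular solar orbit, `energy_lz(8, 0, 0, 0, 235, 0)` gives $E\approx-1.3$ in these units, comparable to the energy scale of the GS/E features discussed in the text.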
To reveal the density variations across the GS/E debris cloud we estimate the behaviour of the smooth (well-mixed) background as follows. In the $L_z$ range marked by two vertical dotted lines ($|L_z|<0.85\times10^3$, left panel of Figure~\ref{fig:elz}) the density is linearly interpolated using the values just outside the region of interest. To smooth the background variation, for each $E$ bin considered, we model the average of 9 $L_z$ profiles, i.e. additionally considering 4 rows of pixels above and below the current one. The background represents the well-mixed component of the local stellar halo. The bulk of this background component is contributed by the in-situ halo, for which the assumption of mixedness may well be a good one: the in-situ halo stars are either already more phase-mixed (e.g. {\it Splash}) or have had the longest time to phase-mix (e.g. {\it Aurora}). Subtracting the distribution shown in the right panel of the top row of Figure~\ref{fig:elz} from that shown in the left gives the difference of the logarithms of stellar densities (data-background) displayed in the bottom left panel. The bottom middle panel of the Figure shows the linear over-density residual. Note that given the choice of the size of the interpolated region, the background estimate is only valid down to energies of order $E\approx-1.65$. Curiously, the residual corresponding to the GS/E only reaches $E\approx-1.4$. This could indicate a genuine drop in the number of GS/E stars at low energies, or, alternatively, problems with our background estimate below $E=-1.4$. The top right panel of Figure~\ref{fig:elz} indicates that the latter is indeed possible, as the background quickly becomes quite complex at low energies due to i) the contribution of the in-situ halo and ii) the {\it Gaia} DR3 RVS selection biases. If there are any stars belonging to the merger at lower energies, they do not contribute to the GS/E signal analysed in this Section.
The majority of other known (lower mass) halo sub-structures do not contribute many stars to the $L_z$ range of interest \citep[e.g.][]{Myeong2018,Myeong2019,Koppelman2019b}.
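The background-estimation procedure described above (linear interpolation in $L_z$ across the masked region, with the profile averaged over 9 energy rows) can be sketched in a few lines; the synthetic check at the end is illustrative, not the real histogram.

```python
import numpy as np

def estimate_background(H, lz, lz_cut=850.0, nrows=4):
    """Smooth background for a 2-D (E, Lz) histogram H (rows = E bins).
    For each energy bin the |Lz| < lz_cut region is replaced by a linear
    interpolation between the densities just outside it; the Lz profile is
    first averaged over the bin plus `nrows` rows above and below (9 in all)."""
    inside = np.abs(lz) < lz_cut
    lo = np.where(~inside & (lz < 0))[0].max()   # last bin below -lz_cut
    hi = np.where(~inside & (lz > 0))[0].min()   # first bin above +lz_cut
    B = H.copy()
    for i in range(H.shape[0]):
        prof = H[max(0, i - nrows): i + nrows + 1].mean(axis=0)
        frac = (lz[inside] - lz[lo]) / (lz[hi] - lz[lo])
        B[i, inside] = prof[lo] + frac * (prof[hi] - prof[lo])
    return B

# synthetic check: a linear background plus a GS/E-like bump at |Lz| < 850
lz = np.linspace(-2000.0, 2000.0, 41)
H = np.tile(2.0 + 1e-3 * lz, (20, 1))
H += 5.0 * np.exp(-(lz / 200.0) ** 2)
resid = H - estimate_background(H, lz)   # recovers the bump, zero outside it
```

On this synthetic input the residual vanishes outside the interpolated region and recovers the injected bump amplitude inside it.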
In the $E, L_z$ space, the 2-D GS/E over-density (Figure~\ref{fig:elz}, bottom row, left and middle; also contours in the top middle panel) has an elongated and inclined shape: its top portion (at high energy) leans towards $L_z>0$, while its bottom portion is more or less symmetric with respect to $L_z=0$, with a slight preference towards $L_z<0$. We interpret the inclination of the debris cloud in Section~\ref{sec:mergersim} and link it to the original angular momentum of the satellite at the in-fall. The debris cloud is also lumpy with a number of small-scale features such as i) a core at $L_z=0$ and $E\approx-1.4$, i.e. the bottom edge of the energy distribution, and ii) an energy depletion around $E\sim-1.1$, i.e. approximately through the middle of the over-density; this dent is most visible in the prograde portion of the cloud, i.e. at $L_z>0$. The shape of the $(E,L_z)$ overdensity resembles an avocado, with the bulk of the stars residing in the ``pit'' in the broad bottom part of the debris cloud. The asymmetric, lopsided distribution of GS/E energies could potentially be an artefact of the background subtraction where some of the sharp density variations (to do with the contribution of the in-situ halo and the {\it Gaia} DR3 RVS selection function) have not been absorbed by the model. However, on close inspection, in the $(E, L_z)$ distribution shown in the top left panel of the Figure, no clear overdensity centred on $L_z=0$ below $E\approx-1.4$ is visible. Therefore, the bottom-heavy energy distribution may be a genuine feature of the GS/E debris cloud and direct observational evidence for the progenitor's rapid sinking in the host's potential (see Section~\ref{sec:mergersim} for a comparison with a tailored N-body simulation).
The bottom right panel of Figure~\ref{fig:elz} shows four energy histograms (smoothed by convolving with an Epanechnikov kernel with a size of 1.3 pixels) corresponding to four slices through the GS/E cloud along $E$ in the $L_z$ bins marked with coloured vertical lines in the previous panel. All four histograms are noticeably asymmetric with peaks at lower energy levels, supporting the avocado picture introduced above. Moreover, the peak of the most prograde slice (blue) is at significantly higher energy compared to the most retrograde slice (red), confirming the tilted shape of the debris cloud in the $(E, L_z)$ space. While the two profiles corresponding to the lowest $|L_z|$ are relatively smooth (yellow and green), the other two distributions corresponding to the most retrograde (red) and the most prograde (blue) edges of the GS/E debris show a number of wiggles. For example, the retrograde slice peaks just below $E\sim-1.3$ but has noticeable wrinkles (changes of curvature) between $E\sim-1.25$ and $E\sim-1$, together with an additional bump around $E\approx-0.8$. The prograde edge is clearly bimodal with bumps at $E\sim-1.15$ and $E\sim-0.95$, on either side of the energy dent mentioned above, as well as several additional wrinkles.
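A minimal implementation of this kind of smoothing, with the quoted kernel half-width of 1.3 pixels:

```python
import numpy as np

def epanechnikov_smooth(y, width=1.3):
    """Convolve a 1-D histogram with an Epanechnikov kernel,
    K(u) = 1 - u^2 for |u| <= 1, where u = (pixel offset) / width.
    The kernel is normalised so that counts are conserved."""
    n = int(np.ceil(width))
    u = np.arange(-n, n + 1) / width
    k = np.clip(1.0 - u**2, 0.0, None)
    k /= k.sum()
    return np.convolve(y, k, mode="same")
```

Applied to a raw energy histogram, this suppresses bin-to-bin noise while leaving the bumps and wrinkles on scales of a few pixels intact.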
There are multiple reasons why the energy distribution can be wrinkled. Major overdensities are seeded at different energy levels as the disrupting satellite sinks in the host potential, producing at least two distinct energy clumps at each stripping episode. Additionally, if the tidal debris has had time to phase-mix, limiting stars to a relatively small spatial region would carve out portions of the energy distribution. Finally, subsequent rapid and strong perturbations of the Galaxy's potential can add extra wrinkles to the already complicated picture. We discuss some of these phenomena in Section~\ref{sec:sims} below.
\section{Phase-space folds}
\label{sec:phase-space}
We now turn to the analysis of the phase-space, which does not depend on the assumptions about the potential. %
Figure~\ref{fig:phsp1} shows the $(v_r, r)$ phase-space behaviour of the high-quality sample of the {\it Gaia} DR3 RVS stars in the Solar neighbourhood. Here the top (bottom) row gives the logarithm of stellar density (column-normalised stellar density). The left column shows the phase-space density of all stars in the sample, while the middle panel corresponds to the stars selected to have $|L_z|<0.7\times10^{3}$, matching the properties of the GS/E debris as discussed in the previous section. This angular momentum selection gives $\approx$270,000 stars on eccentric (halo-like) orbits. The right panel shows the comparison sample with $|L_z|>10^{3}$. Hints of a striation pattern are already visible in the left column where low-level overdensities and underdensities move diagonally from high $|v_r|$ at low $r$ to low $|v_r|$ at high $r$. This pattern is amplified in the middle column where the stars within the GS/E's range of $L_z$ are selected. We interpret these phase-space features as folds of the GS/E tidal debris as it stretches and winds up due to phase-mixing in the MW gravitational potential. The patterns in the first two columns can be compared to the right column where the stars at higher $L_z$ are shown. No obvious striation is visible for stars in the comparison sample in the right panel, indicating that the over- and under-densities are not an artefact of {\it Gaia} DR3 (e.g. due to the RVS selection function).
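The column normalisation used in the bottom row amounts to dividing each radius column of the 2-D histogram by its sum, e.g.:

```python
import numpy as np

def column_normalise(H):
    """Normalise each radius column of a (v_r, r) histogram (rows = v_r bins,
    columns = r bins) to unit sum, so that the striation pattern stays
    visible despite the steep decline of the stellar density with r."""
    s = H.sum(axis=0, keepdims=True)
    return np.divide(H, s, out=np.zeros_like(H, dtype=float), where=s > 0)
```

Empty columns are left at zero rather than producing division warnings.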
Figure~\ref{fig:phsp2} splits the GS/E sample into two portions, one with $L_z>0$ (left column) and one with $L_z<0$ (middle column). The logic of splitting the signal into two samples with different angular momentum is two-fold. First, the leading and trailing debris typically have distinct enough $E,L_z$ properties. Second, and most importantly, the in-situ halo contamination is a strong function of $L_z$, decreasing below $L_z=0$ (see the top right panel of Figure~\ref{fig:elz}). The difference in the number of in-situ halo stars in the $L_z>0$ and $L_z<0$ samples is obvious in the first two panels of the top row of Figure~\ref{fig:phsp2}. The left panel shows a prominent overdensity with $|v_r|<150$ km s$^{-1}$ corresponding to the {\it Splash} (with some contribution from {\it Aurora}), which is almost invisible in the middle panel ($L_z<0$).
The bottom row of Figure~\ref{fig:phsp2} shows the phase-space density after subtraction of a smooth background, similar to the view of the disk's phase-space overdensity pattern presented in e.g. Figure 6 of \citet{Laporte2019}. The GS/E phase-space folds, i.e. quasi-linear over-dense and under-dense chevron-like regions, are clearly visible in the bottom row of Figure~\ref{fig:phsp2}. In the $L_z>0$ view (bottom left), there are two families of chevrons: those limited to $|v_r|<150$ km s$^{-1}$ and $r<8$ kpc and those with higher radial velocity amplitude present across the entire range of $r$. Because the first family is not detected in the $L_z<0$ sample, we attribute these folds to the in-situ stellar halo (see Section~\ref{sec:sims} for further discussion) and focus the discussion in this Section on the high amplitude chevrons that we number 1 to 5 in the bottom row of Figure~\ref{fig:phsp2}. It is clear that while most of these high amplitude chevrons are present in both the $L_z>0$ and the $L_z<0$ views, their relative strength is different in the two samples. Chevrons 1, 3 and 4 are best seen in the $L_z>0$ view (left column), while chevrons 2 and 5 stand out more clearly above the background in the $L_z<0$ picture (middle panel). Curiously, in terms of the clarity of signal, the negative $v_r$ (moving towards the Galactic Centre) portions of the chevron pattern appear more coherent compared to their positive $v_r$ counterparts in both the $L_z>0$ and $L_z<0$ samples. The right panel of the bottom row of the Figure combines the $L_z>0$ and the $L_z<0$ views and shows at least five clear and tightly packed phase-space folds.
Figure~\ref{fig:phsp3} summarises the properties of the GS/E phase-space substructure. The left panel of the Figure gives a slice (at $9.0<r$(kpc)$<9.5$) through the background-subtracted density distributions shown in the bottom row of Figure~\ref{fig:phsp2}. The corresponding velocities of each fold (as approximated by straight lines in the bottom row of Figure~\ref{fig:phsp2}) are shown as vertical dotted lines. At negative $v_r$, there is a perfect correspondence both between the $L_z>0$ and $L_z<0$ slices and the predicted locations of the folds (vertical dotted lines). As mentioned above, all five chevrons are detected in both slices. Moreover, chevron 1 is bifurcated into two in the $L_z>0$ view, with only one portion of the bifurcation present in the $L_z<0$ slice, which could be due to the presence of the in-situ stars in the $L_z>0$ sample. The middle panel of Figure~\ref{fig:phsp3} shows the $(v_r, r)$ phase-space colour-coded by the median energy of the stars in each pixel. As expected, chevrons run through the phase-space at approximately constant energy. The strongest chevrons 1 and 2 have energies around $-1.4$ and $-1.2$, roughly matching the locations of the strongest peaks in the $E$ distributions shown in the bottom right panel of Figure~\ref{fig:elz}.
Finally, the right panel shows the same space colour-coded by the median $L_z$ and reveals the alternating positive/negative $L_z$ pattern of the GS/E folds detected. As the middle panel demonstrates, the phase-space folds follow (approximately) the lines of constant energy \citep[see][for an extensive discussion]{DongPaez2022}. Using a very crude straight-line fit through the detected folds as shown in the bottom row of Figure~\ref{fig:phsp2}, we can estimate the apocentric distances of the stars that make up the chevrons. Starting from the lowest energy chevron, the stars in the five folds reach their apocentres around 11.5, 15.5, 21, 23 and 25 kpc from the Galactic centre. Pile-ups of the tidal material in the packs of folds can cause noticeable changes in the stellar halo radial density profile. The last three folds appear to be associated with the stellar halo break detected previously \citep[see e.g.][]{Watkins2009,Deason2011,Sesar2011}. Chevrons 1 and 2 are likely to introduce another break in the stellar halo density profile, at around $10<r$(kpc)$<15$. This is consistent with the predictions of a ``double-break'' halo in \citet{Naidu2021}.
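The apocentre estimates follow directly from the straight-line approximation: writing a fold as $v_r = a + b\,r$, the stars turn around where $v_r = 0$, i.e. at $r_{\rm apo} = -a/b$. A sketch (the numbers in the synthetic example are illustrative, not the actual fits):

```python
import numpy as np

def apocentre_from_chevron(r, vr):
    """Fit v_r = a + b * r to points tracing one branch of a fold and return
    the turnaround radius where v_r = 0, i.e. r_apo = -a / b."""
    b, a = np.polyfit(r, vr, 1)
    return -a / b

# illustrative synthetic fold (not the paper's fit): v_r = 230 - 20 r
r_pts = np.linspace(5.0, 11.0, 20)
vr_pts = 230.0 - 20.0 * r_pts
```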
Figure~\ref{fig:vrr_max} presents a sketch of the phase-space properties of the GS/E debris. Following the notation introduced in the previous figures, the detected chevrons are shown in blue ($L_z>0$) and red ($L_z<0$) colour. Their behaviour is compared to the overall shape of the radial velocity distribution as given by thin grey lines marking the 0.5, 2, 5, 10, 20 and 99.5, 98, 95, 90, 80 percentiles of the $v_r$ velocity as a function of $r$. Dotted (solid) lines are for all stars with $|L_z|<0.5\times10^3$ (retrograde stars with $-0.5\times10^3<L_z<0$). The amplitude of the radial velocity variation mapped by the dotted lines reaches its maximum around 5 kpc from the Galactic centre, while the solid curves reach their peak at around 3 kpc. We surmise that the difference in the behaviour is due to the contribution of the in-situ halo (see the discussion in Section~\ref{sec:cosmo}), which is significantly lower for the retrograde selection. Overall, the evolution in the radial velocity amplitude matches the behaviour of the detected chevrons. This can be compared to the inferred radial velocity shape of the GS/E debris as mapped by the {\it Gaia} DR2 RR Lyrae reported in \citet{Iorio2021} and shown as a grey band. The radial velocity model of \citet{Iorio2021} is bi-modal, similar to that used in \citet{Lancaster2019} and \citet{Necib2019}. The $v_r$ distribution is approximated by two Gaussians whose separation (in km s$^{-1}$) is allowed to vary with Galactocentric radius to mimic the phase-space density of a radial merger. The two grey bands in Figure~\ref{fig:vrr_max} show the inferred locations of the centres of the two Gaussians. As the stars on eccentric orbits approach their pericentres, their radial velocity reaches its maximal value before dropping quickly at the turn-around radius. Thus, for the GS/E debris, the maximal $v_r$ amplitude is expected to be reached close to the overall debris pericentre.
The trends in the $v_r(r)$ shape reported here and by \citet{Iorio2021} are in good agreement. However, our retrograde selection (solid lines) implies that the maximum $v_r$ is reached closer to the Galactic centre and thus likely points to a slightly smaller pericentre ($r<3$ kpc).
Many of the chevrons detected are not symmetric with respect to the $v_r=0$ line. This is demonstrated in Figure~\ref{fig:phsp_asym} where the red (blue) curves mark the approximate locations of the $L_z>0$ ($L_z<0$) chevrons as gauged by the signal in the $v_r<0$ portion of the phase-space. These model chevron tracks are symmetric with respect to the $v_r=0$ line. However, on inspection of the Figure, it is clear that the symmetry is broken in the data. The clearest example is the behaviour of the chevron 1 in the $L_z<0$ sample (right panel) where it can be seen to turn around at $v_r\approx-50$ km s$^{-1}$. Such behaviour is only possible because chevrons are not orbits but are agglomerations of debris with different energies. Similarly, there is no clear counterpart to chevron 1 at $v_r>0$ in the $L_z>0$ sample (left panel). Also, the track for chevron 3 runs in between the two tentative detections at $v_r>0$. At $L_z<0$, the track for chevron 2 runs above the strongest signal at $v_r>0$.
\subsection{Metallicity dependence of the local phase-space structure}
\label{sec:met}
In this Section we study the dependence of the detected phase-space substructure on metallicity. Because only $\sim$16\% of the {\it Gaia} DR3 RVS sample have {\tt gspspec} metallicities reported (based on the RVS spectra), we instead use the metallicities derived from the BP/RP spectra. Unfortunately, the {\tt gspphot} metallicities, provided as part of the {\it Gaia} DR3, show significant biases (see the left panel of Figure~\ref{fig:feh_measurements} for a comparison with the APOGEE DR17 abundances). This is likely caused by the overestimation of the dust extinction for some stars, which in turn affects the temperature and [Fe/H] measurements. To mitigate these biases, we re-derive the metallicities from the BP/RP spectra, using a data-driven approach calibrated to the APOGEE DR17 data \citep{apogee_dr17}. Specifically, we cross-match a subset of the {\it Gaia} DR3 sample with continuous mean-sampled BP/RP spectra ($\sim$220 million stars) with the APOGEE DR17 measurements. We only use stars with accurate enough BP/RP reconstructions, i.e. {\tt bp\_chi\_squared <} 1.5 * {\tt bp\_degrees\_of\_freedom} and {\tt rp\_chi\_squared <} 1.5 * {\tt rp\_degrees\_of\_freedom}, leaving approximately 546,000 stars. Furthermore, we exclude stars with extinction $E(B-V)>0.2$ from \citet{sfd98}, resulting in $\approx$344,000 stars.
We then use the random forest regression implemented in the \texttt{sklearn}\footnote{\url{https://scikit-learn.org}} package in order to map the array of 110 BP/RP spectral basis function coefficients onto [Fe/H]. In order to make the BP/RP coefficients independent of the brightness of stars, we divide each coefficient by the G-band flux of the star. To take into account the dust extinction, we also apply a linear extinction correction of the form $X_0 = X -(C_0 + C_1 X_{\rm sub}) E(B-V)$, where $X_0$ is the extinction-corrected coefficient vector, $X$ is the original G-flux-normalized BP/RP coefficient vector and $X_{\rm sub}$ is a subset of 10 leading coefficients from BP and from RP, and $C_0$ is a 110-element extinction vector, while $C_1$ is the 110$\times$10 extinction matrix. This extinction correction is only expected to be appropriate for small extinction values, $E(B-V)\lesssim 0.5$, because at higher reddening values the extinction stops being linear in the coefficient space. We use the default parameters of the regressor and ignore the BP/RP coefficient uncertainties.
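The linear extinction correction can be sketched as below. The choice of which coefficient indices enter $X_{\rm sub}$ (here the 10 leading BP and 10 leading RP coefficients, assuming BP occupies the first 55 of the 110 entries) is an assumption for illustration, so the sketch keeps the sub-coefficient selection generic.

```python
import numpy as np

def deredden_coeffs(X, ebv, C0, C1, sub_idx):
    """Linear extinction correction of a G-flux-normalised BP/RP coefficient
    vector:  X0 = X - (C0 + C1 @ X[sub_idx]) * E(B-V).
    C0 has one entry per coefficient; C1 has one column per sub-coefficient.
    Only meaningful in the linear regime, roughly E(B-V) <~ 0.5."""
    return X - (C0 + C1 @ X[sub_idx]) * ebv

# assumed index choice: 10 leading BP and 10 leading RP coefficients,
# taking BP to occupy the first 55 of the 110 entries (an assumption)
sub_idx = np.r_[0:10, 55:65]
```

With `C0` and `C1` fitted on low-extinction calibrators, the corrected coefficients can then be fed to the regressor; at $E(B-V)=0$ the correction leaves the coefficients unchanged by construction.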
The right panel of Figure~\ref{fig:feh_measurements} shows the resulting [Fe/H] values estimated with the random forest regression (RFR), plotted against the APOGEE measurements. Note that this shows the result of a 5-fold cross-validation; in other words, the comparison should not be affected by over-fitting. The typical accuracy of the RFR (based on the 16/84-th percentiles of the residuals) is $\sim 0.1$ dex for sources with a magnitude distribution similar to that of the APOGEE sample. We note, however, that the stars in the {\it Gaia} DR3 RVS are typically brighter than the APOGEE DR17 targets.
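The cross-validated accuracy metric can be sketched as follows; the toy features and labels are placeholders, but the 5-fold cross-validation and the 16/84-th percentile spread of the residuals mirror the procedure described in the text:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))                     # toy feature matrix
feh = (-1.0 + 0.6 * np.tanh(X[:, 0]) + 0.1 * X[:, 1]
       + rng.normal(scale=0.05, size=600))        # toy [Fe/H] labels

# out-of-fold predictions, so the comparison is not affected by over-fitting
pred = cross_val_predict(RandomForestRegressor(random_state=0), X, feh, cv=5)
resid = pred - feh

# robust spread from the 16th/84th percentiles of the residuals
lo, hi = np.percentile(resid, [16, 84])
sigma = 0.5 * (hi - lo)
print(f"16/84 half-spread: {sigma:.3f} dex")
```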
We use the spectrophotometric metallicities computed as described above to split the low-$|L_z|$ sample into two parts: the in-situ and the accreted halo. Note that with only [Fe/H] estimates in hand it is not possible to obtain a pure accreted selection, as in-situ stars on halo-like orbits exist down to very low metallicities \citep[see][]{Aurora,Conroy2022,Myeong2022}. On the other hand, in the Solar neighbourhood there is currently no strong evidence for GS/E members or any accreted stars with metallicities above [Fe/H]$=-0.7$. Therefore, we use [Fe/H]$=-0.7$ as the boundary between the accreted and the in-situ halo populations.
Figure~\ref{fig:phsp_feh} shows the behaviour of the phase-space density for stars with [Fe/H]$>-0.7$ (in-situ sample, left panel) and [Fe/H]$<-0.7$ (accreted sample, middle and right panels). The range of variation of the in-situ radial velocities is lower compared to that of the accreted halo (i.e. the GS/E debris), in agreement with previous analyses \citep[][]{Belokurov2020}. However, the in-situ halo does not appear completely phase-mixed: its phase-space distribution shows strong density variations matching the location of Chevron 1. The fact that in-situ stars must have contributed to Chevron 1 in Figure~\ref{fig:phsp2} is confirmed in the middle panel of Figure~\ref{fig:phsp_feh}: here, in the accreted sample, Chevron 1's signal is visibly reduced.
\section{Comparison to Numerical Simulations}
\label{sec:sims}
\subsection{Cosmological Zoom-in Simulations}
\label{sec:cosmo}
The Auriga cosmological magneto-hydrodynamical simulations \citep{Grand2017} consist of a sample of 30 MW-mass haloes simulated using the zoom-in technique \citep{Jenkins2013}. The haloes are selected from a parent dark-matter-only box\footnote{The parent box is the DMO counterpart of the largest box from the EAGLE project \citep{Schaye2015}} of size $(100\,{\rm Mpc})^3$, and have been constrained to be isolated and in the mass range $M_{200,c}=(1-2)\times10^{12}$ M$_{\odot}$ at $z=0$.
The simulations adopt cosmological parameters based on \citet{Planck2014} and were performed using the $N$-body magneto-hydrodynamical code Arepo \citep{springel2010}. In summary, the galaxy formation model includes homogeneous UV background radiation, gas cooling, star formation, stellar evolution feedback, black-hole accretion and AGN feedback. The analysis in this section is based on the 30 haloes simulated at the ``L4 resolution'', with dark matter and baryonic resolution elements of mass $\sim 3\times 10^5 M_{\odot}$ and $5\times10^4 M_{\odot}$, respectively.
\citet{Fattahi2019} created mock observations of the redshift $z=0$ stellar halos of the Milky Way analogs in the Auriga suite. Using a Gaussian Mixture Model \citep[see][]{Belokurov2018} they identified the Auriga hosts whose stellar halos (around the Solar radius) contained a highly elongated feature in the space of radial and azimuthal velocities. Requiring a high radial anisotropy and a high fractional contribution of the GS/E-like debris to the halo's stellar mass, \citet{Fattahi2019} selected a group of hosts whose stellar halos at the Solar radius best resembled the observed local stellar halo of the Milky Way. While approximately one third of all Auriga hosts loosely satisfied the above conditions, four systems in particular, namely Au-5, Au-9, Au-10, Au-18, stood out in terms of their high radial anisotropy and the dominance of the GS/E-like debris. The stellar masses of the GS/E-like progenitor galaxy in these runs are between 1 and 4$\times10^9M_{\odot}$ and the accretion look-back times are between 7 and 11 Gyr. Of these four, the GS/E merger in Au-5 is the most massive and therefore contains the largest number of star particles, some three to four times more than the other three simulations. As the GS/E debris phase-mixes and thins out, the number of particles in the simulation plays a crucial role in our ability to resolve the folds \citep[see][for a detailed discussion of the role the sampling of the phase-space density plays in detecting the amount of mixedness]{Leandro2019}. We therefore focus on Au-5, as it provides just enough particle resolution to study the survival of the phase-space chevrons of a GS/E's numerical counterpart.
Figure~\ref{fig:Auriga_large} gives the $(v_r,r)$ phase-space density of the Au-5 halo with 200$\times$50 pixels. Left to right, the columns show the logarithm of the stellar particle density, the column-normalised density and the background-subtracted density. As above, the background is obtained by simply convolving the density distribution with a Gaussian (in this case with a FWHM of 7 pixels). The top row shows all of the halo particles, including both the accreted and the in-situ components. These are selected by requiring $|L_z|<0.7\times10^3$, similar to the cuts applied to the data in the previous section. The bottom row shows only particles from the GS/E-like progenitor. As the top row demonstrates, the phase-space of Au-5's halo contains a large number of nested folds. However, on inspection of the bottom row, it is apparent that some of these belong to other accretion events. The bottom row of Figure~\ref{fig:Auriga_large} demonstrates that phase-space folds left behind by a GS/E-like event survive to the present day across a wide range of Galactocentric distances.
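The column-normalisation and Gaussian background subtraction applied to these $(v_r, r)$ histograms can be sketched as follows. The toy sample (a smooth background plus one narrow chevron-like overdensity) is purely illustrative; the 50$\times$200 binning and the FWHM = 7 pixel smoothing follow the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
# toy (v_r, r) sample: smooth background plus a narrow chevron-like overdensity
r  = rng.uniform(0, 30, 60000)
vr = rng.normal(0, 120, 60000)
chev = rng.uniform(5, 15, 4000)
r  = np.concatenate([r,  chev])
vr = np.concatenate([vr, 400 - 40 * chev + rng.normal(0, 8, 4000)])

# 50 v_r bins x 200 r bins (the text quotes 200x50 pixels in r x v_r)
H, _, _ = np.histogram2d(vr, r, bins=[50, 200],
                         range=[[-500, 500], [0, 30]])

# column-normalise: divide each radial column by its total count
col_norm = H / np.maximum(H.sum(axis=0, keepdims=True), 1)

# background-subtract: smooth with a Gaussian of FWHM = 7 pixels
sigma = 7 / (2 * np.sqrt(2 * np.log(2)))   # FWHM -> Gaussian sigma
background = gaussian_filter(H, sigma)
residual = H - background
```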
Figure~\ref{fig:Auriga_zoom} presents a zoomed-in view of Au-5's phase-space density around the Solar radius, for a direct comparison with the observed behaviour discussed in the previous Section. From left to right, the columns show all halo particles, the in-situ halo particles and the GS/E-only particles. The column-normalised (background-subtracted) density is given in the top (bottom) row of the Figure. The GS/E distribution contains a number of chevrons in the Solar neighbourhood, i.e. at $5<r$(kpc)$<15$ (bottom right). Note, however, that when all halo particles are considered, the strength of some of these is reduced (bottom left). This is partly due to the contribution of the in-situ halo (bottom centre). While the orbits of the GS/E particles reach as close as $\sim1$ kpc from the Galactic centre, at these small Galactocentric distances they are subdominant, being overwhelmed by the in-situ halo (compare the left and the right panels in the top row of the Figure). This is illustrated in Figure~\ref{fig:Auriga_vrr_max}, which serves as a companion to Figure~\ref{fig:vrr_max} and shows the overall shape of the $(v_r,r)$ phase-space as indicated by the percentiles of the radial velocity distribution in bins of radius. Similarly to Figure~\ref{fig:vrr_max}, the grey lines peak around $r\approx5$ kpc. However, as is clear from the behaviour of the coloured curves showing the $v_r(r)$ distribution trends for the GS/E, this radius cannot be interpreted as the radius of the maximal amplitude of the GS/E radial velocity. In Au-5, the turnover in the all-halo curves at $r\approx5$~kpc is driven by the in-situ halo contribution.
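The $v_r(r)$ envelope summarised by these percentile curves can be computed as in this sketch; the toy velocity model, with amplitude peaking at intermediate radii, is purely illustrative:

```python
import numpy as np

def vr_envelope(r, vr, r_edges, q=(5, 95)):
    """Percentiles of the radial-velocity distribution in Galactocentric
    radius bins -- the kind of summary shown in the v_r(r) envelope figures."""
    lo, hi = [], []
    for rmin, rmax in zip(r_edges[:-1], r_edges[1:]):
        sel = (r >= rmin) & (r < rmax)
        if sel.sum() < 10:
            lo.append(np.nan); hi.append(np.nan)
            continue
        p = np.percentile(vr[sel], q)
        lo.append(p[0]); hi.append(p[1])
    return np.array(lo), np.array(hi)

rng = np.random.default_rng(3)
r = rng.uniform(1, 30, 50000)
# toy debris: radial-velocity amplitude peaking at intermediate radii
vr = rng.normal(0, 50 + 200 * np.exp(-0.5 * ((r - 8) / 5) ** 2))
lo, hi = vr_envelope(r, vr, np.linspace(1, 30, 15))
print(np.nanargmax(hi))  # index of the radius bin where the envelope peaks
```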
\subsection{Tailored merger simulations}
\label{sec:mergersim}
To have greater control over the properties of the merger remnant, we ran a number of dedicated $N$-body simulations of mergers between a Milky Way-like host galaxy and a GS/E progenitor (satellite). We broadly follow the simulation setup described in \citet{Naidu2021}: the mass ratio between the host and the satellite is 1:2.5, the satellite's initial orbit is moderately eccentric (orbital circularity $\eta\equiv L/L_\mathrm{circ} = 0.5$), and the orbital angular momentum is tilted by $30^\circ$ from the angular momentum of the host disc; however, unlike the \citet{Naidu2021} study, our satellite orbit is prograde. Both galaxies are initially set up in equilibrium, with $(1+4)\times10^6$ stellar and dark matter particles in the host and $(0.5+2)\times10^6$ particles in the satellite. A detailed description and analysis of our simulation suite will be presented elsewhere; here we focus on one particular model, which qualitatively reproduces many of the observed properties of the GS/E population but does not necessarily match the Milky Way in detail. In particular, the total mass and therefore the energy scale of the merger remnant are somewhat lower than in the fiducial Milky Way potential used in Section~\ref{sec:energy}. In this simulation, the satellite barely completes three pericentre passages before being fully disrupted, and as discussed in \citet{Naidu2021} and \citet{Vasiliev2022}, the orbit of such a massive satellite quickly radializes, so that the final angular momentum of the debris is close to zero.
Figure~\ref{fig:mergersim_global} shows a sequence of four snapshots at different times, starting from the moment just after the disruption and ending roughly at the present day. We separate the stellar particles of the satellite into three different stripping episodes, with the second one being the most dramatic, and within each episode, by their location in the leading or trailing arms (based on the energy difference between the satellite centre and the particle at the moment of its unbinding). The least bound populations (in the first and the second trailing arms, magenta and green) have on average slightly positive $L_z$, inheriting it from the satellite's orbital angular momentum, while the more tightly bound debris are mostly located closer to $L_z=0$. However, some of the stars from the first leading arm (cyan) occupy the region with sufficiently negative $L_z$ to be associated with the Sequoia \citep{Myeong2019} and Thamnos \citep{Koppelman2019b} populations in the Milky Way. Given that the satellite orbit was initially prograde w.r.t.\ the host disc, it may appear surprising that the first debris to be stripped ends up in the retrograde region. However, we stress that the host galaxy also moves significantly during this high-mass-ratio merger, so the angular momentum w.r.t.\ the host centre is not conserved even for particles that are no longer bound to the satellite. It is therefore plausible that these retrograde populations may come from the same progenitor galaxy as GS/E itself, as advocated by \citet{Koppelman2020} and \citet{Amarante2022}. In addition, the angular momentum of the host galaxy disc after the merger may continue to precess \citep{Dillamore2022, Dodge2022}, further complicating the interpretation of the $E$--$L_z$ distribution of the debris.
Particles stripped in each episode have a wide range of energies and orbital periods, leading to phase mixing and the winding up of individual chevrons in the $r$--$v_r$ phase-space, and a corresponding stretching and flattening of ridges in the $E$--$\theta_r$ space (see Section~2 in \citealt{DongPaez2022} for an in-depth discussion). Each chevron in the $r$--$v_r$ space corresponds to a continuous segment of particles stretching from $\theta_r=0$ (pericentre) through $\theta_r=\pi$ (apocentre) to $\theta_r=2\pi$ (next pericentre). As the fourth column of Figure~\ref{fig:mergersim_global} illustrates, particles belonging to the same chevron have a gradient of energy vs. $\theta_r$, with the less bound particles having longer radial orbital periods and therefore moving more slowly to the right in the $\theta_r$ direction. Upon crossing the right boundary $\theta_r=2\pi$, particles reappear on the left boundary and continue moving to the right at a constant speed, increasing the number of chevrons with time.
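The winding described here can be illustrated with a toy model: particles with a spread of radial frequencies accumulate radial phase linearly in time, so the range of radial angles spanned by the debris -- and hence the number of $2\pi$ wraps, i.e. $(r, v_r)$ chevrons -- grows roughly linearly (the frequency distribution below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
Omega_r = rng.uniform(0.8, 1.2, n)     # radial frequencies (arbitrary units)
theta0 = rng.uniform(0, 2 * np.pi, n)  # initial radial angles

def n_wraps(t):
    """Number of distinct chevrons ~ number of 2*pi wraps spanned
    by the debris in radial angle after time t."""
    theta = theta0 + Omega_r * t
    return int(np.ptp(theta) // (2 * np.pi)) + 1

print(n_wraps(10), n_wraps(100))  # the wrap count grows with time
```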
Particles that arrive at their apocentres later have higher energies and therefore turn around at larger radii. This creates the prominent asymmetry of the outermost chevrons w.r.t.\ the sign of $v_r$, with the maximum radius corresponding to positive $v_r$ and therefore moving outward with time. Eventually, the gradients in the $E$--$\theta_r$ space decrease and the individual chevrons become more and more monoenergetic, but the distance in energy or radius between them also decreases, making it difficult to separate out individual wraps. On the other hand, the ``super-chevrons'' (agglomerations of individual folds) corresponding to the same stripping episode and arm (olive and green) remain discernible even after 10~Gyr, and their spacing is primarily determined by the difference in energy between the leading and the trailing arms of the main (second) stripping episode, which itself encodes information about the progenitor mass and structure. The apocentre radii of the super-chevrons correspond to bumps and breaks in the density profile of the merger debris \citep{Deason2018}.
We also note that the distribution of particles in energy space becomes more wrinkled within a few Gyr after the merger (third row in Figure~\ref{fig:mergersim_global}). This is likely caused by global modes in a self-gravitating merger remnant, where particles satisfying particular resonance conditions end up evolving in a similar way and reinforcing perturbations \citep[e.g.,][]{Weinberg1989}. However, these small-scale structures are eventually smoothed out and dissipate (fourth row), except perhaps in the outermost parts of the remnant.
Figure~\ref{fig:mergersim_insitu} shows that these wrinkles in the one-dimensional distribution of particles in energy are present both in the accreted and \textit{in situ} populations, and that the $r$--$v_r$ phase-space distribution of the host stars also has at least one chevron-shaped feature created during the merger (corresponding to the \textit{Splash} population in the Milky Way).
Figure~\ref{fig:mergersim_local} illustrates the spatial variations in the $r$--$v_r$ phase-space distribution at different times. Shown are three spatial regions of radius 5 kpc, all centered at 8 kpc from the origin, but at different azimuthal positions; of these, the first two are similar to the actual Solar location relative to the GS/E population in the Milky Way (Figure~3 in \citealt{Iorio2019}). At all times, the chevron configuration varies with the selected location and is different above and below $v_r=0$. Although the detailed structure of chevrons depends on the local spatial region, in general a few individual folds remain visible after 10~Gyr, especially when coloured by $L_z$. The variation in $L_z$ between chevrons is inherited from the most dynamic stages of the merger, when both the host and the satellite move relative to each other, and debris stripped at different times and energies end up having different values of $L_z$ after the dust settles.
\section{Discussion and Conclusions}
\label{sec:conc}
We have used {\it Gaia} DR3 RVS data to study small-scale sub-structure in the local portion of the tidal debris from the last significant merger, known as GS/E. The {\it Gaia} DR3 RVS dataset increases the number of stars with complete 6D phase-space information by an order of magnitude compared to other currently available spectroscopic surveys such as SDSS, RAVE, LAMOST, GALAH or APOGEE. For example, applying selection criteria similar to those used in our analysis (e.g. the quality of the astrometric distance measurement and angular momentum) to APOGEE DR17 and LAMOST yields $\approx10,000$ and $\approx30,000$ halo stars, respectively, compared to the $\approx260,000$ analysed here. It is this ramp-up in resolution that enables the study of the small-scale halo density variations reported here.
Armed with the {\it Gaia} DR3 RVS data, we are able to discover a large number of previously undetected sub-structures in the density distribution of the local stellar halo (identified here with a simple angular momentum selection), both in the phase-space and in the integrals-of-motion space. In the energy and angular momentum $(E, L_z)$ space, the GS/E debris is revealed to have an elongated and tilted, avocado-like shape in which stars with higher energy tend to have $L_z>0$. The largest number of stars is at low energies, in the pit of the avocado. The $E$ distribution is also rather wrinkly, with several overdensities and depletions, visible most clearly when halo stars with extreme (either prograde or retrograde) $|L_z|$ are considered (see Section~\ref{sec:energy} and Figure~\ref{fig:elz}). Similarly, in the $(v_r,r)$ phase-space the local stellar halo density is not smooth. It shows a network of thin, nearly linear bands or chevrons that are over-dense compared to the mean background. These chevrons show different strengths depending on whether stars with $L_z>0$ or $L_z<0$ are considered (Section~\ref{sec:phase-space} and Figure~\ref{fig:phsp2}). Part of the difference is due to the in-situ halo contamination, which is a strong function of the angular momentum. At least 5 distinct chevrons are recognisable in the $v_r<0$ portion of the phase-space; however, some of these may be further resolvable into narrower components (Figure~\ref{fig:phsp3}).
To verify the in-situ halo contribution, we estimate stellar metallicities by training a random forest regressor on the {\it Gaia} DR3 BP/RP spectra labelled using APOGEE DR17 abundances (Section~\ref{sec:met}). Figure~\ref{fig:phsp_feh} confirms that a large fraction of low-energy, low-$L_z$ stars are metal-rich, with [Fe/H]$>-0.7$, and thus likely born in the Milky Way proper. The phase-space density of the in-situ halo exhibits several sharp changes, including a chevron-like feature overlapping with Chevron 1. The in-situ halo contribution needs to be taken into account when estimating the pericentre of the GS/E debris: as judged by the retrograde members alone, it must be within $r\approx3$ kpc of the Galactic centre (Figure~\ref{fig:vrr_max}). The most prominent chevrons, 1 and 2, have energies similar to those of the main peaks of the energy distributions (compare Figure~\ref{fig:phsp3} and Figure~\ref{fig:elz}). These strong overdensities ought to influence the stellar halo's radial density profile, likely causing an additional break around $10<r$(kpc)$<15$ as proposed in \citet{Naidu2021} and \citet{Han2022}.
As far as the interpretation of the discovered halo sub-structures is concerned, both the energy wrinkles and the phase-space chevrons are likely to be consequences of phase-space mixing. In the Solar neighbourhood, i.e. away from both the pericentre and the apocentre, only the quasi-linear portions of each chevron are accessible and, indeed, this is how many of these structures appear in the {\it Gaia} data. Phase-mixing is not supposed to alter the $E$ distribution. However, as a result of the mixing, stellar energies end up correlating strongly with the phase-space coordinates, i.e. stellar positions and velocities. Due to the ensuing energy sorting, only stars with specific orbital frequencies will be at the right phase to be observed in a small region of configuration space (e.g. a volume around the Solar neighbourhood) today. This would lead to a series of depletions in the energy distribution of the tidal debris, which would, as a result, appear wrinkled. Similarly, even if, due to phase-mixing, the number and the size of chevrons evolved such that at the present day individual chevrons are no longer discernible, limiting the sample to a small region around the Sun would pick out a subset of chevrons, making the phase-space more striated.
The above discussion describes the situation when a single clump of stars is deposited by the disrupting satellite into the host's potential to phase-mix. Realistically, however, even in the case of an explosive dissolution of a rapidly radializing satellite -- such as that expected for the GS/E progenitor -- multiple stripping episodes are predicted, each resulting in at least two clumps of stars with distinct energies (and often $L_z$) corresponding to the leading and the trailing tidal debris. As the satellite sinks in the host's potential, the energy of each deposited clump will be different, thus making the final energy distribution clumpy (Section~\ref{sec:mergersim}). This may lead to several complications in future analyses of the {\it Gaia} data. First, any modelling techniques that rely on the assumption of a smooth stellar distribution function \citep[e.g.][]{Leonard1990} may struggle given how wrinkly the observed DF is \citep[see][]{Grand2019}. Furthermore, linking overdensities in $(E, L_z)$ space with individual accretion events, following the ideas of e.g. \citet{Helmi2000}, may easily be fraught with danger because a single event can produce multiple clumps.
As the satellite loses mass, the energy spread of the stripped stars diminishes; thus the energy distribution of the debris from a sinking and disrupting dwarf galaxy should appear asymmetric, peaking at low $E$ as observed here (see Section~\ref{sec:energy}). Stars in each energy overdensity left behind will phase-mix and create their own set of chevrons, which will evolve and merge, creating thicker super-chevrons (see Section~\ref{sec:mergersim}). It appears therefore that the complexity of the $(E, L_z)$ distribution of the GS/E tidal debris makes the use of the detected chevron patterns for the timing of the merger event rather difficult. Nonetheless, the first comparisons with both tailored and cosmological zoom-in simulations are reassuring, as most of the salient features uncovered in the {\it Gaia} data have numerical counterparts in the models analysed. As demonstrated by an example from the Auriga cosmological zoom-in suite, phase-space chevrons left behind by a merger similar to GS/E can remain detectable in the host for many Gyr (Section~\ref{sec:cosmo}). Snapshots of our bespoke simulations show a clear prograde tilt of the $(E, L_z)$ cloud as well as the asymmetric, bottom-heavy energy distribution. Thanks to the high resolution of our tailored $N$-body simulation of a GS/E-like merger, new details of both the disruption and the relaxation processes emerge. For example, the energy distribution gets wrinkled not only due to the bursty stripping history but also because of the emergence of what appear to be global and rapid DM density oscillations, ``sloshing'' through the entire host soon after the satellite dissolves. These ripples (Figure~\ref{fig:mergersim_insitu}) flatten out with time, and at the present day, inhomogeneities in the energy distribution are dominated by the individual mass-loss events and the leading-trailing energy bimodality.
The persistence of these features over many Gyr should help trace and reconstruct the sinking of the GS/E progenitor as its orbit radialized.
The simulated local Solar-neighbourhood phase-space distribution bears a strong resemblance to the {\it Gaia} DR3 RVS observations. For example, the chevron pattern varies noticeably with both $v_r$ and $L_z$ (Figure~\ref{fig:mergersim_local}). These $L_z$-related variations are at least in part due to the complexity of the initial conditions with which the debris are deposited into the host. The differences in the chevron properties at positive and negative $v_r$ are possibly due to either the non-zero angular momentum of the stars considered and/or the asphericity of the host potential. Both factors would ensure that once the stars have passed through the Solar neighbourhood, they are likely to miss this relatively small volume on the way back after the turn-around. Our tailored simulations convincingly demonstrate that the amount of small-scale substructure in the phase-space depends not only on the time since the beginning of the merger but also on the observer's position inside the debris cloud. While many narrow chevrons are detectable even at 10 Gyr, the late-time phase-space density is dominated by super-chevrons. The phase-space folds observable as e.g. $(v_r, r)$ chevrons today are a powerful tool for constraining the MW's gravitational potential, similar to other phenomena that arise due to long-term phase-mixing, such as stellar streams in the halo or ``snails'' (phase spirals) in the disk.
Note that phase-space folds have the advantage of covering a larger volume of the Galaxy compared to a typical stream. This can be exploited to map out the global properties of the Milky Way's mass distribution in a novel and independent way. A demonstration of this idea is presented in Dey et al.\ (2022), who detect multiple phase-space chevrons in the halo of the Andromeda galaxy. Enjoying the wide and dense spectroscopic coverage provided by the recently launched DESI survey, they show that Andromeda's Giant Stellar Stream is part of a complex system of tidal debris from a massive and recent accretion event. Given the merger's relatively early stage, in contrast to that of the GS/E, the chevrons identified by Dey et al.\ (2022) are fewer, more pronounced and set further apart from each other in phase space compared to the sub-structures discovered here. Dey et al.\ (2022) take advantage of the panoramic view of the entire M31 halo and use the phase-space folds to constrain M31's potential out to $>$100 kpc from its centre.
Additionally, being as fragile as streams, the phase-space folds will blur and dissolve in reaction to time-varying potential perturbations \citep[also see][for a discussion of additional effects that can lead to the dissolution of the phase-space sub-structure]{Leandro2019evo}. Interactions with a passing mass, such as a sub-halo flying by or a rotating bar, would alter the affected stars' orbital frequencies and thus destroy the coherence of the chevrons. Therefore, the continued existence of these features implies a relatively quiet evolution of our Galaxy since the GS/E merger, although further work is needed to make more quantitative statements. We hope to address the response of the phase-space folds to satellites such as the Sgr dwarf in a forthcoming publication (see Davies et al.\ 2022).
\section*{Acknowledgments}
The authors are grateful to Kathryn Johnston, Nicol\'as Garavito-Camargo, Emily Cunningham, Adrian Price-Whelan, David Spergel, Jason Hunt, Giuliano Iorio, Leandro Beraldo e Silva, Jason Sanders, David Hogg, Iulia Simion, Melissa Ness and the members of the Cambridge Streams and the CCA Dynamics groups for many illuminating discussions that helped to improve the quality of this work. VB thanks Kohei Hattori for enlightening conversations about phase-mixing at the Santa Barbara Gaia Sprint held at the KITP UCSB in Spring 2019. This project was developed in part at the Gaia F\^ete, hosted by the Flatiron Institute Center for Computational Astrophysics in 2022 June.
This research made use of data from the European Space Agency mission Gaia
(\url{http://www.cosmos.esa.int/gaia}), processed by the Gaia Data
Processing and Analysis Consortium (DPAC,
\url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the
DPAC has been provided by national institutions, in particular the
institutions participating in the Gaia Multilateral Agreement. This
paper made use of the Whole Sky Database (wsdb) created by Sergey
Koposov and maintained at the Institute of Astronomy, Cambridge with
financial support from the Science \& Technology Facilities Council (STFC) and the European Research Council (ERC). This work used the DiRAC@Durham facility managed by the
Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K00042X/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, Durham University and STFC operations
grant ST/R000832/1. DiRAC is part of the National e-Infrastructure.
RG acknowledges financial support from the Spanish Ministry of Science and Innovation (MICINN) through the Spanish State Research Agency, under the Severo Ochoa Program 2020-2023 (CEX2019-000920-S). AF is supported by
the UK Research and Innovation (UKRI) Future Leaders Fellowships (grant numbers MR/V023381/1, MR/T042362/1).
\section*{Data Availability}
This study uses publicly available {\it Gaia} DR3 data.
\bibliography{references}
\appendix
\label{lastpage}
Title:
Identifying Interstellar Object Impact Craters
Abstract: The discoveries of two Interstellar Objects (ISOs) in recent years has
generated significant interest in constraining their physical properties and
the mechanisms behind their formation. However, their ephemeral passages
through our Solar System permitted only incomplete characterization. We
investigate avenues for identifying craters that may have been produced by ISOs
impacting terrestrial Solar System bodies, with particular attention towards
the Moon. A distinctive feature of ISOs is their relatively high encounter
velocity compared to asteroids and comets. Local stellar kinematics indicate
that terrestrial Solar System bodies should have experienced of order unity ISO
impacts exceeding 100 km/s. By running hydrodynamical simulations for
projectiles of different masses and impact velocities, up to 100 km/s, we show
how late-stage equivalence dictates that transient crater dimensions are alone
insufficient for inferring the projectile's velocity. On the other hand, the
melt volume within craters of a fixed diameter may be a potential route for
identifying ISO craters, as faster impacts produce more melt. This method
requires that the melt volume scales with the energy of the projectile, while
crater diameter scales with the point-source limit (sub-energy). Given that
there are probably only a few ISO craters in the Solar System at best, and that
transient crater dimensions are not a distinguishing feature for impact
velocities at least up to 100 km/s, identification of an ISO crater proves a
challenging task. Melt volume and high-pressure petrology may be diagnostic
features once large volumes of material can be analyzed in situ.
https://export.arxiv.org/pdf/2208.00533
\title{Identifying Interstellar Object Impact Craters}
\correspondingauthor{Samuel H. C. Cabot}
\email{[email protected]}
\author{Samuel H. C. Cabot}
\affil{Yale University, 52 Hillhouse, New Haven, CT 06511, USA}
\author{Gregory Laughlin}
\affil{Yale University, 52 Hillhouse, New Haven, CT 06511, USA}
\keywords{}
\section{Introduction} \label{sec:intro}
The discoveries of `Oumuamua \citep[from the Pan-STARRS survey, ][]{Meech2017} and Comet 2I/Borisov (by G. Borisov at the Crimean Astrophysical Observatory in 2019)\footnote{www.minorplanetcenter.net/mpec/K19/K19RA6.html} have prompted intensive study of the number density, composition, and origin of ISOs. Initial upper limits on the number density were placed by \citet{Engelhardt2017}, based on simulated ISO populations and their detectability by modern surveys. However, the discovery of `Oumuamua yielded an estimate for similar objects of 0.2 au$^{-3}$ \citep{Do2018}. While Comet 2I/Borisov is very similar to Solar System comets \citep{Guzik2020}, `Oumuamua's oblong shape and lack of coma \citep{Meech2017}, along with its anomalous acceleration \citep{Micheli2018}, have forced reconsideration of its makeup, including materials atypical of comets and asteroids \citep[e.g.][]{Rafikov2018, Fuglistaler2018, Desch2021}. Earlier identification with the Vera C. Rubin Observatory (LSST) or even {\it in situ} analyses \citep{Snodgrass2019} would drastically improve our understanding of ISOs, specifically their relationship to the galaxy-wide population of ejected planetesimals \citep{Trilling2017}.
The entry trajectory of `Oumuamua (at speed $v_\infty \simeq 26$ \kms; \citealt{Meech2017}) was similar to the local standard of rest (LSR) \citep{Francis2009}, consistent with expectations for ISOs. The difference between the median velocity of nearby stars (XHIP catalog; \citealt{Anderson2012}) and that of `Oumuamua's entry was only about $4.5$ \kms\ at $\sim 6^{\circ}$ \citep{Mamajek2017}. Nevertheless, `Oumuamua was not comoving with any particular nearby system. While specific stars have been postulated as the origin, chaotic gravitational interactions make a precise back-tracing impossible. Perhaps unexpectedly, 2I/Borisov entered at $v_\infty \sim 32$ \kms, at $\sim 75^{\circ}$ from the solar apex \citep{Guzik2020}, its origin again speculative \citep{Dybczynski2019}. As pointed out by \citet{Do2018}, the detection volume of ISOs scales as $v_{\infty}^{-1}$, from the product of the gravitational focusing by the Sun (the effective cross-section becomes $r_g^2 = r^2 [(v_{\rm esc}/v_\infty)^2 + 1]$) and the impingement rate. Therefore ISOs may be less efficiently detected if they encounter the Solar System at speeds substantially exceeding the Sun's escape velocity at $d\sim1\,{\rm au}$. The detectability of ISOs as a function of $v_\infty$ and impact parameter $b$ is quantified by \citet{Seligman2018}: ISOs with $v_\infty \gtrsim 10$ \kms\ must have $b \lesssim 5$ au if they are to be identified by LSST prior to periastron. Although `Oumuamua came serendipitously close to Earth ($r_p \simeq 0.25$ au, $b \simeq 0.85$ au), these calculations reveal the significant challenge of detecting additional ISOs.
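The gravitational-focusing scaling quoted here is straightforward to evaluate numerically. This sketch reproduces the $r_g^2 = r^2[(v_{\rm esc}/v_\infty)^2 + 1]$ enhancement at $1\,{\rm au}$ for a few encounter speeds; only the quoted formula and physical constants are used, so the numbers are illustrative rather than a reproduction of the cited rate calculations:

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
au = 1.496e11          # astronomical unit, m

def focused_cross_section(r, v_inf):
    """Effective cross-section with solar gravitational focusing:
    pi * r_g^2 with r_g^2 = r^2 * [(v_esc/v_inf)^2 + 1]."""
    v_esc = np.sqrt(2 * G * M_sun / r)
    return np.pi * r**2 * ((v_esc / v_inf)**2 + 1)

v_esc_1au = np.sqrt(2 * G * M_sun / au) / 1e3
print(f"escape speed at 1 au: {v_esc_1au:.1f} km/s")  # ~42 km/s

# slow ISOs are focused much more strongly than fast ones
for v in [10e3, 26e3, 100e3]:  # m/s
    enhancement = focused_cross_section(au, v) / (np.pi * au**2)
    print(f"v_inf = {v/1e3:5.0f} km/s: focusing factor = {enhancement:.2f}")
```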
Motivated by an encouragingly high encounter rate of ISOs, up to $\sim 7$ per year that pass within $1\,{\rm au}$ of the Sun \citep{Eubanks2021}, we consider an alternative route to characterizing these enigmatic objects: identifying ISO impact craters on terrestrial Solar System bodies. {For example, molten and vaporized projectile matter may mix with impact-modified target rock (impactites) and impart tell-tale chemical signatures. More optimistically, some projectile material might survive in solid phase. A suite of standard chemical and isotopic analyses exists for characterizing meteorites and impact melts \citep{Tagle2006, Joy2016}, which could reveal the ISO's composition.}
{Before an {\it in situ} or retrieved sample analysis is possible, we need a high-fidelity method for screening ISO craters from asteroid and comet craters.
Crater morphology and high-pressure petrology may be differentiating traits, but this premise is significantly challenged by well-known degeneracies between crater and projectile properties \citep{Dienes1970, Holsapple1982}}. {Nevertheless, some} constraints have been achieved for especially renowned and well-studied craters. For example, \citet{Collins2020} used 3D simulations to link asymmetries in the Chicxulub crater to a steeply inclined impact trajectory, although the observations are compatible with a modest range of angles and impact speeds. Using an atmospheric-entry fragmentation model, \citet{Melosh2005} {posited} that Meteor Crater was formed by a low-speed impact, which additionally explains an anomalously low melt volume. {However, this model was challenged by \citet{Artemieva2011} on the basis of the scarcity of observed solid projectile ejecta.} As another example, \citet{Johnson2016} modeled the formation of the Sputnik Planum basin and found consistency with a 220 km diameter projectile; however, they assumed a 2 \kms\ speed typical of impacts on Pluto. {There is a considerable body of literature surrounding each of these craters, which raises interpretations other than those listed here \citep[e.g.][]{Artemieva2009, Denton2021} and echoes the difficulty of inferring projectile properties from their craters.} {We note that impacts} in the Solar System virtually never exceed 100 \kms, and hence such speeds are seldom modeled in the literature. Nevertheless, we will show that they are not atypical for ISO impacts and thus warrant further investigation --- this aspect is the main focus of our study.
{If ISO craters can be identified, then surviving ISO meteorites in and around the crater could be readily analyzed for metallic content, oxygen isotope fractionation, and elemental ratios (e.g. Fe/Mn) \citep{Joy2016}; however, if} ISOs are composed of highly volatile, exotic ice \citep{Seligman2020, Desch2021}, we may expect that they undergo near-complete vaporization upon impact, and suffer the same issues in chemical-based identification as comets do \citep{Tagle2006} (a small percentage of water content may survive comet impacts \citep{Svetsov2015}). {An ISO's composition could still be investigated if its material persists in the impact melt or vapor condensates. For example, }
\citet{Tagle2006} evaluate a few methods for projectile classification, involving relative concentrations of platinum group elements (PGEs), Ni and Cr, and isotopic ratios of Cr and Os. At present, `Oumuamua's composition is highly speculative, as is the composition of the general ISO population. {Any insight into their compositions can be directly tied to formation pathways (e.g. molecular cloud cores in the case of H$_2$, or cratered ice sheets in the case of N$_2$) as well as their abundance in the galaxy \citep[][and references therein]{Levine2021}}.
Our study is outlined as follows: In \S\ref{sec:vel} we review the impingement rate of ISOs and the expected velocity distribution based on local stellar kinematics. In \S\ref{sec:crater} we present hydrodynamical simulations representative of ISO impacts on terrestrial bodies. While certain aspects of these impacts are unconstrained (most notably the projectile composition) we use well-understood materials as proxies to obtain order-of-magnitude estimates of crater size and melt volume. We restrict analysis to transient craters for simplicity; although {collapse and viscous degradation may modify their shapes} \citep[][Chapter 8]{Melosh1989}. Specific attention is given to lunar cratering in light of soon-to-be realized exploration missions; however parts of our investigation extend to other terrestrial bodies such as Mars. The simulation results are subsequently compared to predictions from crater {scaling relationships}. We discuss additional scaling relations in \S\ref{sec:melt}, with a particular focus on how melt volume may be used to infer the impact velocity. Our results are summarized in \S\ref{sec:disc}.
\section{ISO Impact Velocities} \label{sec:vel}
It is important to determine the speed at which ISOs impact terrestrial bodies in the Solar System. {A significant component} comes from $v_\infty$, the speed at which the ISO encounters the Solar System; about $40$ \kms\ is added in quadrature for ISOs that come within $1\,{\rm au}$ of the Sun. {ISO impacts on the Moon can reach velocities $\geq100$ \kms; these events are the focus of our study.} We review analytic expressions for the kinematics of stars in the solar neighborhood, as well as measurements of the velocity dispersion along each principal axis. Next, we independently analyze the kinematics of stars with full phase-space measurements provided in the recent {\it Gaia} data release. These velocities are combined with the estimated number density of ISOs to obtain the encounter rate as a function of ISO speed.
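The quadrature addition can be sketched directly (42 \kms\ is the Sun's escape speed at 1 au and 2.38 \kms\ the Moon's surface escape speed, both standard values; atmospheric effects are ignored):

```python
import math

def impact_speed(v_inf, v_sun_1au=42.0, v_esc_body=2.38):
    """Impact speed (km/s): heliocentric encounter speed, infall to 1 au,
    and the target body's escape speed, added in quadrature."""
    return math.sqrt(v_inf**2 + v_sun_1au**2 + v_esc_body**2)

# An ISO encountering the Solar System at 90.6 km/s strikes the Moon
# at roughly 100 km/s, the high-speed threshold adopted in this study.
v_moon = impact_speed(90.6)
```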
\subsection{Local Stellar Kinematics: Theory} \label{subsec:veltheory}
ISOs of icy composition are expected to have a kinematic distribution reflective of their origin systems. \citet{Binney2008} show that velocities in the galactic disk are well described by a Schwarzschild distribution
\begin{equation}
f(\mathbf{v}) = S(L_z) \exp\Big[-\Big(\frac{v_R^2 + \gamma^2\widetilde{v}_\phi^2}{2\sigma_R^2(L_z)} + \frac{v_z^2 + 2\Phi_z(z, L_z)}{2\sigma_z^2(L_z)}\Big)\Big]\, ,
\end{equation}
for cylindrical velocity components $v_R$, $v_\phi$, and $v_z$ and their respective dispersions $\sigma_R$, $\sigma_\phi$, and $\sigma_z$ \citep{Dehnen1998, Nordstrom2004}. Angular momentum is denoted $L_z$. The term $\widetilde{v}_\phi \equiv v_\phi - v_c(R)$ represents the difference between the azimuthal velocity component and the circular velocity at the star's galactic radius $R$. The term $\gamma \equiv 2\Omega/\kappa$ arises from the guiding center approximation, where $\Omega$ is the circular frequency and $\kappa$ is the epicyclic frequency. The potential $\Phi_z(z, L_z)$ appears from an approximation to the third integral of motion. The exponential form follows from \citet{Shu1969}, and the leading term $S(L_z)$ depends on the surface density of stars. Under two approximations, first that the surface density follows an exponential disk, and second that the dispersions are relatively low compared to the circular speed (i.e. that the stars are of a ``cold" population), the solar neighborhood distribution follows a triaxial Gaussian model \citep{Schwarzchild1907},
\begin{equation}
dn \propto \exp\Big[-\Big(\frac{v_R^2 + \gamma^2\widetilde{v}_\phi^2}{2\sigma_R^2} + \frac{v_z^2}{2\sigma_z^2}\Big)\Big].
\end{equation}
If one generalizes beyond the epicyclic approximation, which also assumed that $\sigma_\phi/\sigma_R = \kappa/2\Omega$, then the solar neighborhood distribution becomes
\begin{equation} \label{eqn:velcomp}
f(\mathbf{v}){\rm d}^3\mathbf{v} = \frac{n_0 {\rm d}^3\mathbf{v}}{(2\pi)^{3/2}\sigma_R\sigma_\phi\sigma_z}
\exp\Big[-\Big(\frac{v_R^2}{2\sigma_R^2} + \frac{\widetilde{v}_\phi^2}{2\sigma_\phi^2} + \frac{v_z^2}{2\sigma_z^2}\Big)\Big]\, ,
\end{equation}
where $n_0$ is the number of stars per unit volume \citep{Binney2008}. This equation is useful under the assumption that ISOs originate predominantly from nearby, Population I stars. As discussed in the following subsection, population studies provide excellent constraints on the dispersion along each principal axis. However, the distribution for speed $|v|$ is not well described by a Gaussian or Boltzmann distribution; a log-normal model provides a reasonable fit \citep{Eubanks2021}.
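The non-Maxwellian character of the speed distribution can be illustrated by drawing from the triaxial Gaussian of Equation~\ref{eqn:velcomp}. The dispersion values below are illustrative stand-ins of thin-disk order, not fitted quantities:

```python
import math, random

random.seed(1)
# Illustrative per-axis dispersions (km/s), with sigma_R > sigma_phi > sigma_z:
sig_R, sig_phi, sig_z = 35.0, 25.0, 15.0

speeds = []
for _ in range(20000):
    vR   = random.gauss(0.0, sig_R)
    vphi = random.gauss(0.0, sig_phi)
    vz   = random.gauss(0.0, sig_z)
    speeds.append(math.sqrt(vR**2 + vphi**2 + vz**2))

mean_speed = sum(speeds) / len(speeds)
```

Because the three axes carry unequal dispersions, the resulting $|v|$ histogram is skewed relative to a Maxwellian of any single dispersion, consistent with the log-normal fit of \citet{Eubanks2021}.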
The impact rate of ISOs, $\Gamma = n_{\rm ISO}\sigma_p v_{\infty}$, depends on the number density of ISOs, the cross-sectional area of the target, and the relative velocity of the two bodies. A more detailed formulation is given by \citet{Lacki2021},
\begin{equation}
\Gamma(\geq K_T) = \int_{0}^{\infty} \int_{2K_T/v_{\infty}}^{\infty} \sigma_p v_{\infty} f(v_{\infty}) \frac{dn_{\rm ISO}}{dm} dm d v_{\infty}\, ,
\end{equation}
which is the impact rate of ISOs with energy at least $K_T$. The mass distribution is probably well-described by a {power law}, which is often adopted for minor body populations in the Solar System. Order-of-magnitude estimates by \citet{Lacki2021} {yield an ISO impact rate of $6\times10^{-6}$ Gyr$^{-1}$ at Earth, restricted to projectiles with $\geq1$ YJ kinetic energy (roughly equivalent to a $10^{15}$ kg projectile impacting at $45$ \kms)}. A one-dimensional, Maxwellian stellar velocity dispersion of 30 \kms\ was assumed, which is roughly the average of the three solar neighborhood dispersions measured by \citet{Anguiano2017}. {We investigate the local velocity dispersion in more detail in the next subsection}. Note the actual impact speed of the ISO is higher than the relative speed with the Solar System ($v_i > v_\infty$) due to extra energy gained by falling into the Sun's potential well, plus a small contribution {from} the target planet or {satellite}'s gravity, {notwithstanding atmospheric effects}. Also, the effective cross-section is modified by gravitational focusing.
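The rough equivalence quoted from \citet{Lacki2021} is easily checked:

```python
# Kinetic energy of a 10^15 kg projectile striking at 45 km/s:
m = 1e15            # kg
v = 45e3            # m/s
K = 0.5 * m * v**2  # J; 1 YJ = 1e24 J
# K is ~1.01e24 J, i.e. about 1 YJ, as stated.
```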
{Our investigation hinges on the possibility that anomalously fast ISO impacts produce craters distinct from comet and asteroid impacts. Therefore, we review the distributions of impact speeds in the Solar System to determine a velocity threshold that effectively excludes comets and asteroids.} {Impacts at Earth, Venus, and Mercury commonly exceed 20 \kms\ \citep{lefeuvre2011}, with Mercury's distribution extending to 90 \kms. For the Earth/Moon system, impacts rarely occur at greater than 50 \kms\ \citep{lefeuvre2011}. The high-velocity tail mainly comprises long-period comets which may impact at speeds up to $\sim70$ \kms\ \citep[][]{Steel1998}. Cosmic velocities of ISOs, however, occasionally exceed 90 \kms\ (see below) and would therefore yield {impacts faster than expected for typical Solar System impactors} {(e.g. 100 \kms\ at Earth and up to 113 \kms\ at Mercury, taking into account the Sun's potential well and planet escape velocity)}. {Comet and asteroid impact velocities are generally lower for bodies at} larger semi-major axes. For example, the mean impact speed is 10.6 \kms\ for Mars \citep{lefeuvre2008} and 4.75 \kms\ for Vesta \citep{obrien2011}. The distribution of impacts vanishes past 40 \kms\ for Mars, and past 12 \kms\ for Vesta and Ceres \citep{obrien2011}. If craters could be linked to these impact speeds or higher, ISOs would be strong candidates for the associated projectile. Therefore, while this study is primarily concerned with impacts on the Moon, a larger range in impact speed could be associated with ISOs for craters on Mars and more distant terrestrial bodies.}
\subsection{Local Stellar Kinematics: Observed} \label{subsec:velobs}
The proper motions of nearby stars are thoroughly measured thanks to large surveys. {\it Gaia}, for example, has provided a massive catalog of 7.2 million stars with complementary line-of-sight velocities \citep{GaiaCollaboration2018a}. After filtering, their main sample contained approximately 6.4 million sources with full phase-space measurements. The vast majority of stars within the sample lie near the origin in the classic Toomre diagram \citep{Sandage1987} depicting $V$ against $(U^2+W^2)^{1/2}$ offset by the solar LSR ($U$, $V$, and $W$ refer to radial, tangential, and vertical velocity components respectively). This figure is often used to depict distinct populations of stars \citep{Venn2004}. Iso-velocity contours in the Toomre Diagram delineate transitions between stellar populations; for example, \citet{Nissen2004} define 80 \kms\ and 180 \kms\ as the boundaries confining thick-disk stars, where lower speeds correspond to the thin-disk stars. \citet{Venn2004} used the Toomre Diagram to dynamically classify stars into five categories (thin-disk, thick-disk, halo, high-velocity, and retrograde), and subsequently determine chemical properties of each population.
A significant fraction of stars in the {\it Gaia} catalog have relative speeds exceeding $100$ \kms, but few lie in the Solar System's vicinity. For stars in the galactic mid-plane (extending $-200$ to $+200$ pc), velocity dispersions are of order $10-40$ \kms\ for the three components, with some variation in radial distance from the galactic center. Populations of stars that are a few kpc above and below the mid-plane exhibit dispersions of up to $60-80$ \kms\ per component. Other survey studies also report the spatial dependence of velocity dispersion (generally increasing toward the galactic center, and away from the mid-plane) \citep[e.g.][]{Bond2010, Recio-Blanco2014}. Stellar properties such as metallicity and age are correlated with velocity dispersion \citep[e.g.][]{Stromgren1987, Nissen2004, Rojas-Arriagada2014}. Example dispersions considered by \citet{Binney2008} were based on the Geneva-Copenhagen survey \citep{Nordstrom2004} that observed F and G dwarfs. \citet{Nordstrom2004} presented age-dependent velocity dispersions. The youngest stars (within their 1 Gyr age bin) had $\sigma_{\rm tot} \approx 30$ \kms, while the oldest stars (within their 10 Gyr age bin) had $\sigma_{\rm tot} \approx 60$ \kms. In all bins, it was found that $\sigma_U > \sigma_V > \sigma_W$.
We analyzed the dynamics of ISOs originating within the local stellar neighborhood using {\it Gaia} EDR3 data \citep{Gaia2020}, in a similar fashion to \citet{Marchetti2020} and \citet{Eubanks2021}. The Sun's peculiar velocity was taken as (11.1, 12.24, 7.25) \kms\ \citep{Schonrich2010} relative to an LSR of (0, 235, 0) \kms. The dynamics of the closest stars are probably most representative of ISO velocities, so we included only stars within 200 pc of the Sun. The Toomre Diagram for the stellar sample is shown in Figure~\ref{fig:rates}, where velocity components are as measured in the Sun's rest-frame. Each bin is rescaled to reflect its contribution to the encounter rate of ISOs. This step is accomplished by first normalizing the sum over all bins to the ISO number density, $n_{\rm ISO} \sim$ 0.1 au$^{-3}$. This value is half the estimate of \citet{Do2018}, and is used as an upper limit by \citet{Eubanks2021} who appeal to the lack of recent detections. Each bin is multiplied by its speed $|v_\infty|=\sqrt{U_{rel}^2 + V_{rel}^2 + W_{rel}^2}$, a cross-section of 1 au$^2$, and a gravitational-focusing enhancement factor of $1 + (v_{\rm esc}/v_\infty)^2$, where $v_{\rm esc}$ is evaluated at 1 au. The results do not strongly depend on population volume since we normalize the distribution to reflect the total number density of ISOs. We find the total encounter rate of ISOs within 1 au of the Sun is about $6.80$ yr$^{-1}$. The majority arrive with $v_\infty < 100$ \kms, with a rate of 6.32 yr$^{-1}$. High-speed ISOs with $v_\infty > 100$ \kms\ arrive at 0.47 yr$^{-1}$, and those with $v_\infty > 200$ \kms\ at 0.05 yr$^{-1}$. Our results are nearly the same as those of \citet{Eubanks2021}.
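For a single representative encounter speed, the per-bin weighting described above reduces to $n_{\rm ISO}\,\pi b^2\,v_\infty$ with the focusing-enlarged impact parameter. The sketch below assumes the pre-focusing cross-section for passing within 1 au is $\pi(1\,{\rm au})^2$ (our reading of the normalization) and uses 1 au yr$^{-1} = 4.74$ \kms:

```python
import math

N_ISO = 0.1           # ISO number density, au^-3
V_ESC_1AU = 42.0      # Sun's escape speed at 1 au, km/s
KMS_PER_AU_YR = 4.74  # 1 au/yr expressed in km/s

def encounter_rate(v_inf):
    """Rate (yr^-1) of ISOs at speed v_inf (km/s) passing within 1 au:
    n * pi * (1 au)^2 * [1 + (v_esc/v_inf)^2] * v_inf."""
    focus = 1.0 + (V_ESC_1AU / v_inf)**2
    return N_ISO * math.pi * focus * v_inf / KMS_PER_AU_YR

rate_40 = encounter_rate(40.0)  # near the peak of the speed distribution
```

If every ISO moved at 40 \kms, the rate would be $\sim5.6$ yr$^{-1}$; averaging over the full {\it Gaia}-derived distribution yields a number of order the $6.80$ yr$^{-1}$ total quoted above.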
Interestingly, high-speed ISOs make a non-negligible contribution to the encounter rate, despite the vast majority of nearby stars having relative speeds of $\lesssim 100$ \kms\ (the peak of the distribution lies at around $40$ \kms). Multiplying by the ratio of the target's cross section to 1 au$^2$, we find Earth and the Moon experience $\sim 12$ and $\sim 0.9$ ISO impacts per Gyr, respectively. The objects most pertinent to this study, ISOs that impact the Moon at speeds $v_i > 100$ \kms, have encounter speeds of $v_{\infty} > 90.6$ \kms, and a corresponding impact rate of $\sim 0.09$ per Gyr. Equivalently, there is a $31\%$ chance that the Moon experienced a high-speed ISO impact in the past 4 Gyr. Repeating the above analysis for Mars yields a high-speed impact rate of $\sim 0.29$ per Gyr. These results indicate that there should be of order unity high-speed ISO impact craters {on} the Moon and Mars.
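These occurrence probabilities follow from Poisson statistics; a minimal check of the numbers above:

```python
import math

def p_at_least_one(rate_per_gyr, t_gyr):
    """Poisson probability of at least one event in time t."""
    return 1.0 - math.exp(-rate_per_gyr * t_gyr)

p_moon = p_at_least_one(0.09, 4.0)  # ~0.30; the ~31% quoted, to rounding
p_mars = p_at_least_one(0.29, 4.0)  # ~0.69 for Mars over the same span
```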
{For most remaining terrestrial bodies, the chances of identifying an ISO crater based on the projectile's extreme speed appear slim. High-speed impacts of asteroids and comets are common at Mercury's orbit \citep{lefeuvre2011}; Venus experienced a recent cataclysmic resurfacing event \citep{Schaber1992}; and Earth's geological activity has largely erased ancient craters. The Galilean Moons Io and Europa seem unlikely candidates due to their small surface areas and young surface ages of $0.3-2.3$ Myr and 60 Myr, respectively \citep{Schenk2004}. Ganymede and Callisto on the other hand have surface ages of $\gtrsim 2$ Gyr and could be potential targets.}
\section{{Transient} Crater {Dimensions}} \label{sec:crater}
It is well known that crater dimensions are highly degenerate with projectile properties (e.g. velocity, radius, density, impact angle) \citep{Dienes1970, Holsapple1987}. We simulate impacts on the Moon in order to test whether degeneracies persist at the high-velocity tail of ISO impacts. {Our selection of target materials is a subset of those simulated by \citet{Prieur2017}, characteristic of the lunar regolith and upper megaregolith.} We then compare the results to theoretical expectations for crater diameter based on late-stage equivalence \citep{Dienes1970}.
\subsection{{Simulation Overview}}
We simulate impacts with the iSALE-Dellen 2D hydrocode \citep{Wunnemann2006}, which is based on the Simplified Arbitrary Lagrangian-Eulerian (SALE) program \citep{Amsden1980} designed for fluid flow at all speeds. SALE features a Lagrangian update step, an implicit update for time-advanced pressure and velocity fields, and finally an advective flux step for Eulerian simulations. Calculations are performed on a mesh in an Eulerian frame of reference to prevent highly distorted cells. Over the years, the program has seen new physics implemented, including an elasto-plastic constitutive model, fragmentation models, various equations of state (EoS), multiple materials, and new models of strength, porosity compaction, and dilatancy \citep{Melosh1992, Ivanov1997, Wunnemann2006, Collins2011, Collins2014}. Massless tracer particles moving within the mesh \citep{Pierazzo1997} record relevant Lagrangian fields. We adopt a resolution of 20 cells per projectile radius (CPPR), which has been demonstrated to be within $\sim10\%$ of convergence for spall velocity \citep{Head2002}, peak shock pressure, and crater depth and diameter \citep{Pierazzo2008}. \citet{Barr2011} show that 20 CPPR underestimates melt volume by $\sim15\%$ in simulations of identical projectile and target materials. {For our impact configurations, we found $19\%$ and $22\%$ lower melt volume in 20 CPPR simulations compared to 80 CPPR, for 30 \kms\ and 100 \kms\ impacts, respectively (Appendix~\ref{sec:appmelt}). Therefore we multiply melt volumes in our main analysis by a proportionate correction factor.} The timestep is limited by the Courant-Friedrichs-Lewy (CFL) criterion, which demands higher temporal resolution for faster material speeds (and faster impact velocities). {We fixed the width of the high-resolution zone, which we found to overlie roughly the inner half of the transient crater.
This layout is sufficient for determining melt volume, and for determining the transient crater diameter (Appendix~\ref{sec:appscale}).} Three-dimensional simulations are occasionally used in the literature \citep[e.g.][]{Artemieva2004}. They are prohibitively expensive for our {investigation}, and unnecessary for exploring quantitative differences in crater profiles resulting from variable impact velocity. We restrict our analysis to head-on, azimuthally symmetric impacts. More information regarding computational methods for impact simulations is discussed by \citet{Collins2012} and references therein.
We focus on impacts on the Moon that produce {simple} craters. {Both 2I/Borisov and `Oumuamua have effective radii bounded above by a few hundred meters, and more likely $\lesssim 100$ m \citep[e.g.][]{Jewitt2019}, which is insufficient to yield complex craters on the Moon.} We assume a target composed of basalt and a projectile of water ice. We acknowledge that `Oumuamua was likely not composed of water ice, and that 2I/Borisov was depleted in H$_2$O. The typical composition of ISOs remains debated, however, and all recent hypotheses have specific, production-limiting aspects \citep{Levine2021}. Nevertheless, `Oumuamua's anomalous acceleration probably implies a significant volatile component, either in the form of common ices (e.g. H$_2$O, CO) or exotic ices (e.g. H$_2$, N$_2$, CH$_4$) or a combination of both. We restrict analysis to a water ice projectile since: (1) the purpose of this study is to investigate whether extremely fast ISO impacts are discernible from those of comets and asteroids, and the main parameter of interest is impact speed; (2) the bulk properties of exotic ices are poorly constrained; and (3) many material properties of H$_2$O ice are within the same order of magnitude as those of other ices. {Nevertheless, an exotic ice projectile composition could affect the crater in a variety of ways. For example, extremely low density H$_2$ ($\rho \sim 0.08$ g cm$^{-3}$) would produce a crater of lower volume, owing to a shallower {penetration} depth $d_b \propto \rho_p^{0.5}$ \citep{Birkhoff1948}. Impacts on Mars are not thoroughly investigated here, but would warrant consideration of the planet's thin atmosphere, in which exotic ice projectiles would fragment and thus modify the crater morphology \citep{Schultz1985}. Highly volatile ices may also experience increased ablation at lower velocities, reducing the projectile's mass.}
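As an illustration of the density effect just mentioned, the \citet{Birkhoff1948} scaling $d_b \propto \rho_p^{0.5}$ gives the relative penetration depth directly (densities in g cm$^{-3}$; both values appear in the text and Table~\ref{tab:params}):

```python
import math

rho_h2, rho_h2o = 0.08, 0.91  # g/cm^3: H2 ice vs. water ice
depth_ratio = math.sqrt(rho_h2 / rho_h2o)
# An H2-ice projectile of the same size and speed penetrates only ~30%
# as deep as a water-ice one, favoring a shallower, lower-volume crater.
```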
\subsection{{Simulated Target and Projectile Properties}}
{Material specifications for our simulations are described as follows, and are also listed in Table~\ref{tab:params}. They are primarily based on parameters used by \citet{Prieur2017} for basalt and by \citet{Johnson2016} for water ice. Material strength is set by a Drucker-Prager model, which is most appropriate for granular targets. Required parameters include cohesion $Y_0$, coefficient of friction $f$, and limiting strength at high pressure $Y_{\rm LIM}$. The $\epsilon$-$\alpha$ compaction porosity model \citep{Wunnemann2006, Collins2011} is adopted for the target, but neglected for the projectile. The required parameters are initial distension $\alpha_0\equiv 1/(1-\Phi)$ (for porosity $\Phi$), elastic volumetric strain threshold $\epsilon_{e0}$, transition distension $\alpha_X$, compaction rate parameter $\kappa$, and sound speed ratio $\chi$. Tensile failure remains off since the target is already assumed damaged under the strength model. Acoustic fluidization is neglected since our simulations only concern simple craters. Dilatancy is also neglected since it has very small effect on transient crater dimensions \citep{Collins2014}. Low density weakening (a polynomial function of density) and thermal softening \citep{Ohnaka1995} are enabled.}
{We proceed to simplify our model by fixing several of the above variables. Porosity parameters are set to $\epsilon_{e0}=0$, $\alpha_X=1$, and $\kappa=0.98$ \citep{Collins2011}, as well as $\chi=1$ \citep{Wunnemann2006, Prieur2017}. We fix $f=0.6$ which is reasonable for sand-like materials. This value was used in early basalt target modeling \citep{Pierazzo2005}, the majority of models in a multi-layer lunar cratering study \citep{Prieur2018}, and in more recent impact studies involving basalt targets \citep[e.g.][]{Bowling2020}. Limiting strength $Y_{\rm LIM}$ has marginal effect on crater scaling parameters \citep{Prieur2017}; it is fixed to 1 GPa in our simulations. For the water ice projectile we fix $Y_0 = 0.01$ MPa, $f=0.55$, and $Y_{\rm LIM} = 147$ MPa \citep{Johnson2016}}.
{The remaining material parameters are target $Y_0$ and $\Phi$. The lunar crust has an average porosity $\Phi=12\%$ extending a few km deep \citep{Wieczorek2013}, with variations between $4-21\%$. We perform simulations for three representative values of porosity: $\Phi=0,12,20\%$. We also consider two possibilities for cohesion: $Y_0=5$ Pa and $Y_0 = 10$ MPa. The former is representative of granular targets with negligible cohesion \citep[identical to][]{Prieur2017}, while the latter is representative of more competent targets. A cohesion of $10$ MPa is the highest cohesion considered by \citet{Prieur2017}, and may overestimate the actual cohesion in the heavily fractured and brecciated upper-megaregolith; but we adopt $10$ MPa for greater contrast against the nearly cohesionless scenario. We use an ANEOS equation of state (EoS) for the basalt and a Tillotson EoS for water ice (parameter values are listed in Table~\ref{tab:params}).}
{For each target material combination ($Y_0$, $\Phi$), we simulated nine impacts spanning projectile diameters $L=40, 80, 160$ m and velocities $v_i = 10, 30, 100$ \kms. A total of 54 simulations were performed.}\footnote{However, the \{$L=40$ m, $v_i = 100$ \kms, $\Phi=12\%$, $Y_0 = 5$ Pa\} simulation was not numerically stable, and is excluded from further analysis.}
\setlength{\tabcolsep}{1.0pt}
\renewcommand{\arraystretch}{1.0}
\begin{table}
\centering
\begin{tabular}{l r r}
\hline
iSALE Material Parameter & Target & Projectile \\
\hline\hline
Material & Basalt & Ice\\
EOS type & ANEOS & Tillotson\\
Poisson ratio & 0.25$^a$ & 0.33$^b$\\
Thermal softening constant & 1.2$^a$ & 1.84$^b$\\
Melt temperature (K) & 1360$^a$ & 273$^b$\\
Simon $a$ parameter (Pa) & 4.5$\times10^{9,a}$ & 6.0$\times10^{9,c}$ \\
Simon $c$ parameter & 3.0$^a$ & 3.0$^c$ \\
$^*$Cohesion (damaged) (Pa) & (5, 1.0$\times10^7$) & 1.0$\times10^{4,b}$ \\
Friction coeff. (damaged) & 0.6$^a$ & 0.55$^b$ \\
Limiting strength (Pa) & 1.0$\times10^{9,a}$ & 1.47$\times10^{8,b}$ \\
$^*$Initial Porosity ($\%$) & (0, 12, 20) & - \\
Elastic threshold & 0.0$^a$ & - \\
Transition distension & 1.0$^a$ & - \\
Compaction rate parameter & 0.98$^a$ & - \\
Bulk sound speed ratio & 1.0$^a$ & - \\
\hline
Tillotson EoS Parameter (Ice) & & Value \\
\hline
\hline
Reference density (g cm$^{-3}$) & & 0.91$^c$\\
Spec. heat capacity (J kg$^{-1}$ K$^{-1}$) & & 2.05$\times10^{3,c}$\\
Bulk modulus (Pa) & & 9.8$\times10^{9,c}$\\
Tillotson B constant (Pa) & & 6.5$\times10^{9,c}$\\
Tillotson E$_0$ constant (J kg$^{-1}$) & & 1.0$\times10^{7,c}$\\
Tillotson a constant & & 0.3$^c$\\
Tillotson b constant & & 0.1$^c$\\
Tillotson $\alpha$ constant & & 10.0$^c$\\
Tillotson $\beta$ constant & & 5.0$^c$\\
SIE incipient vaporisation (J kg$^{-1}$) & & 7.73$\times10^{5,c}$\\
SIE complete vaporisation (J kg$^{-1}$) & & 3.04$\times10^{6,c}$\\
\hline
\end{tabular}
\caption{{Table of parameters used in our hydrodynamical simulations. The basalt ANEOS is from \citet{Pierazzo2005}. An asterisk ($^*$) denotes parameters varied in our simulations. All fixed parameters include a reference: $^a$\citep{Prieur2017}, $^b$\citep{Johnson2016}, $^c$(parameter included in the iSALE-Dellen 2D distribution). The ice Tillotson EoS parameters are listed in the bottom section of the table. SIE $\equiv$ specific internal energy.} }
\label{tab:params}
\end{table}
\subsection{Expectations from Late-Stage Equivalence}
Late-stage equivalence, established by \citet{Dienes1970}, indicates that information surrounding the projectile is lost in the late stages of crater formation. Indeed, \citet{Holsapple1987} show that the volume of the resultant crater, for a fixed combination of impactor and target materials, can be estimated by treating the projectile as a point source characterized by the coupling parameter
\begin{equation} \label{eqn:couple}
C = C(L, {v_i}, \rho_p) = Lv_i^{\mu}\rho_p^{\nu}.
\end{equation}
The {power law} form follows from the requirements that $C$ remains finite as {projectile diameter} $L \to 0$, and that $C$ must have fixed dimensionality. The convention adopted by \citet{Holsapple1987} is that $C$ has dimensions of length.
Impacts with equal $C$ produce transient craters with equal volumes. {\citet{Housen2011} review constraints on $\mu$ and $\nu$ from various past experiments. They indicate $\mu \sim 0.55$ for impacts into competent, non-porous rocks, which represents scaling in between momentum and energy dependence. Dry soils have $\mu \sim 0.41$, and highly porous materials are expected to have $\mu < 0.4$. Also, $\nu = 0.4$ has been shown to hold for a variety of materials, even when projectile and target bulk densities differ significantly.}
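The degeneracy implied by the coupling parameter can be made concrete: for fixed target and projectile materials, any $(L, v_i)$ pair with equal $C$ yields the same transient crater volume. A sketch with $\mu = 0.55$ (competent rock) and $\nu = 0.4$, the values quoted above; the specific numbers are illustrative:

```python
def coupling(L, v, rho, mu=0.55, nu=0.4):
    """Point-source coupling parameter C = L * v^mu * rho^nu."""
    return L * v**mu * rho**nu

def equivalent_diameter(L1, v1, v2, mu=0.55):
    """Diameter at speed v2 giving the same C as (L1, v1), same density."""
    return L1 * (v1 / v2)**mu

# A 160 m ice body at 30 km/s couples like an ~83 m body at 100 km/s:
L2 = equivalent_diameter(160.0, 30.0, 100.0)
C1 = coupling(160.0, 30.0, 0.91)
C2 = coupling(L2, 100.0, 0.91)
```

The speed exponent $\mu < 1$ means a modest reduction in diameter offsets a large increase in speed, which is why crater size alone cannot single out a fast ISO projectile.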
Using Pi-group scaling \citep{Buckingham1914}, one may choose dimensionless parameters:
\begin{equation} \label{eqn:pi1}
\pi_D \equiv D_{tr}\Big( \frac{\rho_t}{m} \Big)^{1/3}
\end{equation}
\begin{equation} \label{eqn:pi2}
\pi_2 \equiv \Big(\frac{4\pi}{3}\Big)^{1/3}\frac{gL}{v_i^2}
\end{equation}
\begin{equation} \label{eqn:pi3}
\pi_3 \equiv \frac{Y}{{\rho_t} v_i^2}
\end{equation}
\begin{equation} \label{eqn:pi4}
\pi_4 \equiv \frac{\rho_t}{\rho_p}
\end{equation}
\citep{Holsapple1982}, where $D_{tr}$ is the diameter of the transient crater, and $m$ is the projectile mass. {The material strength $Y$ is not precisely defined, but relates to cohesion and tensile strength.} The transient crater geometry is often used in studies of scaling relations, since it is not dependent on modification (there is also a slight distinction between rim-to-rim dimensions and `apparent' dimensions which are measured with respect to the pre-impact baseline). A properly chosen dimensionless functional relationship $\pi_D = F(\pi_2, \pi_3, \pi_4)$ often serves as a reasonable approximation for crater geometry. {\citet{Holsapple1982} provide a general scaling relation}
{
\begin{equation} \label{eqn:piD1}
\pi_D = K_1\Big[\pi_2\pi_4^{\frac{2+\mu-6\nu}{-3\mu}} + \Big( K_2\pi_3\pi_4^{\frac{2-6\nu}{-3\mu}} \Big)^{\frac{2+\mu}{2}} \Big]^{\frac{-\mu}{2+\mu}},
\end{equation}
for empirically determined scaling constants $K_1$ and $K_2$ (it is more useful to measure $K_2 Y$, rather than both individual terms).} Energy and momentum scaling {correspond to} $\mu = 2/3$ and $\mu = 1/3$, respectively, which is readily seen by taking the cube of $C$.
{Two regimes are apparent in the above equation: gravity-dominated craters (large $\pi_2$ term), and strength-dominated craters (large $\pi_3$ term). The former regime is appropriate for craters in the fine-grained lunar regolith, which is of order $10$ m deep \citep{McKay1991}. The lunar megaregolith consists of coarser-grained and heavily-brecciated material and extends tens of km deep, and cohesion likely factors into crater formation in this layer. We can use Pi-group scaling to predict which regime our simulations fall into. The transition between regimes occurs roughly when $\pi_2 = (K_2\pi_3)^{(2+\mu)/2}$, or equivalently $(K_2 Y / \rho_t v_i^2)^{1.25} \approx 1.6 g L/v_i^2$, assuming a typical $\mu = 0.5$. Approximating $K_2 Y \approx 20.9Y_0$ \citep{Prieur2017} and solving for $Y_0$, we can find the transition cohesive strength. For example, a 160 m diameter projectile striking at 100 \kms\ yields $\sim 2$ MPa. Therefore, our simulations of $Y_0 = 10$ MPa targets are in the strength-dominated regime, whereas those with $Y_0 = 5$ Pa targets are in the gravity-dominated regime. The same holds for other considered projectile diameters and velocities.}
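The transition estimate above can be reproduced numerically. The basalt density below is our assumption ($\rho_t = 2860$ kg m$^{-3}$, a typical ANEOS basalt reference value not stated in the text):

```python
g = 1.62        # lunar surface gravity, m/s^2
L = 160.0       # projectile diameter, m
v = 100e3       # impact speed, m/s
rho_t = 2860.0  # assumed basalt target density, kg/m^3

# Transition: (K2*Y / (rho_t v^2))^1.25 = 1.6 g L / v^2, for mu = 0.5.
rhs = 1.6 * g * L / v**2
K2Y = rho_t * v**2 * rhs**(1.0 / 1.25)
Y0 = K2Y / 20.9  # using K2*Y ~ 20.9 Y0 (Prieur et al. 2017)
# Y0 comes out near 2 MPa, as quoted: the 10 MPa targets are strength-
# dominated, while the 5 Pa targets are gravity-dominated.
```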
\subsection{Results}
{The simulations closely follow trends consistent with late-stage equivalence --- power law functions of the dimensionless Pi-group scaling parameters. The results are shown in Figure~\ref{fig:fits} for both the gravity- and strength-dominated regimes. In the former case we fit a power law between $\pi_D$ and $\pi_2$, and in the latter case we fit a power law between $\pi_D$ and $\pi_3$; subsequently, we solve for $\mu$. Outcomes for the three target porosities/distentions were fit separately, since they represent distinct target materials. In the gravity-dominated regime, we find $\mu = (0.533, 0.510, 0.514)$ for $\Phi=(0\%, 12\%, 20\%)$ scenarios. In the strength-dominated regime, we find $\mu = (0.554, 0.493, 0.486)$. Across all fits, the maximum deviation of $\pi_D$ from a power law fit is $7\%$.
Discrepancies are addressed in \S\ref{sec:disc}, but the overall conformity of the dimensionless scaling parameters to a power law relationship confirms sub-energy scaling of crater diameter for projectile speeds up to 100 \kms}.
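The gravity-regime fitting procedure reduces to a linear fit in log space: with $\pi_D = K_1 \pi_2^{\beta}$ and $\beta = -\mu/(2+\mu)$, a fitted slope yields $\mu = -2\beta/(1+\beta)$. A sketch, using synthetic data built from the $\Phi = 20\%$ result ($\mu = 0.514$, $K_1 = 1.143$) since the raw simulation outputs are not reproduced here:

```python
import numpy as np

def mu_from_gravity_fit(pi2, piD):
    """Fit pi_D = K1 * pi_2**beta in log space and invert beta = -mu/(2+mu)."""
    beta, logK1 = np.polyfit(np.log(pi2), np.log(piD), 1)
    return -2.0 * beta / (1.0 + beta), float(np.exp(logK1))

# synthetic check: exact power-law data should return the input exponents
mu_true, K1_true = 0.514, 1.143
pi2 = np.logspace(-9, -7, 12)
piD = K1_true * pi2 ** (-mu_true / (2.0 + mu_true))
mu_fit, K1_fit = mu_from_gravity_fit(pi2, piD)
```

The strength-regime fit is identical in form, with $\pi_3$ in place of $\pi_2$.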
{In order to highlight the difficulty of inferring projectile characteristics from transient crater diameter alone, snapshots of two simulations are shown in Figure~\ref{fig:profiles}. One represents a slow, large projectile whereas the other represents a fast, small projectile impacting the same target material. Both simulations involved negligible target cohesive strength. Their transient diameters differ by $\sim 1\%$.} The diameters are `apparent' (i.e. measured at the level of the pre-impact surface). {There are slight differences in their (transient) profiles and depths; however, we do not explore these aspects in detail, since they will largely change in the subsequent modification stage.} Figure~\ref{fig:profiles} also shows contours of peak shock pressure, which may be used to infer melt volume. This point is investigated in \S\ref{sec:melt}.
\renewcommand{\arraystretch}{0.70}
\setlength{\tabcolsep}{14.0pt}
\begin{table*}
\centering
\footnotesize
\begin{tabular}{l l l l l l | r r}
\hline
& & & & & & & \\
$L$ (m) & $v_i$ (km s$^{-1}$) & $Y_0$ (Pa) & $\rho_t^{\rm ref}$ (g cm$^{-3}$) & $\rho_p$ (g cm$^{-3}$) & $\Phi$ ($\%$) & $D_{tr}$ (km) & $V_M^*$ (m$^3$ $\times 10^5$) \\
& & & & & & & \\
\hline\hline
& & & & & & & \\
40 & 10 & 5 & 2.86 & 0.91 & 0 & 0.55 & 0 \\
40 & 10 & 5 & 2.86 & 0.91 & 12 & 0.46 & 0 \\
40 & 10 & 5 & 2.86 & 0.91 & 20 & 0.45 & 0 \\
40 & 10 & $1 \times 10^{7}$ & 2.86 & 0.91 & 0 & 0.20 & 0 \\
40 & 10 & $1 \times 10^{7}$ & 2.86 & 0.91 & 12 & 0.18 & 0 \\
40 & 10 & $1 \times 10^{7}$ & 2.86 & 0.91 & 20 & 0.01 & 0 \\
40 & 30 & 5 & 2.86 & 0.91 & 0 & 0.91 & 1.95 \\
40 & 30 & 5 & 2.86 & 0.91 & 12 & 0.69 & 1.76 \\
40 & 30 & 5 & 2.86 & 0.91 & 20 & 0.67 & 1.66 \\
40 & 30 & $1 \times 10^{7}$ & 2.86 & 0.91 & 0 & 0.38 & 1.95 \\
40 & 30 & $1 \times 10^{7}$ & 2.86 & 0.91 & 12 & 0.33 & 1.76 \\
40 & 30 & $1 \times 10^{7}$ & 2.86 & 0.91 & 20 & 0.31 & 1.66 \\
40 & 100 & 5 & 2.86 & 0.91 & 0 & 1.41 & 13.61 \\
40 & 100 & 5 & 2.86 & 0.91 & 20 & 1.15 & 11.45 \\
40 & 100 & $1 \times 10^{7}$ & 2.86 & 0.91 & 0 & 0.95 & 13.59 \\
40 & 100 & $1 \times 10^{7}$ & 2.86 & 0.91 & 12 & 0.57 & 12.13 \\
40 & 100 & $1 \times 10^{7}$ & 2.86 & 0.91 & 20 & 0.55 & 11.44 \\
80 & 10 & 5 & 2.86 & 0.91 & 0 & 0.95 & 0 \\
80 & 10 & 5 & 2.86 & 0.91 & 12 & 0.45 & 0 \\
80 & 10 & 5 & 2.86 & 0.91 & 20 & 0.79 & 0 \\
80 & 10 & $1 \times 10^{7}$ & 2.86 & 0.91 & 0 & 0.39 & 0 \\
80 & 10 & $1 \times 10^{7}$ & 2.86 & 0.91 & 12 & 0.37 & 0 \\
80 & 10 & $1 \times 10^{7}$ & 2.86 & 0.91 & 20 & 0.36 & 0 \\
80 & 30 & 5 & 2.86 & 0.91 & 0 & 1.59 & 15.55 \\
80 & 30 & 5 & 2.86 & 0.91 & 12 & 1.27 & 14.08 \\
80 & 30 & 5 & 2.86 & 0.91 & 20 & 1.23 & 13.29 \\
80 & 30 & $1 \times 10^{7}$ & 2.86 & 0.91 & 0 & 0.75 & 15.63 \\
80 & 30 & $1 \times 10^{7}$ & 2.86 & 0.91 & 12 & 0.65 & 14.07 \\
80 & 30 & $1 \times 10^{7}$ & 2.86 & 0.91 & 20 & 0.62 & 13.31 \\
80 & 100 & 5 & 2.86 & 0.91 & 0 & 2.50 & 108.87 \\
80 & 100 & 5 & 2.86 & 0.91 & 12 & 2.07 & 96.84 \\
80 & 100 & 5 & 2.86 & 0.91 & 20 & 2.07 & 91.08 \\
80 & 100 & $1 \times 10^{7}$ & 2.86 & 0.91 & 0 & 1.59 & 108.78 \\
80 & 100 & $1 \times 10^{7}$ & 2.86 & 0.91 & 12 & 1.15 & 96.78 \\
80 & 100 & $1 \times 10^{7}$ & 2.86 & 0.91 & 20 & 1.10 & 91.35 \\
160 & 10 & 5 & 2.86 & 0.91 & 0 & 1.62 & 0 \\
160 & 10 & 5 & 2.86 & 0.91 & 12 & 1.45 & 0 \\
160 & 10 & 5 & 2.86 & 0.91 & 20 & 1.43 & 0 \\
160 & 10 & $1 \times 10^{7}$ & 2.86 & 0.91 & 0 & 0.79 & 0 \\
160 & 10 & $1 \times 10^{7}$ & 2.86 & 0.91 & 12 & 0.75 & 0 \\
160 & 10 & $1 \times 10^{7}$ & 2.86 & 0.91 & 20 & 0.72 & 0 \\
160 & 30 & 5 & 2.86 & 0.91 & 0 & 2.69 & 124.35 \\
160 & 30 & 5 & 2.86 & 0.91 & 12 & 2.24 & 112.85 \\
160 & 30 & 5 & 2.86 & 0.91 & 20 & 2.20 & 106.49 \\
160 & 30 & $1 \times 10^{7}$ & 2.86 & 0.91 & 0 & 1.54 & 124.73 \\
160 & 30 & $1 \times 10^{7}$ & 2.86 & 0.91 & 12 & 1.30 & 112.44 \\
160 & 30 & $1 \times 10^{7}$ & 2.86 & 0.91 & 20 & 1.25 & 106.43 \\
160 & 100 & 5 & 2.86 & 0.91 & 0 & 4.45 & 869.22 \\
160 & 100 & 5 & 2.86 & 0.91 & 12 & 3.81 & 773.89 \\
160 & 100 & 5 & 2.86 & 0.91 & 20 & 3.81 & 729.39 \\
160 & 100 & $1 \times 10^{7}$ & 2.86 & 0.91 & 0 & 3.49 & 870.34 \\
160 & 100 & $1 \times 10^{7}$ & 2.86 & 0.91 & 12 & 3.01 & 772.02 \\
160 & 100 & $1 \times 10^{7}$ & 2.86 & 0.91 & 20 & 2.20 & 732.64 \\
& & & & & & & \\
\hline
\end{tabular}
\caption{{Summary of hydrodynamic simulations. {Parameters left of the divider denote, from left to right, projectile diameter, impact speed, target cohesive strength, target reference density (i.e. notwithstanding porosity), projectile density, and porosity. Measured quantities right of the divider are transient crater diameter and melt volume. $^*$Reported melt volume is higher than simulation output owing to a correction ($23-28\%$ increase) that accounts for spatial resolution (Appendix~\ref{sec:appmelt}).}}}
\label{tab:simres}
\end{table*}
\section{Impact Melt Volume} \label{sec:melt}
As discussed above, there are well-known degeneracies between projectile mass, velocity, and impact angle in forming a crater. However, combinations of scaling relationships offer an opportunity to isolate variables of interest. Melt production {is of particular interest because it generally does not scale according to the point-source limit \citep{Pierazzo1997}}. {As a relatively recent example, \citet{Silber2018} simulated impacts of dunite projectiles into the Moon with $v_i$ ranging from $6$ \kms\ to $20$ \kms. They found a two order-of-magnitude difference in melt volume in craters with equal diameter, which shows the potential of using crater observables to deduce impact velocity.} In our investigation of scaling relations, we restrict analysis to vertical impacts and neglect dependence on impact angle.
\subsection{Numerical Simulations of Melt Production}
Melt volume in numerical simulations may be estimated by recording the peak shock pressure experienced by Lagrangian tracer particles \citep[e.g.][]{Wunnemann2008}. Plastic deformation from the shock wave irreversibly heats the target. If the target is shocked to a sufficient pressure, then it lies above the melt temperature following isentropic release from the rarefaction wave. A critical shock pressure {for complete melting} $P_c = 106$ GPa is adopted for basalt \citep{Quintana2015}.
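The tracer-based bookkeeping amounts to thresholding on peak shock pressure. A minimal sketch (the array inputs are hypothetical; actual hydrocode tracer output formats differ):

```python
import numpy as np

P_CRIT = 106e9  # critical shock pressure for complete basalt melting (Quintana et al. 2015)

def melt_volume(peak_pressure, tracer_volume, p_crit=P_CRIT):
    """Sum the pre-impact volumes of Lagrangian tracers shocked to at least p_crit.

    peak_pressure: peak shock pressure recorded by each tracer (Pa)
    tracer_volume: target volume each tracer represents (m^3)
    """
    p = np.asarray(peak_pressure, float)
    v = np.asarray(tracer_volume, float)
    return float(v[p >= p_crit].sum())

# three tracers, two shocked above the basalt melting threshold
example = melt_volume([50e9, 120e9, 200e9], [1.0, 2.0, 3.0])
```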
The lower panels of Figure~\ref{fig:profiles} show the peak shock pressures experienced in {two representative examples of} our hydro simulations as a function of initial location in the target. The faster impact generates significantly higher peak pressures overall. {In our presentation of results,} we combine melt and vapor into a single `melt' volume {wherever peak shock pressures exceed $P_c$}. {Per Appendix~\ref{sec:appmelt}, all melt volumes were scaled by a correction factor to account for the simulation resolution of 20 CPPR. Melt volumes from all simulations are listed in the last column of Table~\ref{tab:simres}. Some immediately recognizable trends include: melt volume spans approximately three orders of magnitude, where the greatest melt volumes arise from the largest, fastest projectiles; only 30 \kms\ and 100 \kms\ impacts generated non-trivial melt volumes; holding other variables constant, target cohesion affects melt volume at a $\lesssim 10\%$ level in our simulations; and zero porosity yields $\sim20\%$ greater melt volume than the most porous materials explored.}
{Can enhanced melt volumes assist in identifying the highest-speed impacts? The presence of significant basaltic melt can immediately rule out $\leq 10$ \kms\ impacts. However, at a constant $D_{tr}$, melt volume differences between $30$ \kms\ impacts and $100$ \kms\ impacts are more subtle. For example, $100$ \kms\ impacts of $40$ m projectiles produce similar $D_{tr}$ and melt volume as $30$ \kms\ impacts of $80$ m projectiles. Figure~\ref{fig:profiles} depicts this comparison for two example simulations. Note that the larger, slower projectile does yield a larger transient crater diameter and less melt; however, if actual lunar craters exhibited these properties, the differences between these two cases are probably too small to differentiate. Therefore, melt volume may be an important metric for filtering out low-speed asteroid impacts, but is less useful at the high-speed tail of the impact speed distribution, for these specific combinations of projectile and target materials. We proceed to place the simulation results in the context of established scaling relations}.
\subsection{Scaling Relations of Crater Dimensions and Melt Volume}
\citet{Pierazzo1997} performed hydrocode simulations of impacts with various materials, and fit a {power law} of the form
\begin{equation} \label{eqn:meltmu}
\log\Big(\frac{V_M}{V_{p}}\Big) = a + \frac{3}{2}\mu'\log\Big(\frac{v_i^2}{E_M}\Big),
\end{equation}
a relation originally considered by \citet{Bjorkman1987}. {In the above, $a=\log k$, where $k$ is a constant of proportionality that arises because the equation is based on dimensional analysis}, $V_p$ denotes the projectile volume, {$V_M$ denotes melt volume}, and $\mu'$ is a scaling constant. $E_M$ is the {specific} energy of melting. Values of $E_M$ for {several} materials of interest are listed by \citet{Bjorkman1987} and \citet{Pierazzo1997}, {as well as \citet{Quintana2015} for basalt}.
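As a numeric sketch, Equation~\ref{eqn:meltmu} can be evaluated directly. This assumes a spherical projectile ($V_p = \pi L^3/6$), the basalt melt energy $E_M = 8.7\times10^6$ J kg$^{-1}$, and this study's fitted values $a = -0.890$ and $\mu' = 0.535$ reported later in this section:

```python
import math

def melt_volume_from_scaling(L, v_i, a=-0.890, mu_p=0.535, E_M=8.7e6):
    """Evaluate log10(V_M/V_p) = a + 1.5*mu' * log10(v_i**2 / E_M).

    L is projectile diameter (m), v_i impact speed (m/s). Defaults are this
    study's fitted a, mu' and basalt E_M; the relation only applies for melt
    numbers v_i**2/E_M >~ 30 (i.e. v_i >~ 16 km/s here).
    """
    V_p = math.pi / 6.0 * L**3  # spherical projectile assumed
    return V_p * 10.0 ** (a + 1.5 * mu_p * math.log10(v_i**2 / E_M))

# 40 m projectile at 30 km/s: ~1.8e5 m^3, vs 1.95e5 m^3 in the simulation table
V_example = melt_volume_from_scaling(40.0, 30e3)
```

Because it is a fit, agreement with any individual simulation is approximate (here within $\sim 10\%$).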
{In general $\mu \neq \mu'$ because transient crater diameter scales according to the point-source limit, whereas melt volume does not. Indeed,}
{\citet{Okeefe1977} and \citet{Pierazzo1997} suggested $\mu'$ is consistent with $2/3$ (energy scaling). More recent works \citep{Barr2011, Quintana2015} reaffirm energy scaling for melt numbers $v_i^2/E_M \gtrsim 30$.} {Meanwhile, $\mu< 2/3$ for many materials of interest \citep{Schmidt1987}.} {In an ideal situation and holding all other variables constant,} combined measurements of crater diameter and melt volume {can in theory} break the degeneracy between projectile mass and velocity. {This premise is elaborated upon in Appendix~\ref{sec:appmeltvol} where we derive equations for melt volume as a function of impact velocity and transient crater diameter, and demonstrate
}
{
\begin{equation} \label{eqn:meltprop}
V_M \propto D_{\rm tr}^x v_i^{3(\mu'-\mu)},
\end{equation}
for sufficiently fast impacts. The constant of proportionality depends on the materials involved. In the strength-dominated regime, $x=3$, and in the gravity-dominated regime, $x=(6+3\mu)/2$. This relationship is independent of $m$ and $L$, so one may in principle solve for $v_i$ from two crater measurements. {In practice, impact angle, target lithology, and the variable composition of ISOs and other projectiles add degeneracies which would significantly complicate efforts to find an ISO crater. Additionally, long-term modification processes may alter crater morphology and make inferences of $D_{tr}$ less accurate. However, our exploration is designed to gauge the baseline feasibility of crater identification using these two observables, which may serve as a starting point for more sophisticated models that employ other sources of data (e.g. those discussed in \S\ref{sec:disc}).}
}
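A minimal illustration of Equation~\ref{eqn:meltprop} at fixed $D_{tr}$, using this study's $\Phi = 20\%$ gravity-regime exponents ($\mu = 0.514$, $\mu' = 0.535$):

```python
def velocity_ratio_from_melt(melt_ratio, mu=0.514, mu_p=0.535):
    """Invert V_M ~ v_i**(3*(mu'-mu)) at fixed transient crater diameter."""
    return melt_ratio ** (1.0 / (3.0 * (mu_p - mu)))

# melt-volume contrast between 100 and 30 km/s impacts at fixed D_tr:
# only ~8% more melt, because mu' - mu is small for this material
melt_ratio = (100.0 / 30.0) ** (3.0 * (0.535 - 0.514))
```

The small contrast for these exponents previews the central difficulty: when $\mu' \simeq \mu$, melt volume at fixed diameter carries little velocity information.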
{In what follows, we quantify melt volume theoretically and draw comparison to our hydro simulations.} {The analysis requires determining the constant of proportionality in Equation~\ref{eqn:meltprop}, which depends non-trivially on material properties including coefficient of friction, porosity, and cohesive strength, in addition to impact angle \citep{Schmidt1987, Elbeshausen2009, Prieur2017}. We take Equations~\ref{eqn:fullscale} $\&$ \ref{eqn:fullscale2} to analytically describe melt production {in the gravity-dominated and strength-dominated regimes, respectively, and rearrange to obtain} a function for impact velocity.} We adopt a melt energy $E_M = 8.7\times10^6$ J kg$^{-1}$ for the basalt target \citep{Quintana2015} with density $\rho_t = 2.86$ g cm$^{-3}$ (modified accordingly for non-zero porosity); a water ice projectile is assumed with $\rho_p = 0.91$ g cm$^{-3}$. {In all cases we assume $\nu = 0.4$ and $g = 1.62$ m s$^{-2}$.}
{The empirical parameter $K_1$ was measured for each target material in \S\ref{sec:crater}, and is typically of order unity \citep{Prieur2017}. Finally, we find that $a = -0.890$ and $\mu' = 0.535$ reasonably describe all melt volume outcomes from our simulations (see \S\ref{sec:disc} for details). In this manner, we may investigate whether the simulation results agree with theoretical scaling relations for melt volume. Further, we may use the scaling relations to extend our analysis to a broader range of materials than those simulated, and investigate conditions most amenable to crater identification.
}
\setlength{\tabcolsep}{11.0pt}
\renewcommand{\arraystretch}{1.0}
\begin{table*}
\centering
\begin{tabular}{l c c l l r}
\hline
Case & $a$ & $\mu'$ & $K_1$ & $\mu$ & Case Description \\
\hline\hline
S1 & $-0.890$ & $0.535$ & 1.366 & 0.533 & Ice projectile, basalt target, $Y = 5$ Pa, $\Phi=0\%$ (this study) \\
S2 & $-0.890$ & $0.535$ & 1.228 & 0.510 & Ice projectile, basalt target, $Y = 5$ Pa, $\Phi=12\%$ (this study) \\
S3 & $-0.890$ & $0.535$ & 1.143 & 0.514 & Ice projectile, basalt target, $Y = 5$ Pa, $\Phi=20\%$ (this study) \\
S4 & $-0.890$ & $0.535$ & 1.334 & 0.554 & Ice projectile, basalt target, $Y = 10^7$ Pa, $\Phi=0\%$ (this study) \\
S5 & $-0.890$ & $0.535$ & 1.513 & 0.493 & Ice projectile, basalt target, $Y = 10^7$ Pa, $\Phi=12\%$ (this study) \\
S6 & $-0.890$ & $0.535$ & 1.479 & 0.486 & Ice projectile, basalt target, $Y = 10^7$ Pa, $\Phi=20\%$ (this study) \\
\hline
E1 & $-0.482^a$ & $0.624^a$ & 1.6 & 0.564 & Wet sand (proxy for competent rock) \citep{Schmidt1987} \\
E2 & $-0.482^a$ & $0.624^a$ & 1.4 & 0.381 & Dry quartz sand (proxy for porous rock) \citep{Schmidt1987} \\
E3 & $-0.482^a$ & $0.624^a$ & 1.615 & 0.558 & Basalt, wet sand analog, $f=0.1$, $\Phi=0\%$ \citep{Prieur2017} \\
E4 & $-0.482^a$ & $0.624^a$ & 1.585 & 0.516 & Basalt, porous sand analog $f=0.1$, $\Phi=12\%$ \citep{Prieur2017} \\
E5 & $-0.482^a$ & $0.624^a$ & 1.984 & 0.394 & Basalt, porous sand analog $f=0.6$, $\Phi=12\%$ \citep{Prieur2017} \\
E6 & $-0.482^a$ & $0.624^a$ & 1.473 & 0.424 & Basalt, porous sand analog $f=0.6$, $\Phi=40\%$ \citep{Prieur2017} \\
\hline
\end{tabular}
\caption{{Parameter combinations {for analytically linking melt volume, transient crater diameter, and impact velocity}. Columns correspond to case number ({S denotes simulated, E denotes extended}), melt volume scaling {constant and} exponent ($a$ and $\mu'$), {transient} crater diameter scaling coefficient ($K_1$), crater diameter scaling exponent ($\mu$), and a brief description of the case study. The combinations of parameters {are derived from our simulations in the top portion, and sample} different regimes reported by \citet{Schmidt1987} and \citet{Prieur2017} {in the bottom portion}. For specific scenarios from \citet{Prieur2017}, $f$ denotes coefficient of friction, and $\Phi$ denotes porosity.}
{$^a$For case studies not simulated in this study, we adopt identical $a$ and $\mu'$ from \citet{Barr2011}, which was found to hold for a variety of materials.}}
\label{tab:meltcombos}
\end{table*}
The relationship between $D_{tr}$, $V_M$, and $v_i$ is plotted in Figure~\ref{fig:melt} {for targets with $\Phi=20\%$ in the gravity-dominated regime.} {The 10 \kms\ impacts are excluded, since the melt number is less than 30; the cutoff is at approximately 16 \kms. We plot contour lines for 16 \kms, as well as 30 \kms\ and 100 \kms.} The {difference between the} diameter scaling exponent {$\mu$ and the melt volume scaling exponent $\mu'$} determines the {velocity spread across $D_{tr}$ and $V_M$}. Stronger velocity dependence manifests as a shallower gradient in the figure, and larger separation between constant-velocity lines. {Results in Figure~\ref{fig:melt} are representative of the other porosities in that $\mu \simeq \mu'$, so the distance between velocity contours is small. This trend indicates that melt volume may not be a significantly differentiating metric for inferring projectile parameters, at least for the materials simulated here.}
{Nevertheless, other impact configurations may be more conducive for breaking degeneracy with combined $D_{tr}$ and $V_M$ measurements. The parametrization $a = -0.482$ and $\mu' = 0.624$ \citep{Barr2011} is suitable for impacts of identical target and projectile materials (spanning aluminum, iron, ice, dunite, and granite).}
We calculated melt volume for several parameter combinations that span the various regimes covered in {prior studies, as follows. The specific combinations are listed in Table~\ref{tab:meltcombos}}. Parameters from \citet{Schmidt1987} are empirical, where wet sand and dry sand were used as proxies for competent and porous rock, respectively. iSALE-2D simulations by \citet{Prieur2017} assumed a basalt target with variable coefficient of friction ($f$) and porosity ($\Phi$). \citet{Elbeshausen2009} simulated oblique impacts into granite with iSALE-3D, varying $f$ and $\theta$ with fixed $\Phi=0\%$. Since their coefficients are reported in terms of volume scaling ($\pi_V$), we do not consider specific instances of their simulations. They find ${\mu} \simeq {0.469}$ for $f = 0.7$ and ${\mu} = {0.548}$ for $f = 0.0$, which are comparable to some scenarios from \citet{Schmidt1987} and \citet{Prieur2017}.
{{Impacts} involving dry sand or porous basalt have the lowest values of {$\mu$}, and the melt volume for $v_i = 16-100$ \kms\ spans approximately {0.5 dex} at fixed transient crater diameter (Figure~\ref{fig:meltExt}).} {In contrast to the porous scenarios, wet sand results in the least spread, making it the most challenging case for identifying ISO impact craters. We emphasize that it is critical that melt {approximately} scales {with energy for these materials}; otherwise, velocity dependence effectively vanishes \citep{Abramov2012}. The results are encouraging for lunar melts that involve the unconsolidated regolith \citep{McKay1991} and lunar crust of porosity $\Phi \sim 10-20\%$ \citep{Kiefer2012}. In practice, $\mu$ and $K_1$ would both require tight constraints, and hence depend on whether the crater in question formed in the basaltic mare or anorthosite highlands. Additional considerations include impact angle and projectile density.}
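The per-case velocity spread follows directly from the exponent difference: at fixed $D_{tr}$, the logarithmic spread in $V_M$ across the speed range is $3(\mu'-\mu)\log_{10}(v_{\rm hi}/v_{\rm lo})$. A sketch using exponents from Table~\ref{tab:meltcombos}:

```python
import math

def dex_spread(mu, mu_p, v_lo=16e3, v_hi=100e3):
    """log10 spread in V_M at fixed D_tr across the impact-speed range,
    from V_M ~ v_i**(3*(mu'-mu))."""
    return 3.0 * (mu_p - mu) * math.log10(v_hi / v_lo)

# E2 (dry quartz sand) shows the widest spread; E1 (wet sand) far less;
# this study's S-cases, with mu' ~ mu, have almost none
spread_E2 = dex_spread(0.381, 0.624)  # ~0.58 dex
spread_E1 = dex_spread(0.564, 0.624)  # ~0.14 dex
spread_S3 = dex_spread(0.514, 0.535)  # ~0.05 dex
```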
\section{Discussion and Conclusions} \label{sec:disc}
{Intensive study of the lunar cratering record, including prospects for identifying ISO craters, will soon be forthcoming. {2020 marked the} first lunar sample return mission in nearly 45 years by the Chang'e 5 Lander \citep{Zeng2017}; this is a precursor to a modern-day surge in lunar exploration, as well as preliminary steps to establishing a permanent presence on Mars. We discuss how upcoming remote observations, return missions, and {\it in situ} analyses might assist in the identification of ISO impact craters.}
\subsection{Measuring Melt Volumes in Search of ISO Craters}
{In the previous section we showed that a high-speed ISO impact {can} yield a significantly enhanced melt volume {for certain projectile/target material combinations}. While other factors need to be accounted for, including impact angle and target material properties, melt volume can help break the degeneracy between impact velocity and projectile mass: specifically, by searching for craters that fall in the high melt volume, low diameter regime. {\it In situ} analyses \citep[e.g.][]{Grieve1982a} combine the percentage of melt in localized regions with crater geometry to obtain an overall estimate of melt volume. However, there are significant sources of uncertainty \citep{French1998}, such as strong dependence on target materials (e.g. volatile content \citep[][]{Kieffer1980}), and modification processes \citep{Melosh1989}. Furthermore, detailed mapping of large melt volumes may be prohibitively time- and resource-intensive for surveying candidate ISO craters, given that only of order one high-speed ISO impact crater is expected between the Moon and Mars (\S\ref{sec:vel}).}
{Remote sensing melt volume may be an appealing alternative to {\it in situ} analyses. Currently,} some of the best {remote-based} estimates rely on LROC images \citep{Plescia2014}, where melt pools are identifiable by low-albedo, flat crater floors. {Crater diameters may also be readily extracted from LROC images. The correspondence between final and transient crater diameters is non-trivial; however, a simple heuristic would be to search for craters with particularly high ratios of melt volume to diameter as potentially of ISO origin.} {\citet{Plescia2014} estimate the melt volume by} fitting the crater wall profile, extrapolating the profile to depths below the melt pool, and taking the difference between the observed crater volume and that of the entire original crater. {They acknowledge the estimates are order-of-magnitude, since} additional melt may have been ejected from the crater, displaced onto the crater wall, or buried within the debris layer on the crater floor. {\citet{Silber2018} analyzed theoretical (from iSALE-2D) and observed \citep[][]{Plescia2014} melt volumes of lunar craters, with a similar goal as ours of breaking degeneracies between projectile characteristics. They were able to match the observed spread in melt volume ($\sim 2$ orders of magnitude) for a given crater diameter. Individual craters/projectiles were not investigated, and velocities only up to $20$ \kms\ were considered. {These results indicate that imaging may be a viable method of finding enhanced melt volumes --- however, given that remote sensing uncertainties are of the same order as the largest melt volume spreads for a fixed $D_{tr}$ (see \S\ref{sec:melt})}, higher precision followup measurements may be necessary; possibly {\it in situ}.}
{The precision in melt volume required to identify an ISO crater depends on target materials, apparent in the variable spread in Figure~\ref{fig:meltExt} panels. Lunar seismology (e.g. of small impacts) may soon be a feasible approach for estimating melt volumes without requiring assumptions of the subsurface crater geometry.} Arrival time anomalies of $p$ and $s$ waves are frequently used to map geological structures such as mantle plumes \citep{Nataf2000}, and are also employed for identifying and characterizing natural oil reserves. For our purposes, we note simple craters tend to have a `breccia lens' at their floors, which is a mixture of inclusion-poor breccia that formed immediately below the impact plus mixed breccia that formed due to the shear of melt sliding up the crater walls (this material collapsed during the modification stage) \citep{Grieve1987}. Appropriately placed sensors within and near {an existing} crater may allow seismic imaging of the breccia lens if the recrystallized melt has sufficiently different material properties from surrounding rock or there is a discontinuity in wave propagation between the crater wall and the breccia lens. {Seismic imaging of artificial shots/blasts has been applied extensively to the Chicxulub crater \citep{Gulick2013}, for example in identification of the top of its melt sheet \citep{Barton2010}. It was also used to measure melt volume in the Sudbury Basin \citep{Wu1995}.} {Seismic imaging could in principle extend to Moon for measuring melt volume; although it is still subject to uncertainties surrounding ejected or displaced melt during the crater's formation.}
\subsection{Petrological Considerations}
{In addition to producing more melt, faster impacts induce higher peak shock pressures. We discuss whether high-pressure petrology provides an alternative or complementary route to identifying ISO impact craters.
}
{Target material in our 100 \kms\ simulations} experienced higher peak pressures ($\sim 3$ TPa) compared to target material in the {30 \kms\ simulations (Figure~\ref{fig:profiles}). In both cases, the pressures are sufficiently high to produce {coesite}, stishovite and maskelynite \citep{Stoeffler1972, Melosh2007}, so high-pressure phases and polymorphs are probably insufficient criteria for identifying an ISO crater. However, the abundance or composition of vapor condensates might point to an ISO projectile}.
To date, only a handful of lunar vapor condensates have ever been found \citep{Keller1992, Warren2008}. {High-alumina, silica-poor material \citep[HASP,][]{Naney1976}, interpreted as evaporation residue, is complemented by {volatile-rich alumina-poor (VRAP) glasses and} gas-associated spheroidal precipitates (GASP). {These spherules are} attributed to liquid condensation droplets {(VRAP is enriched in volatiles such as K$_2$O and Na$_2$O, whereas GASP is not; VRAP spherules are also about $200-400$ nm in diameter, whereas GASP spherules span roughly $2-10\,\mu$m).}} {VRAP/GASP are} identified by a distinct depletion of refractory species Al$_2$O$_3$ and CaO. The highest-speed impacts (e.g. ISOs) may generate more {vapor} condensates, which may be detected in {surrounding} rock samples. Also, the {exceptionally} high pressures generated in ISO impacts may alter the composition of residues and condensates; for example, pressures may be sufficient to shock vaporize Al$_2$O$_3$, CaO, or TiO$_{2}$, depleting them from HASP and enhancing them in {condensates}. Predicting the constituents of vapor condensates associated with ISO impacts will require mapping the high-pressure phase space for low-volatility target materials.
{Microscopic spherules were also produced through ancient lunar volcanism \citep{Reid1973}, but these spherules can be robustly distinguished from those of impact origin. \citet{Warren2008} used a combination of Al content, as well as trends of TiO$_2$ and MgO to establish an impact origin. As another example, \citet{Levine2005} ruled out a volcanic origin for $>90\%$ of 81 spherules in an Apollo 12 soil sample, based on low Mg/Al weight ratios. They also found a large fraction had $^{40}$Ar/$^{39}$Ar isochron ages younger than 500 Myr, which is inconsistent with known periods of lunar volcanism.}
Impact speed also influences the dimensions of vapor condensates. \citet{Johnson2012a} present a model for the {condensation} of spherules {from impact-generated rock vapor}. They find that the highest impact speeds yield smaller spherule diameters owing to higher speed expansion of the vapor plume {for impact speeds greater than $\sim 28$ \kms}. The vapor plume model of \citet{Johnson2012a} invokes a simplified plume geometry, and assumes the projectile and target are both comprised of SiO$_2$. {The same authors employed this model when they estimated projectile velocities and diameters for major impact events in Earth's history \citep{Johnson2012b}. In theory, particularly small} spherules may be linked to impact speeds consistent with ISOs. {\citet{Johnson2012a} explored} velocities up to 50 \kms, but extrapolation suggests 100 \kms\ impacts may produce spherules of diameter $\lesssim 10^{-7}$ m. Degeneracy with projectile size persists, but might be reconciled with, for example, crater scaling relationships. {In regards to identifying ISO craters on the Moon, a significant concern is that vapor condensate spherules may be scattered extremely far from the impact site. For example, microkrystite condensates from the K-T impact form a world-wide spherule layer \citep{Smit1992}. Isolating the crater of origin would likely require widespread mapping and classification of spherules on the Moon, which is beyond current capabilities.}
{Could melts or condensates be used to infer an ISO's composition? It is well understood that these impact products comprise a mixture of projectile and target material. For example, \citet{Smit1992} estimated that condensate spherules from the K-T impact contain a $\sim 10\%$ bolide component from their Ir content. However, the task might be challenging for lunar vapor condensates because the spherules are microscopic and of extremely low abundance in the Moon's crust \citep[$<0.001\%$ by volume,][]{Warren2008b}. \citet{Keller1992} and \citet{Warren2008} do not make inferences regarding the composition of the projectile(s) that generated the VRAP/GASP spherules, and to our knowledge, there has not yet been any study that links these spherules or the HASP residue to a projectile's composition.}
{Encouragingly, a number of projectiles involved in terrestrial impacts have been geochemically characterized, primarily via rocks within and near the crater. \citet{Tagle2006} review major findings and methods. Elemental ratios of PGEs (Os, Ir, Ru, Pt, Rh, Pd), plus Ni and Cr, are particularly effective if multiple impactite samples are available, since then it is not necessary to correct for elemental abundances in the target. Isotope ratios $^{53}$Cr/$^{52}$Cr and $^{187}$Os/$^{188}$Os are also commonly employed. This precedent extends to lunar impacts, as \citet{Tagle2005} used PGE ratios in Apollo 17 samples to determine that the Serenitatis Basin projectile was an LL-ordinary chondrite. Since these methods are based on refractory species, ISOs may be difficult to characterize. 2I/Borisov contains a significant volatile component \citep{Bodewits2020} as most comets do, and volatiles would explain 'Oumuamua's anomalous acceleration \citep{Seligman2019}. If ISOs have a refractory component, then elemental and isotopic ratios could separate them from other projectile classes and offer important insights into their composition.
}
\subsection{{Influence of Impact Angle}}
{Crater dimensions are degenerate with impact angle, a parameter unexplored in this study. Indeed, the most probable impact angle of $45{^\circ}$ would yield a considerably different crater than a head-on collision, all other factors being equal. \citet{Davison2011} quantified how several crater properties depend on impact angle. For example, crater volume is approximately halved for a $45{^\circ}$ impact, but the crater remains symmetrical for impact angles $\theta$ greater than a threshold $\theta_e \sim 10-30{^\circ}$, depending on the target material. They also found crater depth scales with $\sin\theta$, and width with $\sin^{0.46}\theta$. Melt production exhibits strong dependence on impact angle, as shown by \citet{Pierazzo2000} through simulations of Chicxulub-type impacts. In their 20 \kms\ impact speed simulations, the volume of material shocked above 100 GPa at $\theta = 30{^\circ}$ was roughly half that of a head-on collision, and was trivial for $\theta < 15{^\circ}$. While melt volume scales with impact energy \citep{Bjorkman1987}, the scaling breaks down if only the vertical component (i.e. $(v_i\sin{\theta})^2$) is considered in oblique impacts \citep{Pierazzo2000}. Nevertheless, melt volume was found to be proportional to transient crater volume across variations in $\theta$, with oblique impacts producing asymmetric melts.}
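The quoted angle dependencies are mutually consistent. As a rough check, assuming (hypothetically) that crater volume scales as depth times the square of width, the \citet{Davison2011} exponents imply a $45{^\circ}$ impact produces about half the volume of a vertical one:

```python
import math

def oblique_volume_fraction(theta_deg):
    """Crater volume relative to a vertical impact, combining
    depth ~ sin(theta) and width ~ sin(theta)**0.46 (Davison et al. 2011)
    under the assumption V ~ depth * width**2."""
    s = math.sin(math.radians(theta_deg))
    return s * s ** (2 * 0.46)

f45 = oblique_volume_fraction(45.0)  # ~0.51: volume roughly halved, as quoted
```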
{How crater properties change with joint variations in impact angle and impact speed, especially in the $v_i > 100$ \kms\ regime, would be interesting for future investigation, albeit computationally expensive. The studies discussed above indicate that crater and melt asymmetries may prove useful for constraining angle of incidence. They also suggest the maximal pressures and melt volumes produced by real ISO impacts are probably lower than those attained in our simulations, and that real crater dimensions may exhibit different ratios than those of our simulated craters. The reduction in peak shock pressure may also eliminate certain petrological indicators of a high-speed impact, such as vapor condensates.}
\subsection{{Analysis of Scaling Exponents}}
{In \S\ref{sec:crater}, we fit power law relationships to dimensionless parameters (Equations~\ref{eqn:pi1}-\ref{eqn:pi4}) to determine the transient diameter scaling exponent $\mu$. An accurate and precise $\mu$ is needed in order to gauge the efficacy of using melt volume to disentangle projectile properties (\S\ref{sec:melt}). Inspection of the top panel of Figure~\ref{fig:fits} (gravity regime) shows that data points for fixed velocity follow a local slope that deviates slightly from the global fitted slope. This effect is especially pronounced for the two porous scenarios. For example, in the $\Phi=20\%$ simulations, locally fitting a power law to outcomes of simulations with fixed projectile velocity yields $\mu$ values ranging from 0.32 to 0.38 (increasing with decreasing impact velocity). This discrepancy from the global fit $\mu = 0.514$ may arise from an additional velocity dependence which is not incorporated into the Pi-group scaling framework. A similar anomaly was reported by \citet{Prieur2017} when comparing their results to those of \citet{Wunnemann2011}. Discrepancies in $\pi_D$ reached up to $10\%$ between the two sets of simulations, which were conducted at $12.7$ \kms\ and $5$ \kms, respectively. The premise that $\mu$ may depend on impact velocity has been noted before. For example, \citet[][]{Yamamoto2017} find dependence even when impact velocity greatly exceeds the target bulk sound speed. Their interpretation is that the dependency arises because the shock front pressure decays at a rate $q$, which itself depends on impact velocity. This dependency suggests that it may be necessary to run a grid of simulations, densely spanning $L$ and $v_i$ for a fixed target composition, in order to constrain allowed values for $\mu$.}
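As a concrete illustration of the fitting procedure, the sketch below recovers $\beta$ (and hence $\mu = 2\beta/(1-\beta)$, from $\beta = \mu/(2+\mu)$) by a log-log fit of $\pi_D$ against $\pi_2$. The synthetic data are built from the basalt/basalt fit values of the appendix ($K_D = 1.954$, $\beta = 0.164$) and are purely illustrative, not actual simulation output.

```python
import numpy as np

def fit_pi_group(pi2, pi_D):
    """Global log-log power-law fit pi_D = K_D * pi2**(-beta); returns
    (K_D, beta, mu) with mu recovered from beta = mu / (2 + mu)."""
    slope, intercept = np.polyfit(np.log(pi2), np.log(pi_D), 1)
    beta = -slope
    K_D = np.exp(intercept)
    mu = 2.0 * beta / (1.0 - beta)
    return K_D, beta, mu

# Synthetic, noise-free gravity-regime data (illustrative values only)
pi2 = np.logspace(-9, -6, 12)
pi_D = 1.954 * pi2 ** -0.164
K_D, beta, mu = fit_pi_group(pi2, pi_D)
```

Fitting subsets of points at fixed velocity and comparing the local slopes against this global slope is then a direct test for the additional velocity dependence discussed above.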
{We also comment on the melt volume scaling parameters $a$ and $\mu'$ in Equation~\ref{eqn:meltmu}. \citet{Barr2011} performed simulations of impacts involving identical projectile and target materials, and fit all outcomes simultaneously to obtain $a = -0.482$ and $\mu' = 0.624$. Their simulations of an ice projectile striking dunite, which is the most similar scenario to our simulations, yielded $a = -1.78$ and $\mu' = 0.819$. This value for $\mu'$ is unexpectedly high since it exceeds the theoretical upper-bound of energy scaling. We attempted to independently determine these two parameters from melt volumes in our simulations. The 10 \kms\ impact velocity scenarios were excluded, since they produce no melt at our resolutions (Appendix~\ref{sec:appmelt}). While our fit involves only two velocities, all of the materials considered produce similar melt volumes for a given $L$ and $v_i$. Fitting all simulation outcomes simultaneously yielded $a = -0.890$ and $\mu' = 0.535$. The scaling exponent is considerably lower than that found by \citet{Barr2011}. It is closer to $\mu' = 0.432$ found by \citet{Pierazzo1997} for ice/ice impacts (although \citet{Barr2011} suggested this value was influenced by the choice of target temperature by \citet{Pierazzo1997}). The discrepancy between our result and that of \citet{Barr2011} could arise from our choice of basalt as a target material, or differences in the adopted EoS (Tillotson vs. ANEOS). We also evaluated $\mu'$ using the 80 CPPR simulations from Appendix~\ref{sec:appmelt} to make the fit robust against our melt volume correction scheme; however, we obtained a comparable $\mu' = 0.564$. To investigate $\mu'$ further, we ran additional $20$ \kms\ and $50$ \kms\ impact simulations for one configuration involving a cohesionless, porous target. Fitting all velocities $(20, 30, 50, 100$ \kms) yielded $\mu' = 0.584$, while fitting just the lowest two velocities yielded $\mu' = 0.623$. 
This finding suggests a possible breakdown of Equation~\ref{eqn:meltmu} at very high melt numbers for ice/basalt impacts. Still, $\mu' = 0.623$ remains significantly lower than the $\mu' = 0.819$ from \citet{Barr2011}. In \S\ref{sec:melt} we opt to use $a = -0.890$ and $\mu' = 0.535$. However, the uncertainty on $\mu'$ indicates that a dedicated investigation of melt volume scaling would be useful; specifically, ice projectiles under different EoS specifications, impacting various target materials at a range of velocities.}
\vfill\eject
\subsection{Conclusion}
In searching for craters produced by ISOs impacting terrestrial bodies, it is important to have a set of criteria that differentiate these craters from those produced by asteroids and comets. By analyzing local stellar kinematics, we show ISOs {that encounter} the Solar System at speeds of $\geq$ 100 \kms\ {impact the Moon and Mars at rates of $\sim 0.09$ per Gyr and $\sim 0.29$ per Gyr, respectively. Importantly, 100 \kms\ exceeds} the impact speeds of most small Solar System objects. Therefore, crater properties that depend strongly on impact speed may be especially pertinent. {Transient} crater {dimensions are} expected to obey late-stage equivalence. We compare two hydrocode simulations to show that it is difficult to distinguish simple craters formed by high- and low-speed impacts. Melt volume, on the other hand, {does not} follow the point-source limit \citep{Pierazzo1997}, and offers a possible avenue for identifying high-speed craters. This approach requires overcoming degeneracies with impact angle and target composition, and obtaining precise estimates of the melt volume. Alternatively, vapor condensate composition and spherule dimensions could reveal extremely fast impacts. Facilitated by upcoming crewed and robotic Moon missions, identifying ISO craters may soon be feasible through {\it in situ} or sample-return analyses of impact crater samples.
\acknowledgements
We gratefully acknowledge the developers of iSALE-2D, including Gareth Collins, Kai Wünnemann, Dirk Elbeshausen, Tom Davison, Boris Ivanov and Jay Melosh. We acknowledge generous support from the Heising-Simons Foundation through Grant \#2021-2802 to Yale
University.
\bibliographystyle{aasjournal}
\bibliography{main}
\appendix
\section{Diameter Scaling Validation for Basalt/Basalt Impacts} \label{sec:appscale}
{We perform additional simulations of a basalt projectile impacting a nearly cohesionless basalt target with $\Phi=12\%$ and $f=0.6$. These simulations serve as a foil to the ice projectile, and allow us to verify our simulation setup by independently measuring the associated transient diameter scaling relation, which was also measured by \citet{Prieur2017}. Since these craters are in the gravity-dominated regime, Equation~\ref{eqn:piD1grav} dictates the transient crater diameter. Impact speed was held constant at $v_i=12.7$ \kms, while projectile diameter took values of $L=25,100,250,1000$ m. \citet{Prieur2017} define $D_{tr}$ as the crater diameter at the time of maximum crater volume. In our simulations, crater volume as a function of time made discrete jumps as the crater grew in the extension zones. In order to make our measurement more robust to the spatial resolution, we fit a 5$^{\rm th}$-degree polynomial to volume as a function of time near its maximum value. We took the time at which the polynomial reaches its maximum as defining the transient crater. Finally, we linearly interpolated crater diameter between neighboring timestamps to obtain $D_{tr}$. Crater diameter was measured at the level of the pre-impact surface. The polynomial fit only included points within $2.5\%$ of the maximum crater volume, which helped exclude late-time data where the crater volume changes due to modification processes. Figure~\ref{fig:verify} shows a schematic of this process (left panel), and our results (right panel).}
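The $D_{tr}$ measurement procedure above can be sketched as follows. The 5$^{\rm th}$-degree polynomial and the $2.5\%$ volume window follow the text; the function signature and input arrays are illustrative stand-ins for iSALE output, not part of our pipeline.

```python
import numpy as np

def transient_diameter(t, volume, diameter, window=0.025):
    """Fit a 5th-degree polynomial to crater volume near its maximum,
    locate the time of peak volume, and linearly interpolate the crater
    diameter (measured at the pre-impact surface) at that time."""
    mask = volume >= (1.0 - window) * volume.max()
    coeffs = np.polyfit(t[mask], volume[mask], 5)
    # Dense evaluation of the polynomial over the fitted interval
    tt = np.linspace(t[mask].min(), t[mask].max(), 1000)
    t_peak = tt[np.argmax(np.polyval(coeffs, tt))]
    return np.interp(t_peak, t, diameter)
```

Restricting the fit to points near the volume maximum keeps late-time modification-stage data from biasing the polynomial, while the final interpolation makes the estimate robust to discrete jumps as the crater grows through the extension zones.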
{Our best-fit parameters are $K_D = 1.954$ and $\beta=0.164$, which agree well with the scaling from \citet{Prieur2017}, $K_D = 1.984$ and $\beta=0.165$. Values for $\pi_D$ predicted by our scaling relation and by that of \citet{Prieur2017} disagree by $<3\%$ for each of the projectiles we considered; this slight disagreement may be due to different zoning and resolution schemes in the simulation setups (e.g. we use a smaller high-resolution zone than \citet{Prieur2017}, the layer assigned to zero depth may be different, and cell sizes in the extension zone may also be different).
}
{Simulations in the strength-dominated regime (high target cohesion) required a different approach for measuring $D_{tr}$. In these cases crater volume grew during excavation and then plateaued, as opposed to reaching a maximum and subsequently decreasing. Crater diameter followed a similar trend. For these simulations, we select all timestamps in which the crater diameter is within $10\%$ of its diameter at the last simulation timestamp, and subsequently take the median of these diameters as a measurement of $D_{tr}$.}
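For the strength-dominated cases, the plateau-median estimate reduces to a few lines; this is a sketch using the $10\%$ threshold quoted above.

```python
import numpy as np

def transient_diameter_strength(diameter):
    """Median diameter over all timestamps within 10% of the final
    crater diameter, used when the crater plateaus rather than peaks."""
    d_final = diameter[-1]
    plateau = diameter[np.abs(diameter - d_final) <= 0.1 * d_final]
    return float(np.median(plateau))
```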
\section{Melt Volume Dependency on CPPR}
\label{sec:appmelt}
{Simulations in this study were conducted at a resolution of 20 CPPR, which was a compromise between simulation runtime and accuracy. \citet{Barr2011} found that near 20 CPPR, dunite/dunite impacts at 20 \kms\ underestimate melt volume by $\sim 15\%$. Since this study concerns different materials and impact speeds, we performed additional simulations to determine melt volume's dependence on CPPR. We simulated 40 m ice projectiles impacting (cohesionless) basalt targets at 10 \kms, 30 \kms, and 100 \kms\ at five different resolutions. The volume of melt (plus vapor) was determined using the basalt complete melting pressure $P_c = 106$ GPa \citep{Quintana2015}; our results are depicted in Figure~\ref{fig:cpprmelt}. In the 30 \kms\ scenario, melt volume is underestimated by $45\%$, $22\%$, $8.3\%$, and $2.9\%$ at 10, 20, 40, and 60 CPPR, respectively (compared to 80 CPPR). In the 100 \kms\ scenario, the underestimates are $38\%$, $19\%$, $7\%$, and $2.5\%$ at the same resolutions. We found the 10 \kms\ impact simulations do not produce any melt at 80 CPPR or lower resolution. In our main analysis, we therefore multiply melt volume by 1.28 and 1.23 in the 30 \kms\ and 100 \kms\ scenarios, respectively, to account for the resolution dependence.
}
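The correction factors applied in the main analysis follow directly from the quoted underestimates: if a resolution underestimates melt by a fraction $f$, the corrected volume is the measured one multiplied by $1/(1-f)$.

```python
def cppr_correction(underestimate_fraction):
    """Factor restoring the 80-CPPR melt volume from a lower-resolution
    measurement that underestimates it by the given fraction."""
    return 1.0 / (1.0 - underestimate_fraction)

# At 20 CPPR, melt volume is underestimated by 22% (30 km/s)
# and 19% (100 km/s), giving the factors 1.28 and 1.23:
factor_30kms = cppr_correction(0.22)
factor_100kms = cppr_correction(0.19)
```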
\section{Analytic Melt Volume Scaling}
\label{sec:appmeltvol}
{In what follows, we combine Pi-group scaling for a crater's transient diameter (Equation~\ref{eqn:piD1}) with a melt volume scaling relation (Equation~\ref{eqn:meltmu}) valid for $v_i^2/E_M \gtrsim 30$ \citep{Pierazzo1997}. Together, they break the degeneracy between projectile impact velocity and projectile size. While a useful demonstration, this analysis neglects impact angle dependence and detailed target lithology.}
For simple craters formed in granular targets, {Equation~\ref{eqn:piD1}'s} dependence on $\pi_3$ is negligible, and the relation follows:
\begin{equation} \label{eqn:piD1grav}
\pi_D = K_1\pi_2^{-\mu/(2+\mu)}\pi_4^{(2+\mu-6\nu)/(6 + 3\mu)},
\end{equation}
for an empirically determined constant $K_1$. {The exponents $\mu$ and $\nu$ follow from the point-source coupling constant in Equation~\ref{eqn:couple}. They are often determined experimentally, and $\mu$ typically lies between} energy and momentum scaling ($\mu = 2/3$ and $\mu = 1/3$). {Let $\beta\equiv\mu/(2+\mu)$ and $\eta\equiv(2+\mu-6\nu)/(6 + 3\mu)$ to simplify notation.} {Next, consider the} scaling relation used by \citet{cintala1998}, which follows from restating Equation~\ref{eqn:piD1grav} and multiplying each side by the projectile volume to the one-third power:
\begin{equation} \label{eqn:dtrscale}
D_{tr} = \frac{K_1}{2} \Big(\frac{4\pi}{3}\Big)^{(1-\beta)/3} \Big(\frac{\rho_t}{\rho_p}\Big)^{\eta - 1/3} L^{1-\beta}g^{-\beta}{v_i^{2\beta}}.
\end{equation}
After calculating melt volumes from hydrocode simulations, they found a {power law} relationship for melt volume that depends strongly on $D_{tr}$ and weakly on $v_i$.
By assuming the projectile is spherical, one may restate {Equation~\ref{eqn:meltmu}} as
\begin{equation} \label{eqn:vmscale}
V_M = \frac{k\pi}{6}L^3\Big(\frac{v_i^2}{E_M}\Big)^{3\mu'/2}.
\end{equation}
Finally, combining Equations~\ref{eqn:dtrscale} and \ref{eqn:vmscale}, and in the process removing $L$ dependence, we arrive at
\begin{equation} \label{eqn:fullscale}
{
V_M = \frac{k}{8}\Big(\frac{K_1}{2}\Big)^{-3/(1-\beta)}D_{tr}^{3/(1-\beta)}\Big(\frac{\rho_t}{\rho_p}\Big)^{(1-3\eta)/(1-\beta)}
E_M^{-3\mu'/2} v_i^{3\mu'-6\beta/(1-\beta)}g^{3\beta/(1-\beta)}.
}
\end{equation}
{
Similarly, we can derive a relationship between melt volume, transient crater diameter, and impact velocity in the case of strength-dominated craters. We start with}
{
\begin{equation} \label{eqn:piD1str}
\pi_D = K_1K_2^{-\mu/2}\pi_3^{-\mu/2}\pi_4^{(1-3\nu)/3}.
\end{equation}
This equation involves a separate, empirically determined constant ($K_2$) and the same scaling variables as above. To simplify notation, let $\alpha \equiv \mu/2$ and $\xi \equiv (1-3\nu)/3$. Then,
}
{
\begin{equation} \label{eqn:dtrscale2}
D_{tr} = \frac{K_1}{2} \Big(\frac{4\pi}{3}\Big)^{1/3} \Big(\frac{\rho_t}{\rho_p}\Big)^{\xi-1/3}\Big(\frac{\rho_t}{K_2Y}\Big)^{\alpha} L {v_i^{2\alpha}}.
\end{equation}
Combining the above equation with Equation~\ref{eqn:vmscale} yields
}
{
\begin{equation} \label{eqn:fullscale2}
V_M = \frac{k}{8}\Big(\frac{K_1}{2}\Big)^{-3}D_{tr}^{3}\Big(\frac{\rho_t}{\rho_p}\Big)^{-3\xi+1}\Big(\frac{K_2Y}{\rho_t}\Big)^{3\alpha}
E_M^{-3\mu'/2} v_i^{3\mu'-6\alpha}.
\end{equation}
Equations~\ref{eqn:fullscale} and~\ref{eqn:fullscale2} describe the theoretical melt volume in gravity- and strength-dominated craters, respectively, accounting for different projectile and target bulk densities. Again, we emphasize that impact angle and lithology beyond bulk density could influence the actual melt volume. Nevertheless, these equations establish the baseline feasibility of determining a projectile's impact speed from measurements of its crater.}
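As a numerical sketch, the gravity-regime relation (Equation~\ref{eqn:fullscale}) can be evaluated directly. The constants below ($k$, $K_1$, $\mu$, $\nu$) are placeholder values for illustration only, not fitted values from this work; only $\mu' = 0.535$ is taken from our melt-volume fit.

```python
def melt_volume_gravity(D_tr, v_i, g, rho_t, rho_p, E_M,
                        k=0.42, K1=1.6, mu=0.55, nu=0.4, mu_p=0.535):
    """Evaluate the gravity-regime melt-volume relation term by term.
    Inputs in SI units; constants are illustrative placeholders."""
    beta = mu / (2.0 + mu)
    eta = (2.0 + mu - 6.0 * nu) / (6.0 + 3.0 * mu)
    return (k / 8.0
            * (K1 / 2.0) ** (-3.0 / (1.0 - beta))
            * D_tr ** (3.0 / (1.0 - beta))
            * (rho_t / rho_p) ** ((1.0 - 3.0 * eta) / (1.0 - beta))
            * E_M ** (-1.5 * mu_p)
            * v_i ** (3.0 * mu_p - 6.0 * beta / (1.0 - beta))
            * g ** (3.0 * beta / (1.0 - beta)))
```

At fixed $D_{tr}$, the residual velocity exponent $3\mu' - 6\beta/(1-\beta)$ quantifies how much leverage melt volume retains for separating a small, fast projectile from a large, slow one.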
Title:
Ionized filaments and ongoing physical processes in massive star-forming sites around l = 345.5 degree |
Abstract: Numerous research studies on dust and molecular filaments have been conducted
in star-forming sites, but only a limited number of studies have focused on
ionized filaments. To observationally study this aspect, we present an analysis
of multi-wavelength data of an area of $\sim$74.6 arcmin $\times$ 55 arcmin
around l = 345.5 degree. Using the 843 MHz continuum map, two distinct ionized
filaments (i.e., IF-A (extent $\sim$8.5 arcmin) and IF-B (extent $\sim$22.65
arcmin)) hosting ionized clumps powered by massive OB stars are identified.
Using the $^{13}$CO(2-1) and C$^{18}$O(2-1) line data, the parent molecular
clouds of IF-A and IF-B are studied in a velocity range of [$-$21, $-$10] km
s$^{-1}$, and have filamentary appearances. At least two cloud components
around $-$18 and $-$15 km s$^{-1}$ toward the parent clouds of IF-A and IF-B
are investigated, and are connected in velocity space. These filamentary clouds
also spatially overlap with each other along the major axis, backing the
filamentary twisting/coupling nature. Noticeable Class I protostars and massive
stars appear to be observed toward the common zones of the cloud components.
These findings support the collision of two filamentary clouds around 1.2 Myr
ago. The existence of the ionized filaments seems to be explained by the
combined feedback of massive stars. The molecular filaments associated with
IF-A and IF-B favour the outcomes of the most recent model concerning the
escape and the trap of the ionizing radiation from an O star formed in a
filament.
https://export.arxiv.org/pdf/2208.08212
\date{ }
\pagerange{\pageref{firstpage}--\pageref{lastpage}} \pubyear{2020}
\label{firstpage}
\begin{keywords}
dust, extinction -- HII regions -- ISM: clouds -- ISM: individual object (IRAS 17008-4040, IRAS 17009-4042, S11, and IRAS 17028-4050) --
stars: formation -- stars: pre--main sequence
\end{keywords}
\section{Introduction}
\label{sec:intro}
In recent years, sub-millimeter (sub-mm) continuum and molecular-line studies have revealed molecular and dust filaments as common features in massive star-forming regions \citep[e.g.,][]{andre10,andre14,morales19}.
The role of these filaments in the origin of massive OB stars (M $\gtrsim$ 8 M$_{\odot}$) has received considerable attention in recent years.
In other words, multi-scale and multi-wavelength studies of the filaments are a reliable approach to deepen understanding of massive star formation (MSF) mechanisms. It has been thought that OB stars are assembled by large-scale (1--10 pc) inflow material that may be funneled along filaments \citep[e.g.,][]{tan14,Motte+2018,hirota18,rosen20}.
Such a process favours the convergence of filaments toward a compact and dense hub, or a
star-forming clump surrounded by filaments or a junction of
filaments \citep[i.e., hub-filament system (HFS);][]{myers09,Motte+2018}.
A hub-filament configuration is almost universally detected in massive star-forming regions.
Furthermore, the intersection/merging/collision of filaments can also explain the formation of
massive OB stars and stellar clusters \citep[e.g.,][and references therein]{habe92,anathpindika10,fukui21}. Hence, multiple physical processes are expected to operate in massive star-forming regions.
Apart from the molecular and dust filaments, one may also expect elongated filaments of ionized gas in
star-forming regions, but such study is very limited in the literature (e.g., LBN 140.07+01.64 \citep{karr03,dewangan21}; Eridanus filaments \citep{pon14}; Cygnus~X \citep{emig22}). Hence, the simultaneous study of the ionized, dust, and molecular filaments is still lacking
due to scarcity of the ionized filaments in star-forming regions \citep[e.g., LBN 140.07+01.64;][]{dewangan21}.
The ionizing radiation from an OB association (or OB-star complex) is thought to be responsible for the origin of ionized filaments. Such ionized filaments are found at large distances from the exciting stars/complex \citep[e.g.,][]{karr03,pon14,emig22}.
On the other hand, there is a possibility that several massive stars formed in molecular/dust filamentary clouds may locally produce the ionized filaments in the same clouds. However, such a proposal is yet to be explored in star-forming sites.
Hence, such targets offer the opportunity to study not only the birth of massive OB stars, but also the origin of elongated ionized filaments.
It also enables us to study the role of filaments in MSF activities and the impact
of massive OB stars on their parent filaments.
In this context, the present paper deals with a wide target area around {\it l} = 345$\degr$.5, which contains several star-forming sites (e.g., IRAS 17008-4040, IRAS 17009-4042, IRAS 17006-4037, IRAS 17028-4050, IRAS 17027-4100, IRAS 17026-4053, and IRAS 17024-4106). Among these highlighted sources, IRAS 17008-4040 and IRAS 17009-4042 are well-known massive star-forming regions. The selected target area is relatively nearby ($<$ 2.5 kpc), and hosts previously known H\,{\sc ii} regions powered by massive OB stars, dust filaments, clusters of young protostars, and a massive protostar in a young, pre-ultracompact H\,{\sc ii} phase. The selected sources are potential targets for exploring the role of filaments in MSF processes and the impact of massive stars on the filaments. Furthermore, such targets also appear very promising for investigating ionized filaments and their molecular environments, a poorly studied topic in star formation research.
Situated at a distance of $\sim$2.4 kpc, the sites IRAS 17008-4040 (hereafter i17008) and IRAS 17009-4042 (hereafter i17009) are associated with H\,{\sc ii} regions powered by B-type stars \citep{garay06,garay07,dewangan18}. Radio continuum morphologies of the H\,{\sc ii} regions at different radio frequencies (i.e., 0.61, 1.28, 1.4, and 2.5 GHz) were examined by \citet{dewangan18} (see Figure~9 in their paper).
An elongated filament hosting the sites i17008 and i17009 was reported using the APEX Telescope Large Area Survey of the Galaxy \citep[ATLASGAL;][]{schuller09} 870 $\mu$m continuum map \citep[see also][]{dewangan18}.
With the aid of the 870 $\mu$m continuum data, at least one dust continuum clump is detected toward these two IRAS sites.
Clumps clm1 (M$_\mathrm{clump}$ $\sim$2430 M$_{\odot}$; $T_\mathrm{d}$ $\sim$30 K; V$_\mathrm{lsr}$ = $-$17 km s$^{-1}$; d $\sim$2.4 kpc)
and clm2 (M$_\mathrm{clump}$ $\sim$2900 M$_{\odot}$; $T_\mathrm{d}$ $\sim$27.3 K; V$_\mathrm{lsr}$ = $-$17.3 km s$^{-1}$; d $\sim$2.4 kpc) are detected toward i17008 and i17009, respectively \citep[see][for more details]{urquhart18,dewangan18}.
The study of the {\it Herschel} sub-mm maps revealed the existence of several parsec-scale filaments directed toward the dust clump hosting each IRAS site, exhibiting HFS candidates \citep{dewangan18}. Additionally, the site i17008 hosts an infrared counterpart (IRc) of the 6.7 GHz methanol maser emission (MME), which is also associated with an extended green object \citep[EGO;][]{cyganowski08}.
The IRc has been proposed as an O-star candidate without an H\,{\sc ii} region \citep[see Figure~9 in][]{dewangan18}, which drives an outflow \citep{cyganowski08,morales09,dewangan18}. Overall, the ongoing star formation activities (including massive stars) were reported toward i17008 and i17009 using the infrared photometric data and radio continuum maps \citep{dewangan18}.
Figures~\ref{fig1}a and~\ref{fig1}b present the radio continuum emission contours
at 843 MHz from the Sydney University Molonglo Sky Survey \citep[SUMSS;][]{bock99} and the {\it Spitzer} Galactic Legacy Infrared Mid-Plane Survey Extraordinaire \citep[GLIMPSE;][]{benjamin03} 8.0 $\mu$m image of a wide area (size $\sim$74\rlap.{$'$}64 $\times$ 55\rlap.{$'$}02; centered at {\it l} = 345$\degr$.3693; {\it b} = 0$\degr$.0391), respectively, which is the target area of this paper.
At least two elongated morphologies or ionized filaments appear in the 843 MHz continuum map toward our selected area around {\it l} = 345$\degr$.5 (see two dotted-dashed boxes in Figure~\ref{fig1}a), where the extended emission in the 8.0 $\mu$m image is also traced
(see Figure~\ref{fig1}b and also Section~\ref{sec:morph} for more details).
We do not find any study in the literature of these elongated ionized filaments, or of their association with the
dust and molecular filaments.
There is no understanding of the existence of these structures and of the ongoing physical mechanisms
around {\it l} = 345$\degr$.5.
In this context, to observationally study the formation of massive stars and the origin of the ionized filaments, an extensive analysis of the multi-wavelength data sets (see Section~\ref{sec:obser}) is performed.
In particular, to study the parent molecular clouds of the ionized filaments, we analyzed the unexplored molecular line data from the Structure, Excitation, and Dynamics of the Inner Galactic Interstellar Medium survey \citep[SEDIGISM;][]{schuller17,schuller21} and the Mopra Southern Galactic Plane CO Survey \citep{braiding18}.
Section~\ref{sec:obser} presents the observational data sets discussed in this paper.
The outcomes of this paper are given in Section~\ref{sec:data}.
In Section~\ref{sec:disc}, the implications of our observed outcomes are discussed.
Finally, Section~\ref{sec:conc} gives the conclusions of this study.
\section{Data sets}
\label{sec:obser}
The data sets utilized in this work are listed in Table~\ref{tab1}, and were obtained
toward our selected area around {\it l} = 345$\degr$.5 as presented in Figure~\ref{fig1}a.
In this paper, we used the Gaia early data release 3 \citep[EDR3;][]{gaia21,fabricius21} based photogeometric distances (``rpgeo'') of point-like sources from \citet{bailer21}.
Based on the analysis of the {\it Herschel} continuum images at 70--500 $\mu$m \citep{Molinari10a}, the {\it Herschel} temperature and column density maps (resolution $\sim$12$''$) were constructed for the {\it EU-funded ViaLactea project} \citep{Molinari10b}. The Bayesian {\it PPMAP} procedure \citep{marsh15,marsh17} was applied to obtain these {\it Herschel} maps.
The {\it Herschel} temperature map is used in this paper.
The SEDIGISM $^{13}$CO/C$^{18}$O(J = 2--1) line data \citep[beam size $\sim$30$''$; pixel-scale $\sim$9\rlap.{$''$}5; rms $\sim$0.8--1.0~K;][]{schuller17,schuller21} and
the Mopra $^{13}$CO(J = 1--0) line data \citep[beam size $\sim$36$''$; pixel-scale $\sim$30$''$; rms $\sim$0.5~K;][]{braiding18} are examined in this paper.
These line data were smoothed with a Gaussian function having a width of 3 pixels.
The smoothing process gives the resultant angular resolutions of the SEDIGISM $^{13}$CO/C$^{18}$O(J = 2--1) line data and
Mopra $^{13}$CO(J = 1--0) line data to be $\sim$41\rlap.{$''$}4 and $\sim$96\rlap.{$''$}9, respectively. The Mopra $^{13}$CO(J = 1--0) line data are not available for our entire selected target area, but these observations (i.e., $\sim$60\rlap.{$'$}6 $\times$ 54\rlap.{$'$}9; centered at {\it l} = 345$\degr$.4801; {\it b} = 0$\degr$.0442) cover both the ionized filaments.
Apart from the $^{13}$CO/C$^{18}$O line data, we also studied the N$_{2}$H$^{+}$(1--0) line data from
the MALT90 survey \citep[beam size $\sim$38$''$; rms $\sim$0.2~K;][]{foster11,jackson13}
mainly toward i17008 and i17009.
\begin{table*}
\small
\setlength{\tabcolsep}{0.1in}
\centering
\caption{List of observational surveys utilized in this work.}
\label{tab1}
\begin{tabular}{lcccr}
\hline
Survey & Wavelength/Frequency/line(s) & Resolution ($\arcsec$) & Reference \\
\hline
\hline
SUMSS &843 MHz & $\sim$45 &\citet{bock99}\\
SEDIGISM& $^{13}$CO/C$^{18}$O (J = 2--1) & $\sim$30 &\citet{schuller17}\\
Mopra Galactic Plane CO survey& $^{12}$CO/$^{13}$CO/C$^{18}$O (J = 1--0) & $\sim$36 &\citet{braiding18}\\
Millimeter Astronomy Legacy Team Survey at 90 GHz (MALT90) & molecular lines near 90 GHz & $\sim$38 &\citet{jackson13}\\
ATLASGAL &870 $\mu$m & $\sim$19.2 &\citet{schuller09}\\
{\it Herschel} Infrared Galactic Plane Survey (Hi-GAL) &70--500 $\mu$m & $\sim$5.8--37 &\citet{Molinari10a}\\
{\it Spitzer} MIPS Inner Galactic Plane Survey (MIPSGAL) &24 $\mu$m & $\sim$6 &\citet{carey05}\\
Wide Field Infrared Survey Explorer (WISE) & 12 $\mu$m & $\sim$6 &\citet{wright10}\\
{\it Spitzer}-GLIMPSE &3.6--8.0 $\mu$m & $\sim$2 &\citet{benjamin03}\\
\hline
\end{tabular}
\end{table*}
\section{Results}
\label{sec:data}
\subsection{Physical environments around {\it l} = 345$\degr$.5}
\label{sec:morphx}
This section probes the distribution of dust emission, clumps, ionized emission, molecular gas, and embedded protostars around {\it l} = 345$\degr$.5, allowing us to identify various emission structures, their physical associations, and signatures of star formation. Such an investigation is very useful for constraining the physical conditions around {\it l} = 345$\degr$.5.
\subsubsection{Ionized clumps toward elongated ionized filaments}
\label{sec:morph}
In order to explore the ionized clumps/H\,{\sc ii} regions and the ionized filaments, we have
employed the radio 843 MHz continuum map in the direction of our selected area around {\it l} = 345$\degr$.5.
As mentioned earlier, the spatial appearance of the 843 MHz continuum emission enabled us to trace two elongated ionized filaments around {\it l} = 345$\degr$.5 (see Figure~\ref{fig1}a), which are designated as IF-A (extent $\sim$8\rlap.{$'$}5) and IF-B (extent $\sim$22\rlap.{$'$}65). In Figure~\ref{fig1}b, the extended emission traced in the 8.0 $\mu$m image can be also observed toward both the ionized filaments. The locations of at least three IRAS sources (i.e., i17008, i17009, and IRAS 17028-4050) and a previously known mid-infrared (MIR) bubble S11 \citep[l = 345$\degr$.48; b = 0$\degr$.399;][]{churchwell06} are indicated in Figure~\ref{fig1}b.
The ionized filament IF-A, hosting i17008, i17009, and the bubble S11, is traced in the northern direction, while the filament IF-B is depicted in the southern direction.
In Figures~\ref{fig1}a and~\ref{fig1}b, the location of the bubble S11 is also marked by a circle (average radius = 2\rlap.{$'$}43).
This bubble \citep[distance $\sim$2.0 kpc;][]{watson10} was identified as a complete/closed ring or a probable enclosed central star cluster with an average radius and thickness of 2\rlap.{$'$}43 and 0\rlap.{$'$}39, respectively \citep[see][]{churchwell06,hanaoka20}. From the previous work of \citet{dewangan18}, we find one ATLASGAL dust continuum clump at 870 $\mu$m \citep[i.e., clm3; M$_\mathrm{clump}$ $\sim$600 M$_{\odot}$; $T_\mathrm{d}$ $\sim$16.5 K; V$_\mathrm{lsr}$ = $-$15.8 km s$^{-1}$; d $\sim$2.4 kpc; see also][]{urquhart18} toward the bubble S11. Based on the previously reported V$_\mathrm{lsr}$ values toward the dust clumps associated with i17008 and i17009 (i.e., clm1 (V$_\mathrm{lsr}$ = $-$17 km s$^{-1}$) and clm2 (V$_\mathrm{lsr}$ = $-$17.3 km s$^{-1}$)), we may suggest that the clump clm3 (V$_\mathrm{lsr}$ = $-$15.8 km s$^{-1}$) appears to be redshifted compared to the other two clumps. But, it requires further investigation using molecular line data.
In addition to the elongated ionized structures, several peaks are visually seen in the radio continuum map.
Hence, we employed the {\it clumpfind} IDL program \citep{williams94} to identify the ionized clumps/H\,{\sc ii} regions from the SUMSS 843 MHz continuum map. The {\it clumpfind} program also allows us to obtain the total flux density ($S_\mathrm{\nu}$) of each selected ionized clump/H\,{\sc ii} region.
In total, we have labeled nine ionized clumps (i.e., A--I), which are distributed mainly toward IF-A and IF-B (see Figure~\ref{fig1}a).
In the direction of IF-A, the ionized clumps A, B, and C are found toward the bubble S11, i17008, and i17009, respectively.
Six ionized clumps (i.e., D--I) are labeled toward IF-B.
In general, the observed flux density is used to compute the number of
Lyman continuum photons $N_\mathrm{UV}$ of an ionized clump/H\,{\sc ii} region
via the following equation \citep{matsakis76}:
\begin{equation}
\begin{split}
N_\mathrm{UV} (s^{-1}) = 7.5\, \times\, 10^{46}\, \left(\frac{S_\mathrm{\nu}}{\mathrm{Jy}}\right)\left(\frac{D}{\mathrm{kpc}}\right)^{2}
\left(\frac{T_\mathrm{e}}{10^{4}\mathrm{K}}\right)^{-0.45} \\ \times\,\left(\frac{\nu}{\mathrm{GHz}}\right)^{0.1}
\end{split}
\end{equation}
\noindent where $S_\mathrm{\nu}$ (in Jy) is the total flux
density of the H\,{\sc ii} region, $D$ is the distance in kpc, $T_\mathrm{e}$ is the electron temperature, and $\nu$ is the frequency in GHz.
With the help of equation~1, $T_\mathrm{e}$ = 10$^{4}$~K, and $D$ = [1.4 kpc, 2.4 kpc; see Section~\ref{sec:morph2} for more details], we determine $\log{N_\mathrm{UV}}$ for each ionized clump marked in Figure~\ref{fig1}a.
Using the reference of \citet{panagia73}, these clumps (i.e., A--I) are found to be powered by massive B0.5V-O9.5V type stars.
Furthermore, following the equations and analysis adopted in \citet{dewangan17a}, the typical value of the initial particle number density of the ambient neutral gas ($n_\mathrm{0}$ = 10$^{3}$ (10$^{4}$) cm$^{-3}$) leads to a range of dynamical ages for the ionized clumps (i.e., A--I) of $\sim$0.1--0.3 (0.3--1) Myr. This analysis reveals the presence of massive stars in both ionized filaments, distributed along their lengths.
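Equation~1 is straightforward to evaluate. The sketch below, with a hypothetical 1 Jy clump at 2.4 kpc, $T_\mathrm{e}$ = 10$^{4}$~K, and the SUMSS frequency of 0.843 GHz, illustrates the computation of $\log{N_\mathrm{UV}}$; the flux value is an assumption for demonstration, not one of the measured clumps.

```python
import math

def log_nuv(flux_jy, distance_kpc, freq_ghz=0.843, t_e=1.0e4):
    """log10 of the Lyman continuum photon rate (s^-1) from equation 1,
    assuming an optically thin H II region."""
    n_uv = (7.5e46 * flux_jy * distance_kpc ** 2
            * (t_e / 1.0e4) ** -0.45 * freq_ghz ** 0.1)
    return math.log10(n_uv)

# Hypothetical 1 Jy ionized clump at 2.4 kpc:
log_n = log_nuv(1.0, 2.4)
```

The resulting $\log{N_\mathrm{UV}}$ is then compared against the \citet{panagia73} calibration to infer the spectral type of the ionizing star.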
\subsubsection{Distribution of dust clumps, protostars, and molecular gas toward ionized filaments}
\label{sec:morph2}
In this section, we explore the multi-wavelength data sets to examine the embedded dust/molecular structures and protostars/young stellar objects (YSOs) against the ionized features around {\it l} = 345$\degr$.5.
Figure~\ref{fig2}a shows a 3-color composite map made using the {\it Herschel} 160 $\mu$m (in red), {\it Herschel} 70 $\mu$m (in green), and WISE 12 $\mu$m (in blue) images.
Filamentary structures and bubble-like features are clearly visible in the infrared images, which trace the dust emission.
The inset on the top right presents a zoomed-in view, using the {\it Herschel} 160 $\mu$m image, of an area hosting i17008, i17009,
and S11, showing the previously reported HFS toward i17008 and i17009. The inset on the bottom left displays an area hosting IRAS 17028-4050
in the zoomed-in view using the {\it Herschel} 160 $\mu$m image, showing the infrared bubble-like features.
We have examined the Mopra $^{13}$CO(J = 1--0) emission in a velocity range of [$-$21, $-$10] km s$^{-1}$ to study the distribution of molecular gas. Figure~\ref{fig2}b displays the Mopra $^{13}$CO(J = 1--0) integrated emission map (moment-0) overlaid with the positions of the ATLASGAL clumps at 870 $\mu$m (see circles and stars).
Note that \citet{urquhart18} also determined the reliable velocities and distances of the ATLASGAL clumps, which can be used to study the physical connection of different sub-regions in a given large area.
In Figure~\ref{fig2}b, the ATLASGAL clumps marked by stars and circles are located at a distance of 2.4 kpc and 1.4 kpc, respectively \citep[see][for more details]{urquhart18}. The ATLASGAL clumps associated with the $^{13}$CO outflows are highlighted by plus symbols (in cyan; see Figure~\ref{fig2}b).
This information is taken from \citet{yang22}, who listed the detection of the $^{13}$CO outflows. They also provided the velocity ranges of the $^{13}$CO(J = 2--1) red and blue wing-like velocity components toward the ATLASGAL clumps associated with outflows using the SEDIGISM $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) line data.
All filled symbols show the clumps with V$_\mathrm{lsr}$ of [$-$24, $-$8] km s$^{-1}$.
On the other hand, open circles and stars represent clumps with V$_\mathrm{lsr}$ of [$-$7, 0] km s$^{-1}$ and [$-$30, $-$25] km s$^{-1}$, which may not be associated with IF-A and IF-B.
Note that in the direction of both the ionized filaments IF-A and IF-B, the Mopra molecular gas is depicted in the same velocity range of
[$-$21, $-$10] km s$^{-1}$. On the basis of the distances to the ATLASGAL clumps, IF-A and IF-B appear to be located at a distance of 2.4 kpc and 1.4 kpc, respectively.
In Figure~\ref{fig2}c, the distribution of ionized emission (red contours) and the positions of 26 IRAS sources (filled pentagons) are presented against the molecular emission.
The distribution of molecular gas traces a continuous structure toward IF-A,
but a deficiency of molecular gas is seen toward the central part of IF-B (see Figure~\ref{fig2}c).
At least one molecular condensation is found toward each end of IF-B.
From Figure~\ref{fig2}a, we infer a continuous structure in the infrared images toward IF-B.
Based on multi-wavelength images and distribution of the ATLASGAL clumps, we suggest that there was an elongated molecular filament (see a dotted curve in Figure~\ref{fig2}c), which has been eroded by the impact of massive stars located at the center of IF-B.
We examined the {\it Spitzer}-GLIMPSE photometric data at 3.6--5.8 $\mu$m, which allow us to identify younger protostars (i.e., Class~I YSOs) in our selected target area. The photometric magnitudes of point-like sources in the {\it Spitzer} 3.6--5.8 $\mu$m bands were collected from the GLIMPSE-I Spring '07 highly reliable catalog \citep{benjamin03}. Class~I YSOs are selected using the infrared color
conditions (i.e., [4.5]$-$[5.8] $\ge$ 0.7 mag and [3.6]$-$[4.5] $\ge$ 0.7 mag) described in \citet{hartmann05} and \citet{getman07}.
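These color conditions amount to a simple two-color cut; a minimal sketch (the magnitudes in the example calls are invented for illustration, not taken from the GLIMPSE catalog):

```python
def is_class_i(m36, m45, m58):
    """Class I YSO color criteria of Hartmann et al. (2005) and
    Getman et al. (2007):
    [4.5]-[5.8] >= 0.7 mag and [3.6]-[4.5] >= 0.7 mag."""
    return (m45 - m58) >= 0.7 and (m36 - m45) >= 0.7

# Illustrative magnitudes only:
print(is_class_i(12.0, 11.0, 10.2))  # both colors >= 0.7 -> True
print(is_class_i(12.0, 11.6, 11.0))  # [3.6]-[4.5] = 0.4  -> False
```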
In Figure~\ref{fig2}d, we display the positions of Class~I YSOs overlaid on the Mopra $^{13}$CO map. An elongated filament traced in the 870 $\mu$m continuum map \citep[see][]{dewangan18} and
the location of the bubble S11 are also indicated by a curve and a big circle (average radius = 2\rlap.{$'$}43 or 1.7 pc at a distance of 2.4 kpc), respectively.
In this work, we focus only on those Class~I YSOs that are distributed toward the clumpy structures in the clouds (see filled squares in Figure~\ref{fig2}d). This selection is based on visual inspection of the molecular gas and dust emission (see the {\it Herschel} 160 $\mu$m emission in Figure~\ref{fig2}a and the ATLASGAL clumps in Figure~\ref{fig2}b). Several Class~I YSOs appear to be located outside the molecular cloud boundary, which is traced using the Mopra $^{13}$CO emission contour at a level of 4.3 K km s$^{-1}$ (see Figure~\ref{fig2}d). Hence, such Class~I YSOs are unlikely to be part of the target clouds (see open squares in Figure~\ref{fig2}d).
Therefore, we have not made any attempt to study the Class~I YSOs, which are highlighted by open squares in Figure~\ref{fig2}d.
In order to get distance information of the selected protostars (see filled squares in Figure~\ref{fig2}d), we examined point-like sources in the Gaia EDR3 catalog \citep{gaia21,bailer21}.
In the direction of the clouds traced in Figure~\ref{fig2}c, the distance distribution of Gaia point-like sources peaks around a distance of 2.5 kpc (not shown here). It is expected that the optical counterparts of the selected Class~I YSOs may be faint and/or may not be detected in the Gaia EDR3 catalog. We find optical counterparts of some Class~I YSOs toward the clouds.
The distances of some of these sources are in agreement with the dust clumps. In this relation, we have displayed the GAIA optical counterparts of Class~I YSOs by cyan filled squares (d = [1.6, 1.96] kpc)
and green filled squares (d = [2.0, 2.6] kpc) in Figure~\ref{fig2}d.
This analysis suggests that the Class~I YSOs seen in projection toward the clumpy structures in the clouds are physically connected with the ionized emission, dust clumps, and molecular material. In other words, in the direction of IF-A, we find a clear correspondence among the ionized emission, dust clumps, and molecular material, where Class~I YSOs are distributed.
From Figures~\ref{fig2}b and~\ref{fig2}d, dust clumps and Class~I YSOs are also seen toward the central part of IF-B,
where a deficiency of molecular gas is found.
In general, an average age of Class~I YSOs is reported to be $\sim$0.44 Myr \citep{evans09}.
Overall, the early phases of star formation activities and the presence of massive stars are evident toward the parent clouds of the ionized filaments (see Figure~\ref{fig2}d).
\subsection{SEDIGISM $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) emission}
\label{sec:gasmorphb}
In this section, we study the kinematics of molecular gas around {\it l} = 345$\degr$.5, allowing us to examine gas velocity structures. Such knowledge is essential to probe the ongoing physical processes toward the selected target area.
\subsubsection{Molecular clouds hosting IF-A and IF-B}
\label{sec:gasmorphbx}
Here we study the spatial and velocity distribution of the SEDIGISM $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) emission in the area shown in Figure~\ref{fig1}a. Figures~\ref{fig3}a and~\ref{fig3}b present the $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) integrated maps and contours, respectively, where enclosed regions indicate the areas around the ionized filaments.
The integrated intensity or moment-0 maps of $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) are produced using velocity intervals of [$-$22.25, $-$10.25] and [$-$21.25, $-$12.25] km s$^{-1}$, respectively. Similar cloud morphologies are evident in the Mopra $^{13}$CO(J = 1--0) and the SEDIGISM $^{13}$CO(J = 2--1) maps. Note, however, that the SEDIGISM molecular maps (beam size $\sim$41\rlap.{$''$}4) provide more insight into the clouds owing to their higher angular resolution compared to the Mopra line
data (beam size $\sim$96\rlap.{$''$}9). In Figure~\ref{fig3}b, the C$^{18}$O(J = 2--1) emission traces the denser parts of the molecular clouds
seen in the $^{13}$CO(J = 2--1) emission (see the enclosed regions in Figures~\ref{fig3}a and~\ref{fig3}b).
In Figure~\ref{fig4}a, we display a two-color composite map made using the SUMSS 843 MHz continuum map (in red) and
the SEDIGISM $^{13}$CO(J = 2--1) map (in turquoise).
IF-A is embedded in the filamentary molecular cloud, which is distinctly traced in
the C$^{18}$O(J = 2--1) map (see Figure~\ref{fig3}b).
The color composite map indicates the destruction of the central part of an elongated molecular (i.e., $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1)) structure,
where IF-B is spatially traced. The color composite map also hints at the presence of two filamentary molecular clouds toward IF-B, which are indicated by two curves in Figure~\ref{fig4}a. Figure~\ref{fig4}b displays the {\it Herschel} temperature map overlaid with
the SEDIGISM $^{13}$CO(J = 2--1) emission contour.
The areas around i17008 and i17009 are saturated in the {\it Herschel} temperature map.
Warm dust emission ($T_\mathrm{d}$ $>$ 21~K) is evident toward both the ionized filaments.
In the direction of the ionized clumps ``E" and ``F" in IF-B (see Figure~\ref{fig1}a), at least
two bubble-like structures are observed in the {\it Herschel} temperature map (see blue dashed curves in Figure~\ref{fig4}b),
where the molecular gas depression is found.
Figures~\ref{fig5}a and~\ref{fig5}b present the line velocity/velocity field/moment-1 map of the SEDIGISM $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) emission, respectively. Both these moment-1 maps indicate a noticeable velocity spread toward the clouds
associated with both the ionized filaments, where higher values of line widths ($>$ 1.5 km s$^{-1}$) are found in
the intensity-weighted line width maps (moment-2) of $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) (see Figures~\ref{fig5}c and~\ref{fig5}d).
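The moment-0, moment-1, and moment-2 maps used here are, respectively, the velocity-integrated intensity, the intensity-weighted mean velocity, and the intensity-weighted velocity dispersion of the spectral cube. A minimal numpy sketch (the synthetic single-pixel Gaussian line at $-$18 km s$^{-1}$ is illustrative only; the actual maps were produced from the SEDIGISM cubes):

```python
import numpy as np

def moments(cube, v):
    """Moment maps from a spectral cube of shape (n_v, n_y, n_x).

    moment-0: integrated intensity               [K km/s]
    moment-1: intensity-weighted velocity field  [km/s]
    moment-2: intensity-weighted line width      [km/s]
    """
    dv = abs(v[1] - v[0])
    w = cube.clip(min=0.0)          # ignore negative (noise) values
    tot = w.sum(axis=0)
    m0 = tot * dv
    m1 = (w * v[:, None, None]).sum(axis=0) / tot
    m2 = np.sqrt((w * (v[:, None, None] - m1) ** 2).sum(axis=0) / tot)
    return m0, m1, m2

# Synthetic single-pixel Gaussian line: centre -18 km/s, sigma 1.5 km/s
v = np.linspace(-30.0, -5.0, 126)
cube = np.exp(-(v[:, None, None] + 18.0) ** 2 / (2 * 1.5 ** 2))
m0, m1, m2 = moments(cube, v)
```

For this line the moment-1 map recovers the centroid near $-$18 km s$^{-1}$ and the moment-2 map the dispersion near 1.5 km s$^{-1}$.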
Position-velocity diagrams of the $^{13}$CO(J = 2--1) emission along arrows ``aA" and ``bB" (see Figure~\ref{fig4}b) are presented in Figures~\ref{fig5}e and~\ref{fig5}f, respectively.
The contours of the C$^{18}$O(J = 2--1) emission are also drawn in both the position-velocity diagrams.
Along the direction of the arrow ``aA", the position-velocity diagram hints at the existence of two velocity peaks/components around $-$11 and $-$17 km s$^{-1}$ (see arrows in Figure~\ref{fig5}e).
Note that the arrow ``bB" passes through the overlapping areas of two filamentary molecular clouds in the direction of IF-B.
Along the arrow ``bB", we find at least two velocity peaks/components around $-$16 and $-$19 km s$^{-1}$, exhibiting the presence of two distinct filaments (see arrows in Figure~\ref{fig5}f). Figures~\ref{fig6}a,~\ref{fig6}b, and~\ref{fig6}c display
position-velocity diagrams of the $^{13}$CO(J = 2--1) emission along curves ``pP", ``qR" and ``qS", which are marked in Figure~\ref{fig4}b.
In the direction of i17008, an outflow \citep[blue wing: ($-$25.2, $-$19.8) km s$^{-1}$; red wing: ($-$14.8, $-$6.2) km s$^{-1}$;][]{yang22} is evident. We can also trace an outflow \citep[blue wing: ($-$22.8, $-$20.5) km s$^{-1}$; red wing: ($-$13.5, $-$9.8) km s$^{-1}$;][]{yang22} toward i17009. Apart from the outflow activity, in the direction of i17009, two velocity components (around $-$15 and $-$18 km s$^{-1}$) are also seen (see arrows in Figure~\ref{fig6}a).
Furthermore, two distinct velocity peaks (around $-$15 and $-$18 km s$^{-1}$) are also evident along the curves ``qR" and ``qS" (see arrows in Figures~\ref{fig6}b and~\ref{fig6}c).
\subsubsection{Zoomed-in view of molecular cloud associated with IF-A}
\label{sec:gasbx}
In the direction of an area hosting i17008, i17009, and S11 (or ionized clumps ``A--C"), Figures~\ref{fig7}a,~\ref{fig7}b, and~\ref{fig7}c display the moment-0, moment-1, and moment-2 maps of $^{13}$CO(J = 2--1), respectively. In Figures~\ref{fig7}d,~\ref{fig7}e, and~\ref{fig7}f, we show moment-0, moment-1, and moment-2 maps of C$^{18}$O(J = 2--1), respectively.
The $^{13}$CO(J = 2--1) emission is integrated over a velocity range of [$-$24, $-$9] km s$^{-1}$ (see Figure~\ref{fig7}a), while the C$^{18}$O(J = 2--1) emission is integrated over a velocity range from $-$22 to $-$12 km s$^{-1}$ (see Figure~\ref{fig7}d).
A dotted circle in each panel of Figure~\ref{fig7} highlights the location of the bubble S11.
The moment-0 maps of $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) show the elongated filamentary structure (see Figures~\ref{fig7}a and~\ref{fig7}d), which has almost a similar morphology as seen in the ATLASGAL continuum map at 870 $\mu$m (see Figure~\ref{fig2}d).
However, the proposed HFSs are not clearly seen in either moment-0 map.
The moment-1 maps of $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) show a noticeable velocity difference/gradient toward the northern direction compared to the site i17009.
In the direction of both the IRAS sites, we find higher values of line width ($>$ 2.5 km s$^{-1}$) (see Figures~\ref{fig7}c and~\ref{fig7}f).
One can also find higher line width (i.e., 1--2.5 km s$^{-1}$) toward the bubble in the moment-2 maps
of $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1).
The increased line widths can suggest star formation activities and/or the presence of multiple velocity components.
Figure~\ref{fig8} shows the averaged spectra of $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) over eight small circles (radius = 20$''$)
distributed toward the elongated molecular cloud (see Figure~\ref{fig7}f).
For a reference purpose, a vertical dashed line at V$_\mathrm{lsr}$ = $-$18 km s$^{-1}$ is also marked in each panel of Figure~\ref{fig8}.
The circle nos. \#1--3 are distributed toward the bubble, while the circle nos. \#4 and \#6 are located toward the sites i17008 and i17009, respectively. The spectra of $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) show a single velocity peak toward four positions (\#3, 4, 5, and 8).
However, we may see at least two velocity peaks in the direction of the other four positions (\#1, 2, 6, and 7).
In particular, based on the $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) spectra toward the circle \#6, two velocity peaks are clearly found, allowing us to identify two cloud components at [$-$15.25, $-$11] km s$^{-1}$ (around $-$15 km s$^{-1}$) and [$-$22.25, $-$16] km s$^{-1}$ (around $-$18 km s$^{-1}$).
Furthermore, with respect to the reference line, we find a change in the velocity peaks of molecular spectra on moving from
the northern to southern parts of the cloud, showing the existence of a velocity gradient in the molecular cloud.
Figures~\ref{fig9} and~\ref{fig10} display the integrated velocity channel contours of $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) (at velocity intervals of 1 km s$^{-1}$), respectively. Both channel maps support the presence of two molecular cloud components toward the area hosting two IRAS sources and the bubble
(see panels at [$-$19, $-$18] and [$-$16, $-$15] km s$^{-1}$). The channel maps also support the presence of the HFS toward each IRAS site (see panels between [$-$22, $-$21] and [$-$17, $-$16] km s$^{-1}$).
Figures~\ref{fig11}a and~\ref{fig11}b show the position-velocity diagrams of $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) along the curve as marked in Figures~\ref{fig7}a and~\ref{fig7}d, respectively.
In Figures~\ref{fig11}c and~\ref{fig11}d, we present the latitude-velocity diagrams of $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) for a longitude range of 345$\degr$.43 to 345$\degr$.54. A continuous velocity structure is seen in all the position-velocity diagrams. These maps also suggest the presence of an outflow toward i17008
and the presence of two velocity components (around $-$15 and $-$18 km s$^{-1}$).
All these results together favour the spatial and velocity connections of two cloud components in the direction of i17008, i17009, and S11.
Figures~\ref{fig12}a and~\ref{fig12}b present the overlay of the $^{13}$CO(J = 2--1) and C$^{18}$O(J = 2--1) emission contours of two cloud components (at [$-$15.25, $-$11] and [$-$22.25, $-$16] km s$^{-1}$) on the {\it Spitzer} 8.0 $\mu$m image, respectively. The location of the elongated filament is also indicated in Figures~\ref{fig12}a and~\ref{fig12}b.
In the {\it Spitzer} 8.0 $\mu$m image, the extended emissions detected toward both the IRAS sites and the bubble (or ionized clumps ``A--C") are seen toward the overlapping areas of two clouds.
On the basis of the channel maps, we produce an integrated emission map of $^{13}$CO(J = 2--1) at [$-$17.75, $-$16.5] km s$^{-1}$ to trace the proposed HFSs (see Figure~\ref{fig12}c). In this connection, we applied an edge detection algorithm \citep[i.e., Difference of Gaussian (DoG); see][]{gonzalez02,assirati14,dewangan17b} to this $^{13}$CO(J = 2--1) map.
In Figure~\ref{fig12}d, we display a two-color composite map made using the $^{13}$CO(J = 2--1) maps, which consists of the ``Edge-DoG" processed $^{13}$CO(J = 2--1) map at [$-$17.75, $-$16.5] km s$^{-1}$ (in red) and the $^{13}$CO(J = 2--1) map at [$-$24, $-$9] km s$^{-1}$ (in turquoise).
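The ``Edge-DoG'' step is a band-pass filter: subtracting a strongly smoothed copy of the map from a mildly smoothed one suppresses both fine-scale noise and the large-scale background, leaving sharp intensity gradients. A one-dimensional numpy sketch of the idea (the smoothing widths are arbitrary illustrative choices, not those used for Figure~\ref{fig12}d):

```python
import numpy as np

def gaussian_kernel(sigma, truncate=4.0):
    """Normalized 1-D Gaussian kernel truncated at +/- truncate*sigma."""
    r = int(truncate * sigma + 0.5)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def dog_filter(profile, sigma_narrow=1.0, sigma_wide=3.0):
    """Difference of Gaussians: the mildly smoothed copy keeps edges,
    the strongly smoothed copy keeps only the large-scale background;
    their difference highlights sharp edges."""
    g1 = np.convolve(profile, gaussian_kernel(sigma_narrow), mode="same")
    g2 = np.convolve(profile, gaussian_kernel(sigma_wide), mode="same")
    return g1 - g2

# A 1-D step edge: the DoG response peaks at the discontinuity and
# vanishes in the flat regions away from it.
profile = np.zeros(40)
profile[20:] = 1.0
edges = dog_filter(profile)
```

On a 2-D map the same filtering is applied with 2-D Gaussian kernels.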
In Figure~\ref{fig12}d, at least five curves are marked and labeled in the direction of i17008 (or ionized clump ``B"),
while at least three curves are highlighted toward i17009 (or ionized clump ``C").
In Figure~\ref{fig12}d, one can clearly see the HFS toward each IRAS site.
In Figure~\ref{fig12}e, we present the integrated emission map (moment-0) at [$-$21, $-$14] km s$^{-1}$ of the dense gas tracer N$_{2}$H$^{+}$(1--0) from the MALT90 data sets, which were observed for the areas covering G345.504/i17008 and G345.487/i17009 (see dotted-dashed boxes in Figure~\ref{fig12}c). Note that the MALT90 line data sets are not available toward the bubble S11.
The location of the elongated filament is also highlighted in the moment-0 map.
The moment-0 map of N$_{2}$H$^{+}$ clearly displays the filamentary morphology as seen in the ATLASGAL continuum map at 870 $\mu$m.
Figure~\ref{fig12}f displays the position-velocity diagram of N$_{2}$H$^{+}$ data set along the axis as marked in the N$_{2}$H$^{+}$ map (see Figure~\ref{fig12}e). In the position-velocity diagram of the N$_{2}$H$^{+}$ line, the hyperfine components are also evident. The position-velocity diagram supports the existence of a continuous velocity structure along the selected axis or filament hosting G345.504/i17008 and G345.487/i17009, and also shows a velocity spread toward the site i17009.
The implication of all these observed findings is presented in Section~\ref{sec:disc}.
\section{Discussion}
\label{sec:disc}
In the direction of our selected target field around {\it l} = 345$\degr$.5, this paper mainly focuses on the elongated filamentary structures traced in different emissions (i.e., dust, molecular, and ionized).
One of the new findings of this work is the presence of two distinct ionized filaments (i.e., IF-A and IF-B) located at different
distances (see Section~\ref{sec:morph2}).
Interestingly, the parent molecular clouds of both the ionized filaments are depicted in the same velocity range of [$-$21, $-$10] km s$^{-1}$, and have filamentary appearances. Several ionized clumps, which are excited by massive stars, are depicted toward the ionized filaments (see Figure~\ref{fig1} and Section~\ref{sec:morph}).
In the following sections, we discuss the origin of massive stars and elongated ionized structures.
\subsection{Interacting filamentary molecular clouds}
\label{sec:zzfffx}
We investigate at least two cloud components toward the parent molecular clouds of both the ionized filaments (see Section~\ref{sec:gasmorphb}).
This is another new finding of this work. In the direction of each parent molecular cloud, we find a velocity connection of two cloud components having a velocity separation of about 3 km s$^{-1}$ (see Figure~\ref{fig6}). These filamentary clouds also spatially overlap with each other along the major axis, supporting the existence of multiple common zones. This may be considered one of the forms of molecular/dust filamentary twisting/coupling \citep[e.g., LBN 140.07+01.64;][]{dewangan21}.
This argument is valid for both the parent molecular clouds of IF-A and IF-B.
These results together hint at the onset of the interaction or collision of filamentary molecular clouds, but do not favour a single point collision event of two molecular clouds. In the direction of the converging areas of the clouds, we find either dust clumps hosting massive stars or only ionized clumps powered by massive stars.
Earlier, signatures of colliding flows were reported toward Lupus~I \citep{gaczkowski15,gaczkowski17,krause18}.
Lupus~I is associated with the Lupus clouds, a nearby (150--200 pc) and young (1--2 Myr) star-forming region \citep[e.g.,][]{gaczkowski15}.
Lupus~I is spatially seen between the Upper-Scorpius (USco) HI shell and the Upper Centaurus-Lupus (UCL) wind bubble \citep{gaczkowski15}.
In other words, Lupus~I is thought to be situated along a filament at the converging area
of these two bubbles, where the higher level of clumpiness is observed.
In this context, Lupus~I has been suggested to be strongly influenced by colliding flows/shocked flows, which are produced by the
expanding USco HI shell and the UCL wind bubble \citep{gaczkowski15,gaczkowski17,krause18}.
These earlier works encourage us to explore the scenario of the colliding flows in our selected target area.
Numerical simulations of the cloud-cloud collision (CCC) process show the presence of massive and dense clumps/cores at the junction of two molecular clouds or the shock-compressed interface layer \citep[e.g.,][and references therein]{habe92,anathpindika10,inoue13,haworth15a,haworth15b,torii17,balfour17,bisbas17}, which is a very suitable environment for massive star formation (MSF). In other words, massive stars and clusters of YSOs can form inside the dense gas layer produced via the strong compression at
the colliding interface.
In this relation, several observational works have been reported in the literature \citep[e.g.,][]{torii11,torii15,torii17,fukui14,fukui18,fukui21,dhanya21}.
Observationally, in the CCC process, one may expect a bridge feature in position-velocity diagrams, showing a connection of two clouds by an intermediate velocity and low intensity feature in velocity space \citep[e.g.,][]{haworth15b,dewangan17s235,dewangan18b,Kohno18,Priestley21}.
In addition, one may also expect a complementary distribution (i.e., a spatial match of ``key/intensity-enhancement" and ``cavity/keyhole/intensity-depression" features) in the collision event \citep[e.g.,][]{fukui18,dewangan18N36,Enokiya21}. However, we do not find any complementary distribution of two clouds in our target area.
Our results enable us to propose the applicability of the collision of two filamentary clouds in areas hosting IF-A and IF-B.
Previously, in the massive star-forming region S237, a cluster of YSOs and a massive clump were found at the intersection of filamentary features, and the collision of these features was proposed to explain the observed cluster formation \citep{dewangan17a}. \citet{dewangan19} also identified two closely spaced (in velocity as well as in position) filamentary clouds in the star-forming site AFGL 5142 and explained the presence of young stellar clusters by the filamentary collision/interaction scenario.
In the case of the AFGL 333-Ridge, \citet{liang21} traced two velocity components having a velocity separation of about 2.5 km s$^{-1}$.
Based on the analysis of $^{13}$CO line data, they proposed a scenario of colliding and merging of two cloud components into one molecular cloud in the AFGL 333-Ridge.
In the present study, we consider the spatial extent of the overlapping regions of the two clouds, having a velocity separation of $\sim$3 km s$^{-1}$, to be $\sim$1.75 pc.
We estimate the collision time-scale (i.e., the time-scale for which the material is accumulated at the collision zones) using the following equation \citep[see also][]{henshaw13}
\begin{equation}
t_{\rm accum} = 2.0\,\bigg(\frac{l_{\rm fcs}}{0.5\,{\rm pc}}\bigg) \bigg(\frac{v_{\rm rel}}{5\,{\rm km\,s^{-1}}}\bigg)^{-1}\bigg(\frac{n_{\rm pstc}/n_{\rm prec}}{10}\bigg)\,{\rm Myr}
\end{equation}
where, $n_{\rm prec}$ and $n_{\rm pstc}$ are the mean densities of the pre-collision and post-collision region, respectively.
Here, $l_{\rm fcs}$ is the collision length-scale and ${v_{\rm rel}}$ is the observed relative velocity.
In the present case, we do not know the exact viewing angle of the collision.
Therefore, a typical viewing angle of 45$\degr$ results in the collision length-scale ($l_{\rm fcs}$) of $\sim$2.5 pc (= 1.75 pc/sin(45$\degr$)), and the observed relative velocity (${v_{\rm rel}}$) of $\sim$4.2 km s$^{-1}$ (= 3 km s$^{-1}$/cos(45$\degr$)).
In this work, we do not have reliable estimates of $n_{\rm prec}$ and $n_{\rm pstc}$ values.
However, logically, we expect $n_{\rm pstc}$ $>$ $n_{\rm prec}$ in a collision process, resulting in the higher mean density ratio ($\geq$1) of the post- and pre-collision regions. Considering a wide range of the mean density ratio of 1--10, we compute a range of collision timescale of $\sim$1.2--11.7 Myr.
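Equation~2, together with the viewing-angle corrections described above, reproduces the quoted range; a short Python check (the 45$\degr$ viewing angle is the assumed typical value):

```python
import math

def t_accum_myr(overlap_pc=1.75, dv_kms=3.0, density_ratio=10.0, view_deg=45.0):
    """Accumulation (collision) timescale of Henshaw et al. (2013):
    t = 2.0 (l_fcs/0.5 pc) (v_rel/5 km s^-1)^-1 (n_pstc/n_prec / 10) Myr,
    with the observed overlap length and velocity separation corrected
    for an assumed viewing angle."""
    a = math.radians(view_deg)
    l_fcs = overlap_pc / math.sin(a)   # ~2.5 pc for 45 deg
    v_rel = dv_kms / math.cos(a)       # ~4.2 km/s for 45 deg
    return 2.0 * (l_fcs / 0.5) * (5.0 / v_rel) * (density_ratio / 10.0)

print(round(t_accum_myr(density_ratio=1.0), 1),
      round(t_accum_myr(density_ratio=10.0), 1))  # -> 1.2 11.7
```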
This implies that the collision of the two clouds might have occurred at least $\sim$1.2 Myr ago.
In Section~\ref{sec:morph}, the dynamical ages of the ionized clumps (``A--G") are computed to be $\sim$0.1--1 Myr.
In the direction of site i17008, an O-star candidate without an H\,{\sc ii} region has also been reported.
Also, the noticeable Class~I protostars (mean age $\sim$0.44 Myr) appear to be seen toward the parent clouds of IF-A and IF-B (see Figure~\ref{fig2}d).
Thus, considering different ages concerning signposts of star formation activities, we notice that the collision timescale is old enough to influence the star formation (including massive stars) in the parent molecular clouds of both the ionized filaments. Therefore, the star formation history in our target area seems to be explained by the collision of the two filamentary clouds.
The previously reported HFSs toward i17008 and i17009 are spatially seen in one of the cloud components (i.e., around $-$18 km s$^{-1}$; see Figure~\ref{fig12}).
The presence of HFSs may favour the onset of the global non-isotropic collapse (GNIC) scenario \citep[see][for more details]{Tige+2017,Motte+2018}.
In the smoothed particle hydrodynamics simulations related to head-on collision of two clouds, \citet{balfour15} reported the presence of a pattern of filaments (e.g., hub or spokes systems) as resultant from the collision process.
The theoretical work supports the origin of a shock-compressed layer by the colliding clouds, which fragments into filaments. Then these filaments form a network like a spider's web in the case of higher relative velocity between clouds (see also magnetohydrodynamic (MHD) simulations of \citet{inoue18}).
Recently, the review article on CCC of \citet{fukui21} also stated that the onset of the collision process can produce
hub filaments with their complex morphology. Using the SEDIGISM $^{13}$CO and C$^{18}$O line data, \citet{dewangan2022new} presented results for the filamentary infrared dark cloud (IRDC) G333.73+0.37 in favour of CCC or converging flows, explaining the presence of the HFS and massive stars in the IRDC. Using the N$_{2}$H$^{+}$(1--0) observations, \citet{beltran22} explained the formation of the HFS and the origin of the massive protocluster associated with the hot molecular core in the G31.41+0.31 cloud through the CCC. In the case of the N159E-Papillon Nebula located in the Large Magellanic Cloud (distance $\sim$50 kpc), \citet{fukui19ex} provided observational results supporting the scenario of a large-scale colliding flow, which was used to explain the existence of massive stars and HFSs. These observational findings strongly support the connection of the formation of HFSs and massive stars with the CCC. Hence, our proposed collision process may also explain the existence of hubs in our target area.
Overall, the interaction of elongated molecular filaments seems to be responsible for the
birth of massive stars associated with IF-A and IF-B.
\subsection{Ionized filaments IF-A and IF-B}
\label{sec:fffx}
In recent years, a wealth of studies on dust and molecular filaments have been carried out in star-forming sites, which strongly
support their key role in the formation of stellar clusters and massive stars.
Despite the availability of numerous radio continuum surveys, so far only a very limited number of studies have been conducted to investigate elongated ionized structures with high aspect ratios (length/thickness) in massive star-forming regions (e.g., Lynds Bright Nebulae \citep{karr03}, Eridanus filaments \citep{pon14}, Cygnus~X \citep{emig22}), which can be referred to as ionized filaments.
In general, the study of massive star-forming regions is largely focused on H\,{\sc ii} regions powered by massive OB stars, which are often surrounded by MIR bubbles having different morphologies \citep[i.e., a complete or closed ring, a broken or incomplete ring, and a bipolar structure;][]{churchwell06}. In this context, one may not expect a very large elongated morphology of a single H\,{\sc ii} region excited by massive OB stars.
In the literature, the ionized nature of the Lynds Bright Nebulae was explained by ultraviolet photons leaking from the nearby star-forming region W5 \citep{karr03}. In the case of the Eridanus filaments, \citet{pon14} reported that these filaments are
non-equilibrium structures, and might have been produced when the Orion-Eridanus superbubble compressed a pre-existing gas cloud and swept up the gas into a dense ring around the outer boundary of the bubble.
In the site Cygnus~X, \citet{emig22} proposed that the energetic feedback from Cyg OB2 (i.e., ultraviolet (UV) radiation, stellar winds, and radiation pressure) may be responsible for the observed ionized filaments via swept-up ionized gas or dissipated turbulence.
Hence, the common explanation of the origin of ionized filaments is likely due to the feedback from massive stars.
Most recently, \citet{whitworth21} studied a semi-analytic model concerning ionizing feedback from an O star formed in a filament.
According to the model, the filament is generally destroyed by the ionizing radiation from the O star,
and the ionized gas disperses freely into the surroundings. We refer to this as a ``case-I" phase in this work.
In the case of a relatively wide and/or relatively dense filament, and/or a low rate of ionizing-photon emission from the O star,
the ongoing accretion inflow onto the filament will reduce
the escape of ionized gas and might trap the ionizing radiation from the O star.
This will slow the erosion of the filament, and the model also shows the formation of a relatively dense, compact, and turbulent
H\,{\sc ii} region around the ionizing stars. We refer to this as a ``case-II" phase.
The spatial association of molecular gas with the ionized structures seems to indicate that the filamentary molecular clouds are pre-existing structures (see Figures~\ref{fig2} and~\ref{fig4}).
Hence, the filamentary molecular clouds are unlikely to be formed by the feedback of young massive OB
stars associated with the ionized filaments.
However, the energetics of massive stars appear to have influenced their parent filamentary molecular clouds (see Figures~\ref{fig2} and~\ref{fig4}).
In other words, massive stars wreak havoc on the gas and dust in the parental filamentary molecular clouds.
The knowledge of three pressure components (i.e., pressure of an H\,{\sc ii} region ($P_{HII}$), radiation pressure (P$_{rad}$), and stellar wind ram pressure (P$_{wind}$)) driven by a massive OB star can be useful to explore the feedback of a massive star in its vicinity \citep[e.g.,][]{dewangan16}.
The equations of different pressure components are $P_{HII} = \mu m_{H} c_{s}^2\, \left(\sqrt{3N_\mathrm{UV}\over 4\pi\,\alpha_{B}\, D_{s}^3}\right)$; P$_{rad}$ = $L_{bol}/ 4\pi c D_{s}^2$; and P$_{wind}$ = $\dot{M}_{w} V_{w} / 4 \pi D_{s}^2$ \citep[see][for more details]{bressert12,dewangan16}.
In these equations, $N_\mathrm{UV}$ is defined earlier, c$_{s}$ is the sound speed of the photo-ionized gas \citep[i.e., 11 km s$^{-1}$;][]{bisbas09}, $\alpha_{B}$ is the radiative recombination coefficient \citep[= 2.6 $\times$ 10$^{-13}$ $\times$ (10$^{4}$ K/T$_{e}$)$^{0.7}$ cm$^{3}$ s$^{-1}$; see][]{kwan97}, $\mu$ is the mean molecular weight in the ionized gas
\citep[i.e., 0.678;][]{bisbas09}, m$_{H}$ is the hydrogen atom mass, $\dot{M}_{w}$ is the mass-loss rate,
V$_{w}$ is the wind velocity of the ionizing source,
L$_{bol}$ is the bolometric luminosity of the source, and D$_{s}$ is the projected distance from the position of a massive star where the pressure components are determined.
In the case of Wolf-Rayet stars, the value of P$_{wind}$ dominates over the values of $P_{HII}$ and P$_{rad}$ \citep[e.g.,][]{lamers99,dewangan16xs,baug19}. But, the value of $P_{HII}$ driven by massive zero age main sequence stars often exceeds their P$_{rad}$ and P$_{wind}$ components \citep[e.g.,][]{dewangan16}. Hence, we have computed only the values of $P_{HII}$ in the direction of IF-A and IF-B.
We estimate a total of $N_\mathrm{UV}$ = 1.72 $\times$ 10$^{48}$ s$^{-1}$ for the three clumps ``A--C'' in IF-A, and a total of $N_\mathrm{UV}$ = 2.65 $\times$ 10$^{48}$ s$^{-1}$ for the three clumps ``E--G'' in IF-B. Considering the elongated appearance of the filaments, we choose a value of D$_{s}$ = 3 pc for the calculations. Using $\alpha_{B}$ = 2.6 $\times$ 10$^{-13}$ cm$^{3}$ s$^{-1}$ at T$_{e}$ = 10$^{4}$~K and D$_{s}$ = 3 pc, we calculate the values of P$_{HII}$ to be $\approx$6.1 $\times$ 10$^{-11}$ and $\approx$7.6 $\times$ 10$^{-11}$ dynes\, cm$^{-2}$ for IF-A and IF-B, respectively.
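The $P_{HII}$ values quoted here follow directly from the expression given earlier; a minimal Python sketch (cgs units, using the constants stated in the text) reproduces them:

```python
import math

# Constants in cgs units, as quoted in the text
m_H = 1.6735e-24     # hydrogen atom mass [g]
mu = 0.678           # mean molecular weight of the ionized gas (Bisbas et al. 2009)
c_s = 11e5           # sound speed of the photo-ionized gas, 11 km/s in [cm/s]
alpha_B = 2.6e-13    # radiative recombination coefficient at T_e = 1e4 K [cm^3/s]
pc = 3.086e18        # parsec [cm]

def p_hii(N_uv, D_s_pc):
    """Pressure of an H II region at projected distance D_s (in pc) from
    ionizing stars emitting N_uv Lyman-continuum photons per second."""
    D_s = D_s_pc * pc
    return mu * m_H * c_s**2 * math.sqrt(3.0 * N_uv / (4.0 * math.pi * alpha_B * D_s**3))

# Totals for the ionized clumps in IF-A ("A--C") and IF-B ("E--G") at D_s = 3 pc
print(p_hii(1.72e48, 3.0))  # ~6.1e-11 dynes cm^-2 (IF-A)
print(p_hii(2.65e48, 3.0))  # ~7.6e-11 dynes cm^-2 (IF-B)
```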
Each value can be compared with the pressure exerted by the self-gravity of the surrounding molecular gas around the respective ionized filament. Based on the detection of various molecular lines (e.g., C$^{18}$O, with a critical density of $\sim$10$^{4}$ cm$^{-3}$; see Figures~\ref{fig3} and~\ref{fig5}), we assume particle densities of $\geq$ 10$^{4}$ cm$^{-3}$ for IF-A and $<$ 10$^{3}$ cm$^{-3}$ for IF-B.
From Table 7.3 in \citet{dyson97}, we find pressure values (P$_{MC}$) for typical cool molecular clouds (particle density $\sim$ 10$^{3}$ -- 10$^{4}$ cm$^{-3}$ and temperature $\sim$ 20 K) to be $\sim$2.8 $\times$ 10$^{-12}$ -- 2.8 $\times$ 10$^{-11}$ dynes cm$^{-2}$.
Concerning the parent molecular cloud of IF-A, an H\,{\sc ii} region powered by a massive star is traced
at each of three different locations; in other words, there are three H\,{\sc ii} regions toward the parent
molecular cloud of IF-A. The combined feedback of the three H\,{\sc ii} regions located in
the elongated molecular cloud has not eroded the parent cloud, and may be responsible for the elongated ionized morphology of IF-A.
This is supported by the pressure values (i.e., P$_{HII}$ $\approx$ P$_{MC}$) inferred toward IF-A.
On the basis of the C$^{18}$O emission and the dust continuum emission, we consider the parent molecular cloud of IF-A as a dense filament. Therefore, the filamentary cloud associated with IF-A resembles the ``case-II'' phase as described by \citet{whitworth21}.
In a similar fashion, the existence of IF-B may be explained by the combined feedback of massive stars
powering the ionized clumps ``D--G''.
Furthermore, the central part of the parent molecular cloud of IF-B hosts bubble-like structures (with $T_\mathrm{d}$ = 21--27~K) as seen in the {\it Herschel} temperature map, and seems to have been destroyed by the impact of the H\,{\sc ii} regions in the ionized filament IF-B. A hint of this also comes from the pressure values (i.e., P$_{HII}$ $>$ P$_{MC}$) in IF-B. After the erosion of the filamentary molecular cloud, the ionizing radiation has streamed out freely,
which could lead to the elongated ionized morphology of IF-B. This implies the applicability of the ``case-I'' phase as reported by \citet{whitworth21}.
\subsection{Velocity structure function toward IF-A}
\label{xxssec:data3}
The colliding filaments and the stellar feedback from massive stars may drive internal turbulence in the parent clouds of both ionized filaments. In this section, to examine the properties of this turbulence, we study the velocity structure function of a continuous elongated molecular structure associated with IF-A.
In this connection, we determined the second-order structure function ($S_2(L)$) as reported in \citet{hacar16};
the square root of the second-order structure function is defined as:
\begin{equation}
S_2(L)^{1/2} = \delta V = \left<|V(r)-V(r+L)|^2\right>^{1/2} = \left<|\delta u_{l}|^2\right>^{1/2}
\label{xxshh1}
\end{equation}
Here, $\delta u_{l}$ = V(r) $-$ V(r+L) is the velocity difference between two positions separated by a lag $L$.
Following the earlier work of \citet{heyer04}, the velocity structure function defined in this way is a useful tool for studying the properties of turbulence in molecular clouds, and can be compared directly with Larson's size-linewidth relation \citep{larson81}.
In this work, the structure functions are derived from the $^{13}$CO and N$_{2}$H$^{+}$ line data.
The lines with different critical densities give an opportunity to study gas dynamics in less dense (extended) and more dense (compact) parts of the filament. Line velocities are determined from Gaussian fitting to the profiles.
Using the $^{13}$CO line data, structure functions are constructed for two different velocity ranges, [$-$19, $-$17] km s$^{-1}$ and [$-$16, $-$14] km s$^{-1}$, as presented in Figures~\ref{fig13}a and~\ref{fig13}b, respectively. In the calculations, only positions with $^{13}$CO integrated intensities I($^{13}$CO) $>$ 3 K km s$^{-1}$ are considered, with a lag of 20$''$. A linewidth range of [1, 3] km s$^{-1}$ is adopted in order to exclude spectra with overlapping components from the analysis.
The N$_{2}$H$^{+}$ line is known to trace dense gas in star-forming regions.
Therefore, to study structure functions toward the dense clumps hosting IRAS 17008-4040/G345.504 and IRAS 17009-4042/G345.487,
the N$_{2}$H$^{+}$ line data are employed; Figures~\ref{fig13}c and~\ref{fig13}d present the resulting structure functions for the areas hosting IRAS 17008-4040/G345.504 and IRAS 17009-4042/G345.487, respectively.
The calculations use a velocity range of [$-$19, $-$17] km s$^{-1}$, a lag of 10$''$, and a range of linewidth of [1, 3] km s$^{-1}$.
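The computation of $S_2(L)^{1/2}$ described above can be sketched in Python. This is a minimal illustration on a synthetic velocity map with pixel lags; the actual analysis uses angular lags of 10$''$--20$''$ and Gaussian-fitted line centroids:

```python
import numpy as np

def structure_function(vmap, lags):
    """Square root of the second-order velocity structure function,
    S_2(L)**0.5 = <|V(r) - V(r+L)|^2>**0.5, from a 2-D map of line
    centroid velocities; NaNs (failed Gaussian fits) are skipped."""
    result = []
    for lag in lags:
        sq_diffs = []
        # velocity differences at the given pixel lag along both axes
        for dv in (vmap[lag:, :] - vmap[:-lag, :],
                   vmap[:, lag:] - vmap[:, :-lag]):
            sq_diffs.append(dv[np.isfinite(dv)] ** 2)
        result.append(np.sqrt(np.concatenate(sq_diffs).mean()))
    return np.array(result)

# Synthetic test field: a linear velocity gradient plus Gaussian noise
rng = np.random.default_rng(0)
vmap = 0.1 * np.arange(64)[:, None] + rng.normal(0.0, 0.2, (64, 64))
dV = structure_function(vmap, lags=[1, 2, 4, 8])
print(dV)  # increases with lag for a gradient-dominated field
```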
In general, concerning the study of turbulence in molecular clouds, Larson's one-dimensional velocity dispersion--size relationship, or Larson's scaling relation \citep[i.e., $\delta V$ = 0.63 $\times$ L$^{0.38}$;][]{larson81}, can be examined.
In this relation, one expects a dominant turbulent flow rather than kinematically coupled, large-scale, ordered motion in molecular clouds. Using the $^{13}$CO line data, \citet{hacar16} studied the velocity structure function of the Musca cloud (i.e., $\delta V$ = 0.38 $\times$ L$^{0.58}$),
which was found to differ from Larson's scaling relation. This deviation was attributed to the presence of a sonic-like structure in the cloud, which is decoupled from the dominant turbulent velocity field.
In Figures~\ref{fig13}a--\ref{fig13}d, we have also shown the Larson's scaling (i.e., $\delta V$ = 0.63 $\times$ L$^{0.38}$), the relation concerning the star-forming site S242 \citep[i.e., $\delta V$ = 0.42 $\times$ L$^{0.48}$;][]{dewangan19x},
and a relationship of the Musca cloud (i.e., $\delta V$ = 0.38 $\times$ L$^{0.58}$) by dashed blue line, solid black line, and dashed red line, respectively.
Structure functions of the selected regions show nearly power-law dependencies for lower L ($\leq$ 2 pc using $^{13}$CO and $\leq$ 0.5--1 pc using N$_{2}$H$^{+}$; see the X-axis in Figure~\ref{fig13}).
For higher L, the structure functions behave mostly in an irregular manner. In Figures~\ref{fig13}a--\ref{fig13}d, all the structure functions, like those for S242 and Musca, lie below Larson's relation. The power-law dependencies of the structure functions using the $^{13}$CO [$-$16, $-$14] km s$^{-1}$ data (see Figure~\ref{fig13}b) and for the region G345.487 using the N$_{2}$H$^{+}$ line data (see Figure~\ref{fig13}d) turn out to be close to that found for S242, which has a power-law index of about 0.48.
A power-law index of the structure function derived from the $^{13}$CO [$-$19, $-$17] km s$^{-1}$ data is about 0.6.
These indices are higher than that of Larson's relation, and are predicted for supersonic incompressible turbulence with intermittency \citep[e.g.,][]{she94,boldyrev02}. To compare our estimated indices with the predictions of these papers, one has to multiply our indices by 2, since we used the square root of the structure function. Furthermore, using the $^{13}$CO line data, we also compared the structure functions derived for the inner subregion containing the filament against its surrounding (outer) gas.
For the $^{13}$CO [$-$16, $-$14] km s$^{-1}$ range, no definite differences in the power-law indices are found.
However, for the $^{13}$CO [$-$19, $-$17] km s$^{-1}$ range, the index of the power-law dependence in the surrounding gas is higher than in the filament (see Figures~\ref{fig13}e and~\ref{fig13}f).
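The power-law indices quoted here can be obtained from a simple least-squares fit in log-log space; a minimal sketch, checked against Larson's relation itself:

```python
import numpy as np

def fit_power_law(L, dV):
    """Fit dV = a * L**b by least squares in log-log space; returns (a, b).
    b is the index of sqrt(S_2); multiply by 2 to compare with theoretical
    predictions made for S_2 itself."""
    b, log_a = np.polyfit(np.log(L), np.log(dV), 1)
    return np.exp(log_a), b

# Sanity check against Larson's relation, dV = 0.63 * L**0.38
L = np.logspace(-1, 1, 20)       # lags in pc
a, b = fit_power_law(L, 0.63 * L**0.38)
print(round(a, 2), round(b, 2))  # -> 0.63 0.38
```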
\citet{chira19} studied the evolution of simulated turbulent clouds under the influence of various physical processes and explored how velocity structure functions change with time. Their results show a general behavior of the structure function--L dependencies similar to ours (Figures~\ref{fig13}a--\ref{fig13}d) at early stages of the cloud evolution. The power-law indices of their structure functions
without density weighting -- which is our case -- appear to be higher than the theoretical predictions, in accordance with our results. They also found that the structure function's power-law indices decrease with time, due to the growing influence of systemic velocities in self-gravitating gas on small scales.
In the direction of the ionized filament IF-A, the different spatial scales over which turbulence dominates, as observed from the $^{13}$CO and N$_{2}$H$^{+}$ structure functions, are most likely due to the difference in the critical densities of these molecular lines.
The difference in the power-law indices of the structure function dependencies calculated using the $^{13}$CO [$-$19, $-$17] km s$^{-1}$ data for the region of the filament and for its surrounding gas could reflect the difference in turbulent properties connected with the
feedback from massive stars in the filament.
We conclude that the observed power-law dependencies of the structure functions derived from molecular line data for the ionized filament IF-A most probably reflect the turbulent properties of the gas. To confirm that the properties of turbulence in massive star-forming cores differ from those of their surroundings, new observations with high sensitivity and high angular resolution, using lines with different critical densities, are needed. With such line data, the study of velocity structure functions, and their comparison with theoretical predictions and the results of model simulations, seems to be a useful tool for this purpose.
\section{Summary and Conclusions}
\label{sec:conc}
To observationally investigate the embedded morphology and ongoing physical mechanisms around {\it l} = 345$\degr$.5, we have carried out an analysis of multi-wavelength data of an area of $\sim$74\rlap.{$'$}6 $\times$ 55$'$.
The radio continuum map at 843 MHz reveals two distinct ionized filaments (i.e., IF-A (extent $\sim$8\rlap.{$'$}5) and IF-B (extent $\sim$22\rlap.{$'$}65)). Ionized clumps powered by massive OB stars are identified toward both the ionized filaments.
The $^{13}$CO(1--0), $^{13}$CO(2--1), and C$^{18}$O(2--1) emissions are examined in a velocity range of [$-$21, $-$10] km s$^{-1}$
to study the parent molecular clouds of IF-A and IF-B, which have filamentary appearances. IF-A and IF-B seem to be situated at distances of 2.4 kpc and 1.4 kpc, respectively. We have investigated two cloud components, around $-$18 and $-$15 km s$^{-1}$, toward the filamentary parent clouds of IF-A and IF-B, which are connected in velocity space. The filamentary cloud components also spatially overlap with each other along the major axis, which may be treated as filamentary twisting/coupling.
Massive stars are evident toward the common zones of the cloud components, where noticeable Class~I protostars also seem to be present.
Based on our observational outcomes, we suggest the possibility of the collision of two filamentary clouds around 1.2 Myr ago.
The origin of IF-A and IF-B may be explained by the combined feedback of massive stars.
The continuous elongated structure of the parent cloud of IF-A is identified in the molecular maps, and power-law dependencies of its structure functions are found, reflecting the turbulent properties of the gas. A difference is found between the power-law indices of the structure functions for the gas in the filament and for its surrounding gas. It could be connected with the influence of massive stars on the filament, which may affect the turbulent properties.
The central part of the parent cloud of IF-B is broken where this ionized filament is detected. Considering the observed ionized and molecular morphologies, our results seem to support the findings of the most recent model of \citet{whitworth21}, which concerns the escape and trapping of the ionizing radiation from an O star formed in a filament.
\section*{Acknowledgments}
We are grateful to the anonymous reviewer for the constructive comments and suggestions.
The research work at Physical Research Laboratory is funded by the Department of Space, Government of India. L.E.P. acknowledges the support of the IAP State Program 0030-2021-0005. This work is based [in part] on observations made with the {\it Spitzer} Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX) under programmes 092.F-9315 and 193.C-0584. APEX is a collaboration among the Max-Planck-Institut f\"{u}r Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory. The processed data products are available from the SEDIGISM survey database located at https://sedigism.mpifr-bonn.mpg.de/index.html, which was constructed by James Urquhart and hosted by the Max Planck Institute for Radio Astronomy. A part of this work has made use of data from the European Space Agency (ESA) mission Gaia, processed by the Gaia Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement.
\subsection*{Data availability}
Distances to stars in the GAIA EDR3 underlying this article are available from the publicly accessible website\footnote[1]{https://cdsarc.cds.unistra.fr/viz-bin/cat/I/352}.
The {\it Herschel}, WISE, and {\it Spitzer} data underlying this article are available from the publicly accessible NASA/IPAC infrared science archive\footnote[2]{https://irsa.ipac.caltech.edu/frontpage/}.
The {\it Herschel} temperature map underlying this article is available from the publicly accessible website\footnote[3]{http://www.astro.cardiff.ac.uk/research/ViaLactea/}.
The ATLASGAL 870 $\mu$m continuum data underlying this article are available from the publicly accessible ATLASGAL database server\footnote[4]{https://www3.mpifr-bonn.mpg.de/div/atlasgal/}.
The SEDIGISM molecular line data underlying this article are available from the publicly accessible website\footnote[5]{https://sedigism.mpifr-bonn.mpg.de/cgi-bin-seg/SEDIGISM\_DATABASE.cgi}.
The Mopra molecular line data underlying this article are available from the publicly accessible website\footnote[6]{https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/LH3BDN}.
The MALT90 molecular line data underlying this article are available from the publicly accessible website\footnote[7]{http://malt90.bu.edu/}.
The SUMSS 843 MHz continuum data underlying this article are available from the publicly accessible server\footnote[8]{https://skyview.gsfc.nasa.gov/current/cgi/query.pl}.
\bibliographystyle{mnras}
\bibliography{reference} %
|
Title:
Pulsar Double-lensing Sheds Light on the Origin of Extreme Scattering Events |
Abstract: In extreme scattering events, the brightness of a compact radio source drops
significantly, as light is refracted out of the line of sight by foreground
plasma lenses. Despite recent efforts, the nature of these lenses has remained
a puzzle, because any roughly round lens would be so highly overpressurized
relative to the interstellar medium that it could only exist for about a year.
This, combined with a lack of constraints on distances and velocities, has led
to a plethora of theoretical models. We present observations of a dramatic
double-lensing event in pulsar PSR~B0834+06 and use a novel phase-retrieval
technique to show that the data can be reproduced remarkably well with a
two-screen model: one screen with many small lenses and another with a single,
strong one. We further show that the latter lens is so strong that it would
inevitably cause extreme scattering events. Our observations show that the lens
moves slowly and is highly elongated on the sky. If similarly elongated along
the line of sight, as would arise naturally from a sheet of plasma viewed
nearly edge-on, no large over-pressure is required and hence the lens could be
long-lived. Our new technique opens up the possibility of probing interstellar
plasma structures in detail, leading to understanding crucial for
high-precision pulsar timing and the subsequent detection of gravitational
waves.
| https://export.arxiv.org/pdf/2208.06884 | command.
\newcommand\aastex{AAS\TeX}
\newcommand\latex{La\TeX}
\def\mhvk#1{{\textcolor{magenta}{MHvK: #1}}}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{xcolor}
\shorttitle{Pulsar Double-lensing Sheds Light on the Origin of Extreme Scattering Events}
\shortauthors{Zhu, H., ET AL.}
\graphicspath{{./}{figures/}}
\begin{document}
\title{Pulsar Double-lensing Sheds Light on the Origin of Extreme Scattering Events}
\correspondingauthor{Hengrui Zhu}
\email{[email protected]}
\author[0000-0001-9027-4184]{Hengrui Zhu}
\affiliation{Department of Physics \& Astronomy, Oberlin College, Oberlin, OH 44074}
\affiliation{Department of Physics, Princeton University, Jadwin Hall Washington Road, NJ 08544, USA}
\author[0000-0001-7888-3470]{Daniel Baker}
\affiliation{Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 Saint George Street, Toronto, ON M5S 3H8, Canada}
\author[0000-0003-2155-9578]{Ue-Li Pen}
\affiliation{Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 Saint George Street, Toronto, ON M5S 3H8, Canada}
\affiliation{Institute of Astronomy and Astrophysics, Academia Sinica, Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan}
\affiliation{Canadian Institute for Advanced Research, 180 Dundas St West, Toronto, ON M5G 1Z8, Canada}
\affiliation{Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St George Street, Toronto, ON M5S 3H4, Canada}
\affiliation{Perimeter Institute of Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5, Canada}
\author[0000-0002-1797-3277]{Dan R. Stinebring}
\affiliation{Department of Physics \& Astronomy, Oberlin College, Oberlin, OH 44074}
\author[0000-0002-5830-8505]{Marten H. van Kerkwijk}
\affiliation{Department of Astronomy and Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada}
\keywords{pulsars: individual (B0834+06) --- ISM: individual objects (Extreme Scattering Events)}
\section{Introduction}\label{sec:intro}
Extreme scattering events (ESEs) --- propagation-produced variations in quasar flux density --- have been a puzzle since their discovery in 1987 \citep{fdjh87}.
ESEs manifest as frequency-dependent changes in the observed flux of quasars, usually a sharp spike followed by a dip, for a period of several weeks to months.
It is now widely agreed that ESEs cannot be explained by intrinsic variations of the source \citep{fdj+94}.
Instead, refraction effects from a dense plasma structure in the ISM, hereafter referred to as a plasma lens, with a length scale of a few astronomical units can explain both the observed flux curve as well as the duration of such events \citep{cfl98}.
Yet one difficulty remains: the required electron density and temperature for a roughly round plasma lens imply an over-pressure relative to the diffuse ISM by a factor of $10^3$ \citep{ww98,gs06}.
Such a high pressure indicates that the plasma lens would evaporate on a time scale of about a year.
This, combined with a lack of constraints on distances and velocities, has led to a plethora of theoretical models \citep{dpg18}.
It was realized early on that pulsars might be powerful probes of these lenses: pulsars scan the sky quickly and, because of their compact sizes, scintillate due to multi-path scattering in the interstellar medium, yielding many new observables \citep{grb+87,cbl+93,cw86,rlg97}.
In this paper, we present a multi-epoch observation of a double-lensing event in pulsar PSR~B0834+06.
We then use a novel phase-retrieval technique to show that the data can be reproduced remarkably well with a two-screen scattering model: one screen with many small lenses and another with a single, strong one \citep{lpm+16}.
Then, we measure the magnification, size, and velocity of the latter lens, and show that it would inevitably cause extreme scattering events if it passed by the line-of-sight to a quasar.
The paper is organized as follows: in Section~\ref{sec:obs} we summarize our observations and review the theory of pulsar scintillation. We describe the phase-retrieval technique we adopted in Section~\ref{sec:wf}. The double-lensing model and its agreement with the data are presented in Section~\ref{sec:model}. In Section~\ref{sec:interpret}, we extract parameters of the lens and demonstrate that it is capable of causing extreme scattering events. Lastly, we discuss our results and conclude in Section~\ref{sec:conclude}.
\section{Observation and pulsar scintillation}\label{sec:obs}
From October to December 2005, we took seven weekly observations of pulsar B0834+06 at 318--319~MHz with the 305-m William E. Gordon Telescope at the Arecibo Observatory.
At each session, we took 45 minutes of data using the 327 MHz receiver with a bandwidth of 0.78 MHz centered at 318.5 MHz.
We created power spectra with 2048 frequency channels, summing the two circular polarizations.
The spectra were then integrated according to the pulsar rotational phase with an integration time of 10 seconds (i.e., averaging over roughly 8 pulses), and the resulting power as a function of frequency, time (10-second chunks), and pulsar rotational phase was written out.
We then created dynamic spectra -- power as a function of frequency and time -- by subtracting the background, off-pulse spectrum from the integrated on-pulse signal.
We further divided each time bin of the dynamic spectra by its mean over frequency to mitigate pulse-to-pulse variability of the pulsar.
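The two reduction steps just described (off-pulse subtraction, then per-time-bin normalization) can be sketched as follows; the array contents are hypothetical stand-ins for the actual data:

```python
import numpy as np

def dynamic_spectrum(on_pulse, off_pulse):
    """Build a dynamic spectrum from integrated on-pulse and off-pulse
    power, both of shape (n_time, n_freq). Each time bin is divided by
    its mean over frequency to suppress pulse-to-pulse variability."""
    ds = on_pulse - off_pulse               # remove the background spectrum
    return ds / ds.mean(axis=1, keepdims=True)

# Hypothetical data: 45 min in 10-s bins, 2048 frequency channels
rng = np.random.default_rng(1)
on = 5.0 + rng.random((270, 2048))
off = rng.random((270, 2048))
ds = dynamic_spectrum(on, off)
print(np.allclose(ds.mean(axis=1), 1.0))    # -> True: unit mean per time bin
```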
The spectra show the rich scintillation structures characteristic of scattering in the interstellar medium (Fig.~\ref{fig:dspec}).
To highlight this structure we created conjugate spectra, the Fourier transforms of dynamic spectra, which are functions of Doppler frequency and differential delay, the Fourier conjugates of time and frequency, respectively \citep{crsc06}.
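Computationally, the conjugate spectrum is a two-dimensional Fourier transform of the dynamic spectrum; a minimal sketch on a hypothetical input (subtracting the mean suppresses the zero-frequency spike):

```python
import numpy as np

def conjugate_spectrum(ds):
    """2-D FFT of a dynamic spectrum of shape (time, frequency), shifted
    so that zero Doppler frequency (conjugate of time) and zero
    differential delay (conjugate of frequency) sit at the center."""
    return np.fft.fftshift(np.fft.fft2(ds - ds.mean()))

ds = np.random.default_rng(3).random((270, 2048))  # hypothetical dynamic spectrum
C = conjugate_spectrum(ds)
print(C.shape)  # (270, 2048)
```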
We show the modulus of four of our conjugate spectra in the top row of Figure~\ref{fig:sec_wf}.
The power in each is concentrated in a broad parabola, called a scintillation arc, which consists of upside-down parabolas called inverted arclets.
At a delay of about 1 ms, there is an island of inverted arclets in the conjugate spectrum that migrates consistently down and to the right during the 7 weeks.
This ``1~ms feature'' was first reported by \cite{bmg+10} in a single-epoch VLBI observation made in the middle of our set of observations, and follow-up analysis by \cite{lpm+16} suggested it might arise from double lensing of the pulsar.
As will become clear below, we find that the conjugate spectra can be modelled in detail using a double-lensing interpretation, and that the feature arises from a surprisingly strong lens.
The main scintillation arc can be understood from considering pairs of scattered images of the pulsar.
In terms of their (complex) magnifications $\mu_{j,k}$ and angular offsets $\boldsymbol{\theta}_{j,k}$ from the line of sight, the relative Doppler frequency $f_D$, geometric delay $\tau$, and brightness $I$ of a given pair $j,k$ are given by,
\begin{align}
\tau &= f_{\nu} = \frac{d_{\rm eff}}{2c} (|\boldsymbol{\theta}_j|^2-|\boldsymbol{\theta}_k|^2),\label{eq:tau_def}\\
f_{D} &= f_{t} = \frac{1}{\lambda}(\boldsymbol{\theta}_j-\boldsymbol{\theta}_k) \cdot \mathbf{v}_{\rm eff}~\label{eq:fd_def},\\
I &= |C(\tau, f_D)| = |\mu_j \mu_k^*|,\label{eq:ijk_def}
\end{align}\noindent
where $d_{\rm eff}$ and $\mathbf{v}_{\text{eff}}$ are the effective distance and velocity of the pulsar-screen-Earth system, $C$ is the conjugate spectrum, $c$ is the speed of light, and $\lambda$ is the observation wavelength (see Appendix~\ref{app:theory} for details).
Generally, the magnifications are largest near the line of sight, and thus the brightest signals arise when one member of the pair has $|\boldsymbol{\theta}|\simeq0$.
Considering this, one infers $\tau\propto f_D^2$, thus reproducing the main parabola, as long as $d_{\rm eff}$ and $\mathbf{v}_{\rm eff}$ are roughly constant, which would happen in a thin-screen scattering geometry with scattering localized along the line of sight \citep{smc+01,crsc06}.
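The delay and Doppler-frequency relations above, with one image of each pair at the line of sight, give the main parabola directly; a minimal numerical check, using hypothetical (not measured) values for the effective distance and velocity:

```python
import numpy as np

# Hypothetical geometry: effective distance and velocity of the
# pulsar-screen-Earth system (illustrative values only)
c = 2.998e8                  # speed of light [m/s]
lam = c / 318.5e6            # observing wavelength at 318.5 MHz [m]
d_eff = 3.086e19             # 1 kpc [m]
v_eff = 300e3                # [m/s], along the line of images

# 1-D image offsets theta_j [rad], paired with theta_k = 0 (line of sight)
theta = np.linspace(-5, 5, 11) * 4.85e-9   # +-5 mas in radians
tau = d_eff / (2 * c) * theta**2           # geometric delay [s]
f_D = theta * v_eff / lam                  # differential Doppler frequency [Hz]

# The points fall on the main parabola tau = eta * f_D**2, with arc curvature
eta = d_eff * lam**2 / (2 * c * v_eff**2)
print(np.allclose(tau, eta * f_D**2))  # -> True
```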
The inverted arclets arise from mutual interference between the scattered images, and their sharpness is a signature of highly anisotropic scattering, where the scattered images of the pulsar lie along a (nearly) straight line \citep{wmsz04}.
As in previous observations, we find that the arclets move along the scintillation arc at a constant speed for long periods of time \citep{hsa+05,msm+20}.
This implies that the scattered images giving rise to the arclets must arise from a large group of parallel and elongated structures in the scattering screen, e.g., turbulence elongated along a given direction \citep{gs95}, waves on a plasma sheet seen in projection as folds \citep{rbc87,pl14}, or magnetic noodles of plasma stabilized by reconnection \citep{gwi19}.
\section{Phase Retrieval}\label{sec:wf}
Scattering by the interstellar medium can be well-described as a linear filter.
Hence, the observed signal is the convolution between the impulse response function of the interstellar medium and the intrinsic pulsar signal \citep{wksv08,ws05m,wdv13m,pmdb14m}.
However, because pulsar emission is like amplitude-modulated noise, for slow pulsars whose pulse width is longer than the scattering time, the observed signal contains no useful phase information.
Instead, via the dynamic spectrum one only has a measurement of the squared modulus of the impulse response function.
In general, retrieving the phases from just the amplitudes is an ill-posed problem.
However, when the scattering is highly anisotropic, phase retrieval becomes possible.
The method we use is described in detail in \citet{bvm+21}, but, briefly, it relies on two realizations.
First, for highly anisotropic scattering, the vector offsets $\boldsymbol{\theta}_{j,k}$ in Equations~\ref{eq:tau_def} and~\ref{eq:fd_def} become effectively one-dimensional and for any given $\eta$ in $\tau=\eta f_D^2$ it becomes possible to remap the conjugate spectrum $C(f_D, \tau)$ to $C(\theta_j, \theta_k)$.
For the correct quadratic constant of proportionality $\eta$, which is also the curvature of the scintillation arc, one then finds that the main arc and the arclets are aligned with the cardinal directions in $\theta-\theta$ space \citep{swm+20m}.
Second, if aligned, $C(\theta_j,\theta_k)$ can be factorized using eigenvector decomposition, and the largest eigenvector will be an estimate of the impulse response function \citep{bvm+21}.
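The second step can be made concrete: in $\theta$--$\theta$ coordinates, the conjugate spectrum of a single 1-D screen is an outer product, so its leading eigenvector recovers the impulse response up to an overall phase. A minimal sketch with hypothetical complex magnifications:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical impulse response of a 1-D screen: complex magnifications
# of 64 images along the line of images
w = rng.normal(size=64) + 1j * rng.normal(size=64)

# In theta-theta coordinates the conjugate spectrum is an outer product,
# C[j, k] = w[j] * conj(w[k]): a rank-1 Hermitian matrix
C = np.outer(w, w.conj())

# The leading eigenvector, scaled by the square root of its eigenvalue,
# recovers w up to an overall phase
vals, vecs = np.linalg.eigh(C)          # eigenvalues in ascending order
w_est = vecs[:, -1] * np.sqrt(vals[-1])

# Remove the unobservable overall phase before comparing
k = np.argmax(np.abs(w))
w_est *= w[k] / w_est[k]
print(np.allclose(w_est, w))  # -> True
```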
In our determinations of the wavefields from each of our dynamic spectra, we follow the procedure of \citet{bvm+21} in detail.
In particular, we reduce the frequency resolution of the dynamic spectra in order to avoid including information from the 1~ms feature (for which the assumption of a single one-dimensional screen does not hold).
We then map the resulting impulse response function estimates back to estimates of the dynamic wavefield, interpolating to the original resolution, and replace the estimated amplitudes with those given by the observed dynamic spectrum.
Finally, Fourier transforms of the estimated dynamic wavefields yield the wavefields in the frequency domain, presented in the second row of Fig~\ref{fig:sec_wf}.
We see in Fig.~\ref{fig:sec_wf} that all arclets are reduced to single points in the wavefields, with most lying along the parabola from the main screen, and a few in the 1~ms feature. The power in the 1~ms feature concentrates along a structure that resembles a portion of a parabola, offset from the origin. We will show that such a structure is exactly as predicted by the double-lensing geometry.
\section{Double-lensing model}\label{sec:model}
With the wavefield, it becomes possible to test the double lensing model directly.
We construct a simple but detailed model, illustrated in Fig.~\ref{fig:model}, in which we represent individual lenses as linear structures that can bend light only perpendicular to their extent.
We assume two lensing planes, with velocities, distances, and orientations of the linear lenses taken from \citet{lpm+16} (see Table~\ref{tab:1}).
Next, for each epoch we use the local maxima along the scintillation arc in the wavefields to determine the location of the linear lenses on the main lensing screen.
Given those locations, the doubly refracted rays can then be solved and the corresponding relative Doppler frequency and delay calculated (see Appendix~\ref{app:double_lens}).
In Fig.~\ref{fig:sec_wf}, we show with faint blue dots the points on the main arc that we use to define linear lenses on the main scattering screen, and with red dots the corresponding double-lensed rays inferred from our model (blue and red rays in the model in Fig.~\ref{fig:model}, resp.).
As can be seen in Fig.~\ref{fig:sec_wf}, the model perfectly reproduces the observed 1~ms feature, including its evolution over a period of 50 days, indicating that the 1~ms feature indeed arises from double refraction by highly anisotropic lenses.
As noted above, the images making up the 1~ms feature form part of a parabola offset from the origin.
That the parabola is incomplete implies that not all lenses in the main screen participate in the double refraction, confirming the conclusion of \cite{lpm+16} that the lens that causes the 1~ms feature terminates.
The required geometry is illustrated in the right panel of Fig.~\ref{fig:model}: the two orange lenses at the bottom do not contribute double-lensed rays (red, dashed lines) because of the termination of the 1~ms (red) lens.
As time progresses, not only does the overall delay decrease as the pulsar moves towards the 1~ms lens, but more and more of the bottom of the parabola also appears.
This entire evolution of the 1~ms feature is captured by our simple double-lensing model, as shown in the bottom row of Fig.~\ref{fig:sec_wf}.
\begin{table}
\begin{center}
\begin{tabular}{ll}
\hline
\hline
\textbf{Parameter} & \textbf{Value} \\
\hline
$d_{\rm psr}$\dotfill & $620 \pm 60$ pc \\
$\mu_\alpha$\dotfill & $2.16 \pm 0.19$ mas/yr\\
$\mu_\delta$\dotfill & $51.64 \pm 0.13$ mas/yr\\[.8ex]
$d_1$\dotfill & $389 \pm 5$ pc \\
$d_2$\dotfill & $415 \pm 11$ pc \\
$v_{\rm 1 \parallel}$\dotfill & $-23 \pm 1$ {\rm km/s} \\
$v_{\rm 2 \parallel}$\dotfill & $-3 \pm 3$ {\rm km/s} \\
$\alpha_1$\dotfill & $154.8 \pm 1$ deg\\
$\alpha_2$\dotfill & $136.1 \pm 1$ deg\\
\hline
\end{tabular}
\end{center}
\caption{
Distances, velocities, and orientations from \citet{lpm+16}.
The pulsar distance ($d_{\rm psr}$) and proper motion towards East and North ($\mu_\alpha$ and $\mu_\delta$) are measured from VLBI observations.
The distances to the main scattering screen ($d_1$) and that of the 1~ms lens ($d_2$) are calculated assuming the pulsar velocity and distance, and their uncertainties reflect only the uncertainty in the relative distance and velocity.
For the velocities, we can only constrain the component parallel to the images (i.e., along the direction normal to the linear lenses).
The position angles $\alpha_1$ and $\alpha_2$ are between the lines of images and North (through East).
The central value of $\alpha_2$ was adjusted slightly (within the uncertainty) from that given in \citet{lpm+16} to best reproduce the wavefields presented in Fig.~\ref{fig:sec_wf}.
}
\label{tab:1}
\end{table}
\section{Properties of the 1 ms Lens}
\subsection{Velocity and Aspect Ratio}
From the VLBI observations of \cite{bmg+10}, \cite{lpm+16} inferred a low velocity of both lenses.
We can confirm this from our wavefield spectra, as those spectra are essentially holographic images of the pulsar in delay-Doppler-shift space.
This means that for a single screen, given knowledge of the velocities and distances of the pulsar, screen, and Earth, the wavefield can be mapped to the lensed image of the pulsar on the sky, with only a reflection ambiguity around the direction of motion.
One generally cannot map a second screen using the same procedure, but in our case the two scattering planes are known to be much closer to each other than they are to Earth or the pulsar (see Table~\ref{tab:1}), and hence the holographic mapping still yields a reasonable approximation also for the 1~ms feature.
The construction of the holographic images makes use of the fact that for the wavefield one has (see App.~\ref{app:theory}),
\begin{eqnarray}
\tau&=&\frac{d_{\rm eff}}{2c}|\boldsymbol\theta|^2,\\
f_{D}&=&\frac{\boldsymbol\theta\cdot\boldsymbol{v}_{\rm eff}}{\lambda}.
\end{eqnarray}
Thus, geometrically, the Doppler frequency $f_{D}$ constrains the scattered image to a line on the sky perpendicular to the effective velocity, whereas the differential delay $\tau$ constrains it to a circle centered on the line-of-sight image.
Since the line intersects with the circle twice, at two points symmetric around the direction of the effective velocity, the mapping has a two-fold ambiguity.
In our dataset, this ambiguity is resolved by the previous VLBI observation of \cite{bmg+10}\footnote{A sign error in the analysis of \cite{bmg+10} caused their VLBI images to be flipped along both axes.
As a result, the 1~ms feature was mapped to the South of the pulsar, which is inconsistent with its delay decreasing with time.
This sign error is important here, but does not influence the discussion in \cite{bmg+10}.}.
To produce the scattered images of the pulsar, we first rebinned our wavefield power spectra by a factor of 4 in delay to increase the signal-to-noise ratio.
Then, for each delay, we subtracted the noise floor and calculated average fluxes and Doppler frequencies inside masked regions around the main parabola (on both positive and negative sides) and around the 1~ms feature, and then converted delay and Doppler frequency to angles along and perpendicular to the direction of $v_{\rm eff}$ using the above equations.
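As an illustration of this mapping, the sketch below converts a (delay, Doppler-shift) pair into an on-sky position using the two relations above; the effective distance, effective speed, and observing frequency are round-number stand-ins of the order of the values in Table~\ref{tab:1}, not fitted quantities.

```python
import numpy as np

# Constants (SI)
c = 2.998e8                  # speed of light, m/s
pc = 3.0857e16               # parsec, m
mas = np.pi / 180 / 3600e3   # rad per milliarcsecond

# Illustrative geometry (same order as Table 1, not the fitted values)
d_eff = 1.04e3 * pc          # effective distance
v_eff = 305e3                # |v_eff| in m/s (hypothetical)
lam = c / 318e6              # wavelength at 318 MHz

def delay_doppler_to_theta(tau, f_D):
    """Map (tau, f_D) to a sky angle, up to a reflection ambiguity.

    The Doppler shift fixes the component along v_eff; the delay fixes
    the total angular offset, leaving a +/- ambiguity perpendicular to it.
    """
    theta_par = f_D * lam / v_eff
    theta_tot2 = 2 * c * tau / d_eff
    theta_perp = np.sqrt(np.maximum(theta_tot2 - theta_par**2, 0.0))
    return theta_par / mas, theta_perp / mas  # both in mas

# A 1 ms delay at a modest Doppler shift maps to tens of mas:
print(delay_doppler_to_theta(1e-3, 5e-3))  # ~ (3.2, 28) mas
```

With these placeholder values, a 1~ms delay indeed corresponds to an image a few tens of mas from the line of sight, comparable to the offsets discussed below.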
We present the resulting approximate pulsar images of the first and last epochs in Fig.~\ref{fig:1ms_motion}.
The scattered images mostly lie along a straight line, but the 1~ms feature is mapped onto a line segment in a different direction.
While both structures are linear, the mechanisms for producing them differ.
The main linear group of images arises from singly scattered rays by a parallel set of linear lenses.
Each linear lens creates an image at a location closest to the line of sight, so the images move with the pulsar, and the lenses are extended perpendicular to this linear group of images (see Fig.~\ref{fig:model}).
On the other hand, the 1~ms feature is created by double lensing, in which rays are partially bent by the 1~ms lens and then further bent towards the observer by lenses on the main screen.
As seen from Earth, these images can move along the 1~ms lens as the pulsar moves, but not perpendicular to it.
Without the main scattering screen, the 1~ms lens would only be capable of producing a single scattered image (an image that we do not see because the 1~ms lens terminates, although it should have appeared shortly after our campaign, when the pulsar crossed the termination point).
Since the multiple images we see from the double lensing trace out part of the 1~ms lens, they reveal directly that it has a high aspect ratio.
The lack of movement between first and last epoch also shows that the lens velocity is small.
To quantify this, we fitted straight lines to the images for both epochs.
From the perpendicular shift between the lines, we infer an upper bound of $3{\rm\,km \,s^{-1}}$ on the velocity component parallel to its normal, consistent with inferences of \cite{lpm+16} from the VLBI results (see Table~\ref{tab:1}).
\subsection{Width and Magnification of the Lens}\label{sec:width}
Another important parameter for the 1~ms lens is its width, which we can estimate from the magnification~$\mu$.
In coordinates centred on the lens, given a true angular position $\beta$ of a source and an apparent position $\theta$ of its refracted image, by conservation of surface brightness, the magnification is given by \citep{sp18,cfl98},
\begin{equation}
\mu = \frac{\mathrm{d}\theta}{\mathrm{d}\beta}.
\end{equation}
For a lens far away from the source and $\mu$ not too large, one can approximate $\mu\sim\theta/\beta$, estimate the width $\omega$ of the lens as $\omega\sim\theta\sim\mu\beta$, and use that $\theta\ll\beta$, so that the observed offset $\delta\beta = \beta-\theta\sim\beta$ and thus $\omega\sim\mu\delta\beta$.
The magnification of a doubly refracted image equals the product of the magnifications by the two lenses.
To measure the magnification for the 1~ms lens, we measured the fractional flux of the 1~ms feature and a region on the main scintillation arc associated with the lenses on the main screen that participated in the double refraction (see Appendix~\ref{app:flux}).
Averaging the values for the different epochs, we find that the 1~ms lens has $\mu = 0.06\pm0.02$.
Combined with $\delta\beta = 24\pm4\,\mathrm{mas}$, the inferred angular width of the lens is then $\omega=1.5\pm0.5{\rm\,mas}$, corresponding to a physical width $w=0.6\pm0.2{\rm\,AU}$.
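This chain of estimates is easy to verify numerically; the sketch below uses the measured numbers quoted above, with the mas-to-AU conversion following from the small-angle relation (1~mas at 1~kpc subtends 1~AU).

```python
mu = 0.06            # measured magnification of the 1 ms lens
dbeta_mas = 24.0     # measured angular offset delta-beta, in mas
d2_pc = 415.0        # distance to the 1 ms lens, in pc (Table 1)

# Angular width: omega ~ mu * delta-beta
omega_mas = mu * dbeta_mas               # ~1.4 mas

# Physical width: w[AU] = omega[arcsec] * d[pc]
w_au = (omega_mas * 1e-3) * d2_pc        # ~0.6 AU
print(omega_mas, w_au)
```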
\section{Interpretation \& Discussion}\label{sec:interpret}
We argue that the parameters we measured for the 1~ms lens imply that it would cause extreme scattering events if it passed in front of a quasar.
First, for any lens to cause a significant drop in flux, it must deflect a radio source by more than half of its angular width.
The 1~ms lens, given the maximal delay seen in our observations, is able to deflect radiation by at least 83~mas at 318~MHz.
Given that the bending angle scales as the square of the wavelength, it can thus bend light by at least half its width up to 3.4\,GHz, covering the range in frequencies where extreme scattering events are observed.
Second, given its low velocity, the crossing time would be roughly $w / v_\oplus \simeq 40{\rm\,day}$ (where $v_\oplus=30{\rm\,km\,s^{-1}}$ is the orbital velocity of the Earth), in agreement with observed extreme scattering events.
The duty cycle of extreme scattering events is about 0.007 \citep{fdjh87}, which means that if lenses like the 1~ms lens are responsible, they cannot be rare: their typical separation would be $\omega/0.007=200{\rm\,mas}$.
For PSR B0834+06, given its proper motion of 50 mas/yr, one would expect it to cross a similar lens roughly every four years.
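These order-of-magnitude estimates can be checked directly; the sketch below uses the rounded numbers quoted above.

```python
AU_KM = 1.496e8          # astronomical unit, km

# Crossing time of one lens at Earth's orbital speed
w_au = 0.6               # physical width of the 1 ms lens, AU
v_earth = 30.0           # km/s
crossing_days = w_au * AU_KM / v_earth / 86400.0   # ~35 days

# Typical lens separation implied by the ESE duty cycle
omega_mas = 1.5          # angular width of the lens, mas
duty_cycle = 0.007       # ESE duty cycle (Fiedler et al. 1987)
sep_mas = omega_mas / duty_cycle                   # ~210 mas

# Expected interval between crossings for B0834+06
mu_psr = 50.0            # proper motion, mas/yr
years_between = sep_mas / mu_psr                   # ~4 yr
print(crossing_days, sep_mas, years_between)
```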
Of course, many such crossings would be missed, but we note that an event that is (in hindsight) similar to ours occurred in the 1980s \citep{rlg97}.
One may wonder whether some of the more distant lenses on the main screen would also be capable of causing extreme scattering events, since they deflect pulsar radiation by similar angles and some are at least as bright as the 1~ms feature in the holographically reconstructed images shown in Fig.~\ref{fig:1ms_motion}.
However, for the 1~ms lens, the observed brightness of images is much lower than the magnification because rays are refracted twice, while for the lenses on the main screen the brightness is a direct measure of their magnifications.
For those with a large bending angle, we find $\mu\lesssim0.1\%$, and hence infer angular widths below 0.03 mas.
Such widths are small compared with the angular widths of quasars \citep{kmj+18,gkf99}, and hence these lenses cannot cause the significant dimming seen in extreme scattering events.
If the 1~ms lens is typical of structures causing extreme scattering events, it excludes a number of models.
In particular, the low velocity eliminates any models that demand the velocity to be orders of magnitude higher, like those that appeal to structures in the Galactic halo \citep{fdjh87}.
Furthermore, the high aspect ratio undermines isotropic models like large clouds of self-gravitating gas \citep{ww98}.
Indeed, the high aspect ratio may help solve the largest conundrum in extreme scattering events, which is that simple estimates, based on spherical symmetry, give a very high electron density, of $\sim\!10^3{\rm\,cm^{-3}}$, which implies an over-pressure by three orders of magnitude compared to the general interstellar medium \citep{fdjh87,cfl98,ww98}.
For the 1~ms lens, the observed largest bending angle $\alpha\simeq83{\rm\,mas}$ implies a gradient of the electron column density \citep{cfl98},
\begin{equation}
\frac{{\rm d} N_e}{{\rm d} x} = \frac{2\pi \alpha}{\lambda^2r_e} \simeq 900{\rm\;cm^{-2}\,cm^{-1}},
\end{equation}
where $r_e$ is the classical electron radius.
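Plugging in the numbers (a sketch in CGS units) recovers a gradient of the same order as the quoted value:

```python
import numpy as np

c = 2.998e10             # speed of light, cm/s
r_e = 2.818e-13          # classical electron radius, cm
mas = np.pi / 180 / 3600e3   # rad per mas

alpha = 83 * mas         # largest observed bending angle, rad
lam = c / 318e6          # wavelength at 318 MHz, cm

dNe_dx = 2 * np.pi * alpha / (lam**2 * r_e)
print(dNe_dx)            # ~1e3 cm^-2 cm^-1, of the order of the quoted 900
```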
Thus, under spherical symmetry, one would infer an electron density similar to the problematic ones mentioned above \citep{bst+16}.
Given that the lens is elongated on the sky, however, it may well be elongated along the line of sight too, i.e., be sheet-like.
If so, the inferred electron density within the lens decreases by the elongation factor, and the over-pressure problem can be avoided if the lens is elongated by about a factor~$10^3$.
The above supports the idea that lensing could arise from plasma sheets viewed under grazing incidence.
Both over-dense \citep{rbc87} and under-dense \citep{pk12} sheets have been suggested, and those could be distinguished based on how their behaviour scales with frequency \citep{sp18}.
Unfortunately, for this purpose, our bandwidth of 1\,MHz is too limited, but we believe a reanalysis of the VLBI data on this event will likely yield an answer \citep{bpv22}.
A different test of whether extreme scattering events are caused by structures like our 1~ms lens can be made using quasar flux monitoring surveys: if a lens is highly anisotropic, Earth's orbital motion would cause the lens to be traversed multiple times.
For lenses stationary relative to the local standard of rest, we perform a simulation and find that the likelihood for repeats approaches unity in two antipodal regions on the sky (see Appendix~\ref{app:simulation}).
Monitoring in those two regions can thus further clarify the anisotropy of the lenses responsible for extreme scattering events.
\section{Conclusions}\label{sec:conclude}
In conclusion, we report a strong, slowly-moving and highly anisotropic lens from pulsar scintillation observations.
The observation used only 6 hours of telescope time yet revealed detailed and novel plasma structures in the interstellar medium.
Lenses with properties similar to the 1~ms lens cannot be rare in the Galaxy.
Extended pulsar and quasar monitoring with the next generation survey telescopes like CHIME will further ascertain the statistics of these scatterers, while detailed studies with large telescopes like FAST promise to determine their physical properties.
This will have benefits beyond understanding the lensing proper, since refraction by lenses such as these poses a significant problem for the detection of gravitational waves with a pulsar timing array \citep{cbl+93,cks+15}.
Deeper understanding will help develop mitigation strategies, which will be especially important as we move beyond the detection of a stochastic wave background to an era in which individual gravitational wave sources are analyzed \citep{btc+19}.
\section*{Code availability}
The code used for phase retrieval and generating the wavefields is integrated into the {\tt scintools} package developed by Daniel Reardon \citep{rcb+20}, available at \href{https://github.com/danielreardon/scintools}{\color{blue}github.com/danielreardon/scintools (external link)}. The ray-tracing code used for modeling the double lensing geometry is part of the {\tt screens} package developed by one of us \citep{screens:22} at \href{https://github.com/mhvk/screens}{\color{blue}github.com/mhvk/screens (external link)}.
\section*{Acknowledgements} %
We dedicate this paper to the Arecibo Observatory and its staff.
We thank W. Brisken for clarifications regarding his previous VLBI results, Tim Sprenger for discussions and verifying our two-screen solution, and the Toronto scintillometry group for general discussions.
We appreciate support by the NSF (Physics Frontiers Center grant 2020265 to NANOGrav, and grant 2009759 to Oberlin College; D.S. and H.Z.) and by NSERC (M.H.v.K., U.-L.P. and D.B.).
We received further support from the Ontario Research Fund—Research Excellence Program (ORF-RE), the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference number RGPIN-2019-067, CRD 523638-18, 555585-20], the Canadian Institute for Advanced Research (CIFAR), the Canadian Foundation for Innovation (CFI), the Simons Foundation, Thoth Technology Inc, who owns and operates ARO and contributed significantly to the relevant research, the Alexander von Humboldt Foundation, and the Ministry of Science and Technology of Taiwan [MOST grant 110-2112-M-001-071-MY3].
\software{
astropy \citep{astropy:13, astropy:18, astropy:22},
numpy \citep{numpy:20},
matplotlib \citep{matplotlib:07},
screens \citep{screens:22}}
\bibliography{psrrefs}{}
\appendix
\restartappendixnumbering
\section{Theory of Pulsar Scintillation}\label{app:theory}
Observations suggest that a significant amount of pulsar scattering is dominated by thin screens (for a detailed account, see \citealt{smc+01}).
We consider a pulsar behind a single scattering screen, with distances $d_{\rm psr}$ and $d_{\rm scr}$ from the observer, respectively.
The geometric delays for rays that pass through the screen are then the same as for the case that the source is at infinity and the screen is at what is known as the ``effective distance'',
\begin{equation}\label{eq:deff}
d_{\text{eff}}=\frac{d_{\rm psr}d_{\rm scr}}{d_{\rm psr}-d_{\rm scr}}
= d_{\rm psr}\frac{1-s}{s},
\end{equation}
where $s = 1-d_{\rm scr}/d_{\rm psr}$ is the fractional distance from the pulsar to the screen.
For two paths at angles $\boldsymbol{\theta}_j$ and $\boldsymbol{\theta}_k$ between the observer and a screen at $d_{\rm eff}$, one then recovers the differential geometric delay:
\begin{equation}
\tau = f_{\nu} = \frac{d_{\rm eff}}{2c} (|\boldsymbol{\theta}_j|^2-|\boldsymbol{\theta}_k|^2).\label{eq:tau_def2}
\end{equation}
Because of the different delays along different paths, the pulsar's radiation interferes with itself and casts a diffractive pattern onto the observer plane.
The velocity of this diffractive pattern with respect to the observer is called the effective velocity, which determines the differential Doppler shifts:
\begin{equation}
f_{D} = f_{t} = \frac{1}{\lambda}(\boldsymbol{\theta}_j-\boldsymbol{\theta}_k) \cdot \mathbf{v}_{\rm eff},\label{eq:fd_def2}
\end{equation}\noindent
where the effective velocity is given by,
\begin{equation}\label{eq:veff}
\mathbf{v}_{\rm eff}= -\frac{1-s}{s}\mathbf{v}_{\rm psr} + \frac{1}{s}\mathbf{v}_{\rm scr} - \mathbf{v}_{\oplus},
\end{equation}\noindent
and $\mathbf{v}_{\rm psr}$, $\mathbf{v}_{\rm scr}$, and $\mathbf{v}_{\oplus}$ are the components perpendicular to the line of sight of the pulsar, screen, and Earth velocities, respectively.
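As a numerical check of these relations, the sketch below evaluates the effective distance and a one-dimensional effective velocity using the distances from Table~\ref{tab:1}; the pulsar and Earth speeds here are illustrative placeholders, not measured values.

```python
d_psr = 620.0      # pulsar distance, pc (Table 1)
d_scr = 389.0      # main-screen distance, pc (Table 1)

s = 1 - d_scr / d_psr
d_eff = d_psr * (1 - s) / s   # equals d_psr*d_scr/(d_psr - d_scr)
print(d_eff)                  # ~1.04e3 pc

# 1D effective velocity along the lens normal; the geometry weights the
# pulsar, screen, and Earth contributions differently (Eq. for v_eff).
v_psr, v_scr, v_earth = 150.0, -23.0, 10.0   # km/s; v_psr, v_earth hypothetical
v_eff = -(1 - s) / s * v_psr + v_scr / s - v_earth
print(v_eff)
```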
Under the stationary phase approximation, we can treat the pulsar as scattered into $N$ images with (complex) magnifications $\mu_j$ and geometric phase delay $\exp\{i[f_{Dj}t+\tau_j\nu]\}$, where $\tau_j=(d_{\rm eff}/2c)|\boldsymbol\theta_j|^2$ and $f_{D_j}=(\boldsymbol\theta_j\cdot \boldsymbol{v}_{\rm eff})/\lambda$ are the geometric delay and Doppler frequency with respect to the line-of-sight image.
We normalize the flux so that $\sum_j|\mu_j|^2=1$.
The dynamic spectrum, which encodes the interference of the $N$ images, is given by,
\begin{align}
D(t,\nu) &= \left\vert\sum_j \mu_j\exp\{2\pi i[f_{Dj}t+\tau_j\nu]\}\right\vert^2~\\
&= \sum_{j,k}\mu_j\mu_k\exp\{2\pi i[(f_{Dj}-f_{Dk})t+(\tau_j-\tau_k)\nu]\}.
\end{align}
Note that the dynamic spectrum defined above is entirely real, as the phase is antisymmetric under the exchange of $j$ and $k$.
The Fourier transform of the dynamic spectrum is the conjugate spectrum; its square modulus, called the secondary spectrum, is given by,
\begin{equation}
|C(f_D,\tau)|^2 = 2\sum_{j,k}\mu_j^2\mu_k^2 \delta(f_D,f_{D_j}-f_{D_k}) \delta(\tau,\tau_j-\tau_k).\label{eq:ss}
\end{equation}
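This interference picture can be reproduced numerically. The sketch below builds a dynamic spectrum from a handful of hypothetical images (delays, Doppler shifts, and magnifications are all illustrative) and obtains the secondary spectrum as the squared modulus of the 2D Fourier transform.

```python
import numpy as np

t = np.linspace(0, 3600, 256)           # s, a one-hour observation
nu = np.linspace(318e6, 319e6, 256)     # Hz, a 1 MHz band

# A few images: delays (s), Doppler shifts (Hz), magnifications
# normalized so that sum |mu|^2 = 1.  All values are illustrative.
tau = np.array([0.0, 1e-4, 3e-4])
f_D = np.array([0.0, 5e-3, 9e-3])
mu = np.array([1.0, 0.3, 0.2]) + 0j
mu /= np.sqrt(np.sum(np.abs(mu)**2))

# Wavefield summed over images; dynamic spectrum is |field|^2 (real).
phase = 2j * np.pi * (f_D[:, None, None] * t[None, :, None]
                      + tau[:, None, None] * (nu - nu[0])[None, None, :])
field = np.sum(mu[:, None, None] * np.exp(phase), axis=0)
D = np.abs(field)**2

# Conjugate spectrum and secondary spectrum
C = np.fft.fftshift(np.fft.fft2(D - D.mean()))
S = np.abs(C)**2
print(D.dtype, S.shape)
```

The secondary spectrum then shows power at the pairwise differences $(f_{D_j}-f_{D_k},\,\tau_j-\tau_k)$, as in Eq.~\ref{eq:ss}.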
\section{Double refraction by two linear lenses.}\label{app:double_lens}
Consider two screens with linear features between the telescope and the pulsar, and a cylindrical coordinate system in which $z$ is along the line of sight, with direction $\hat{z}$ pointing towards the pulsar.
For scattering screen $i$ at distance $d_i$, a line, representing a linear lens on screen $i$, can be written as,
\begin{equation}
d_{i}\hat{z} + \vec{r}_{i} + \sigma \hat{u}_{i},
\end{equation}\noindent
where $\vec{r}_{i}$ is a cylindrical radius from the line of sight to
the line (i.e., $\hat{r}_i\cdot\hat{z}=0$), $\hat{u}_i=\hat{z}\times\hat{r}_i$ a
unit vector perpendicular to it in the plane of the screen, and $\sigma$ is the position along the line.
Imagine now a ray going from the observer to some point along a linear lens on the first screen, at distance $d_1$.
Since it will be easiest to work in terms of angles relative to the observer, we use $\rho=r/d$ and $\varsigma=\sigma/d$ to write this trajectory, from $d = 0$ to $d = d_1$, as,
\begin{equation}
d(\hat{z} + \rho_{1}\hat{r}_{1} + \varsigma_{1}\hat{u}_{1}).
\end{equation}
When the ray hits the lens, light can be bent only perpendicular to the lens, by an angle which we will label $\alpha_1$ (with positive $\alpha_1$ implying bending closer to the line of sight \citep{sp18}).
Hence, beyond the screen, for $d > d_1$, its trajectory will be
\begin{equation}
d(\hat{z} + \rho_{1}\hat{r}_{1} + \varsigma_{1}\hat{u}_{1})
- (d-d_{1})\alpha_{1}\hat{r}_{1}.
\end{equation}
If the ray then hits a lens on the second screen at a distance $d_2$, it will again be bent, by $\alpha_2$, and then follow, for $d>d_2>d_1$,
\begin{equation}
d(\hat{z} + \rho_{1}\hat{r}_{1} + \varsigma_{1}\hat{u}_{1})
- (d-d_{1})\alpha_{1}\hat{r}_{1}
- (d-d_{2})\alpha_2\hat{r}_{2}.
\end{equation}
In order to specify the full trajectory, we need to make sure that the ray actually intersects the lens on the second screen, and ends at the pulsar, i.e.,
\begin{align}
d_{2}(\hat{z} + \rho_{1}\hat{r}_{1} + \varsigma_{1}\hat{u}_{1}) - (d_{2}-d_{1})\alpha_{1}\hat{r}_{1}
&= d_{2}(\hat{z} + \rho_{2}\hat{r}_{2} + \varsigma_{2}\hat{u}_{2}),\\
d_{p}(\hat{z} + \rho_{1}\hat{r}_{1} + \varsigma_{1}\hat{u}_{1}) - (d_{p}-d_{1})\alpha_{1}\hat{r}_{1} - (d_{p}-d_{2})\alpha_{2}\hat{r}_{2}
&= d_{p}\hat{z}.
\end{align}
These constraints can be simplified to,
\begin{align}
\varsigma_{1}\hat{u}_{1} - (1-d_{1}/d_{2})\alpha_{1}\hat{r}_{1} - \varsigma_{2}\hat{u}_{2}
&= \rho_{2}\hat{r}_{2} - \rho_{1}\hat{r}_{1},\label{eq:sim_1}\\
\varsigma_{1}\hat{u}_{1} - (1-d_{1}/d_{p})\alpha_{1}\hat{r}_{1} - (1-d_{2}/d_{p})\alpha_2\hat{r}_{2} &= -\rho_{1}\hat{r}_{1}.\label{eq:sim_2}
\end{align}
Hence, one is left with a pair of two-dimensional vector equations with four scalar unknowns, viz., the bending angles $\alpha_{1,2}$ and angular offsets $\varsigma_{1,2}$ along the two linear lenses.
This set of equations can be extended easily to arbitrary number of screens and a possible non-zero origin.
Since the equations are linear in the unknowns, the set can be solved using matrix inversion.
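A minimal version of this matrix inversion can be written down directly with NumPy, independent of the {\tt screens} package; in the sketch below, the distances, angular offsets, and lens orientations are illustrative stand-ins, not the fitted geometry.

```python
import numpy as np

def unit_vectors(pa):
    """Return r-hat and u-hat = z-hat x r-hat for position angle pa (rad)."""
    r = np.array([np.cos(pa), np.sin(pa)])
    u = np.array([-np.sin(pa), np.cos(pa)])
    return r, u

# Illustrative geometry: distances in pc, angles in mas (all hypothetical)
d1, d2, dp = 389.0, 415.0, 620.0
rho1, rho2 = 20.0, 25.0                        # lens offsets from the line of sight
r1, u1 = unit_vectors(np.deg2rad(154.8 + 90))  # lens normals (placeholders)
r2, u2 = unit_vectors(np.deg2rad(136.1 + 90))

# Unknowns x = [alpha1, alpha2, sigma1, sigma2]; two 2D constraint
# equations give a 4x4 linear system A x = b.
A = np.zeros((4, 4))
b = np.zeros(4)
# Ray must hit the lens on the second screen:
A[0:2, 0] = -(1 - d1 / d2) * r1
A[0:2, 2] = u1
A[0:2, 3] = -u2
b[0:2] = rho2 * r2 - rho1 * r1
# Ray must end at the pulsar:
A[2:4, 0] = -(1 - d1 / dp) * r1
A[2:4, 1] = -(1 - d2 / dp) * r2
A[2:4, 2] = u1
b[2:4] = -rho1 * r1

alpha1, alpha2, sig1, sig2 = np.linalg.solve(A, b)
print(np.allclose(A @ [alpha1, alpha2, sig1, sig2], b))  # True
```

The same construction generalizes to more screens by adding one 2D equation (and two unknowns) per additional screen.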
We use the {\tt screens} package \citep{screens:22} for this purpose, which also uses the inverted matrix to calculate time derivatives $\dot\varsigma$ and $\dot\alpha$ given velocities of the observer, screens, and pulsar (and thus $\dot\rho$ in the equations), as well as the implied delay $\tau$ and its time derivative $\dot\tau$ for each ray.
We note that the same scattering geometry is obtained by considering the geometric limit of the two-screen scattering model in \citet{smw+22}.
It is different, however, from the geometry considered by \cite{spmb19}, as those authors assumed that the scattering points were fixed on their respective screens, while we assume that light can only be bent perpendicular to the linear structures, which implies that the scattering images move along those structures as the relative positions of the observer, screens, and pulsar change.
\section{Flux estimation and magnification of the 1~ms lens.}\label{app:flux}
To find the flux of a scattered image $i$, which equals $\mu_i^2$, we integrate over a region corresponding to the interference of that image with all other images, i.e., the corresponding inverted arclet in the secondary spectrum, as defined in Eq.~\ref{eq:ss}.
Evaluating this integral, one finds that it yields twice the flux of image $i$,
\begin{align}
&\iint \delta(j,i)2\sum_{j,k}\mu_j^2\mu_k^2\delta(f_D, f_{D_j}-f_{D_k})\delta(\tau,\tau_j-\tau_k) {\rm d}f_D {\rm d}\tau\nonumber\\
&= 2\mu_{i}^2\sum_k\mu_k^2 = 2\mu_{i}^2.
\end{align}
We measured the flux of the 1~ms feature by integrating around it (red region in Fig.~\ref{fig:flux_region}) in each of the 7 secondary spectra (with appropriate noise floors subtracted).
We also measured the flux of the corresponding part of the main arc that participated in the double lensing, as inferred from our double lensing model (blue region in Fig.~\ref{fig:flux_region}).
In Fig.~\ref{fig:frac_flux}, we show the resulting fractional fluxes, as well as their ratio, which we use as an estimate of the magnification of the 1~ms lens.
Generally, both fluxes increase with time, as expected since the images approach the line of sight.
The one exception is the second epoch, in which the 1~ms feature is much fainter.
We do not know the reason for this, but note that on that day the other side of the scintillation arc also appeared much fainter at high delay than it was in the first and third epoch.
Neglecting the second epoch, we find an averaged magnification of $0.06\pm0.02$ for the 1~ms lens, which is what we used in Section~\ref{sec:width} to infer the width of the lens.
Note that one might have expected the magnification of the 1~ms lens to increase with time as its impact parameter got smaller, too.
In our estimate, however, we have ignored that the magnification of the linear lenses on the main screen is not exactly the same for the singly and doubly scattered rays, since these have different bending angles.
We leave a detailed analysis of the magnification as a function of bending angle (and perhaps location along the lens) for future work.
\section{Repetition Likelihoods for Extreme Scattering Events.}\label{app:simulation}
If, as our observations suggest, quasar extreme scattering events are caused by slowly-moving linear plasma lenses, one might expect to see repetitions due to Earth's orbital motion.
The likelihood depends on how the Earth's motion projects on the screen, and thus on the screen orientation and location on the sky.
To estimate the probabilities, we simulated screens over the entire sky, sampling on a Gaussian Legendre grid with 25 points along the declination and 50 points along the right ascension axis.
Then, using the {\tt astropy} package \citep{astropy:18,astropy:13}, we calculated Earth's trajectory relative to the local standard of rest (i.e., orbital motion plus the solar system systemic motion), as projected on the sky for each angular direction, for a one-year period.
We then generated large numbers of randomly oriented lines for each grid point to represent linear lenses, assumed stationary relative to the local standard of rest.
For each grid point, we counted the number of simulated lenses that intersected more than once with the projected trajectory of Earth, and then took the ratio with the number that intersected at least once as the likelihood for an extreme scattering event to repeat within a year.
We show the result in Fig.~\ref{fig:repeat_probability}.
For the line of sight to the first extreme scattering event, QSO~0954+658, the (interpolated) probability is a modest 0.44, consistent with no repetition having been seen in the three years the source was monitored \citep{fdjh87}, but there are also regions on the sky for which the probability of repeats approaches unity.
\section{Testing the screen geometry.}\label{app:annuel variation}
The main scintillation arc persists for a long period of time. This offers the opportunity to test whether the velocity of and distance to the underlying scattering screen remain consistent with the VLBI inferences, using the variation of the quadratic constant of proportionality $\eta$ for the scintillation arc induced by the changing orbital motion of Earth \citep{mzs+21,spv+21},
\begin{equation}\label{eq:eta2}
\eta = \frac{d_{\rm eff}\lambda^2}{2c (|\mathbf{v}_{\rm eff}| \cos\alpha)^2},
\end{equation}
where $\alpha$ is the angle between the effective velocity (Eq.~\ref{eq:veff}) and the line defined by the scattered images.
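For illustration, Eq.~\ref{eq:eta2} can be evaluated with round numbers of the order of the values in Table~\ref{tab:1}; the effective speed and the angle $\alpha$ below are placeholders, not fitted values.

```python
import numpy as np

c = 2.998e8                  # speed of light, m/s
pc = 3.0857e16               # parsec, m
lam = c / 318e6              # observing wavelength, m

d_eff = 1.04e3 * pc          # effective distance (order of the fitted value)
v_eff = 305e3                # |v_eff| in m/s (hypothetical)
alpha_deg = 20.0             # angle between v_eff and the image line (hypothetical)

eta = d_eff * lam**2 / (2 * c * (v_eff * np.cos(np.deg2rad(alpha_deg)))**2)
print(eta)                   # in s^3 (delay per Doppler-frequency squared)
```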
Measured values of $\eta$ for a year-long observing campaign are presented in Fig.~\ref{fig:curv_evo}, with the seven observations analyzed here highlighted.
Overdrawn is the evolution predicted based on Eq.~\ref{eq:eta2} for the pulsar and screen parameters in Table~1 in the main text and the known motion of Earth.
The prediction is qualitatively correct, but somewhat offset from the measurements.
To correct for this offset, we performed a fit in which we only allowed the velocity of the main scattering screen to vary (which is the parameter with the largest fractional uncertainty).
We found a better fit for $v_{1,\parallel}=-24{\rm\,km/s}$, which is within the reported uncertainty.
Thus, overall our results confirm the values inferred from the previous analysis of VLBI data \citep{lpm+16,bmg+10}.
Title:
JWST/NIRCam Coronagraphy: Commissioning and First On-Sky Results
Abstract: In a cold and stable space environment, the James Webb Space Telescope (JWST
or "Webb") reaches unprecedented sensitivities at wavelengths beyond 2 microns,
serving most fields of astrophysics. It also extends the parameter space of
high-contrast imaging in the near and mid-infrared. Launched in late 2021, JWST
underwent a six month commissioning period. In this contribution we focus on
the NIRCam Coronagraphy mode which was declared "science ready" on July 10
2022, the last of the 17 JWST observing modes. Essentially, this mode will
allow to detect fainter/redder/colder (less massive for a given age)
self-luminous exoplanets as well as other faint astrophysical signal in the
vicinity of any bright object (stars or galaxies). Here we describe some of the
steps and hurdles the commissioning team went through to achieve excellent
performances. Specifically, we focus on the Coronagraphic Suppression
Verification activity. We were able to produce firm detections at 3.35$\mu$m of
the white dwarf companion HD 114174 B which is at a separation of $\simeq$ 0.5"
and a contrast of $\simeq$ 10 magnitudes ($10^{4}$ fainter than the K$\sim$5.3
mag host star). We compare these first on-sky images with our latest, most
informed and realistic end-to-end simulations through the same pipeline.
Additionally we provide information on how we succeeded with the target
acquisition with all five NIRCam focal plane masks and their four corresponding
wedged Lyot stops.
https://export.arxiv.org/pdf/2208.00998
\keywords{High Contrast Imaging, Infrared Astronomy, Coronagraphy, James Webb Space Telescope (JWST), Commissioning, NIRCam, Exoplanets, High Angular Resolution}
\section{NIRCam Coronagraphs Are Science Ready}
\label{sec:intro} %
Before diving into details about how we prepared and carried out the commissioning of the \nircam Coronagraphy mode\footnote{Landing page "NIRCam Coronagraphic Imaging" on the JWST User Documentation platform/wiki (JDox)\cite{jdox_general}: {\small \tt \href{https://jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam-observing-modes/nircam-coronagraphic-imaging}{jwst-docs.stsci.edu/jwst-near-infrared-camera/nircam-observing-modes/nircam-coronagraphic-imaging}}}, figure~\ref{fig:on-sky-335r} displays an on-sky image and detection of the white dwarf (WD) HD 114174 B, companion to the $\sim$4 billion year old main sequence star HD 114174A (spectral type G5IV-V). Together they form a \enquote{Sirius-like system} \cite{gratton2021}. This WD was first imaged about 10 years ago with the rise of adaptive optics (AO) \cite{crepp2013}, and it is used as a spectrophotometric calibrator for extreme-AO instruments like SPHERE \cite{beuzit2019}. With a contrast of $\simeq$ 10 magnitudes ($10^{4}$) and a separation of 0.5\arcsec (the companion has gotten closer in recent years), it is also a \enquote{perfect} object with which to demonstrate the high contrast imaging (HCI) capability of a new instrument and/or telescope. Figure~\ref{fig:on-sky-335r} shows that the detection has a high signal to noise ratio (SNR), as the recovered WD signal harbors the six secondary spots typical of the round mask point spread function (PSF), whose aperture is defined by the Lyot stop as seen in figure~\ref{fig:ta}.
\nircam coronagraphy will be used primarily to directly image and characterize young, self-luminous giant exoplanets\cite{bowler2016_review} and their circumstellar environments (protoplanetary disks, debris disks, jets). At short separations (e.g. $\leqslant$0.3\arcsec at 2\micron, $\leqslant$0.5\arcsec at 3.5\micron) \nircam will not outperform ground-based extreme-AO-fed instruments on 8 to 10-meter telescopes. But at larger separations and longer wavelengths, its stability and sensitivity will outperform even future extremely large telescopes (25 to 39-meter class). All these world-class observatories will be very complementary\cite{girard2020_muse}: there are already many synergies between ground and space, e.g. ALMA (submillimetric), Hubble, and now Webb. Coronagraphy on-board JWST (\nircam and \miri)\cite{girard2018spie} - because of its stability in space - will also be possible for rather faint young stellar objects (YSO) and extragalactic targets (e.g. active galactic nuclei, etc.).
\subsection{NIRCam Coronagraphy: How Does This Mode Work?}
\label{sec:nrc-coron} %
Due to its complexity and dependency on other observatory functionalities, \nircam Coronagraphy was one of the last modes to be commissioned. It is more sensitive to, and less forgiving of, Target Acquisition (TA) error, guiding accuracy, wavefront error (WFE), focus, and distortion correction than other modes. The distortion correction is particularly difficult to achieve and must be performed independently of the Imaging Mode solution. In addition, with five coronagraphs to commission, it is a lengthy process.
The \nircam coronagraphs were designed\cite{krist2007_spie_disk, krist2009, krist2010_spie_jwst_occulters} to work in the face of significant diffraction from the Optical Telescope Element (OTE) segments and of possible pupil shear. The Lyot stops have undersized holes to solve these problems. Additionally, the coronagraph was designed to work outside of the regular field of view used for wavefront sensing and surveys. This requirement was met by making the four Lyot stops wedged to deflect the field of view, as explained in figure~\ref{fig:principle}. It is important to note that there are four such wedge$+$Lyot stop elements: SW RND, SW BAR, LW RND (both for MASK335R and MASK430R), LW BAR. Each of them introduces a slightly different offset and a different distortion and pupil \enquote{wander} (each field point has a slightly different pupil alignment between \nircam and the OTE), and therefore we needed to compromise on pupil element positioning between the different round masks (specifically MASK335R and MASK430R, which share the same Lyot stop).
This has implications both to achieve precise target acquisition and to provide astrometrically calibrated data to the community.
\subsection{Science Readiness Criteria}
\label{sec:sr} %
There were no contractual requirements for the \nircam Coronagraphy mode. Nevertheless, the relevant metrics agreed upon between the PI and the commissioning scientists at STScI and NASA Goddard were:
\begin{enumerate}
\item Contrast: 5-$\sigma$ contrast of $10^4$ at 1\arcsec with the F335M filter and the MASK335R (most versatile round mask for LW) with reference star subtraction (Reference Differential Imaging: RDI).
\item Target Acquisition: better than 0.5 pixels (1-$\sigma$) for any coronagraphic mask ($\leqslant$15 mas for SW, $\leqslant$30 mas for LW).
\end{enumerate}
If the telescope managed to deliver stable, diffraction-limited images at $\leqslant$ 2 \micron and we managed to perform target acquisition to within a pixel or so, we knew from recent simulations that the first criterion would easily be met; meeting the second criterion would guarantee even better performance, in line with expectations\cite{perrin2018, carter2021, carter2021spie, hinkley2022simulations}.
\section{Preparation: simulation and astrometric framework}
\label{sec:prep} %
During the few years preceding the JWST launch, instrument teams had been rehearsing and getting ready for commissioning by exercising proposal-preparation tools, the \jwst data reduction pipeline\footnote{JWST Data Analysis With the JWebbinars: {\small \tt \href{https://www.stsci.edu/jwst/science-execution/jwebbinars}{stsci.edu/jwst/science-execution/jwebbinars}}}\cite{gordon2022_JWSTCAL} and analysis scripts on simulated data. \nircam Coronagraphy is no exception, and Girard et al. 2018\cite{girard2018spie} described the \enquote{end to end} prototype developed then. Of course, when real data \enquote{come down}, things can differ a bit, and the team took advantage of \nircam being operated from the start of OTE commissioning (as the camera for all the telescope / wavefront sensing activities) to solve a number of issues and characterize all 10 detectors (or Sensor Chip Assemblies: SCAs).
\subsection{pyNRC}
\label{sec:pynrc} %
\pynrc is \enquote{a set of Python-based tools for planning observations with JWST NIRCam. It includes an Exposure Time Calculator (ETC), a simple image slope simulator, and an enhanced data simulator compatible with the JWST pipeline. This package works for a variety of NIRCam observing modes including direct imaging, coronagraphic imaging, slitless grism spectroscopy, and weak lens imaging. All PSFs are generated via \webbpsf and \webbpsfext (extensions) to reproduce realistic JWST images and spectra}.
\noindent For \nircam Coronagraphy, \pynrc has been instrumental. While early end-to-end data prototypes\cite{girard2018spie} were made using a set of packages (\pancake, \webbpsf, \mirage), approaching commissioning the team relied almost entirely on \pynrc which, with \webbpsfext, integrates everything: interface to APT files and catalogs (including Gaia), a ramp simulator with noise sources and cosmic rays, \pysiaf apertures, and the generation of \jwst pipeline compliant products.
Figure~\ref{fig:pynrc} shows examples of noiseless (\texttt{slope}), yet useful \pynrc simulations. All of these were generated directly from APT-exported files, something that the \nirccos wrapper\cite{kammerer2022spie} also does. \pynrc will also generate noisy \texttt{uncal} files with all the DMS-compliant headers needed for the \jwst pipeline stages to run. \pynrc can also generate a scenario of observations, introduce a sensible wavefront drift between the science target and reference star(s), and compute predicted contrasts.
\subsection{WebbPSF/WebbPSF{\_ext}}
\label{sec:webbpsf} %
During the Science Instruments (SI) commissioning, the OTE team gained more and more experience in performing routine maintenance measurements of the wavefront / optical path difference (OPD) maps of the telescope using the fine-phasing technique provided by the \nircam weak lens mode (which provides defocused images for phase retrieval). \webbpsf was modified to allow the use of contemporaneous OPDs measured on orbit\footnote{JWST Using OPDs Measured On Orbit: {\small \tt \href{https://webbpsf.readthedocs.io/en/latest/jwst_measured_opds.html}{webbpsf.readthedocs.io/en/latest/jwst\_measured\_opds.html}}} within days of (before or after) any given program or activity. This proved extremely useful for our \nircam Coronagraphy commissioning subteam, as we relied heavily on high-fidelity simulations to assess the TA accuracy and generally succeed with it.
\noindent Calling \webbpsf \enquote{on the fly} can be computationally intensive and \webbpsfext \enquote{provides some enhancements to the \webbpsf package for PSF creation. This follows the \pynrc implementation for storing and retrieving JWST PSFs. In particular, this module generates and saves polynomial coefficients to quickly create unique instrument PSFs as a function of wavelength, focal plane position, wavefront error drift from thermal distortions. More specifically, \webbpsfext uses \webbpsf to generate a series of monochromatic PSF simulations, then produces polynomial fits to each pixel. Storing the coefficients rather than a library of PSFs allows for quick creation (via matrix multiplication) of PSF images for an arbitrary number of wavelengths (subject to hardware memory limitations, of course). The applications range from quickly creating PSFs for many different stellar types over wide bandpasses to generating a large number of monochromatic PSFs for spectral dispersion}.
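The per-pixel coefficient idea quoted above can be sketched in a few lines (a toy illustration of the concept, not \webbpsfext code; all names below are ours): fit a polynomial in wavelength to each pixel of a cube of monochromatic PSFs, then evaluate any wavelength with a single matrix multiplication.

```python
import numpy as np

def fit_psf_coeffs(psf_cube, waves, deg=3):
    """Fit a polynomial in wavelength to every pixel of a cube of
    monochromatic PSFs with shape (nwave, ny, nx)."""
    nwave, ny, nx = psf_cube.shape
    flat = psf_cube.reshape(nwave, ny * nx)
    # np.polyfit fits all pixels at once when y is 2-D (one column per pixel)
    return np.polyfit(waves, flat, deg)

def eval_psf(coeffs, wave, shape):
    """Evaluate stored coefficients at an arbitrary wavelength with a single
    matrix multiplication (Vandermonde row times coefficient table)."""
    deg = coeffs.shape[0] - 1
    vander = wave ** np.arange(deg, -1, -1)  # highest power first, as polyfit
    return (vander @ coeffs).reshape(shape)

# toy cube: PSF flux scales quadratically with wavelength (hypothetical)
waves = np.linspace(2.0, 5.0, 7)
base = np.random.default_rng(0).random((8, 8))
cube = np.array([w ** 2 * base for w in waves])
coeffs = fit_psf_coeffs(cube, waves, deg=2)
psf_35 = eval_psf(coeffs, 3.5, (8, 8))  # PSF interpolated at 3.5 micron
```

Storing only the small coefficient table makes evaluating a new wavelength a single vector-matrix product, which is the speed-up the package exploits.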
\subsection{SIAF Infrastructure}
\label{sec:siaf} %
The Science Instrument Aperture File (SIAF) is a reference file used in operations that contains the official information on all apertures (e.g., \nircam apertures) and internal instrument coordinates. For instance:
\bi
\item ($V2_\textrm{Ref}$, $V3_\textrm{Ref}$) is the reference position in ($V2$, $V3$) coordinates (arcsec); some of these entries are used to define telescope pointings.
\item $V3_\textrm{Idl Y Angle}$ is the rotation (in degrees, counterclockwise) of the aperture's ideal Coordinate System Y-axis relative to $V3$.
\item The ideal coordinate system is a distortion-removed frame used for dithers and other pointing offsets. These coordinates correspond to a functional transform of the pixel coordinates in the science frame; the orientation and parity of the ideal coordinate system match those of the science-frame pixel coordinates.
\item ($V2_1$, $V2_2$, $V2_3$, $V2_4$), ($V3_1$, $V3_2$, $V3_3$, $V3_4$) are the vertices in the ($V2$, $V3$) coordinates (arcsec) of the quadrilateral defined by each aperture.
\ei
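As an illustration of how these SIAF quantities combine, here is the linear part of an ideal-to-$(V2,V3)$ transform (a simplified sketch: the parity value and sign conventions below are assumptions, and real SIAF transforms add polynomial distortion terms on top of this):

```python
import numpy as np

def idl_to_v2v3(x_idl, y_idl, v2_ref, v3_ref, v3_idl_y_angle, parity=-1):
    """Linear part of the ideal -> (V2, V3) mapping: a parity flip of the
    ideal X axis and a counterclockwise rotation of the ideal Y axis by
    v3_idl_y_angle (degrees) about the aperture reference point.  Real SIAF
    transforms add polynomial distortion terms; signs here are assumptions."""
    a = np.deg2rad(v3_idl_y_angle)
    v2 = v2_ref + parity * x_idl * np.cos(a) + y_idl * np.sin(a)
    v3 = v3_ref - parity * x_idl * np.sin(a) + y_idl * np.cos(a)
    return v2, v3

# hypothetical aperture: reference point at (V2, V3) = (120.0, -527.0) arcsec
v2, v3 = idl_to_v2v3(1.0, 2.0, 120.0, -527.0, 0.0)
```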
Each of our deliveries to update the SIAF (e.g. mask positions, offsets with respect to FGS) had to be carefully crafted and verified: a simple formatting mistake or a mis-transformation of coordinates can result in a completely wrong TA offset (Small Angle Maneuver, SAM) calculation and the loss of precious days of commissioning and telescope time.
\subsection{NIRCam Commissioning Activities}
\label{sec:nrc-com} %
\nircam was used to acquire the first photons of JWST and to align and focus all 18 telescope segments. \nircam took all the \enquote{selfies} of the primary mirror with its dedicated pupil imaging lens (PIL) in the short wavelength (SW) channel. \nircam was thus the first science instrument (SI) to be used from the end of January 2022, a month after launch. As coronagraphy is one of the most complex modes of the instrument and has dependencies on many other activities, it was expected to be one of the last modes to be declared \enquote{science ready}. Figure~\ref{fig:nrc-cars} describes the commissioning activities related or needed to check out the \nircam Coronagraphy mode.
\subsection{The Coronagraphic Suppression Verification Program: PID 1441}
\label{sec:nrc-31}
This program was largely inspired by simulation work in Perrin et al. 2018 showing that deeper contrasts can be achieved using more than one PSF reference star\cite{perrin2018}. In 2018--2019 we built a case for such a program, as no program had been approved to qualify the \nircam Coronagraphy performance other than the TA one, which offered very limited possibilities to explore the contrast. We therefore designed a program whose main goal was to demonstrate that we can perform effective coronagraphic starlight suppression with JWST/NIRCam and achieve the performance expected and simulated thus far: contrasts, inner working angles (IWA) and detection limits (at various angular distances, especially in the speckle-limited regime). Our hope was to achieve a \enquote{desired / expected performance} significantly better than the minimum-contrast \enquote{science readiness} requirement, reaching 5$\times$10$^{-6}$ below 1\arcsec. A second important objective was to provide clear guidelines to observers through a post-commissioning update of our high-contrast documentation suite (JDox, etc.) and a tuning of the Exposure Time Calculator (ETC) and other tools. For that, we needed to explore a minimum of the contrast parameter space and take data through both the round and bar occulters. To avoid spending too much time, priority was given to the long-wavelength (LW) channel, as it is the most strikingly better than the ground and the most popular for NIRCam; hence this activity did not make use of the short-wavelength (SW) channel.
The strategy was to use bright stars (K$\sim$5) to be very efficient and achieve high SNR and contrast with reasonably fast readouts and in the least possible execution time, while avoiding saturation and producing 10 to 100 frames. Having a large enough number of frames allows for frame selection (e.g. removing those affected by cosmic rays) and for aggressive post-processing with more degrees of freedom (KL modes).
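A minimal frame-selection sketch (our toy criterion, not the actual pipeline logic): flag frames whose total flux is a robust outlier, since a cosmic-ray hit adds flux.

```python
import numpy as np

def select_frames(cube, nsig=5.0):
    """Keep frames whose summed flux is within nsig robust sigmas of the
    stack median; a cosmic-ray hit adds flux and gets flagged."""
    flux = cube.reshape(cube.shape[0], -1).sum(axis=1)
    med = np.median(flux)
    sigma = 1.4826 * np.median(np.abs(flux - med)) + 1e-12  # MAD -> sigma
    keep = np.abs(flux - med) < nsig * sigma
    return cube[keep], keep

rng = np.random.default_rng(1)
cube = rng.normal(100.0, 1.0, size=(50, 16, 16))
cube[7] += 50.0                    # simulate a strong cosmic-ray hit
clean, keep = select_frames(cube)  # frame 7 is rejected
```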
The way \coronsup was carried out is explained in greater detail in Kammerer et al. 2022 (this conference)\cite{kammerer2022spie}: HD 114174 (with its white dwarf (WD) companion, observable in June/July 2022) was set to be our \enquote{science scene}/star (Obs 1 \& 2 for MASK335R, Obs 5 \& 6 for MASKLWB, with 2 rolls). Three reference stars were carefully chosen (all main-sequence G stars from the long-baseline interferometry calibrator catalog to avoid multiplicity, all within K$\sim$5.0--5.2): HD 111733, HD 115640 and HD 116249 (all observed with a 9-point small grid dither \enquote{SGD}\cite{soummer2014_sgd} pattern), at angular distances of 5.3\degr, 4.6\degr and 11.9\degr from HD 114174 respectively, which would allow us to explore different time and pitch angle (solar elongation) baselines. At this point, we have not completed all this analysis (to come in a subsequent paper) because we focused on the readiness of the mode and thus mainly on the MASK335R/F335M setup.
\section{Commissioning NIRCam Coronagraphy}
\label{sec:com} %
If we knew the positions of all focal plane masks and managed to center the star in the TA aperture (after a decent initial telescope pointing), the placement accuracy of the star behind the masks would only depend on the small angle maneuver (SAM), the last offset of a few arcseconds. Unfortunately it is not quite that simple: the calculation of the SAM is affected by the residual distortion, and the positions of the masks carry a significant uncertainty of $\sim$5--10 mas (poor flat-field quality, filter shifts, convolution by the PSF). Finally, the centering accuracy in the TA aperture is severely affected by the measurement uncertainty of the centroiding algorithm, which is more biased along the horizontal (x) axis because of the geometry of the PSF (wider than tall).
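The centroiding bias can be illustrated with a toy model: a plain center-of-mass estimate applied to a PSF with a one-sided side lobe is pulled along that axis, while the symmetric axis stays unbiased (hypothetical numbers, not the flight PSF or algorithm).

```python
import numpy as np

def center_of_mass(img):
    """Plain flux-weighted centroid (x, y) in pixel coordinates."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    tot = img.sum()
    return (x * img).sum() / tot, (y * img).sum() / tot

def gauss(ny, nx, x0, y0, sig):
    y, x = np.mgrid[0:ny, 0:nx]
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sig ** 2))

# symmetric core at (32, 32) plus a faint side lobe displaced in x only
psf = gauss(64, 64, 32.0, 32.0, 2.0) + 0.1 * gauss(64, 64, 40.0, 32.0, 2.0)
cx, cy = center_of_mass(psf)
# cx is pulled toward the lobe (~0.7 px here) while cy stays at 32.0
```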
\subsection{Astrometry and Distortion}
\label{sec:dist} %
Figure~\ref{fig:astrom1} shows our procedure to analyse astrometric data (\coronlmc) on the Large Magellanic Cloud (LMC)\footnote{Custom routine to combine all distortion measurements: {\small \tt \href{https://github.com/arminrest/jwst_distortions_tools}{github.com/arminrest/jwst\_distortions\_tools}}}.
Subsequently, members of our team managed to adapt the DrizzlePac's TweakReg module\footnote{\texttt{tweakreg}: {\small \tt \href{https://drizzlepac.readthedocs.io/en/latest/_modules/drizzlepac/tweakreg.html}{drizzlepac.readthedocs.io/en/latest/\_modules/drizzlepac/tweakreg.html}}} to align our images taken at a given epoch to Gaia DR3 (taken as the on-sky truth). Figure~\ref{fig:astrom2} shows the improvement before (left) and after this alignment. This is also a good assessment of our distortion-correction residuals (spread of the blue points), which are of the order of 5 to 8 mas RMS in the COM area (close to the coronagraphic masks) and about 3 to 4 mas in the rest of the full-frame SCA (A5 or any other). This means \nircam Coronagraphy can already be used for astrometric follow-ups of point sources, orbital fitting, and the determination of model-independent dynamical masses of planetary-mass companions, in synergy with other high-contrast instruments. Nevertheless, we hope in the future to bring the distortion-correction residuals down to 3 to 4 mas in the COM area as well. This will require modeling out the discontinuity seen in the data (few matches between y=1100 and y=1400; third panel of figure~\ref{fig:astrom2}).
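The alignment-and-residuals step can be sketched as a least-squares similarity fit (shift plus rotation/scale) of measured positions onto catalog positions, with the quoted RMS computed from the post-fit residuals (a simplified stand-in for the \texttt{tweakreg}-based analysis; the toy numbers are ours).

```python
import numpy as np

def fit_shift_rotation(meas, ref):
    """Least-squares similarity fit (x' = a*x - b*y + tx, y' = b*x + a*y + ty)
    of measured (N, 2) positions onto reference positions; returns the
    parameters and the RMS of the post-fit residuals."""
    x, y = meas[:, 0], meas[:, 1]
    one, zero = np.ones_like(x), np.zeros_like(x)
    M = np.vstack([np.column_stack([x, -y, one, zero]),
                   np.column_stack([y,  x, zero, one])])
    rhs = np.concatenate([ref[:, 0], ref[:, 1]])
    a, b, tx, ty = np.linalg.lstsq(M, rhs, rcond=None)[0]
    fit = np.column_stack([a * x - b * y + tx, b * x + a * y + ty])
    rms = np.sqrt(np.mean(np.sum((fit - ref) ** 2, axis=1)))
    return (a, b, tx, ty), rms

# toy data: catalog positions rotated by 0.1 deg and shifted by (3, -2) pixels
rng = np.random.default_rng(2)
ref = rng.uniform(0, 2048, size=(200, 2))
t = np.deg2rad(0.1)
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
meas = (ref - np.array([3.0, -2.0])) @ R.T
params, rms = fit_shift_rotation(meas, ref)  # rms ~ machine precision here
```

On real data the residual RMS after such a fit is exactly the 3 to 8 mas figure quoted above, since noise and uncorrected distortion remain in the residuals.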
\subsection{Flat fields and mask positions}
\label{sec:masks} %
We do not see the focal plane masks (even when a bright star is close) unless we can \enquote{back-illuminate} them. We tried to median-combine all dithered positions of the LMC astrometric field to identify the mask positions, but that turned out to be sub-optimal. On-sky flats taken with the zodiacal light allowed us to get a coarse measurement of the on-sky positions of all masks (and of the COM \enquote{real estate}: ND squares, etc.). The issue is that the flats were taken using wide-bandpass filters (to collect enough light), so we had to measure filter-to-filter shifts and take them into account in the final mask positions, adding to the uncertainty.
\subsection{NIRCam Coronagraphic Target Acquisition}
\label{sec:ta} %
The goal of coronagraphic target acquisition (TA) with NIRCam is to accurately align an astronomical point source (the \enquote{host}) on a coronagraphic mask (occulter). Coronagraphic TA involves an initial slew of the telescope to place the target on a 4\arcsec$\times$4\arcsec subarray within $\sim$4\arcsec of the selected mask. If the target is bright (K$\leqslant$6.3), the subarray is located behind a neutral density square (nominally ND $\sim$ 3); if it is fainter (K$\geqslant$6.3), the target is positioned behind a nearby, clear (ND = 0) region of the coronagraphic optical mount (COM). The first phase of TA is complete when the detector obtains an exposure of the target on an appropriate region of the COM (ND = 0 or 3) near the specified coronagraphic mask. Coronagraphic TA images are always taken in either the F210M or F335M filter, for short- or long-wavelength (SW, LW) coronagraphy respectively, and use 128$^2$ or 64$^2$ pixel subarrays, for SW or LW respectively. Figure~\ref{fig:ta} shows the principle of the coronagraphic TA with each mask's position and inner working angle (IWA). In figure~\ref{fig:ta1441} we see that the shape of the unocculted PSF affects the accuracy and repeatability of the TA with the current centroiding algorithm. In section~\ref{sec:rec} we discuss the perspectives for improvement.
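The TA configuration rules stated above can be summarized in a small helper (a sketch encoding only the values from this text; the function and field names are ours):

```python
def ta_setup(k_mag, channel):
    """Coronagraphic TA configuration as described in the text: bright
    targets (K <= 6.3) are placed behind an ND ~ 3 square, fainter ones on
    a clear (ND = 0) COM region; SW uses F210M with a 128x128 subarray,
    LW uses F335M with a 64x64 subarray."""
    if channel not in ("SW", "LW"):
        raise ValueError("channel must be 'SW' or 'LW'")
    region = "ND square (ND ~ 3)" if k_mag <= 6.3 else "clear COM (ND = 0)"
    ta_filter, subarray = ("F210M", 128) if channel == "SW" else ("F335M", 64)
    return {"region": region, "filter": ta_filter, "subarray": subarray}

setup = ta_setup(5.1, "LW")  # bright host observed with a LW coronagraph
```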
The telescope pointing accuracy is superb: the initial slew always placed our star within $\sim$4 pixels of the TA subarrays (depending on the \enquote{quality} of the guide star used by the Fine Guiding Sensor\cite{doyon2012_FGS}, FGS 1 or 2; some guide stars have Gaia\cite{gaia2021_dr3_long} information and others do not). Overall, the TA-related performance indicators we were able to assess are:
\bi
\item Target positioning -- TA performance: TA on the coronagraphic PSF can induce offsets of up to 20 mas from the true center, which propagate through an observation's pointing. This is likely due to PSF side lobes (figure~\ref{fig:ta}) interfering with the center-of-mass algorithm. Offsets from TA to occulter are consistent, with a sigma of $\sim$3 mas.
\item SAM performance: even after accounting for TA inaccuracies, there are residual errors in the source positioning relative to the commanded offset position. For both masks, the star missed the expected location in a consistent manner (the FGS moved the telescope to the same location every time): the post-TA stellar positioning consistently missed the specified reference mask position by less than 0.5 pixels ($\sim$30 mas) in x and/or y. Performance is worse at the MASKLWB locations (for each filter central wavelength), which are closer to the corner of the \nircam field of view, suggesting that distortion corrections could be the culprit, likely due to differences induced by the pupil wheel tuning activities.
\item SGD performance: we measured the SGD offsets by cross-correlating the central SGD position with all surrounding dithers, as well as by cross-correlating with a \webbpsf simulation of a perfectly centered source. SGDs perform very close to expectations, within $\sim$2 to 3 mas of the ideal locations. These measurements are consistent with what the FGS team has been reporting.
\ei
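The SGD offset measurement can be sketched with an FFT-based cross-correlation; the flight analysis is sub-pixel, but this integer-pixel version (our toy code) already shows the principle.

```python
import numpy as np

def xcorr_shift(img, ref):
    """Integer-pixel shift of img relative to ref from the peak of the
    FFT-based circular cross-correlation (the real analysis is sub-pixel)."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = img.shape
    # wrap peak coordinates into signed shifts
    dy = iy if iy <= ny // 2 else iy - ny
    dx = ix if ix <= nx // 2 else ix - nx
    return dx, dy

y, x = np.mgrid[0:64, 0:64]
ref = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 8.0)
img = np.roll(np.roll(ref, 3, axis=1), -2, axis=0)  # dither by (+3, -2) px
dx, dy = xcorr_shift(img, ref)                      # recovers (3, -2)
```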
\subsection{LW Pupil Wheel Alignment, 1.5 million km from Earth!}
\label{sec:pw}
Halfway through our TA commissioning activities, we noticed that all LW coronagraphic (occulted) images had an unexpected pattern. After discarding the hypothesis of an occasional red companion (the bright speckle in figure~\ref{fig:pw} did not show for SW), we ran extensive \pynrc simulations with various amounts of pupil shear. These simulations revealed that the LW pupil wheel (PW) rotation needed to be adjusted. We triggered an anomaly (the standard procedure in the event of a serious issue) to make sure we would get the necessary resources and assistance from OSS (Operations Scripts System); indeed, the software was not designed to perform such an alignment procedure on orbit (07C, listed as \enquote{contingency} in figure~\ref{fig:nrc-cars}). We had to consider several strategies and discuss them extensively, and finally proceeded with a safe two-step approach:
\begin{enumerate}
\item Measure the field offsets ($\sim$7 pixels expected) caused by the PW rotation at 6 discrete positions, with the coronagraphic optics inserted but unocculted, as the Engineering Imaging template (the only one allowing table loads to rotate the PW) did not have any TA.
\item Load the measured offsets (converted to ideal coordinates) into a subsequent observation with coronagraphic TA for the MASK335R and MASKLWB cases, and compare the PSF pattern with simulations done using contemporaneous OPD maps.
\end{enumerate}
Unfortunately, a significant tilt event occurred on June 27, 2022, just as we were finally able to acquire data in occultation. Using contemporaneous OPD maps to compare our analysis against the most realistic simulations was intense (short timescale and high pressure) but essential. We converged on two trade-off LW PW offsets: +120 steps for the RND masks and +105 steps for the BAR mask.
Based on these observations we could move on to \coronsup. We knew the TA would probably have degraded slightly, given all the successive coordinate transformations and shift measurements, and the absence of new astrometric/distortion data: we had moved the two LW pupil wedges, and hence the associated distortion solutions, by a small amount.
\section{Contrast Performance}
\label{sec:coron-perf}
Figure~\ref{fig:on-sky-335r} showed the ability of the mode to cleanly recover a faint companion inside the so-called inner working angle (IWA) of the coronagraph. In this section we focus on the performance of the LW round mask MASK335R. The performance of the LW bar mask MASKLWB is good as well, and is reported in Kammerer et al. 2022 (this conference)\cite{kammerer2022spie}.
\subsection{Data reduction and post-processing}
\label{sec:data} %
All the results presented in this paper use stages 1 (\texttt{detector1}) and 2 (\texttt{image2}) of the \jwst pipeline, which will be described in Gordon et al. 2022b (in prep.)\cite{gordon2022_JWSTPIPE}, unless specified otherwise. Following the recommended strategy and implementation of the coronagraphy-specific (\nircam and \miri) stage 3 (\texttt{coron3}), a mini PSF reference library is built using the small grid dithers (SGD)\cite{soummer2014_sgd} of one or several reference stars. A principal component analysis approach, the so-called Karhunen-Lo\`eve Image Projection (KLIP)\cite{soummer2012_klip}, is then used to subtract an optimal reference PSF from the science scene and reveal faint signal around it. \spaceklip is an agile community version of \texttt{coron3} based on the very successful and popular \pyklip. It is in active development\cite{kammerer2022spie} and has many functionalities that we do not describe in this work: forward modeling of point sources and disks, simultaneous astrometry and photometry using a Monte-Carlo approach, etc.
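A minimal RDI/KLIP sketch (following the Soummer et al. 2012 idea in spirit; this is our toy code, not \texttt{coron3} or \spaceklip): build Karhunen-Lo\`eve modes from the reference frames and subtract the science frame's projection onto the first few modes.

```python
import numpy as np

def klip_rdi(science, refs, n_kl=3):
    """Minimal KLIP: build Karhunen-Loeve modes from mean-subtracted
    reference frames and subtract the science frame's projection onto the
    first n_kl modes (RDI: the references are a different star)."""
    R = refs.reshape(refs.shape[0], -1)
    R = R - R.mean(axis=1, keepdims=True)
    s = science.ravel() - science.mean()
    evals, evecs = np.linalg.eigh(R @ R.T)  # small frame-covariance matrix
    order = np.argsort(evals)[::-1][:n_kl]
    kl = evecs[:, order].T @ R              # KL modes in pixel space
    kl /= np.linalg.norm(kl, axis=1, keepdims=True)
    model = (kl @ s) @ kl                   # projection onto the KL basis
    return (s - model).reshape(science.shape)

# toy scene: stellar PSF + faint companion; 9 noisy reference frames
rng = np.random.default_rng(3)
y, x = np.mgrid[0:32, 0:32]
stellar = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 20.0)
refs = np.array([stellar * (1 + 0.05 * rng.normal())
                 + 0.01 * rng.normal(size=(32, 32)) for _ in range(9)])
planet = 0.05 * np.exp(-((x - 24) ** 2 + (y - 16) ** 2) / 4.0)
residual = klip_rdi(stellar + planet, refs)
```

In this toy setup the residual peaks near the injected companion position, which is the behavior the pipeline stage exploits on real data.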
Here we present the performance to the best of our knowledge at the current state of our analysis, just after the science readiness review and with a limited analysis period after the \enquote{Coronagraphic Suppression Verification} data (\coronsup) were taken in early July. These analyses focused on the MASK335R, as it was the setup chosen to meet the readiness criteria. We anticipate that the performance will evolve favorably as TA errors are reduced and TA repeatability improves (during Cycle 1), e.g. by improving the TA centroiding algorithm (no timeline for now).
In the {\sl Characterization of JWST science performance from commissioning} report by Rigby et al. 2022\cite{rigby2022long}, posted on July 12 2022\footnote{A report on the actual JWST science performance, as characterized through the 6-month commissioning activities: {\small \tt \href{https://jwst-docs.stsci.edu/breaking-news}{jwst-docs.stsci.edu/breaking-news\#BreakingNews-JWSTscienceperformancereportattheendofcommissioning}}}, the same contrast curves are shown as in figure~\ref{fig:contrast-335r}, but displaying only the 9 SGDs of the worst-case reference star (Obs 3). Here we also applied a corrective factor of $\sim$2 to take into account the wavelength dependence of the TA neutral density (ND) filter\footnote{The transmission curve of the ND is shown in Kammerer et al. 2022 (this conference)}. The contrast in this paper is thus about twice worse than in the report. Figure~\ref{fig:adirdi} shows the corrected contrasts and compares them with those of the commissioning report.
\subsection{Long Wavelength performance}
\label{sec:lw} %
The LW achievable contrast (MASK335R) is summarized in figures~\ref{fig:contrast-335r} \&~\ref{fig:adirdi}.
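For reference, a contrast curve like those in these figures can be computed from a PSF-subtracted image roughly as follows (a simplified sketch: real curves also fold in small-sample statistics, algorithmic throughput, and corrections such as the $\sim$2$\times$ ND factor discussed in section~\ref{sec:data}; the 0.063\arcsec/pixel scale is the approximate LW value):

```python
import numpy as np

def contrast_curve(residual, star_peak, pixscale, seps_arcsec, dr_pix=2.0):
    """5-sigma contrast vs separation: the pixel-to-pixel standard deviation
    in each annulus, times 5, normalized by the unocculted stellar peak.
    Ignores small-sample statistics and throughput corrections."""
    ny, nx = residual.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r_pix = np.hypot(x - nx / 2, y - ny / 2)
    curve = []
    for sep in seps_arcsec:
        ring = residual[np.abs(r_pix - sep / pixscale) < dr_pix / 2]
        curve.append(5.0 * ring.std() / star_peak)
    return np.array(curve)

# toy residual image: a flat speckle floor at 1e-3 of the stellar peak
rng = np.random.default_rng(4)
residual = rng.normal(0.0, 1e-3, size=(128, 128))
curve = contrast_curve(residual, 1.0, 0.063, np.array([0.5, 1.0, 1.5]))
```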
\subsection{Short Wavelength performance}
\label{sec:sw} %
\coronsup (Suppression Verification) did not include any SW measurements, so we only had a few images from \coronta (Coronagraphic TA) with which to experiment and compute less-than-ideal preliminary, yet encouraging, contrasts. Figure~\ref{fig:SGDcontrast} shows the results of these experiments. We also subtracted a synthetic PSF library from each SGD to see which one would be closest to the ideal positioning. This turned out to work, but was judged a marginally sensitive approach (only sensitive between 0.2\arcsec and 0.55\arcsec) for our main goal at the time: assessing the relative positioning with respect to the mask.
\section{Discussion}
\label{sec:discussion}
\subsection{PSF Subtraction Strategies}
\label{sec:adirdi} %
For Cycle 1, and probably Cycle 2, \textbf{we recommend that users adopt as their main coronagraphic strategy: 2 rolls (as separated as possible within the 14\degr limit) and at least 1 PSF reference star, taken back to back in an uninterruptible sequence}. Our \coronsup commissioning program allowed us to start investigating the best and most efficient PSF subtraction strategy with the \nircam coronagraphs, given the telescope stability, the wavefront residuals, and what we know of our TA error and repeatability issues. Figure~\ref{fig:contrast-335r} shows the importance of the SGD mitigation strategy: the more reference stars and positioning diversity, the better. In general, matching the science target SNR with a standard star observed with a 9-point SGD pattern allows a very clean detection at 0.5\arcsec (4.7 $\lambda/D$, well within the 6 $\lambda/D$ IWA) of a $10^4$ contrast point-source companion. Figure~\ref{fig:adirdi} shows that having one or several reference stars is more important than performing rolls if the goal is to detect a faint companion or structures as close in as possible (e.g. between 0.4\arcsec and 1\arcsec, in the speckle-limited regime).
\subsection{Operational Maturity}
\label{sec:ops-maturity}
Even though we identified areas to improve (mainly the TA, but also the astrometric analysis), the operational maturity of the mode is high. Indeed, we have performed many TAs and reached a point where guide star acquisition (by the FGS) only fails occasionally, no more often than for other JWST modes.
New \nircam reference files (e.g. distortion reference files for Coronagraphy) were recently (July 2022) delivered to the Calibration Reference Data System (CRDS)\footnote{JWST Calibration Reference Data System (CRDS): {\small \tt \href{https://jwst-crds.stsci.edu}{jwst-crds.stsci.edu}}}, so the official \jwst pipeline stage 3 (\texttt{coron3}) is now able to provide \texttt{i2d.fits} products in MAST which are more than a quicklook and thus practically \enquote{science grade}, very similar to the \spaceklip images shown in this paper. The intermediary products are also valuable: for instance, the \texttt{psfsub} product is, for each roll, a cube with one KLIP-subtracted (RDI) slice per science integration (53 s for \coronsup), using one of the reference stars. The WD companion HD 114174 B is clearly and cleanly imaged in nearly every frame.
The Exposure Time Calculator (ETC)\footnote{JWST Exposure Time Calculator (ETC): {\small \tt \href{https://jwst.etc.stsci.edu}{jwst.etc.stsci.edu}}} predicted the percentage of the full well with less than 20\% discrepancy from reality, and the TA SNRs were as expected. The ETC is over-optimistic (no TA error, less detector noise), but its current OPD is worse than reality and there is no SGD implementation. All in all, the ETC is currently adequate to prepare proposals and observations.
JDox needs to be updated, as for all JWST modes: the \nircam Coronagraphy pages presented outdated contrast-curve predictions\cite{beichman2010}. Nevertheless, the recommended PSF subtraction strategy and the HCI articles still hold.
\subsection{Commissioning hurdles, recommendations and perspectives}
\label{sec:rec} %
The \nircam focal plane masks are forgiving but it is challenging to know where they are with respect to our star of interest.
The main hurdle has been the LW pupil wheel misalignment, which caused a slip of about 4--5 weeks. Astrometry with the coronagraphic PSF and 10 SCAs was a challenge, but the team now has the software infrastructure and knowledge to do it again. Our simulation framework (\pynrc, \webbpsf, \webbpsfext) has allowed us to come up with agile ways to analyze data and move on with the commissioning of \nircam Coronagraphy to the level we are at now, offering the mode to Cycle 1 users, including \ers and \gto. \textbf{The Target Acquisition (TA) can be further improved. We recommend that the TA parameters be carefully remeasured during Cycle 1, which could afford significantly better performance than is currently possible and much better than pre-launch predictions} (subject to the state of the OTE {\sl vis-\`a-vis} tilt events, etc.).
Our plan as of July 2022 is:
\begin{enumerate}
\item Proceed with the July observations (ERS etc.) and get even more knowledge on the TA repeatability
\item Plan an LMC astrometric calibration program similar to \texttt{NRC-21b} \coronlmc: we now have the tools to analyze it more quickly and determine the distortion solutions and offsets for both SIAF and CRDS updates, taking into account the rotation term (possibly $\sim$0.1\degr from our estimations).
\item Perform a follow-up calibration program similar to \texttt{NRC-30} \coronta to check that the TA accuracy has improved.
However, to investigate robustness and repeatability we need more than one observation per mask; therefore, if the performance is acceptable (likely better than now), we will not stop science programs after this.
\end{enumerate}
For future cycles we hope to be able to implement and support \nircam Coronagraphy with simultaneous SW and LW acquisition (always saving both, as for the Imaging mode). We intend to improve the official \jwst pipeline and \mast products based on all the feedback we receive from the vibrant \hci community, both in terms of pre-processing and post-processing. We wish to improve the TA algorithm, which is currently a limitation for robustness and accuracy. Finally, we wish to improve and fine-tune our astrometric calibration method by eventually taking into account (via a model) the discontinuity between the two areas (COM and non-COM).
The RDI-with-SGD strategy appears very good. While it can be time consuming (slew to the reference star(s), SGD pattern), it has been shown that \nircam contrasts are only marginally affected by a spectral mismatch between the science target and the reference star(s), and that brighter reference star(s) can be observed with the same or higher SNR in less total time. Finally, the telescope wavefront monitoring has shown excellent stability over hours (about 20 nm of drift over a huge slew, much larger than the $\sim$5--10\degr$\,$ typical slew we recommend between targets of the same program). In other words, \textbf{it is probably better in most cases to use brighter, slightly farther reference star(s) with slightly different spectral types than to choose a roll-only strategy}.
Once many reference stars have been observed in many settings, it will become possible to use an archival reference star library in the RDI KLIP subtraction. That library can ideally be composed of many stars, possibly with and without tilt events (anything that can happen during a science observation). We have started to experiment with a synthetic PSF library (only with SW MASK210R/F210M, as shown in figure~\ref{fig:SGDcontrast}). Increasing the size of the library and the information in it (including tilt events, TA errors, spectral and brightness diversity) should yield good results. We can even envision a hybrid PSF library composed of both real stars and synthetic PSFs (created using contemporaneous OPDs and non-contemporaneous features such as tilt events / segment relaxation, accounting for micrometeoroid impacts, etc.).
Based on our experience with \coronsup, there is no doubt that \nircam Coronagraphy will soon deliver impactful science results, as the measured flight performance is above expectations\cite{carter2021, hinkley2022simulations}. Known giant exoplanets will be characterized and studied further, and new planets will be discovered. Unlike AO on the ground (even with laser guide stars), \nircam Coronagraphy can be used on faint hosts: brown dwarfs, galaxies, etc. The possibilities are endless. Finally, if SW and LW data can be saved simultaneously in the future, and with the use of archival and/or synthetic or hybrid PSF libraries, \nircam Coronagraphy can become an even more efficient JWST mode and yield even more science return.
\acknowledgments %
These observations were made possible through the efforts of the many hundreds of people composing the international commissioning staff of JWST. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with programs \coronlmc, \coronta and \coronsup. Jens Kammerer is supported by programs PID 1194, 1411, and 1412 through a NASA grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. Some of the research described in this publication was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. We warmly thank Johannes Sahlmann, the main developer and initiator of \texttt{pystortion}, precursor to \jwstdistortion and \pysiaf, who made the first prototype of JWST astrometric calibration in the NIRISS Imaging mode\cite{sahlmann2019}. We thank Scott Friedman, our Commissioning Scientist, who kept us on schedule and ran the fabulous JDB (JWST Daily Briefings). We also thank the SPIE Organizing Committee and the Proceedings Coordinators.
\bibliography{SPIE2022} %
\bibliographystyle{spiebib} %
\vspace{-0.2cm}
\section{Acronyms}
For more JWST related acronyms and abbreviations: {\small \tt \href{https://jwst-docs.stsci.edu/jwst-acronyms-and-abbreviations}{jwst-docs.stsci.edu/jwst-acronyms-and-abbreviations}}
\vspace{-0.2cm}
\begin{table}[h!]
\small
\begin{center}
\begin{tabular}{|l|l|}
\hline
ADI & Angular Differential Imaging \\
ALMA & Atacama Large Millimeter/submillimeter Array \\
AO & Adaptive Optics \\
APT & Astronomer's Proposal Tool \\
COM & Coronagraphic Optical Mount \\
CRDS & (JWST) Calibration Reference Data System \\
CSA & Canadian Space Agency \\
CVT & Coronagraphic Visibility Tool \\
CV3 & Cryo-Vacuum test \#3 \\
DI & Direct Imaging \\
DMS & Data Management System \\
ESA & European Space Agency \\
ETC & Exposure Time Calculator \\
FGS & Fine Guiding Sensor \\
FoV & Field of View \\
FPA & Focal Plane Array \\
Hawk-I & High Acuity Wide field K-band Imager (VLT)\\
HCI & High Contrast Imaging \\
HST & Hubble Space Telescope \\
JDox & JWST user documentation (shorthand) \\
JWST & James Webb Space Telescope \\
KLIP & Karhunen-Lo\`eve Image Projection \\
LMC & Large Magellanic Cloud \\
LW & Long Wavelength (Channel) \\
mas & milliarcsecond \\
MAST & Mikulski Archive for Space Telescopes \\
MIRI & Mid-Infrared Instrument \\
NASA & National Aeronautics and Space Administration \\
ND & Neutral Density (filter) \\
NIRCam & Near InfraRed Camera \\
OPD & Optical Path Difference \\
OSS & Operations Scripts System \\
OTE & Optical Telescope Element \\
PSF & Point Spread Function \\
PI & Principal Investigator \\
PID & Proposal ID (Identification in APT and MAST) \\
PIL & Pupil Imaging Lens \\
PW & Pupil Wheel \\
RDI & Reference Differential Imaging \\
RMS & Root Mean Square \\
RND & Round (mask) \\
SAM & Small Angle Maneuver\\
SCA & Sensor Chip Assembly \\
SGD & Small Grid Dither(s) \\
SI & Science Instrument \\
SIAF & Science Instrument Aperture File \\
SNR & Signal to Noise Ratio \\
SW & Short Wavelength (Channel) \\
TA & Target Acquisition \\
VLT & Very Large Telescope\\
WD & White Dwarf \\
WFE & WaveFront Error \\
YSO & Young Stellar Object \\
\hline
\end{tabular}
\end{center}
\label{table:acronyms}
\end{table}%
Title:
Lepto-hadronic jet-disc model for the multi-wavelength SED of M87
Abstract: The low-luminosity Active Galactic Nuclei M87, archetype of Fanaroff-Riley I
radio-galaxies, was observed in a historically quiet state in 2017. While
one-zone leptonic jet models alone cannot explain the core radio-to-gamma-ray
spectrum, we explore a hybrid jet-disc scenario. In this work, we model the
overall spectral energy distribution of M87's core with a dominating one-zone
lepto-hadronic jet component, coupled with the contribution from the accretion
flow. We find close-to-equipartition parameter sets for which the jet component
fits the radio-to-optical data as well as the gamma-ray band, while the
accretion flow mainly contributes to the X-ray band. The effects of gamma-ray
absorption by the Extragalactic Background Light during the propagation towards
Earth are probed and are found to be negligible for this model. The neutrino
flux produced by such scenarios is also calculated, but remains below the
current instruments' sensitivity.
PDF: https://export.arxiv.org/pdf/2208.14756
\title{Lepto-hadronic jet-disc model for the multi-wavelength SED of M87}
\correspondingauthor{Margot Boughelilba}
\email{[email protected]}
\author[ 0000-0003-1046-1647 ]{Margot Boughelilba}
\affiliation{Institute for Astro and Particle Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\author[0000-0001-8604-7077]{Anita Reimer}
\affiliation{Institute for Astro and Particle Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\author[0000-0003-1332-9895]{Lukas Merten}
\affiliation{Ruhr-Universität Bochum, Institut für Theoretische Physik IV, 44801 Bochum, Germany}
\affiliation{Institute for Astro and Particle Physics, University of Innsbruck, 6020 Innsbruck, Austria}
\keywords{Jets(870) --- Particle astrophysics (96) --- Active galactic nuclei (16) --- High energy astrophysics (739) --- Low-luminosity active galactic nuclei (2033) --- Astrophysical black holes (98) --- Cosmic ray sources (328) --- Gamma-ray sources (633) --- Non-thermal radiation sources (1119) --- Relativistic jets (1390)} %
\section{Introduction} \label{sec:intro}
M87 is one of the closest examples %
of low-luminosity Active Galactic Nuclei (AGN), located at a distance of $\sim 16.8$ Mpc from Earth (corresponding to a redshift $z \approx 0.004$) in the Virgo cluster. The mass of the supermassive black hole at its center has been estimated at $\sim 6.5 \times 10^9\, \mathrm{M}_\sun$ \citep{2017_Mass_EHT}.
In 2017, an extensive multi-wavelength observation campaign was launched, taking quasi-simultaneous data with several telescopes across the entire electromagnetic band \citep{EHTpaper}. For nearly two months, M87's core region was observed in a particularly low state. Because the broadband spectrum of these observations is dominated by emission from the core rather than from the jet's knots such as HST-1, they allow us to study the innermost radiation from the AGN, in particular from the launching region of the jet that M87 exhibits.
Furthermore, the proximity and size of M87 make it a prime candidate accelerator of the observed high-energy cosmic rays (see, e.g. \citealt{UHECR_M87, M87UHECR_TeV}).
To explain the multi-wavelength spectral energy distribution (SED) of M87, different emission models are usually probed. Leptonic jet models are, in the case of M87, typically based on the synchrotron self-Compton mechanism: synchrotron photons, produced by the interaction of the jet's relativistic electrons and positrons with the ambient magnetic field, serve as the target field for inverse Compton scattering by the same particles, thereby producing high-energy radiation. In \cite{EHTpaper}, two different one-zone leptonic models were applied, but failed to reproduce the high- and low-energy parts of the SED of M87 at the same time.
On the other hand, lepto-hadronic models have been proposed to explain the SED of objects such as M87 (e.g. \citealt{AnitaM87}). In such models, accelerated protons are present in the jet together with electrons, and the high-energy part of the SED is assumed to result from proton-initiated processes. A clear observational discriminator between these two kinds of models is the production of neutrinos in the lepto-hadronic case.
In this paper, we explore a global model coupling the lepto-hadronic jet emission and the accretion flow, in order to explain the observed SED of M87. In this model, all the emission originates from the core region of the AGN. %
The paper is organized as follows: in Section \ref{sec:jet section} we describe the jet model; in Section \ref{sec:adaf} we detail the accretion-flow model component and its parameters; results of the simulations are presented in Section \ref{sec:results}; and we conclude on the global core emission in Section \ref{sec:conclusion}.
\section{Jet model component} \label{sec:jet section}
The one-sided jet launched by M87 has been well studied across all wavelengths. In the 2017 observation campaign, \cite{EHTpaper} did not infer any time variability in the flux above 350 GeV. The data collected focus on the core emission. The angular resolution of the radio observations suggests that the radio emission region is close to the jet launching region. While the launch mechanisms are still unclear, the total estimated jet power of M87, $\sim 10^{43-44} \, \mathrm{erg}\, \mathrm{s}^{-1}$ \citep{Prieto_M87, Jet_power, Jet_power_Stawarz}, can be provided through, e.g., the Blandford-Znajek mechanism \citep{Blandford_Znajek, EHT2019_BZ_BP}.
In this paper we explore models that can reproduce the quiet and steady state of M87's core observed between March and May 2017. There is evidence of sub- to superluminal motion of radiating components in M87's inner jet (e.g., Snios et al. 2019; Walker et al. 2018), which supports a jet model in which the emission region is viewed as a moving blob. On the other hand, the possibility that the jet is a continuous zone through which the particles flow is often considered in the case of quiet-state emission (see, e.g. \citealt{BlandfordSteady, SteadyReview}) and cannot be ruled out. We investigate both scenarios here.
First we consider the jet emission region as a spherical blob of magnetized plasma with a constant radius $r'_\mathrm{em}$, moving at a mildly relativistic speed along the axis of a jet that does not expand during the observation time, inclined by an angle $\theta$ with respect to the line of sight. This defines a Doppler factor $\delta_\mathrm{j} = \Gamma_\mathrm{j}^{-1}(1 - \beta_\mathrm{j}\cos{\theta})^{-1}$, where $\Gamma_\mathrm{j}$ and $\beta_\mathrm{j}c$ are the bulk Lorentz factor and velocity, respectively.
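As a quick numerical sketch of this Doppler factor (using the values $\beta_\mathrm{j} = 0.73$ and $\theta = 17\degr$ adopted in Section \ref{sec:results}):

```python
import math

def doppler_factor(beta, theta_deg):
    """delta_j = 1 / (Gamma_j * (1 - beta_j cos(theta))), with Gamma_j = 1 / sqrt(1 - beta_j^2)."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

# values adopted later in the paper (Section "Results")
delta_j = doppler_factor(0.73, 17.0)   # ~2.3
```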
For the second scenario we consider the jet as a continuous cylinder of radius $r'_\mathrm{em}$ and proper length $l' = \Gamma_\mathrm{j} l$, with $l$ being the observed length.
The EHT observation provides a strong constraint on the size of the emission region, as the angular resolution allows us to probe the regions closest to the black hole. At 230 GHz, the radio flux was measured with an angular resolution $\theta_\mathrm{obs}$ of 0.06 mas, corresponding to 7.5 $r_\mathrm{g}$ (for M87, $r_\mathrm{g} = G M_\mathrm{BH}/c^2 \approx 9.8 \times 10^{14} \, \mathrm{cm}$) in radius. However, even for a mildly relativistic jet velocity, the blob travels farther than 7.5 $r_\mathrm{g}$ over the observation time. In the continuous jet scenario, we assume that the jet is launched around the innermost stable orbit of the black hole, i.e. $6 \, r_\mathrm{g}$ for a static black hole. Hence, when observing the core region within $7.5\, r_\mathrm{g}$ at $230 \, \mathrm{GHz}$, the jet component is likely not the dominant one. Since we choose to focus on the core emission, we take care that for an emission region of size $\lesssim 7 \, r_\mathrm{g}$, the predicted radio flux does not exceed this particular data point. %
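The quoted scales can be cross-checked with a back-of-envelope computation (a sketch in cgs units; we take $\theta_\mathrm{obs} = 0.06$ mas, the value consistent with a radius of 7.5 $r_\mathrm{g}$ at 16.8 Mpc):

```python
import math

G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33        # cgs constants
r_g = G * (6.5e9 * M_sun) / c ** 2                # gravitational radius, ~1e15 cm
d_L = 16.8 * 3.0857e24                            # 16.8 Mpc in cm
theta_obs = 0.06e-3 * math.pi / (180.0 * 3600.0)  # 0.06 mas in radians
radius_rg = d_L * theta_obs / 2.0 / r_g           # beam radius in units of r_g, ~7.5
```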
Furthermore, the SED of M87 indicates a self-absorbed, stratified jet below at least 86 GHz \citep{EHTpaper, Blandford_prediction_SSA}. This lower limit on the self-absorption frequency $\nu_\mathrm{ssa, obs}$ and the corresponding flux $S_{\nu_\mathrm{ssa, obs}}$, coupled with the estimate of the size of the emission region, allow us to derive a relation for the required magnetic field strength $B$. Following the treatment by \cite{Kino_SSA} for a moving blob:
\begin{eqnarray}
B &=& b(p_e)\left(\frac{\nu_\mathrm{ssa,obs}}{1\mathrm{GHz}}\right)^5\left(\frac{\theta_\mathrm{obs}}{1\mathrm{mas}}\right)^4\left(\frac{S_{\nu_\mathrm{ssa},\mathrm{obs}}}{1\mathrm{Jy}}\right)^{-2}\left(\frac{\delta}{1+z}\right) \quad \mathrm{G}
\end{eqnarray}
with $b(p_e)$ described in Appendix \ref{b(p)}.
From this, we estimate an order of magnitude for the magnetic field strength and then adjust the primary electron injection parameters so that the synchrotron radiation produced is of the order of $S_{\nu_\mathrm{ssa},\mathrm{obs}}$ at the given frequency. Taking the self-absorption frequency to be $\lesssim 230\, \mathrm{GHz}$ (around the EHT data point), with an observed flux of $S_{\nu_\mathrm{ssa},\mathrm{obs}}\sim 0.6 \, \mathrm{Jy}$, gives an estimated magnetic field strength between $\sim 5$ and $60$ G.
We assume that the emission region contains primary relativistic electrons and protons that are isotropically and homogeneously distributed in the comoving jet frame, and following a power-law energy spectrum cutting off exponentially, such that the spectral number density $n'_{e,p}(E') \propto E'^{-p_{e,p}}e^{-E'/E'_{\mathrm{max},e,p}}$ cm$^{-3}$, for $E' \ge E'_{\mathrm{min},e,p}$ (where e,p denotes the electrons or the protons, respectively).
These primary particles are injected continuously into the emission region at a rate $q_i$ (cm$^{-3}$s$^{-1}$), where they undergo various interactions: photo-meson production, Bethe-Heitler pair production, inverse Compton scattering, $\gamma$-$\gamma$ pair production, decay of all unstable particles, synchrotron radiation (from electrons and positrons, protons, and $\pi^\pm$, $\mu^\pm$ and $K^\pm$ before their respective decays) and particle escape. Positrons are treated in the same way as electrons; hence in the following we use ``electrons'' to refer to both populations irrespective of their type.
Primary particles can also interact with external target photon fields (i.e. fields produced outside the jet). However, no evidence of a dusty torus has been found \citep{No_DT_in_M87}, and no Fe K$\alpha$ line has been observed that would support the existence of a strong broad-line region (BLR) component \citep{DiMatteoM87}. This is in line with the properties of ``true'' type 2 AGN (\citealt{LLAGN_NLR}; or see \citealt{Review_DT_BLR} for a review). Hence we consider neither the dusty torus nor the BLR as external target fields. On the other hand, the accretion flow could serve as an external photon field for the jet's primary particles, a possibility we discuss in Section \ref{sec:results}. %
The maximum energy of the primary particles is determined by $E_\mathrm{max} = \min{(E_\mathrm{max}^\mathrm{Hillas}, E_\mathrm{max}^\mathrm{loss/acc})}$ where $E_\mathrm{max}^\mathrm{Hillas}$ is the energy given by the Hillas criterion \citep{Hillas} and $E_\mathrm{max}^\mathrm{loss/acc}$ is the energy obtained by balancing the particles' acceleration and loss rates.
The Hillas criterion constrains the Larmor radius of the particles to be smaller than or equal to the size of their acceleration region, leading to an estimate of the maximum particle energy $E_\mathrm{max}^\mathrm{Hillas} \approx 10^{21} Z\beta \left(R/\mathrm{pc}\right) \left(B/\mathrm{G}\right)\, \mathrm{eV}$. %
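For the emission-region radius and field strengths considered later ($r'_\mathrm{em} = 5\times10^{15}$ cm, $B = 10$--$50$ G), the Hillas bound evaluates as follows (a sketch):

```python
R_pc = 5e15 / 3.0857e18      # emission-region radius in parsec

def hillas_emax_eV(Z, beta, B_gauss):
    """E_max ~ 1e21 * Z * beta * (R / pc) * (B / G) eV."""
    return 1e21 * Z * beta * R_pc * B_gauss

E_10G = hillas_emax_eV(1, 1.0, 10.0)   # ~1.6e19 eV, i.e. ~1.6e10 GeV
E_50G = hillas_emax_eV(1, 1.0, 50.0)   # ~8e19 eV
```

The proton cutoff energies explored below (up to $10^{10}$ GeV) indeed sit below this bound.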
Expressions for $E_\mathrm{p,max}^\mathrm{loss/acc}$ and $E_\mathrm{e,max}^\mathrm{loss/acc}$ are obtained by equating the acceleration timescale $t_\mathrm{acc}(E_\mathrm{p,e,max}^\mathrm{loss/acc})$ with the loss timescale $t_\mathrm{cool}(E_\mathrm{p,e,max}^\mathrm{loss/acc})$. We follow the work of \cite{AnitaM87} to verify that the ratio of the maximum energies $E_\mathrm{p,max}^\mathrm{loss}/E_\mathrm{e,max}^\mathrm{loss} = \left(m_p/m_e\right)^{4/(3-\beta)}$ is obtainable with a realistic turbulence spectrum. For Kolmogorov diffusion, e.g., $\beta=5/3$, we get $E_\mathrm{p,max}^\mathrm{loss}/E_\mathrm{e,max}^\mathrm{loss} \sim 6\times 10^9$. Bohm diffusion, where the magnetic field is fully tangled, corresponds to $\beta=1$, and in the case of strong magnetic fields, Kraichnan turbulence ($\beta = 3/2$) can be present \citep{Kraichnan_turbulence}.
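The quoted ratio follows directly from the mass ratio and the turbulence index (a sketch):

```python
m_ratio = 1836.15   # proton-to-electron mass ratio
ratios = {name: m_ratio ** (4.0 / (3.0 - b))
          for name, b in [("Bohm", 1.0), ("Kraichnan", 1.5), ("Kolmogorov", 5.0 / 3.0)]}
# Kolmogorov: exponent 4 / (3 - 5/3) = 3, so the ratio is (m_p / m_e)^3 ~ 6e9, as quoted
```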
To compute the time-dependent direct emission and cascade components from the jet's particles, we use a particle and radiation transport code (see, e.g. \citealt{Anita_matrix_intro}) based on the matrix multiplication method described in \cite{Protheroe_Stanev_matrix} and \cite{Protheroe_Johnson_matrix}. The interaction rates and the secondary particle and photon yields are calculated with Monte Carlo event generator simulations (except for synchrotron radiation, for which they are calculated semi-analytically). These are used to create transfer matrices that describe how each particle spectrum changes over a given timestep $\delta t$. To ensure numerical stability, we set $\delta t$ equal to the smallest interaction time in any given simulation. In each timestep, energy conservation is verified. For steady-state spectra, we run the simulation until convergence, which we quantify through the ratio $R_\mathrm{conv} = F_\nu(t)/F_\nu(t - \delta t)$ of the fluxes at simulation times $t$ and $t - \delta t$; convergence is reached when $|R_\mathrm{conv} - 1| < 10^{-3}$.
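Schematically, the convergence loop can be written as below; the relaxation update is a toy stand-in for the actual transfer-matrix multiplication (an illustration, not the code used in this work):

```python
def run_to_convergence(step, f0, tol=1e-3, max_steps=100_000):
    """Advance the flux one timestep at a time until |F(t)/F(t - dt) - 1| < tol."""
    f_prev = f0
    for n in range(1, max_steps + 1):
        f = step(f_prev)
        if f_prev > 0 and abs(f / f_prev - 1.0) < tol:
            return f, n
        f_prev = f
    raise RuntimeError("no convergence within max_steps")

# toy update: flux relaxing toward a steady-state value of 1.0
flux, n_steps = run_to_convergence(lambda f: f + 0.1 * (1.0 - f), f0=0.01)
```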
All the calculations listed above are done in the jet frame. The observed spectrum $\nu F_{\nu}$ is then given by the frame transformation $F_{\nu} = (1+z)\, g_\mathrm{boost}\, L'_{\nu}/(4\pi d_\mathrm{L}^2)$, where $L'_{\nu}$ is the comoving luminosity from the jet, $d_\mathrm{L} = 16.8 \, \mathrm{Mpc}$ is the luminosity distance of the source, and $\nu_\mathrm{obs} = \delta_\mathrm{j}\nu'/(1+z)$. The Doppler enhancement factor is $g_\mathrm{boost} = \delta_\mathrm{j}^3$ for a moving blob and $g_\mathrm{boost} = \delta_\mathrm{j}^2/\Gamma_\mathrm{j}$ for a continuous jet \citep{dopplerSikora, DopplerStawarz}. For a given comoving energy density, we obtain the intrinsic luminosity through $u'_{\nu} = (r'_\mathrm{em}/c)(L'_{\nu}/V')$, where $V'$ is the comoving volume of the emission region (i.e. depending on the geometry). We find that we obtain the same observed flux for both jet configurations by setting the length of the continuous cylinder to $l' = 2\delta_\mathrm{j}\Gamma_\mathrm{j}r'_\mathrm{em}/3$. We apply this for the remainder of this work; hence the results shown in Section \ref{sec:results} are identical for the moving blob and the continuous jet scenario, given this condition.
The effect of gamma-ray absorption by the Extragalactic Background Light (EBL) on the photons travelling from the source to Earth is taken into account. Three different models, using different approaches to calculate the EBL SED as a function of redshift, are used to compute the flux attenuation factor. We use the models of \cite{EBL_franceschini}, \cite{EBL_dominguez}, and \cite{EBL_gilmore}, which are based, respectively, on existing galaxy populations extrapolated back in time, on the evolution of galaxy populations directly observed over the range of redshifts contributing most significantly to the EBL, and on forward evolution of galaxy populations starting from cosmological initial conditions. The high-energy flux of M87 could in principle be used to probe these models and constrain the EBL density, especially in the far-infrared band, where the differences between the models are largest. However, we find that, owing to the small distance of M87, the effects of gamma-ray absorption are negligible for gamma rays with energies below 10 TeV ($\sim 10^{27}$ Hz). As we predict the emitted flux to peak at $\sim 10^{24-25}$ Hz with a strong flux decrease towards higher energies (see Section \ref{sec:results}), we therefore cannot discriminate between the three models.
\section{Accretion flow} \label{sec:adaf}
Low-luminosity AGNs like M87 are expected to host radiatively inefficient accretion flows around their SMBHs. These take the form of geometrically thick, optically thin, very hot accretion flows called Advection-Dominated Accretion Flows (ADAFs; introduced by \citealt{first_adaf_torii, first_adaf} and further developed by, e.g., \citealt{Narayan_Yi_original, adafintro}). ADAFs exist only when the accretion rate is sufficiently low ($\dot{M} \lesssim 0.01\dot{M}_\mathrm{Edd}$), and consist of a plasma of thermal electrons and ions, where the two components may have different temperatures, $T_e$ and $T_i$ respectively.
In addition to the ADAF, we assume the existence of a truncated standard thin accretion disc (Shakura \& Sunyaev disc, \citealt{ShakuraSunyaev}) extending beyond the outer parts of the ADAF.
Here, we investigate to what extent an ADAF/disc system can contribute to the X-ray component, while not overshooting the radio-to-optical part of the SED, which is considered to be jet dominated.
In the following, we use the quantities $X_n = \frac{X}{10^n}$ and the normalized quantities $r=R/R_\mathrm{S}$, with the Schwarzschild radius $R_\mathrm{S} = 2 r_g = 2.95 \times 10^5 \, m_\mathrm{BH}$ cm, $m_\mathrm{BH}=M_\mathrm{BH}/M_\odot$ and $\dot{m}=\dot{M}/\dot{M}_\mathrm{Edd} = \eta_\mathrm{eff} \dot{M}c^2/L_\mathrm{Edd}$, where $\eta_\mathrm{eff}$ is the radiative efficiency of the standard thin disc ($\eta_\mathrm{eff} \approx 0.1$) and the Eddington luminosity is $L_\mathrm{Edd} \simeq 1.3 \times 10^{47} \, m_{\mathrm{BH},9} \, \mathrm{erg}\, \mathrm{s}^{-1}$.
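Both normalizations can be verified numerically (a sketch in cgs units):

```python
import math

G, c = 6.674e-8, 2.998e10                     # cgs
M_sun, m_p, sigma_T = 1.989e33, 1.6726e-24, 6.652e-25
R_S_per_Msun = 2.0 * G * M_sun / c ** 2       # ~2.95e5 cm, so R_S = 2.95e5 m_BH cm
L_Edd_9 = 4.0 * math.pi * G * (1e9 * M_sun) * m_p * c / sigma_T   # ~1.3e47 erg/s
```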
We make use of the one-zone, height-integrated, self-similar solutions of the slim disc equations derived by \cite{Narayan_Yi_original} to describe (see Appendix \ref{appendix:self-solutions}) the hot plasma.
To obtain the spectrum emitted by an ADAF, the balance between the heating and cooling of the thermalized electrons present in the plasma, $ q^{e+} = q^{e-} \label{eq:thermal balance}$, is solved to determine the scaled electron temperature $\theta_e = k_B T_e / m_e c^2$.
Here $q^{e+}$ is the electrons' heating rate, and $q^{e-}$ is their cooling rate.
The emission mechanisms that we consider in the following are synchrotron radiation, bremsstrahlung and Comptonization of the two previous components. The total cooling rate is the sum of the three individual cooling rates, detailed in Appendices \ref{appendix:brem and synch} and \ref{appendix:compton}. The heating mechanisms and rates are described in Appendix \ref{appendix:heating_rates} and they consist of Coulomb collision between ions and electrons, and viscous energy dissipation.
The plasma is a two-temperature plasma in which the ion temperature is related to the electron temperature through $T_i + 1.08T_e \approx 6.66\times 10^{12}\, \beta r^{-1}$ \citep{Narayan_Yi_original}, where $\beta$ is the ratio of the gas pressure $p_g$ to the total pressure $p = \rho c_s^2 = p_m + p_g$, with $p_m = B^2/8\pi$ the magnetic pressure, $\rho$ the mass density, and $B$ the isotropically tangled magnetic field.
We obtain the electron temperature by varying $T_e$ using a bisection method to solve the balance equation for each radius.
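A minimal sketch of such a bisection; the heating and cooling rates below are toy stand-ins for $q^{e+}$ and $q^{e-}$, chosen only so that the root is known in advance:

```python
def bisect(f, lo, hi, tol=1e-10, max_iter=200):
    """Standard bisection: find a root of f in [lo, hi], assuming a sign change."""
    f_lo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if hi - lo < tol:
            return mid
        if f_lo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, f_lo = mid, f(mid)
    return 0.5 * (lo + hi)

q_plus = lambda theta_e: 1.0             # toy heating rate (stand-in for q^{e+})
q_minus = lambda theta_e: theta_e ** 2   # toy cooling rate (stand-in for q^{e-})
theta_e = bisect(lambda th: q_plus(th) - q_minus(th), 1e-4, 10.0)  # root at theta_e = 1
```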
Furthermore, we take $\dot{m}$ of the form $\dot{m} = \dot{m}_\mathrm{out} \left(r/r_\mathrm{out}\right)^s$, where $r_\mathrm{out}$ is the outer radius of the ADAF, associated with an accretion rate $\dot{m}_\mathrm{out}$, and $s$ is a mass-loss parameter (introduced by \citealt{Blandford_Begelman_massloss}) used to account for outflows or winds from the ADAF.
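With the best-choice parameters of Table \ref{table:parameters_ADAF} ($\dot{m}_\mathrm{out} = 1.6\times10^{-3}$, $s = 0.39$) and $r_\mathrm{out} = 2\times10^{5}$, this profile reproduces the inner accretion rate inferred from the EHT ring observations (a sketch):

```python
mdot_out, r_out, s = 1.6e-3, 2e5, 0.39

def mdot(r):
    """Radius-dependent accretion rate with mass loss: mdot_out * (r / r_out)**s."""
    return mdot_out * (r / r_out) ** s

mdot_inner = mdot(2.5)   # ~2e-5 at the innermost radii
```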
Upon obtaining the electron temperature, the emitted spectrum from the ADAF is computed, integrating over the radius of the ADAF.
In order to take absorption into account, we follow the method of \cite{ADAF_spectrum_Manmoto} and derive the flux from synchrotron and bremsstrahlung emission as
\begin{equation}
F_{\nu,\mathrm{0}} = \frac{2\pi}{\sqrt{3}}\, B_\nu\left[ 1 - \exp{(-2\sqrt{3}\, \tau_\nu)}\right] \, \, \mathrm{erg} \, \mathrm{cm}^{-2} \, \mathrm{s}^{-1} \, \mathrm{Hz}^{-1} \label{eq:lnu_synch_brem}
\end{equation}
where \begin{eqnarray}
B_\nu = \frac{2 h \nu^3}{c^2}\frac{1}{e^{\frac{h\nu}{k_\mathrm{B} T_e}} - 1} \nonumber
\end{eqnarray} is the Planck function, and $\tau_\nu$ is the optical depth for absorption, defined such that $\tau_\nu = (\sqrt{\pi}/2)\kappa_\nu H$, with $\kappa_\nu = (j_{\nu, \mathrm{syn}} + j_{\nu, \mathrm{br}})/(4\pi B_\nu)$ the absorption coefficient. The emissivities $j_{\nu, \mathrm{syn}}$ and $j_{\nu, \mathrm{br}}$ are given in Appendix \ref{appendix:brem and synch}.
Hence the local luminosity from synchrotron and bremsstrahlung at a given radius is given by
\begin{equation}
L_{\nu,\mathrm{0}} = 2\pi R^2F_{\nu,\mathrm{0}}
\end{equation}
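The limiting behaviour of this prescription can be checked numerically: for $\tau_\nu \gg 1$ the flux saturates at the blackbody value $(2\pi/\sqrt{3})B_\nu$, while for $\tau_\nu \ll 1$ it reduces to the optically thin limit $4\pi\tau_\nu B_\nu$ (a sketch; the frequency and temperature below are illustrative):

```python
import math

H, K_B, C = 6.626e-27, 1.381e-16, 2.998e10   # cgs constants

def planck_nu(nu, T_e):
    """Planck function B_nu."""
    return 2.0 * H * nu ** 3 / C ** 2 / (math.exp(H * nu / (K_B * T_e)) - 1.0)

def local_flux(nu, T_e, tau_nu):
    """F_nu,0 = (2 pi / sqrt(3)) B_nu [1 - exp(-2 sqrt(3) tau_nu)]."""
    return (2.0 * math.pi / math.sqrt(3.0)) * planck_nu(nu, T_e) \
        * (1.0 - math.exp(-2.0 * math.sqrt(3.0) * tau_nu))

nu, T_e = 1e11, 5e9
thick = local_flux(nu, T_e, 1e3)   # optically thick: ~(2 pi / sqrt(3)) B_nu
thin = local_flux(nu, T_e, 1e-6)   # optically thin: ~4 pi tau B_nu
```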
Synchrotron radiation and bremsstrahlung further act as photon fields for inverse Compton scattering by the thermal electrons. Following the work of \cite{ADAF_neutrinos_CR}, we compute the number density of photons after the $i$-th scattering:
\begin{equation}
N_{\gamma,i}(\epsilon) = \frac{R}{c}\int d\gamma \, \frac{3}{4\gamma^2} \, N_e(\gamma, \theta_e) \, N_{\gamma,i-1}\left(\frac{3\epsilon}{4\gamma^2}\right) \, R_c\left(\frac{3\epsilon}{4\gamma^2}, \gamma\right) \label{eq:compton_flux}
\end{equation}
where $R_c(\epsilon, \gamma)$ is the scattering rate for electrons with Lorentz factor $\gamma$ and photons with dimensionless energy $\epsilon = h\nu/(m_e c^2)$, that we take from \cite{Coppi_Blandford_scattering}. $N_e(\gamma, \theta_e)$ is the Maxwellian distribution of electrons, described in equation \ref{eq:maxwellian}.
The initial condition is given by $N_{\gamma,0}(\epsilon) = L_{\epsilon,\mathrm{0}}/(h \nu \pi c R^2) $ with $L_{\epsilon,\mathrm{0}} = (m_e c^2/h) L_{\nu,\mathrm{0}} $.
The self-similar solutions give a good estimate of the ADAF emission for sufficiently large radii ($r \gg r_\mathrm{sonic}$, where $r_\mathrm{sonic}$ is the sonic radius; \citealt{Sonic_radius_critical}); however, the inner part of the ADAF ($r \sim 2.5 - 4$) is thought to be at the origin of the ring observed at 230 GHz by the EHT collaboration \citep{EHT2019_BZ_BP}. We cannot use the self-similar solutions to account for the emission from this inner part but, within the ADAF framework, we expect synchrotron radiation to be the dominant process in this region. The synchrotron radiation is self-absorbed up to the peak frequency corresponding to the emission radius (here 230 GHz at $r \sim 2.5 - 4$, corresponding to $R \sim 5 - 8 \, r_g$); hence we add to the existing ADAF spectrum (coming from regions $r \ge 5$) a power-law component $F_\nu \propto \nu^{5/2}$ scaled to the observed flux at 230 GHz.
The maximum radius $r_\mathrm{max}$ is poorly constrained. As there is no evidence for the presence of a truncated thin disc in the infrared data, we set $r_\mathrm{max} = 2\times 10^5$ so that any contribution from an outer disc truncated at this radius would be negligible (the computation of the outer disc spectrum is performed in Appendix \ref{appendix disc}). This value is consistent with the Bondi radius derived by \cite{bondiradius}. For the remaining parameters, we explore a broad range of values, as summarized in Table \ref{table:parameters_ADAF}.
\begin{table}
\centering
\tablenum{1}
\label{table:parameters_ADAF}
\caption{Summary of the allowed ranges for the parameters $\alpha$, $\beta$, $\dot{m}_\mathrm{out}$, s, $\delta_e$, as well as the best values chosen to represent the ADAF model for M87. Here $\alpha$ is the viscosity parameter introduced by \cite{ShakuraSunyaev}, $\beta$ is the ratio between the gas and the total pressure, $\dot{m}_\mathrm{out}$ is the accretion rate at the outermost part of the ADAF, s is the mass-loss parameter characterizing the evolution of the accretion rate over the volume, and $\delta_e$ is the fraction of viscous energy directly transmitted to the plasma electrons.}
\begin{tabular}{ccccc}
\hline
\hline
parameter & minimum value & maximum value & best choice & reference work \\
\hline
$\alpha$ & 0.01 & 1 & 0.1 & 1, 2 \\ %
$\beta$ & 0.5 & $<1$ & 0.9 & 3, 4 \\
$\dot{m}_\mathrm{out} $ & $1\times 10^{-4}$ & $2.93\times 10^{-3}$ \tablenotemark{a} & $1.6 \times 10^{-3}$ & 5, 6 \\ %
$s$ & 0 & 1 & 0.39 & 6, 7 \\ %
$\delta_e$ & $10^{-4}$ & $10^{-1}$ & $5 \times 10^{-3}$ & 5, 8\\
\hline
\end{tabular}
\tablenotetext{a}{The upper limit is the Bondi accretion rate, calculated with the mass estimate of \cite{2017_Mass_EHT}.}
\tablerefs{(1) \citealt{alpha_visco_mad}, (2) \citealt{alpha_visco_obs}, (3) \citealt{beta_values1}, (4) \citealt{beta_values2}, (5) \citealt{DiMatteoM87}, (6) \citealt{LLAGNmodels}, (7) \citealt{Blandford_Begelman_massloss}, (8) \citealt{Mahadevan1997}}
\end{table}
For this work, we wish to probe whether an ADAF component can explain the X-ray data without overestimating the radio-to-optical observations. In Figure \ref{fig:adaf}, we present the spectrum obtained with the parameter values that represent the data best. Given the radial dependence of the accretion rate, its value in the innermost region $r\sim 2.5$ is set to match the value inferred from the black hole ring observations \citep{EHT2019_BZ_BP}, where an inner accretion rate of $\dot{m}\sim 2\times 10^{-5}$ was estimated. The values of $\beta$ and of the electron density in the black hole vicinity are compatible with values derived for MAD (Magnetically Arrested Disk; see e.g. \citealt{firstMAD, MAD}) simulations \citep{MagFieldEHT}. The ADAF component alone is not entirely consistent with the X-ray data; however, its contribution is added to the jet emission to produce the overall SED (see Section \ref{sec:results}).
\section{Results} \label{sec:results}
With the methods described above we probe whether the total joint model (jet component added to the ADAF component) can explain the global SED. We start by setting the fixed parameters of the jet. Since the synchrotron self-absorption frequency is a critical feature of the observed spectrum (see Section \ref{sec:jet section}), we fix the size of the emission region so as to maximise the self-absorption frequency while remaining consistent with the flux value measured by \cite{EHT2017A}; namely, we set $r_\mathrm{em}' = 5\times 10^{15}$ cm $\approx 5 \, r_g$. This corresponds to the radius of the sphere in the moving-blob scenario, while for the continuous jet it gives the transverse radius of the cylinder.
For this region, we explore the parameter space, starting by varying the magnetic field strength between 10~G and 50~G. For each magnetic field strength, we adjust the electron maximum energy and spectral index in order to reproduce the observed cutoff in the optical band, while complementing the ADAF contribution around $10^{16}$ Hz.%
Once we have found the combination of the jet magnetic field strength, the size of the emission region and the injection rate of electrons inside the jet region, we determine the Doppler factor $\delta_j \approx 2.3$.
This corresponds to a velocity $\beta c = 0.73c$ with a jet inclination $\theta = 17 \degr$. With this value of the Doppler factor and the chosen size of the emission region, the length of the cylinder, using the geometry described in Section \ref{sec:jet section}, is $l' \approx 10^{16} \, \mathrm{cm}$. %
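These numbers are mutually consistent, as a quick check shows (a sketch):

```python
import math

beta, theta_deg, r_em = 0.73, 17.0, 5e15
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)                                   # ~1.46
delta = 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))   # ~2.3
l_cyl = 2.0 * delta * gamma * r_em / 3.0                                   # ~1.1e16 cm
```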
For the injected proton population, we explore cutoff energies between $10^9$ and $10^{10}$ GeV, and spectral indices between 1.7 and 2.0. There are fewer observational constraints on the proton population than on the electrons. We check the ratio of the maximum proton-to-electron energy (see Section \ref{sec:jet section}) and consider only models for which the total energy density in particles is lower than or equal to the magnetic energy density.
As mentioned in Section \ref{sec:jet section}, the accretion flow could serve as an external target photon field for the jet's interactions. To assess whether the ADAF would provide a relevant target field, we compare the energy densities of the internal (jet) and external (flow) photon fields in the jet's frame. To do so, we transform the accretion-flow radiation field into the jet's frame, assuming for simplicity that the flow is seen as a point source behind the jet. This is a rough approximation; however, we only aim to identify the dominant field here.
In Figure \ref{fig:target_fields_comparison}, we compare the spectral number densities of the two photon fields (for the jet model with $B = 10\,\mathrm{G}$, $p_{p} = 1.7$, $E'_{\mathrm{max},p}=6\times 10^9\,\mathrm{GeV}$, corresponding to the top panel of Figure \ref{fig:flux_B10G_p170}, and the ADAF shown in Figure \ref{fig:adaf}) at two frequencies, $10^{11}$ Hz and $10^{18}$ Hz, at which we expect the accretion flow to contribute (see Section \ref{sec:adaf}). The internal radio photon field clearly dominates over the external one. Even at X-ray energies, after only a short time (10 days, out of the two months of simulated observation), the internal target field contribution is larger than the external one. Therefore we do not consider the accretion flow as an external target photon field for the jet particles.
The SED is obtained by averaging the light curves over a time corresponding to the observation campaign time, i.e. 2 months.
The goodness of the fits (for both the light curve and the SED) is estimated by computing the p-value of the $\chi^2$-test for each model. We keep models that have a p-value $p_{\chi^2} > 0.01$.
In Figures \ref{fig:flux_B10G_p170} and \ref{fig:flux_B10G_p185} we present four models for which $B = 10$~G. The models have the lowest (highest) maximum proton energy and the lowest (highest) proton spectral index possible, given the observations and the constraints listed in Section \ref{sec:jet section}. With these models we obtain a jet power of $P_\mathrm{j} = 2-4 \times 10^{43} \, \mathrm{erg}\, \mathrm{s}^{-1}$ and particle-to-magnetic energy density ratios of $U_\mathrm{part}/U_B = 0.6-1.3$.
In Figures \ref{fig:flux_B50G_p170} and \ref{fig:flux_B50G_p200} we performed the same exploration and present four models with $B = 50$~G. We find that with such a high magnetic field strength it is harder to fit the data, and one has to consider lower proton densities and higher maximum proton energies. For a proton injection spectrum of $p=2$, we find a good fit only for maximum proton energies $E_\mathrm{p,max} \ge 8\times 10^{9}$~GeV (see top panel in Figure \ref{fig:flux_B50G_p200}). With these models we obtain a jet power of $P_\mathrm{j} \approx 3 \times 10^{44} \, \mathrm{erg}\, \mathrm{s}^{-1}$ and particle-to-magnetic energy density ratios of $U_\mathrm{part}/U_B \approx 10^{-2}$.
For both $B=10$~G and $B=50$~G, a higher maximum proton energy makes it easier to obtain a light curve above 350 GeV that complies with the observations. However, since proton synchrotron radiation is the main contribution to the high-energy spectral bump, the higher the maximum proton energy, the higher the frequency at which the emission peaks, and the SED fits become poorer.
The neutrino spectra (single-flavor flux) produced by the source in the models with $B = 10$~G are presented in Figure \ref{fig:neutrinos}. The predicted flux is low because the main gamma-ray contribution comes from proton synchrotron radiation, which does not produce neutrinos. We compare this flux to the point-source sensitivities of the Pierre Auger Observatory \citep{PierreAuger} and the IceCube observatory \citep{IceCube} to a high-energy neutrino flux $\propto E^{-2}$ at M87's declination.
\section{Conclusions} \label{sec:conclusion}
We have applied a lepto-hadronic, time-dependent jet model, complemented with an advection-dominated accretion flow, to M87's nuclear emission in a low flux state. We found a range of parameter values that reproduce the multi-wavelength data taken in 2017 during the \cite{EHTpaper} observation campaign. We investigated two types of jet configuration, namely the moving-blob and the continuous-jet scenario, and for a given set of parameters we obtained identical results for both geometries. We focused on a jet emission region of a size comparable to the EHT angular resolution at M87's distance, namely $5 r_\mathrm{g}$. Within this region we estimated a magnetic field strength in the range 5-60 G. The level of flux around the synchrotron self-absorption frequency ($86 < \nu_\mathrm{SSA} < 230$ GHz) further constrains the injection parameters of the relativistic electrons in the jet. For the range $10 \, \mathrm{G} \le B \le 50 \, \mathrm{G}$ we found that the electron spectral index is limited to $p_e \approx 1.80 - 1.85$ in order to reproduce the radio-to-optical part of the SED. For the same reason, the maximum energy of the electron distribution is found to be $E_{\mathrm{max},e} \lesssim 5$ GeV. Concerning the high-energy emission, we found parameter values that fit the data for the whole range of magnetic field strengths considered. However, it is worth pointing out that the proton maximum energy and spectral index ranges depend on the value of the magnetic field strength. For $B = 10$ G we found that for $p_p \simeq 1.7$ (minimum proton spectral index) the proton maximum energy lies in the range $6\times 10^9 \, \mathrm{GeV} \lesssim E_{\mathrm{max}, p} \lesssim 1\times 10^{10} \, \mathrm{GeV}$, while for $p_p \simeq 1.85$ (maximum proton spectral index for these parameter values) it lies in the range $7\times 10^9 \, \mathrm{GeV} \lesssim E_{\mathrm{max}, p} \lesssim 1\times 10^{10} \, \mathrm{GeV}$.
For $B = 50$ G, when $p_p \simeq 1.7$ (minimum proton spectral index) the proton maximum energy lies in the range $7\times 10^9 \, \mathrm{GeV} \lesssim E_{\mathrm{max}, p} \lesssim 1\times 10^{10} \, \mathrm{GeV}$, while for $p_p \simeq 2.00$ (maximum proton spectral index for these parameter values) it lies in the range $8\times 10^9 \, \mathrm{GeV} \lesssim E_{\mathrm{max}, p} \lesssim 1\times 10^{10} \, \mathrm{GeV}$. This required increase in the maximum proton energy makes it harder to fit the gamma-ray part of the SED above $10^{25}$ Hz.
Combining the jet's emission with the ADAF allows us to reproduce at the same time the apparent cut-off in the optical band, and the power-law-like flux component at X-rays energies.
Unlike previous works (e.g. \citealt{Feng_accretion_jet_model, LLAGNmodels}), we found a configuration in which the jet and the ADAF each make a distinct contribution to the SED. An estimation of M87's contribution to the cosmic-ray flux, as mentioned in the introduction, is beyond the scope of this paper. Indeed, with protons accelerated up to $10^{10}\, \mathrm{GeV}$ and a jet power of $10^{43-44}$ erg s$^{-1}$, M87 could contribute to the high-energy cosmic-ray flux detected on Earth (\citealt{M87UHECR_TeV}; see \citealt{uhecr_power_requirement} for the power requirements of cosmic-ray sources).
In this work we have considered only one-zone models for the jet emission. In the framework of structured jet models, multi-zone scenarios have been invoked to explain M87's SED (e.g. \citealt{Two-flow_jets, structuredjet}). In particular, \cite{SpineSheath_jets} developed a leptonic scenario in which a fast inner jet is embedded in a slower outer sheath; the beaming pattern resulting from the boosting of one layer into the other could explain the high-energy part of the SED, and they applied it to M87 \citep{M87_spine_sheath}. A transverse structure in jets is further supported by observations of limb-brightening (for M87 see \citealt{M87_limb}; for radio galaxies and blazars see \citealt{Mrk_structure, RG_structure}). However, with twice as many parameters as one-zone jet models, such as the one we considered, two-zone scenarios remain difficult to constrain to date.
\\
\begin{acknowledgments}
MB has for this project received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 847476. The views and opinions expressed herein do not necessarily reflect those of the European Commission. MB wishes to thank Paolo Da Vela for the fruitful discussions and insightful comments on this paper.
LM acknowledges support from the DFG within the Collaborative Research Center SFB1491 "Cosmic Interacting Matters - From Source to Signal".
This research was funded in part by the Austrian Science Fund (FWF)
(grant number I 4144-N27). For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
We would like to thank the anonymous referee for comments and suggestions that helped improve this paper.
\software{This work benefited from the following software: NumPy \citep{numpy}, Matplotlib \citep{matplotlib}, pandas \citep{pandas, panda_software}, jupyter notebooks \citep{ipython}.}
\end{acknowledgments}
\appendix
\section{Jet's magnetic field: b(p) coefficient} \label{b(p)}
\cite{Kino_SSA} derived a relation between the jet's magnetic field and the observable quantities:
\begin{eqnarray*}B = b(p)\left(\frac{\nu_\mathrm{ssa,obs}}{1\mathrm{GHz}}\right)^5\left(\frac{\theta_\mathrm{obs}}{1\mathrm{mas}}\right)^4\left(\frac{S_{\nu_\mathrm{ssa},\mathrm{obs}}}{1\mathrm{Jy}}\right)^{-2}\left(\frac{\delta}{1+z}\right) \, \mathrm{G}.
\end{eqnarray*}
Here $b(p)$ is defined as
$ b(p) = 5.52\times 10^{57}\, \Big[\Big(3 X_2 \, c_2(p)\Big)/\Big(2\pi X_1 \, c_1(p)\Big)\Big]^2 $, where
\begin{eqnarray*}
X_1 = \frac{\sqrt{3}e^3}{8\pi m_e}\left(\frac{3e}{2\pi m_e^3 c^5}\right)^{p/2}
\end{eqnarray*}
\begin{eqnarray*}
c_1(p)=\Gamma\left(\frac{3p + 2}{12}\right)\Gamma\left(\frac{3p + 22}{12}\right)
\end{eqnarray*}
\begin{eqnarray*}
X_2 = \frac{\sqrt{3}e^3}{8\sqrt{\pi} m_e c^2}\left(\frac{3e}{2\pi m_e^3 c^5}\right)^{(p-1)/2}
\end{eqnarray*}
\begin{eqnarray*}
c_2(p)=\Gamma\left(\frac{3p + 19}{12}\right)\Gamma\left(\frac{3p -1}{12}\right)\Gamma\left(\frac{p +5}{4}\right)/\Gamma\left(\frac{p + 7}{4}\right)/(p+1)
\end{eqnarray*}
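For reference, these expressions are straightforward to evaluate numerically. The following Python sketch (illustrative only; the CGS constant values are assumed, not taken from the text) implements $b(p)$ and the resulting SSA field estimate:

```python
import math

# CGS constants (assumed standard values)
e_cgs = 4.80320e-10   # electron charge [esu]
m_e   = 9.10938e-28   # electron mass [g]
c     = 2.99792e10    # speed of light [cm/s]

def c1(p):
    return math.gamma((3*p + 2)/12.0) * math.gamma((3*p + 22)/12.0)

def c2(p):
    return (math.gamma((3*p + 19)/12.0) * math.gamma((3*p - 1)/12.0)
            * math.gamma((p + 5)/4.0) / math.gamma((p + 7)/4.0) / (p + 1))

def X1(p):
    return (math.sqrt(3)*e_cgs**3/(8*math.pi*m_e)
            * (3*e_cgs/(2*math.pi*m_e**3*c**5))**(p/2.0))

def X2(p):
    return (math.sqrt(3)*e_cgs**3/(8*math.sqrt(math.pi)*m_e*c**2)
            * (3*e_cgs/(2*math.pi*m_e**3*c**5))**((p - 1)/2.0))

def b_coeff(p):
    # b(p) = 5.52e57 * [3 X2 c2 / (2 pi X1 c1)]^2
    return 5.52e57 * (3*X2(p)*c2(p) / (2*math.pi*X1(p)*c1(p)))**2

def B_jet(p, nu_ssa_GHz, theta_mas, S_Jy, delta=1.0, z=0.0):
    # SSA magnetic-field estimate [G] from the relation above
    return (b_coeff(p) * nu_ssa_GHz**5 * theta_mas**4
            * S_Jy**(-2) * delta/(1.0 + z))
```

Note the steep $\nu_\mathrm{ssa,obs}^5$ dependence: the SSA turnover frequency dominates the uncertainty of the estimate.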
\section{ADAF self-similar solutions} \label{appendix:self-solutions}
The one-zone, height-integrated, self-similar solutions of the slim disc equations were derived by \cite{Narayan_Yi_original} to describe the hot plasma. Expressed in terms of the relevant scaled quantities, the solutions read:
\begin{eqnarray}
v_R &\approx& \frac{\alpha}{2}v_K \approx 1.06\times 10^9 \, \alpha_{-1} \, r^{-1/2} \,\, \mathrm{cm} \, \mathrm{s}^{-1} \nonumber \\
c_s &\approx& \frac{1}{2} v_K \approx 1.06\times 10^{10} \, r^{-1/2} \,\, \mathrm{cm} \, \mathrm{s}^{-1} \nonumber\\
H &\approx& \frac{1}{2}R \approx 1.48 \times 10^{14} \, m_{\mathrm{BH},9} \, r \,\, \mathrm{cm} \\
\rho = n_p\,m_p &\approx& \frac{\dot{M}}{4\pi R H v_R } \approx 2.66\times 10^{-15} \, m_{\mathrm{BH},9}^{-1} \, \dot{m}_{-3} \, \alpha_{-1}^{-1} \, r^{-3/2} \,\, \mathrm{g} \, \mathrm{cm}^{-3} \nonumber\\
B &\approx& \sqrt{8\pi \rho c_s^2 (1 - \beta)} \approx 2.75 \times 10^3 \, m_{\mathrm{BH},9}^{-1/2} \, \dot{m}_{-3}^{1/2} \, \alpha_{-1}^{-1/2} \, (1-\beta)^{1/2} \, r^{-5/4} \,\, \mathrm{G} \nonumber\\
\tau_\mathrm{T} &=& n_p \sigma_\mathrm{T} R \approx 0.313 \, \dot{m}_{-3} \, \alpha_{-1}^{-1} \, r^{-1/2} \nonumber
\end{eqnarray}
where $v_K = \sqrt{G M_\mathrm{BH}/R}$ is the Keplerian velocity, $v_R$ the radial velocity, and $c_s$ the isothermal sound speed. Here, $\alpha$ is the viscosity parameter introduced by \cite{ShakuraSunyaev}, and $\beta$ is the ratio of the gas pressure $p_g$ to the total pressure $p = \rho c_s^2 = p_m + p_g$, with $p_m = B^2/8\pi$, where $B$ is the isotropically tangled magnetic field. The Thomson optical depth is denoted by $\tau_\mathrm{T}$.
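As a quick numerical check of these scalings, a minimal Python helper can evaluate them for given scaled inputs (coefficients copied from the expressions above; $m_{\mathrm{BH},9} = M_\mathrm{BH}/10^9\,M_\odot$, $\dot m_{-3} = \dot m/10^{-3}$, $\alpha_{-1} = \alpha/0.1$ are assumed to follow the usual scaled-variable convention):

```python
import math

def adaf_selfsimilar(r, m_bh9=1.0, mdot3=1.0, alpha1=1.0, beta=0.5):
    """Self-similar ADAF quantities at scaled radius r, following the
    scalings quoted above. beta is the gas-to-total pressure ratio."""
    return {
        'v_R': 1.06e9 * alpha1 / math.sqrt(r),                 # radial velocity [cm/s]
        'c_s': 1.06e10 / math.sqrt(r),                         # sound speed [cm/s]
        'H':   1.48e14 * m_bh9 * r,                            # scale height [cm]
        'rho': 2.66e-15 * mdot3 / (m_bh9 * alpha1) * r**-1.5,  # density [g/cm^3]
        'B':   2.75e3 * math.sqrt(mdot3 * (1 - beta)
                                  / (m_bh9 * alpha1)) * r**-1.25,  # field [G]
        'tau': 0.313 * mdot3 / alpha1 / math.sqrt(r),          # Thomson depth
    }
```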
\section{ADAF cooling mechanisms} \label{appendix:details_synch_brem}
\subsection{Synchrotron radiation and bremsstrahlung} \label{appendix:brem and synch}
We assume that the plasma electrons follow a relativistic Maxwellian distribution
\begin{equation}
N_e(\gamma_e, \theta_e) = n_e \frac{\gamma_e^2 \beta_e \exp{(-\gamma_e/\theta_e)}}{\theta_e \, K_2(1/\theta_e)} \label{eq:maxwellian},
\end{equation} where $n_e \approx n_p$ is the electron number density, $\beta_e$ and $\gamma_e$ are the dimensionless velocity and the Lorentz factor of the thermal electrons, respectively, and $K_n(x)$ is the modified Bessel function of the second kind of order $n$.
For synchrotron radiation from thermal electrons and bremsstrahlung, we use the fitting formula derived by \cite{Narayan_Yi_original}. The synchrotron emissivity is given by
\begin{equation}
j_{\nu, \mathrm{syn}} = 4.43\times 10^{-30}\frac{4\pi \, n_e \, \nu}{K_2(1/\theta_e)} \, I'\left(\frac{4\pi \, m_e \, c \, \nu}{3 \, e \, B \, \theta_e^2}\right) \, \, \mathrm{erg} \, \mathrm{cm}^{-3} \, \mathrm{s}^{-1} \, \mathrm{Hz}^{-1}\label{eq:syn_emissivity}
\end{equation}
where $I'(x)$ is defined in \cite{Narayan_Yi_original}:
\begin{eqnarray*}
I'(x) = \frac{4.0505}{x^{1/6}}\left( 1 + \frac{0.4}{x^{1/4}} + \frac{0.5316}{x^{1/2}} \right)\exp{(-1.8899\,x^{1/3})}
\end{eqnarray*}
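The emissivity above can be evaluated with standard-library Python only; in the sketch below (illustrative, with assumed CGS constants) $K_2$ is computed from its integral representation rather than a special-function library:

```python
import math

def K2(x, tmax=20.0, n=4000):
    # Modified Bessel function of the second kind, order 2, via the
    # integral representation K_2(x) = int_0^inf exp(-x cosh t) cosh(2t) dt
    h = tmax/n
    s = 0.5*(math.exp(-x) + math.exp(-x*math.cosh(tmax))*math.cosh(2*tmax))
    for i in range(1, n):
        t = i*h
        s += math.exp(-x*math.cosh(t))*math.cosh(2*t)
    return s*h

def I_prime(x):
    # Narayan & Yi fitting function I'(x) quoted above
    return (4.0505/x**(1.0/6.0)
            * (1 + 0.4/x**0.25 + 0.5316/math.sqrt(x))
            * math.exp(-1.8899*x**(1.0/3.0)))

def j_nu_syn(nu, n_e, theta_e, B):
    """Thermal synchrotron emissivity [erg cm^-3 s^-1 Hz^-1]."""
    e_cgs, m_e, c = 4.80320e-10, 9.10938e-28, 2.99792e10  # assumed CGS values
    x_M = 4*math.pi*m_e*c*nu/(3*e_cgs*B*theta_e**2)
    return 4.43e-30 * 4*math.pi*n_e*nu / K2(1.0/theta_e) * I_prime(x_M)
```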
The bremsstrahlung cooling rate is given by the sum of the rates from electron-electron and ion-electron interactions \citep{rates_hotplasma, Svensson_brems}:
\begin{equation}
q_\mathrm{br} = q_\mathrm{ee} + q_\mathrm{ei}
\end{equation}
The ion-electron and electron-electron bremsstrahlung cooling rates are respectively given by \citep{Svensson_brems, rates_hotplasma}:
\begin{eqnarray*}
q_\mathrm{ei} & = & 1.48 \times 10^{-22} \, n_e^2 \, F_\mathrm{ei}(\theta_e) \, \, \mathrm{erg} \, \mathrm{cm}^{-3} \, \mathrm{s}^{-1}\\
q_\mathrm{ee} & = & \left\{
\begin{array}{cc}
& 2.56 \times 10^{-22} \, n_e^2 \, \theta_e^{3/2} \, (1 + 1.1\theta_e + \theta_e^2 - 1.25\theta_e^{5/2}) \quad \mathrm{if} \, \, \theta_e < 1\\
& 3.40 \times 10^{-22} \, n_e^2 \, \theta_e \, \left[ \ln{(1.123\theta_e)} + 1.28 \right] \quad \mathrm{if} \, \, \theta_e > 1
\end{array}
\right. \, \, \mathrm{erg} \, \mathrm{cm}^{-3} \, \mathrm{s}^{-1}
\end{eqnarray*}
where
\begin{eqnarray*}
F_\mathrm{ei}(\theta_e) = \left\{
\begin{array}{cc}
& 4\left( \frac{2\theta_e}{\pi^3} \right)^{0.5} \, (1 + 1.1781\theta_e^{1.34}) \quad \mathrm{if} \, \, \theta_e < 1\\
& \frac{9\theta_e}{2\pi} \, \left[ \ln{(1.123\theta_e + 0.48)} + 1.5 \right] \quad \mathrm{if} \, \, \theta_e > 1
\end{array}
\right.
\end{eqnarray*}
Assuming a Gaunt factor equal to unity, we approximate the bremsstrahlung emissivity as
\begin{equation}
j_{\nu, \mathrm{br}} \approx q_\mathrm{br}\frac{h}{k_\mathrm{B} T_e}\exp{\left(-\frac{h\nu}{k_\mathrm{B}T_e}\right)} \,.
\end{equation}
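The piecewise fits above translate directly into code; the following sketch (with $n_e \approx n_p$ assumed, as in the text) returns the total bremsstrahlung cooling rate:

```python
import math

def q_brems(n_e, theta_e):
    """Total bremsstrahlung cooling rate q_ei + q_ee [erg cm^-3 s^-1],
    using the piecewise fits quoted above."""
    if theta_e < 1:
        F_ei = 4*math.sqrt(2*theta_e/math.pi**3)*(1 + 1.1781*theta_e**1.34)
        q_ee = (2.56e-22*n_e**2*theta_e**1.5
                * (1 + 1.1*theta_e + theta_e**2 - 1.25*theta_e**2.5))
    else:
        F_ei = 9*theta_e/(2*math.pi)*(math.log(1.123*theta_e + 0.48) + 1.5)
        q_ee = 3.40e-22*n_e**2*theta_e*(math.log(1.123*theta_e) + 1.28)
    q_ei = 1.48e-22*n_e**2*F_ei
    return q_ei + q_ee
```

Both contributions scale as $n_e^2$, so the total rate does as well.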
\subsection{Compton cooling} \label{appendix:compton}
In order to include inverse Compton scattering among the cooling mechanisms, we compute the cooling rate using the formulation derived by \cite{Esin_compton_enhancement}.
Assuming that Comptonization enhances the initial energy of the seed photons, we define the energy enhancement factor $\eta(\nu)$ such that
\begin{equation}
\eta = \exp{[s(A-1)]}[1 - P(j_m+1, As)] + \eta_\mathrm{max}P(j_m+1, s)
\end{equation}
where $P(a,x)$ is the regularized lower incomplete gamma function and
\begin{eqnarray*}
A = 1 + 4\theta_e + 16\theta_e^2, \quad s = \tau_\mathrm{T} + \tau_\mathrm{T}^2 \\
\eta_\mathrm{max} = \frac{3k_\mathrm{B} T_e}{h\nu}, \quad j_m = \frac{\ln{\eta_\mathrm{max}}}{\ln{A}} \quad .
\end{eqnarray*}
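A minimal implementation of this enhancement factor is sketched below (illustrative; the CGS constants are assumed, and $P(a,x)$ is evaluated from its power series as a standard-library stand-in for a special-function routine):

```python
import math

def reg_lower_gamma(a, x, max_terms=2000):
    # Regularized lower incomplete gamma function P(a, x) via its power series
    if x <= 0.0:
        return 0.0
    term = 1.0/a
    s = term
    for n in range(1, max_terms):
        term *= x/(a + n)
        s += term
        if term < 1e-16*s:
            break
    return s * x**a * math.exp(-x) / math.gamma(a)

def eta_compton(nu, theta_e, tau_T, T_e):
    """Compton energy-enhancement factor eta(nu), per the prescription above."""
    k_B, h = 1.3807e-16, 6.6261e-27   # CGS (assumed values)
    A = 1 + 4*theta_e + 16*theta_e**2
    s = tau_T + tau_T**2
    eta_max = 3*k_B*T_e/(h*nu)
    j_m = math.log(eta_max)/math.log(A)
    return (math.exp(s*(A - 1))*(1 - reg_lower_gamma(j_m + 1, A*s))
            + eta_max*reg_lower_gamma(j_m + 1, s))
```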
Finally the total cooling rate of the electrons is given by
\begin{equation}
q^{e-} = \frac{1}{H} \int \mathrm{d}\nu \eta(\nu) F_{\nu,\mathrm{0}} \label{eq:Qminus}
\end{equation}
where $F_{\nu,\mathrm{0}}$ is described in the main text, equation \ref{eq:lnu_synch_brem}.
\section{ADAF heating rates} \label{appendix:heating_rates}
The electrons in the plasma are heated in two ways: directly, by a fraction $\delta_e$ of the viscously dissipated energy, and through Coulomb collisions with the ions.
The viscous energy dissipation rate per unit volume $q^\mathrm{visc}$ is given by \cite{Narayan_Yi_original} as
\begin{equation}
q^\mathrm{visc} = \frac{3 \epsilon' \rho v_R c_s^2}{2R} = 0.08\epsilon' \, m_{\mathrm{BH},9}^{-2} \, \dot{m}_{-3} \, r^{-4} \, \, \mathrm{erg} \, \mathrm{cm}^{-3} \, \mathrm{s}^{-1}
\end{equation}
where $\epsilon' = (5/3 - \gamma')/(\gamma' - 1)$, with $\gamma' = (32 - 24\beta - 3\beta^2)/(24 - 21\beta) $.
For the Coulomb-interaction heating rate per unit volume $q^\mathrm{ie}$ \citep{rates_hotplasma}, assuming $n_e \approx n_p$, we use the approximation from \cite{Mahadevan1997}:
\begin{equation}
q^\mathrm{ie} = 5.61\times 10^{-32} \frac{n_e^2(T_i - T_e)}{K_2(1/\theta_e)}\left(\frac{\theta_e\theta_i}{\theta_i(\theta_e + \theta_i)}\right)^{1/2}\left[\frac{2(\theta_e + \theta_i)^2 + 1 + 2(\theta_e + \theta_i)}{\theta_e + \theta_i}\right]e^{-1/\theta_e} \, \, \mathrm{erg} \, \mathrm{cm}^{-3} \, \mathrm{s}^{-1}
\end{equation}
The total heating rate is then given by
\begin{equation}
q^{e+} =q^\mathrm{ie} + \delta_e q^\mathrm{visc} \label{eq:Qeplus}
\end{equation}
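The viscous dissipation rate has a simple closed form that is worth sanity-checking numerically; the sketch below implements $q^\mathrm{visc}$ with $\epsilon'(\beta)$ as defined above (note that for $\beta = 1$, i.e. no magnetic pressure, $\gamma' = 5/3$ and the dissipation rate vanishes):

```python
def q_visc(r, m_bh9=1.0, mdot3=1.0, beta=0.5):
    """Viscous dissipation rate per unit volume [erg cm^-3 s^-1],
    evaluating the scaled expression above."""
    gamma_p = (32 - 24*beta - 3*beta**2)/(24 - 21*beta)  # effective adiabatic index
    eps_p = (5.0/3.0 - gamma_p)/(gamma_p - 1.0)          # epsilon'
    return 0.08*eps_p * m_bh9**-2 * mdot3 * r**-4
```

The total electron heating rate is then $q^{e+} = q^\mathrm{ie} + \delta_e\, q^\mathrm{visc}$, with $q^\mathrm{ie}$ evaluated separately.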
\section{Truncated thin disc} \label{appendix disc}
For completeness, we compute the spectrum from an outer disc, such that the inner radius of the disc is equal to the outer radius of the accretion flow $r_\mathrm{tr} = r_\mathrm{max}$.
The emission is characterized by the sum of blackbody spectra with temperature
\begin{equation}
T_\mathrm{disc}(R) = \left( \frac{G\, M_\mathrm{BH}\, \dot{M}}{8\pi \, R^3 \, \sigma_\mathrm{B}}\left[ 1 - \left( \frac{R_\mathrm{tr}}{R} \right)^{1/2}\right] \right)^{1/4}
\end{equation}
The emission is then given by
\begin{equation}
F_{\nu, \mathrm{disc}} = \frac{4\pi h \cos{\theta} \nu^3}{c^2 D^2}\int_{R_\mathrm{tr}}^{R_\mathrm{max,disc}} \, \frac{R \mathrm{d}R}{e^{h\nu/k_\mathrm{B}T_\mathrm{disc}(R)} - 1}
\end{equation}
where we have set an outer radius of $\sim 3\times 10^6$; this parameter has little influence on the spectrum, given the large truncation radius.
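The two equations above can be combined into a short numerical routine; the following Python sketch (an illustrative implementation with assumed CGS constants and made-up input values in the test, not the paper's numerical setup) integrates the multicolour-blackbody flux with a log-spaced trapezoid rule:

```python
import math

G, c, h, k_B, sigma_B = 6.674e-8, 2.998e10, 6.626e-27, 1.381e-16, 5.670e-5  # CGS

def disc_flux(nu, M_bh, Mdot, R_tr, R_out, D, cos_theta=1.0, n=400):
    """Truncated-disc flux [erg cm^-2 s^-1 Hz^-1]; all inputs in CGS."""
    def T_disc(R):
        f = 1.0 - math.sqrt(R_tr/R)
        return (G*M_bh*Mdot/(8*math.pi*R**3*sigma_B)*f)**0.25 if f > 0 else 0.0
    lnR1, lnR2 = math.log(R_tr*1.0001), math.log(R_out)
    dlnR = (lnR2 - lnR1)/n
    integral = 0.0
    for i in range(n + 1):
        R = math.exp(lnR1 + i*dlnR)
        T = T_disc(R)
        if T == 0.0:
            continue
        x = h*nu/(k_B*T)
        w = 0.5 if i in (0, n) else 1.0
        # int R dR = int R^2 dlnR; cap x to avoid overflow in expm1
        integral += w * R*R/math.expm1(min(x, 700.0)) * dlnR
    return 4*math.pi*h*cos_theta*nu**3/(c**2*D**2)*integral
```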
\bibliography{references}{}
\bibliographystyle{aasjournal}
|
Title:
21 cm power spectrum in interacting cubic Galileon model |
Abstract: We show the detectability of the interacting and non-interacting cubic Galileon
models from the $\Lambda$CDM model through the 21 cm power spectrum. We show
that interferometric observations like the upcoming SKA1-MID can detect
both the interacting and the non-interacting cubic Galileon models from the
$\Lambda$CDM model, depending on the parameter values.
| https://export.arxiv.org/pdf/2208.11560 |
\title{21 cm power spectrum in interacting cubic Galileon model}
\author{Bikash R. Dinda \orcid{0000-0001-5432-667X}}
\email{[email protected]}
\email{[email protected]}
\affiliation{Department of Physical Sciences, Indian Institute of Science Education and Research Kolkata, India}
\affiliation{Department of Theoretical Physics, Tata Institute of Fundamental Research, Dr. Homi Bhabha Road, Navy Nagar, Colaba, Mumbai-400005, India}
\author{Md. Wali Hossain \orcid{0000-0001-6969-8716}}
\email{[email protected]}
\affiliation{Department of Physics, Jamia Millia Islamia, New Delhi, 110025, India}
\author{Anjan A. Sen}
\email{[email protected]}
\affiliation{Centre For Theoretical Physics, Jamia Millia Islamia, New Delhi, 110025, India}
\date{\today}
\section{Introduction}
Recent cosmological observations \cite{Riess:1998cb,Perlmutter:1998np,Ade:2015xua} have revealed that our present Universe is going through a phase of accelerated expansion, but to date we have no complete theoretical explanation for this acceleration. The simplest possible explanation is an exotic matter component with negative pressure, known as {\it dark energy} \cite{Copeland:2006wr,Linder:2008pp,Silvestri:2009hh,Sahni:1999gb}. The cosmological constant ($\Lambda$) is the simplest candidate for dark energy and the observationally most favored cosmological model. However, it is plagued by the fine-tuning problem \citep{Martin:2012bt} and the cosmic-coincidence problem \citep{Zlatev:1998tr}. Apart from these well-known problems, the concordance $\Lambda$CDM model is also in conflict with the string swampland conjectures \citep{Vafa:2005ui,Obied:2018sgi,Andriot:2018wzk} and with the recent local measurement of the present value of the Hubble constant $H_0$ \citep{Riess:2019cxk,Wong:2019kwg,Pesce:2020xfe}. While the swampland conjecture rules out any stable de Sitter solution in string theory, the local measurement of $H_{0}$ points towards a $5\sigma$ discrepancy between the $H_0$ constraint from the Planck observation of the cosmic microwave background (CMB), with $\Lambda$CDM as the underlying model \citep{Planck:2018vyg}, and the recent model-independent local measurement of $H_0$ by Riess et al. \citep{Riess:2019cxk}.
One way to look for alternative theories of gravity is to modify gravity on large cosmological scales such that it becomes repulsive there, giving rise to the accelerated expansion of the Universe \cite{Clifton:2011jh,deRham:2014zqa,deRham:2012az,DeFelice:2010aj}. Dvali, Gabadadze, and Porrati (DGP) proposed a scenario in which a 4D Minkowski brane is embedded in an infinitely large extra dimension and gravity is localized on the brane \cite{Dvali:2000hr}. This model, known as the DGP model, gives rise to late-time acceleration, but its self-accelerating branch has a ghost \cite{Luty:2003vm,Nicolis:2004qq}. When the 5D DGP theory is reduced to 4D, in the decoupling limit the theory gives rise to a Lagrangian of the form $(\nabla \phi)^2 \Box \phi$ \cite{Luty:2003vm}. In the Minkowski background, this Lagrangian possesses the Galilean shift symmetry $\phi\to\phi+b_\mu x^\mu+c$, where $b_\mu$ and $c$ are constants, and gives rise to a second-order equation of motion, and is hence free from ghosts \cite{Luty:2003vm,Nicolis:2004qq,Nicolis:2008in}. Because it possesses this shift symmetry, the scalar field is dubbed the ``Galileon'' \cite{Nicolis:2008in}. In the Minkowski background, one can construct two more Lagrangians containing higher derivatives that respect the shift symmetry and give second-order equations of motion. Along with a linear potential term and the standard canonical kinetic term, there exist five such terms that possess the above-mentioned shift symmetry and give second-order equations of motion \cite{Nicolis:2008in}. These five terms together form the Galileon Lagrangian \cite{Nicolis:2008in}. In a curved background, the non-linear completion of the Galileon Lagrangian includes non-minimal couplings to keep the equations of motion second order \cite{Deffayet:2009wt}.
The non-minimal terms may give rise to a fifth force, which can be evaded locally through the Vainshtein mechanism \cite{Vainshtein:1972sx}. The Galileon models are sub-classes of the more general scalar-tensor theory known as Horndeski theory \cite{Horndeski:1974wa,Kobayashi:2011nu}. Late-time cosmic acceleration is well studied in Galileon theory \cite{Chow:2009fm,Silva:2009km,Kobayashi:2010wa,Kobayashi:2009wr,Gannouji:2010au,DeFelice:2010gb,DeFelice:2010pv,Ali:2010gr,Mota:2010bs,Deffayet:2010qz,deRham:2010tw,deRham:2011by,Hossain:2012qm,Ali:2012cv}.
Horndeski theories are constrained by the detection of the binary neutron star merger GW170817, observed both in gravitational waves (GW) \cite{LIGOScientific:2017vwq} and through its electromagnetic counterpart \cite{LIGOScientific:2017zic,LIGOScientific:2017ync}, which rules out the large class of Horndeski theories that predict a speed of GW propagation different from the speed of light \cite{Ezquiaga:2017ekz,Zumalacarregui:2020cjh}. The only higher-derivative term that survives is $\sim G(\phi,X) \Box \phi$, where $X=-(1/2)(\nabla\phi)^2$ and $G(\phi,X)$ is a function of $\phi$ and $X$. When $G(\phi,X)\sim(\nabla \phi)^2$ this is the cubic Galileon term. This cubic term, along with the usual kinetic term and a potential term, forms the cubic Galileon model. Potentials other than the linear one break the shift symmetry, but the equation of motion remains second order. Such models are known as light mass Galileon models \cite{Hossain:2012qm,Ali:2012cv}. Without the potential term, the cubic Galileon models cannot have stable late-time acceleration \cite{Gannouji:2010au}. This model has been studied extensively in the context of late-time cosmology \cite{Chow:2009fm,Silva:2009km,Hossain:2012qm,Ali:2012cv,Brahma:2019kch,Bartolo:2013ws,Bellini:2013hea,Barreira:2013eea,Hossain:2017ica,Dinda:2017lpz}.
There are several dark energy and modified gravity models in the literature. A good dark energy or modified gravity model should be consistent with the different cosmological observations, so it is important to study the cubic Galileon model in this context. Such efforts involving the Galileon model have been made earlier, for example in the context of type Ia supernova observations \citep{Brahma:2020eqd} and of cosmic microwave background and baryonic acoustic oscillation observations \citep{Renk:2017rzu}. In the same spirit, 21 cm cosmological observations, such as those with the upcoming Square Kilometre Array (SKA) telescope, will be promising probes of dark energy and of modifications of gravity. For this purpose, the post-reionization epoch (redshift $<6$) is particularly important for constraining the dynamics of dark energy, or the dynamics of the cosmological geometry in modified gravity models. In the post-reionization epoch, the Universe is assumed to be highly ionized, but neutral hydrogen atoms (HI) are still present, and these are biased tracers of the matter in the Universe. Thus the HI energy density tracks the matter-energy density, and the HI power spectrum is related to the matter power spectrum. The power spectrum of the intensity mapping of the large-scale HI distribution (commonly called the 21 cm power spectrum) is thus one kind of measurement of large-scale structure formation \citep{Wyithe:2007gz,Loeb:2008hg}.
This paper aims to study the effect of the cubic Galileon model on the 21 cm power spectrum and to check the detectability of the deviation of this model from $\Lambda$CDM for the upcoming SKA1-MID telescope specifications. The SKA1-MID telescope is specifically designed to study structure formation in the low-redshift region ($<3$), which is useful for constraining dark energy and modified gravity model parameters \citep{SKA:2018ckk}. According to the updated proposed design, SKA1-MID will have a total of 197 antennas, of which 64 are from MeerKAT and 133 are original SKA1-MID antennas \citep{SKA:2018ckk}. SKA1-MID is proposed to detect the redshifted 21 cm line signal in the redshift range from 0.5 to 3 with good accuracy (competitive with, or even better than, other observations related to galaxy clustering) to test the dynamics of dark energy or modified gravity \citep{SKA:2018ckk,2019arXiv191212699B}.
The paper is organized as follows: in Sec.~\ref{sec-background} we describe the background dynamics of the Universe in the presence of the interacting cubic Galileon field; in Sec.~\ref{sec-perturbation} we describe the evolution of the matter inhomogeneity and the corresponding matter power spectrum; in Sec.~\ref{sec-21cmps} we describe the 21 cm power spectrum; in Sec.~\ref{sec-ska1mid} we show the detectability of the interacting cubic Galileon model for the SKA1-MID telescope specifications; finally, in Sec.~\ref{sec-conclusion} we present our conclusions.
\section{Background evolution}
\label{sec-background}
We consider the following action in the Einstein frame with a potential $V(\phi)$ \cite{Ali:2012cv}
\begin{eqnarray}
\S=\int {\rm d}^4x\sqrt{-{\rm g}}\Big [\frac{\Mpl^2}{2} R-\frac{1}{2}(\nabla \phi)^2\Bigl(1+\frac{\al}{M^3}\Box \phi\Bigr) - V(\phi) \Big]+ \S_\m\Bigl[\Psi_\m;{\rm e}^{2 \beta \phi/M_{\rm pl}} {\rm g}_{\mu\nu}\Bigr] \, ,
\label{eq:action}
\end{eqnarray}
where the scalar field is non-minimally coupled to gravity in the Jordan frame, with $\beta$ the coupling constant and ${\rm e}^{2 \beta \phi/M_{\rm pl}}$ the conformal factor relating the Jordan- and Einstein-frame metric tensors. In Eq.~\eqref{eq:action}, $M$ is a constant of mass dimension one, $\Mpl=1/\sqrt{8\pi G}$ is the reduced Planck mass, $\al$ is a dimensionless constant, and $\S_\m$ is the matter action with $\Psi_\m$'s the matter fields. Action~\eqref{eq:action} can be realized as a sub-class of Horndeski theories \citep{Horndeski:1974wa,Kobayashi:2011nu}, and one recovers the usual quintessence models on taking $\alpha\rightarrow0$. $V(\phi)$ is the potential of the cubic Galileon field. Throughout the paper we consider only a linear potential, because the Galileon shift symmetry is preserved only for a linear potential.
In flat Friedmann--Lema\^itre--Robertson--Walker (FLRW) metric, given by $ds^{2} = - dt^{2} + a^{2} (t) d\vec{r}.d\vec{r}$, where $t$ is the cosmic time, $\vec{r}$ is the comoving coordinate vector and $a$ is the cosmic scale factor, the background cosmological equations can be obtained by varying action \eqref{eq:action} with respect to the metric tensor ${\rm g_{\mu\nu}}$ \citep{Ali:2012cv,Hossain:2012qm}
\begin{eqnarray}
3M_{\rm pl}^2H^2 &=& \bar{\rho}_m+\frac{\dot{\phi}^2}{2}\Bigl(1-6 \frac{\alpha}{M^3} H\dot{\phi}\Bigr)+V{(\phi)},
\label{eq:first_Friedmann} \\
M_{\rm pl}^2(2\dot H + 3H^2) &=& -\frac{\dot{\phi}^2}{2}\Bigl(1+2 \frac{\alpha}{M^3} \ddot{\phi}\Bigr)+V(\phi),
\label{eq:second_Friedmann}
\end{eqnarray}
\noindent
where an over-dot denotes a derivative with respect to the cosmic time $t$, $H$ is the Hubble parameter and $ \bar{\rho}_m $ is the background matter-energy density. The background equation of motion for the Galileon field $ \phi $ is given by \citep{Ali:2012cv}
\begin{equation}
\ddot{\phi} + 3H\dot{\phi}-3 \frac{\alpha}{M^3} \dot{\phi}\Bigl(3H^2\dot{\phi}+\dot{H}\dot{\phi}+2H\ddot{\phi}\Bigr)+ V_{\phi}= - \frac{\beta}{M_{Pl}} \bar{\rho}_m ,
\label{eq:E-L_eq}
\end{equation}
\noindent
where the subscript $\phi$ denotes a derivative with respect to the field $\phi$. The continuity equations for matter and the scalar field are given by
\begin{eqnarray}
\dot{ \bar{\rho}}_m + 3 H \bar{\rho}_m &=& \frac{\beta}{M_{Pl}} \dot{\phi} \bar{\rho}_m \, ,\\
\dot{ \bar{\rho}}_{\phi} + 3 H (1+w_{\phi}) \bar{\rho}_{\phi} &=& - \frac{\beta}{M_{Pl}} \dot{\phi} \bar{\rho}_m \, ,
\end{eqnarray}
\noindent
where $w_{\phi}$ is the equation of state of the scalar field.
To study the background evolution we rewrite the above differential equations as an autonomous system of equations. To do this we define the following dimensionless variables \citep{Ali:2012cv,Hossain:2012qm}
\begin{eqnarray}
x &=& \frac{ \dot{\phi} }{\sqrt{6} H M_{Pl}} = \frac{\Big{(} \dfrac{d \phi}{d N} \Big{)}}{\sqrt{6} M_{Pl}}\, ,
\label{eq:x}\\
y &=& \frac{\sqrt{V}}{\sqrt{3} H M_{Pl}}\, ,
\label{eq:y}\\
\epsilon &=& -6 \frac{\alpha}{M^3} H \dot{\phi} = -6 \frac{\alpha}{M^3} H^{2} \Big{(} \dfrac{d \phi}{d N} \Big{)}\, ,
\label{eq:ep}\\
\lambda &=& - M_{Pl} \frac{V_{\phi}}{V},
\label{eq:lam}\\
\Gamma &=& \frac{V_{\phi \phi}V}{V_{\phi}^{2}}\, ,
\label{eq:gam}
\end{eqnarray}
where $N= \ln a$. Using the above-mentioned dimensionless variables we can form the following autonomous system \citep{Ali:2012cv}
\begin{align}
\label{eq:auto_diff}
\frac{{\rm d}x}{{\rm d}N}&=x\Bigl(\frac{\ddot{\phi}}{H\dot{\phi}}-\frac{\dot H}{H^2}\Bigr), \nonumber\\
\frac{{\rm d}y}{{\rm d}N}&=-y \Bigl(\sqrt{\frac{3}{2}}\lambda x+\frac{\dot H}{H^2}\Bigr), \nonumber\\
\frac{{\rm d}\epsilon}{{\rm d}N}&=\epsilon \Bigl(\frac{\ddot{\phi}}{H\dot{\phi}}+\frac{\dot H}{H^2}\Bigr), \nonumber\\
\frac{{\rm d}\lambda}{{\rm d}N}&=\sqrt{6}x\lambda^2(1-\Gamma),
\end{align}
where
\begin{align}
\frac{\dot H}{H^2}&=\frac{6(1+\epsilon)(y^2-1)-3x^2(2+4\epsilon+\epsilon^2)}{4+4\epsilon+x^2\epsilon^2} +\frac{\sqrt{6}x\epsilon (y^2\lambda -\beta \Omega_m)}{4+4\epsilon+x^2\epsilon^2}\\
\frac{\ddot{\phi}}{H\dot{\phi}}&=\frac{3x^3\epsilon-3x\Bigl(4+\epsilon (1+y^2)\Bigr)+2\sqrt{6}(y^2\lambda-\beta\Omega_m)}{x(4+4\epsilon+x^2\epsilon^2)}
\end{align}
\noindent
From Eq.~\eqref{eq:first_Friedmann} we have the constraint equation $\Omega_\m+\Omega_\phi=1$, where $\Omega_\m$ is the matter density parameter and $\Omega_{\phi} = x^2 (\epsilon +1)+y^2$ is the scalar field density parameter. The scalar field ($w_\phi$) and effective ($w_{\rm eff}$) equations of state are given, respectively, by
\begin{eqnarray}
w_{\phi} &=& \frac{x \left(3 x (\epsilon (\epsilon +8)+4)-2 \sqrt{6} \beta \epsilon \left(x^2 (\epsilon +1)-1\right)\right)-2 y^2 \left(\epsilon \left(\sqrt{6} x (\beta +\lambda )+6\right)+6\right)}{3 \left(\epsilon \left(x^2 \epsilon +4\right)+4\right) \left(x^2 (\epsilon +1)+y^2\right)},
\label{eq:wphi} \\
w_{ \text{eff} } &=& \frac{x \left(3 x (\epsilon (\epsilon +8)+4)-2 \sqrt{6} \beta \epsilon \left(x^2 (\epsilon +1)-1\right)\right)-2 y^2 \left(\epsilon \left(\sqrt{6} x (\beta +\lambda )+6\right)+6\right)}{3 \left(\epsilon \left(x^2 \epsilon +4\right)+4\right)}.
\label{eq:weff}
\end{eqnarray}
To solve the above system of differential equations (Eq.~\eqref{eq:auto_diff}), we need initial conditions for $x$, $y$, $\epsilon$ and $\lambda$, which we denote by $x_i$, $y_i$, $\epsilon_i$ and $\lambda_i$ respectively. In the literature, scalar field models are mainly of two types, depending on whether they show attractor or thawing behavior. Here, we consider thawing behavior for the Galileon field and choose the initial conditions accordingly. In thawing models, the equation of state of the scalar field is very close to $-1$ at early times, and at late times it thaws away from $-1$. From Eq.~\eqref{eq:wphi}, we can see that $w_{\phi} \approx -1$ when $x \ll y$. Since we want this behavior at early times, we fix $x_i = 10^{-3} y_i$ throughout, with the initial conditions set at redshift $z=1000$. Next, we fix the $y_i$ parameter such that $\Omega_{m0}=0.3111$, corresponding to the Planck 2018 results, where $\Omega_{m0}$ is the present value of the matter-energy density parameter. We are then left with three parameters: $\epsilon_i$, which represents the deviation from the quintessence model; $\lambda_i$, which represents the initial slope of the potential; and $\beta$, which represents the interaction between matter and the Galileon field.
With the above-mentioned initial conditions, we show the background evolution in the interacting cubic Galileon model by plotting the equation of state and the Hubble parameter in Figure~\ref{fig:wphi_H}. From the left panel, we see that the higher the $\epsilon_i$ value, the smaller the deviation from $\Lambda$CDM; the higher the $\lambda_i$ value, the larger the deviation; and the more negative the $\beta$ value, the larger the deviation. We see the same behavior in the deviation of the Hubble parameter in the right panel.
\section{Evolution of perturbations}
\label{sec-perturbation}
In this work, we restrict ourselves to the sub-Hubble limit. In this limit, we can use the quasi-static approximation, i.e., we can consider Newtonian perturbations. With Newtonian perturbations and in linear theory, the evolution equation of the growth function ($D_{+}$) for the cubic Galileon model is given by \citep{gal1,gal6}
\begin{equation}
\dfrac{d^{2} D_{+}}{d N^{2}} + \left[ 2 - \frac{3}{2} (w_{\text{eff}}+1) + \sqrt{6} \beta x \right] \dfrac{d D_{+}}{d N} - \frac{3}{2} \Omega_{m} \frac{G_{ \text{eff} }}{G} D_{+} = 0 ,
\label{eq:Dplus}
\end{equation}
\noindent
where the growth function is defined as $ \delta_{m} (z) = D_{+} (z) \delta_{m}^{i} $. $\delta_{m}$ is the matter inhomogeneity and $\delta_{m}^{i}$ its initial value. $G$ is the Newtonian gravitational constant. In the interacting cubic Galileon model, the effective gravitational constant ($G_{ \text{eff} }$) is given by \citep{Ali:2012cv}
\begin{equation}
\frac{G_{ \text{eff} }}{G} = \frac{x \left(3 x (\epsilon (\epsilon +8)+4)-2 \sqrt{6} \beta \epsilon \left(x^2 (\epsilon +1)-1\right)\right)-2 y^2 \left(\epsilon \left(\sqrt{6} x (\beta +\lambda )+6\right)+6\right)}{3 \left(\epsilon \left(x^2 \epsilon +4\right)+4\right)} .
\end{equation}
\noindent
To solve the differential equation of the growth function, we use the fact that at an initial time in the matter-dominated epoch (here at $z_i=1000$), $ D_{+} \propto a $. This corresponds to $ D_{+}|_{i} = \frac{1}{1+z_i} = \frac{d D_{+}}{d N} \big{|}_{i} $.
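As a sanity check on Eq.~\eqref{eq:Dplus} and these initial conditions, one can integrate the growth equation in the matter-dominated (Einstein--de Sitter) limit, where $w_{\rm eff}=0$, $\Omega_m=1$, $\beta=0$ and $G_{\rm eff}=G$, and verify that $D_+\propto a$. A minimal Python sketch (not part of the paper's pipeline):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Eq. (Dplus) in the matter-dominated (EdS) limit: w_eff = 0, Omega_m = 1,
# beta = 0, G_eff = G  ->  D'' + 0.5 D' - 1.5 D = 0, growing mode D ~ e^N = a.
def rhs(N, y):
    D, dD = y
    return [dD, -0.5 * dD + 1.5 * D]

zi = 1000.0
Di = 1.0 / (1.0 + zi)              # D_+ proportional to a initially
sol = solve_ivp(rhs, [0.0, 3.0], [Di, Di], rtol=1e-10, atol=1e-14)

ratio = sol.y[0, -1] / Di          # for the growing mode this is ~ e^3
print(ratio, np.exp(3.0))
```

Since the initial conditions $D_+ = D_+'$ select the pure growing mode, the integrated ratio matches $e^{\Delta N}$ to the solver tolerance.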
Once we have the solution for the growth function, we can compute the linear matter power spectrum $ P_{m} $ given by
\begin{equation}
P_{m}(k,z) = A k^{n_{s}} T^{2} (k) \frac{D_{+}^{2}(z)}{D_{+}^{2}(z=0)},
\label{eq:Pm}
\end{equation}
where $k$ is the magnitude of the wave vector, $n_{s}$ is the scalar spectral index of the primordial power spectrum, and $T(k)$ is the transfer function, for which we use the form given by Eisenstein and Hu \citep{eisenhu}. $A$ is the normalization constant corresponding to the usual $\sigma_{8}$ normalization. For this normalization, we fix $\Omega_{m}^{(0)}=0.3111$, $h=0.6766$, $\Omega_{b}^{(0)}=0.049$, $n_{s}=0.9665$, and $\sigma_{8}=0.8102$, the best-fit values according to the Planck 2018 results. $h$ is defined through $H_0 = 100\,h~\mathrm{km\,s^{-1}\,Mpc^{-1}}$, with $H_0$ the present value of the Hubble parameter, and $\Omega_{b}^{(0)}$ is the present value of the baryonic matter energy density parameter.
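The $\sigma_8$ normalization of Eq.~\eqref{eq:Pm} can be sketched numerically. For brevity, the sketch below substitutes the simpler BBKS transfer function for Eisenstein--Hu (an assumption made for illustration only); $A$ is chosen so that the top-hat variance at $R=8\,h^{-1}$Mpc equals $\sigma_8^2$:

```python
import numpy as np
from scipy.integrate import quad

# sigma_8 normalization of P_m(k) = A k^ns T^2(k).  The BBKS transfer
# function is a stand-in for Eisenstein-Hu (illustration only).
h, Om, ns, sig8 = 0.6766, 0.3111, 0.9665, 0.8102

def T_bbks(k):                         # k in h/Mpc; Gamma ~ Om*h, no baryon correction
    q = k / (Om * h)
    return (np.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q)**2
               + (5.46 * q)**3 + (6.71 * q)**4) ** -0.25)

def W(x):                              # Fourier transform of a spherical top hat
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

R = 8.0                                # Mpc/h
I, _ = quad(lambda k: k**(2.0 + ns) * T_bbks(k)**2 * W(k * R)**2 / (2.0 * np.pi**2),
            1e-5, 100.0, limit=500)
A = sig8**2 / I                        # fixes the amplitude so sigma_8 = 0.8102
print(A)
```

With the full Eisenstein--Hu $T(k)$ the numerical value of $A$ changes, but the normalization procedure is identical.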
\section{21 cm power spectrum}
\label{sec-21cmps}
The matter distribution is not directly observable. In this regard, the two-point correlation of the excess brightness temperature field is useful. So, we study the detectability of the cubic Galileon model against the $\Lambda$CDM model using the 21 cm power spectrum. The 21 cm power spectrum, $P_{21}$, of the excess brightness temperature field is given by \citep{PS_21_1,PS_21_2,PS_21_3,PS_21_5,PS_21_6,PS_21_7,PS_21_8,PS_21_10}
\begin{equation}
P_{21} (k,z,\mu) = C_{T}^{2} (1+\beta_{T} \mu^{2})^{2} P_{m}(k,z),
\label{eq:P212D}
\end{equation}
\noindent
where $\mu=\hat{n}\cdot\hat{k}=\cos{\theta}$, with $\hat{n}$ the line-of-sight (LOS) unit vector and $\theta$ the angle between the LOS and the wave vector. $\beta_T$ is defined as
\begin{equation}
\beta_T = \frac{f}{b},
\label{eq:betaT}
\end{equation}
\noindent
where $f$ is the growth rate, defined as $f=\frac{d \ln{D_{+}}}{d \ln{a}}$. $b$ is the linear bias that connects the HI distribution to the matter distribution; throughout this paper, we set $b=1$. $C_{T}$ is the mean HI excess brightness temperature, given by \citep{PS_21_1,PS_21_2}
\begin{equation}
C_{T} (z) = b \bar{x}_{HI} \bar{T} (z),
\label{eq:CT}
\end{equation}
\noindent
where $\bar{x}_{HI}$ is the neutral hydrogen fraction. $\bar{T}$ is given by \citep{PS_21_1,PS_21_2}
\begin{equation}
\bar{T} (z) = 4.0~\mathrm{mK}\, (1+z)^{2} \Big{(} \frac{\Omega_{b0} h^{2}}{0.02} \Big{)} \Big{(} \frac{0.7}{h} \Big{)} \frac{H_{0}}{H(z)}.
\label{eq:Tbar}
\end{equation}
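For orientation, Eq.~\eqref{eq:Tbar} is easy to evaluate; the sketch below assumes a flat $\Lambda$CDM background for $H(z)$ and the Planck 2018 parameter values quoted above:

```python
import numpy as np

# Evaluating Eq. (Tbar) for the Planck 2018 parameters quoted in the text,
# assuming a flat LCDM background H(z) (an assumption for illustration).
h, Om, Ob = 0.6766, 0.3111, 0.049

def E(z):                                  # H(z)/H0 for flat LCDM
    return np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)

def Tbar_mK(z):                            # mean brightness temperature [mK]
    return 4.0 * (1.0 + z)**2 * (Ob * h**2 / 0.02) * (0.7 / h) / E(z)

print(Tbar_mK(0.0), Tbar_mK(1.0))
```

This gives a few mK at $z=0$, growing to roughly ten mK at $z=1$, the familiar order of magnitude for intensity-mapping signals.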
The $\mu$-averaged 21 cm power spectrum is computed as
\begin{eqnarray}
P_{21}(k,z) = \int_{0}^{1} d\mu \hspace{0.2 cm} P_{21}(k,z,\mu).
\label{eq:P21avg}
\end{eqnarray}
\noindent
Note that we keep the same notation, $P_{21}$, for the $\mu$-averaged 21 cm power spectrum. In Figure~\ref{fig:P21cmp}, we show the deviations in $P_{21}(k,z)$ for the cubic Galileon model from $\Lambda$CDM with the same combinations of parameter values as in Figure~\ref{fig:wphi_H}. We see that the deviations range from about 2\% to 20\%, depending on the parameters.
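Because $P_m$ does not depend on $\mu$, the average in Eq.~\eqref{eq:P21avg} has the closed form $C_T^2 P_m \left(1 + \tfrac{2}{3}\beta_T + \tfrac{1}{5}\beta_T^2\right)$. A quick numerical check (with a hypothetical $\beta_T = 0.8$):

```python
from scipy.integrate import quad

# mu-average of Eq. (P21avg): since P_m does not depend on mu,
# int_0^1 (1 + beta*mu^2)^2 dmu = 1 + 2*beta/3 + beta^2/5 exactly.
beta = 0.8                                  # hypothetical growth rate f (with b = 1)
num, _ = quad(lambda mu: (1.0 + beta * mu**2)**2, 0.0, 1.0)
ana = 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0
print(num, ana)
```

The same Kaiser-type factor appears in galaxy redshift-space distortions; here it multiplies the brightness-temperature prefactor $C_T^2$.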
In radio interferometric observations, such as with the Square Kilometre Array (SKA), the observables are measured not in $k$ but in the baseline distribution $U$. The conversion from $k$ to $U$ depends on a fiducial cosmological model. If this fiducial model differs from the actual model, we need to apply a correction to the 21 cm power spectrum defined in Eq.~\eqref{eq:P212D}. Throughout this paper, we consider $\Lambda$CDM as the fiducial model. So, for the cubic Galileon model, the observed 21 cm power spectrum would be \citep{Bull,P21_3D_2,P21_3D_5}
\begin{equation}
P_{21}^{3D}(k,z,\mu) = \frac{1}{\alpha_{||} \alpha_{\perp}^{2}} C_{T}^{2} \Big{[} 1+\beta_{T} \frac{\mu^{2} / F^{2}}{1+(F^{-2}-1) \mu^{2}} \Big{]}^{2} P_{m} \Big{(} \frac{k}{\alpha_{\perp}} \sqrt{1+(F^{-2}-1) \mu^{2}}, z \Big{)},
\label{eq:P213D}
\end{equation}
\noindent
where $ \alpha_{||} = H_{\rm fd} / H $, $ \alpha_{\perp} = r / r_{\rm fd} $, and $ F = \alpha_{||} / \alpha_{\perp} $. The subscript ``fd'' corresponds to the fiducial model. Here $r$ is the line-of-sight comoving distance. The above corrected 21 cm power spectrum is sometimes referred to as the 3D 21 cm power spectrum, which is why we denote it with a superscript ``3D''. Now we compute the angle-averaged (or $\mu$-averaged) 21 cm power spectrum as
\begin{eqnarray}
P_{21}^{3D}(k,z) = \int_{0}^{1} d\mu \hspace{0.2 cm} P_{21}^{3D}(k,z,\mu).
\label{eq:P213Davg}
\end{eqnarray}
\noindent
Note that we again use the same notation for the angle-averaged 21 cm power spectrum.
In Figure~\ref{fig:ps21D3Lcdm}, we show the deviation in the angle-averaged observed (3D) 21 cm power spectrum for the cubic Galileon model from the $\Lambda$CDM model for the same parameter values as in Figure~\ref{fig:P21cmp}. In the 3D case, the deviations are of similar size. Another important point is that, unlike for the ordinary power spectrum, the deviations in the 3D power spectrum are $k$ dependent. This is because the fiducial model and the actual model differ here.
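A useful consistency check on Eq.~\eqref{eq:P213D}: when the fiducial model coincides with the true one ($\alpha_{||}=\alpha_{\perp}=F=1$), the observed spectrum must reduce to Eq.~\eqref{eq:P212D}. A sketch with a toy power spectrum (all numbers hypothetical):

```python
import numpy as np

# Consistency check on Eq. (P213D): with alpha_par = alpha_perp = 1 (fiducial
# model equal to the true one, so F = 1) it must reduce to Eq. (P212D).
def P21_3D(k, mu, CT2, beta, Pm, a_par=1.0, a_perp=1.0):
    F = a_par / a_perp
    mu2_true = mu**2 / F**2 / (1.0 + (F**-2 - 1.0) * mu**2)
    k_true = (k / a_perp) * np.sqrt(1.0 + (F**-2 - 1.0) * mu**2)
    return CT2 * (1.0 + beta * mu2_true)**2 * Pm(k_true) / (a_par * a_perp**2)

Pm = lambda k: k**-1.5                      # toy power spectrum (assumption)
CT2, beta, k, mu = 25.0, 0.8, 0.1, 0.5
obs = P21_3D(k, mu, CT2, beta, Pm)
direct = CT2 * (1.0 + beta * mu**2)**2 * Pm(k)  # Eq. (P212D)
print(obs, direct)
```

Setting $\alpha_{||}\neq\alpha_\perp$ in the same function shows how the Alcock--Paczynski distortion mixes scales and angles and introduces the $k$ dependence noted above.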
\section{Detectability with SKA1-mid telescope}
\label{sec-ska1mid}
Here we consider the detectability of the cubic Galileon model against the $\Lambda$CDM model with the SKA1-mid telescope. To do so, we need to consider the errors that arise in the observed 21 cm power spectrum. We consider only two types of errors here: the system noise and the sample variance. The system noise arises from the instrument, while the sample variance arises from the finite sampling of Fourier modes. We ignore other errors like astrophysical residual foregrounds, effects from the ionosphere, etc. So, we consider an ideal instrumental detection where these errors are completely removed and the only errors present are the system noise and the sample variance. We closely follow the approach of \citep{noise_err_1} to estimate the noise.
We assume a circularly symmetric antenna distribution $\rho_{ant}(l)$, where $l$ is the distance from the center of the antenna distribution. For the SKA1-mid antenna layout, we follow the document \url{https://astronomers.skatelescope.org/wp-content/uploads/2016/09/SKA-TEL-INSA-0000537-SKA1_Mid_Physical_Configuration_Coordinates_Rev_2-signed.pdf}. The SKA1-mid configuration consists of 133 SKA antennas together with 64 MeerKAT antennas. The 2D baseline distribution is then given by \citep{noise_err_1}
\begin{equation}
\rho_{2D} (U,\nu_{21}) = B(\nu_{21}) \int_{0}^{\infty} 2 \pi l dl \hspace{0.1 cm} \rho_{ant} (l) \int_{0}^{2 \pi} d\phi \hspace{0.1 cm} \rho_{ant} (|\vec{l}-\lambda_{21} \vec{U}|),
\label{eq:rho2d}
\end{equation}
\noindent
where $\vec{U}$ is the baseline vector given by $\vec{U} = \frac{\vec{k}_{\perp} r}{2 \pi}$, with $\vec{k}_{\perp}$ the component of the wave vector transverse to the line of sight. $\nu_{21}$ and $\lambda_{21}$ are the observed frequency and wavelength of the 21 cm signal, respectively. $\phi$ is the angle between $\vec{l}$ and $\vec{U}$. $B(\nu_{21})$ is fixed by the normalization
\begin{equation}
\int_{0}^{\infty} U dU \int_{0}^{\pi} d\phi \hspace{0.1 cm} \rho_{2D} (U,\nu_{21}) = 1.
\label{eq:rho2dnorm}
\end{equation}
The 3D baseline distribution ($\rho_{3D}(k,\nu_{21})$) is defined as
\begin{equation}
\rho_{3D}(k,\nu_{21}) = \left[ \int_{0}^{1} d\mu \hspace{0.2 cm} \rho_{2D}^{2} \left( \frac{r k}{2 \pi} \sqrt{1-\mu^{2}},\nu_{21} \right) \right] ^{\frac{1}{2}}.
\label{eq:rho3d}
\end{equation}
The system noise in the 21 cm power spectrum is given by \citep{noise_err_1,noise_err_2,noise_err_3,noise_err_4,Dinda:2018uwm,Hotinli:2021xln}
\begin{equation}
\delta P_{N} (k,\nu_{21}) = \frac{T_{sys}^{2}}{B t_{0}} \Big{(} \frac{\lambda_{21}^{2}}{A_{e}} \Big{)}^{2} \frac{2 r^{2} L}{N_{t} (N_{t}-1) \rho_{3D} (k,\nu_{21})} \frac{1}{\sqrt{N_{k}(k)}}.
\label{eq:pnfinal}
\end{equation}
\noindent
where $N_t$ is the total number of antennas (here 197). $B$ is the bandwidth corresponding to the observed signal and $L$ is the corresponding comoving length. $t_{0}$ is the observation time. $A_{e}$ is the effective collecting area of an individual antenna. It is related to the physical collecting area $A$ of an antenna by $A_{e}=e A$, where $e$ is the efficiency of the antenna. We consider $e=0.7$ and $A \approx 1256.6~\mathrm{m}^2$ for an SKA1-mid antenna. $ T_{sys} $ is the system temperature. To compute the system temperature, we closely follow ... $N_{k}$ is the total number of independent modes in the range between $k$ and $k+dk$, given by $N_{k}(k) = \frac{2 \pi k^{2} dk}{V_{1-mode}}$, where $V_{1-mode}$ is the $k$-space volume occupied by a single independent mode, given by $V_{1-mode} = \frac{(2 \pi)^{3} A}{r^{2} L \lambda_{21}^{2}}$.
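For orientation, the mode counting above can be put into rough numbers; in the sketch below, $r$ and $L$ are hypothetical placeholders, not the survey parameters actually used in the paper:

```python
import numpy as np

# Mode counting from the text: N_k = 2*pi*k^2*dk / V_1mode with
# V_1mode = (2*pi)^3 A / (r^2 L lam21^2).
A = 1256.6                    # physical dish area [m^2]
lam21 = 0.21 * (1.0 + 1.0)    # observed 21 cm wavelength at z = 1 [m]
r = 2.3e3                     # comoving distance [Mpc/h] (hypothetical)
L = 300.0                     # comoving depth of the band [Mpc/h] (hypothetical)

V_1mode = (2.0 * np.pi)**3 * A / (r**2 * L * lam21**2)
k, dk = 0.2, 0.05             # [h/Mpc]
N_k = 2.0 * np.pi * k**2 * dk / V_1mode
print(N_k)
```

Note that $\lambda_{21}^2/A$ is the (dimensionless) field-of-view solid angle, so $r^2 L \lambda_{21}^2/A$ is the effective survey volume and $V_{1-mode}$ carries the expected $k$-space units.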
The sample variance in 21 cm power spectrum is given by \cite{noise_err_1,noise_err_2,noise_err_3,noise_err_4,Dinda:2018uwm,Hotinli:2021xln}
\begin{equation}
\delta P_{SV} (k,\nu_{21}) = \Big{[} \sum_{\theta} \frac{N_{m}(k,\theta)}{P_{21}^{2}(k,\theta)} \Big{]}^{- \frac{1}{2}},
\label{eq:sv}
\end{equation}
\noindent
where $N_{m}(k,\theta) = \frac{2 \pi k^{2} dk \sin\theta d\theta}{V_{1-mode}}$. Thus, total noise is given by
\begin{equation}
\delta P_{tot} (k,\nu_{21}) = \sqrt{ \delta P_{N}^2 (k,\nu_{21}) + \delta P_{SV}^2 (k,\nu_{21}) } .
\label{eq:totalN}
\end{equation}
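Eq.~\eqref{eq:totalN} is a simple quadrature sum, and detectability amounts to comparing a model deviation against it; a toy illustration (all numbers hypothetical):

```python
import numpy as np

# Eq. (totalN): independent error sources add in quadrature (toy values).
dP_N, dP_SV = 3.0, 4.0                  # hypothetical system noise and sample variance
dP_tot = np.hypot(dP_N, dP_SV)          # sqrt(3^2 + 4^2) = 5.0

# A model deviation is "detectable" when it exceeds the total error.
dP_model = 7.5                          # hypothetical |P21_gal - P21_LCDM|
detectable = dP_model > dP_tot
print(dP_tot, detectable)
```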
In Figure~\ref{fig:ps21D3LcdmSKA}, we show that the cubic Galileon model can be distinguished from the $\Lambda$CDM model with the 21 cm power spectrum. The parameter values and the color codes are the same as in Figure~\ref{fig:ps21D3Lcdm}. The orange line is the total error in the 21 cm power spectrum for the SKA1-mid telescope specifications. The lines above this orange line are detectable by the SKA1-mid telescope.
\section{Conclusion}
\label{sec-conclusion}
In this work, we consider the cubic Galileon model, which is a special case of Horndeski gravity. Because of the Galileon shift symmetry, this model is ghost-free. We consider the thawing kind of behavior in this model using proper initial conditions. We consider both the interacting and the non-interacting cubic Galileon model; by interaction, we mean the interaction between the total matter and the Galileon field. The potential is linear, which arises naturally in this kind of model. We show the deviations of this model from the $\Lambda$CDM one through both background and perturbative quantities. Our main focus here is to detect this model through 21 cm observations. For this purpose, we consider interferometric observations like the SKA1-mid observations. We consider the system noise and the sample variance appropriate to this observation, taking $\Lambda$CDM as the fiducial model. We ignore other errors in the 21 cm power spectrum under the assumption that they can be completely removed in an ideal observation. With these errors, we show that the cubic Galileon model can be distinguished from the $\Lambda$CDM model. Our results show that with forthcoming SKA observations, we can put strong constraints on modified gravity models like the interacting cubic Galileon model. Moreover, with SKA observations at higher redshifts, there is a bright possibility of distinguishing such Galileon models from $\Lambda$CDM, which is surely encouraging.
\section{Acknowledgements}
AAS acknowledges the funding from SERB, Govt of India under the research grant no: CRG/2020/004347.
Title:
Addition to the Gnevyshev-Ohl rule and prediction of solar cycle 25 |
Abstract: In addition to the Gnevyshev-Ohl rule (GOR), the relation of the odd cycle
with the subsequent even one in the 22-year Hale solar cycle was found. It is
shown that 3 years before the 11-year minimum $m$, the value of the relative
sunspot number SN in an odd cycle is closely related to the value of the
maximum in the next even cycle (correlation coefficient $\rho=0.94$), and the
same relation of an odd cycle with the previous even one is weaker. Like GOR,
cycles are linked in pairs, but opposite to the Rule.
Based on this result, we propose to use SN$_{m-3}$ on the descending phase of
the previous odd cycle as a precursor of the subsequent EVEN cycle (Figure 3a)
-- a precursor called MI3E. For the prediction of an odd cycle or a prediction
without consideration of parity (as in the article by Braj\v{s}a et al., 2022),
this method gives less reliable results.
To predict the amplitude of an ODD cycle, we propose to use the precursor of
the seventh year to its maximum $M$ MA7O -- SN$_{M-7}$ on the descending phase
of the previous even cycle (Figure 3b). It turned out that in this case, we can
also predict the years near the maximum with a high correlation coefficient
($\rho=0.90{-}0.94$).
Thus, the proposed approaches allow us to predict cycles of different parity.
According to our prediction, the current solar Cycle 25 in 2023 will reach a
maximum of 154 units with a prediction error of $\pm25$ (68% confidence) and
$\pm53$ (95% confidence). In 2024, SN will be almost as high as in 2023 -- 147
units, so with smaller time averaging scales, the maximum will fall at the end
of 2023.
\begin{article}
\begin{opening}
\newcommand\mytit{Addition to the Gnevyshev-Ohl rule and prediction of solar cycle 25}
\title{\mytit}
\author{Yu.~A.~\surname{Nagovitsyn}$^{1,2}$}
\author{V.~G.~\surname{Ivanov}$^{1}$}
\runningauthor{Nagovitsyn and Ivanov}
\runningtitle{\mytit}
\institute{$^{1}$Central Astronomical Observatory at Pulkovo, Saint-Petersburg, Russia\\
$^2$State University of Aerospace Instrumentation, St. Petersburg, Russia}
\keywords{Solar activity; Sunspots; Solar cycle prediction; Gnevyshev-Ohl rule}
\end{opening}
\section{Introduction}
As noted in (\opencite{nag09}), ``The empirical Gnevyshev-Ohl
rule (1948), below referred to as GOR or the Rule, is one of the most
puzzling properties of the solar cyclicity.'' \inlinecite{gnevohl48},
based on the consideration of the relative sunspot number $\SN$ from 1700
to 1944, found that the area under the curve (total power) of the
11-year cycle $\sum \SN$ has a high correlation for the pair of even --
subsequent odd cycles and weakly correlates for the pair of odd --
subsequent even cycles. Thus, they concluded that the 22-year Hale
cycle begins with an even cycle. This is a strange circumstance since
the resulting rule divides cycles into pairs, and neighboring pairs
are weakly related. One of the questions we raise in this article is:
in addition to the parameter $\sum \SN$, are there cycle parameters that
connect consecutive cycles in different ways, including in the odd --
subsequent even cycle pair?
In addition, we touch on a purely practical question of solar cycle
prediction: how the recently started Cycle 25 will look, and what its
amplitude will be when the maximum comes. As it turned out,
in the context under consideration, the prediction depends, and quite
critically, on the parity of cycles, i.e., the global organization of
the Sun's magnetic field in the light of the Hale cycle.
\section{Gnevyshev-Ohl rule}
Almost three-quarters of a century has passed since the publishing of
the article by Gnevyshev and Ohl. Since that time, new 11-year cycles
appeared, adding new statistical data, and the version of $\SN$ used by
these authors was replaced by a version called $\SN$ 2.0 (\opencite{clette14}, \citeyear{clette16}).
Therefore, using SILSO data (\url{https://www.sidc.be/silso/}),
let's plot the dependencies $\sum \SN_E = f_1(\sum \SN_O)$,
$\sum \SN_O = f_2(\sum \SN_E)$, and calculate the correlation
coefficients $\rho$ for them according to modern material, starting from
the year 1700. The ``E'' and ``O'' subscripts hereinafter indicate even
and odd cycles, respectively. As in the original work of \inlinecite{gnevohl48},
we exclude from consideration a pair of Cycles 4 and 5.
Figure~\ref{fi1} illustrates the results. The confidence interval $\pm1\sigma$ is
selected (the probability of falling within is 68\%).
There is an evident confirmation of the GOR. Thus, this rule captures
the causal link of the odd cycle with the previous even one for the
modern version of $\SN$ 2.0 as well (Figure~\ref{fi1}a), while the even cycle
within the Rule is not related to the previous odd one (Figure~\ref{fi1}b).
A summary of possible methods for predicting the amplitudes of 11-year
cycles can be found in the review by \inlinecite{petrovay20}. In particular,
it contains the information about precursor methods.
The precursor methods are based on the idea that the cycle begins
before the sunspot solar cycle minimum between the old and the new
cycle. This idea does not contradict the general approach of the solar
dynamo and can be used to predict the amplitude of the future maximum
in the epoch near the previous minimum. A.I. Ohl (\citeyear{ohl66}) was one of the
first to propose such a method: as a precursor, he used geomagnetic
activity. The geomagnetic activity has often been used for solar
activity cycle prediction (for example, \opencite{pesnell14}). \inlinecite{sval05}
used a polar field. \inlinecite{wilson98} proposed a comprehensive prediction for a number of
cycle parameters. ``Older'' and less-known prediction methods, including
those with the approach of precursors, are contained in Vitinsky's
monograph (\citeyear{vit73}).
According to \inlinecite{macint20}, the international group of
experts, the Solar Cycle 25 Prediction Panel (SC25PP), concluded that the
sunspot solar Cycle 25 will be similar in amplitude to Cycle 24; the
maximum will occur no earlier than 2023 and no later than 2026 with
the sunspot number from 95 to 130. However, at the beginning of this
year, Leamon and McIntosh announced that Cycle 24 had finally ended
(Termination Event occurred), and this allows us to look at the
prediction of \inlinecite{macint20} differently:
\url{https://spaceweatherarchive.com/2022/02/25/the-termination-event-has-arrived}.
According to the new prediction, the maximum of the Cycle 25 will be
about 190 (140--240), i.e., close in amplitude to the rather large
Cycle 23.
Most recently, an article by \inlinecite{brajsa22} was published, in
which the maximum of the started Cycle 25 is predicted as $121 \pm 33$ ---
i.e., low.
Thus, at the moment there is uncertainty about the future amplitude of
the Cycle 25, and the scatter of predictions is large.
\section{Cycle precursor associated with the minimum phase}
In the article by \inlinecite{brajsa22}, it is proposed to choose
values of $\SN$ three years before the minimum of the cycle as a
precursor of the maximum amplitude of the cycle. The maximum smoothed
monthly averages were used. We will use annual averages in our work ---
this is not a fundamental difference for the prediction. The interval
for the study is the same as that of \inlinecite{brajsa22} --- from the
year 1749 to the present.
Let's make some remarks. According to SILSO data, Cycle 20 has a
fairly flat maximum with maximum amplitude in the year 1968. At the
same time, $\SN$ show a different maximum according to Kislovodsk data
\url{http://www.solarstation.ru/archiv}, the sunspot group number $G\!N$
according to \inlinecite{svalshat16}, \inlinecite{usos16} data, and the sunspot area according to Greenwich data
\url{https://solarscience.msfc.nasa.gov/greenwch.shtml}: in 1970 --- see Table~\ref{tab1}, where the values in the maximum years are underlined. Based on this, in
our work we will assume that the maximum of Cycle 20
corresponds to the middle of the year 1970, i.e., 1970.5.
\begin{table}
\caption{
Average annual values of various indices of solar activity in Cycle 20.
}\label{tab1}
\begin{tabular}{cccccc}
\hline
Year & $\SN$, & $\SN$, & $G\!N$, & $G\!N$, & Area, \\
& SILSO & Kislovodsk & Svalgaard & Usoskin & $\mu$sh, \\
& & & \& Schatten, 2016 & el al., 2016 & Greenwich \\
\hline
1964.5 & 15.0 & 12.9 & 0.88 & 0.845 & 54 \\
1965.5 & 22.0 & 20.4 & 1.21 & 1.203 & 113 \\
1966.5 & 66.8 & 61.2 & 3.32 & 3.635 & 593 \\
1967.5 & 132.9 & 133.3 & 7.07 & 7.934 & 1519 \\
1968.5 & \underline{150.0} & 142.2 & 7.14 & 8.129 & 1570 \\
1969.5 & 149.4 & 139.3 & 7.27 & 7.947 & 1450 \\
1970.5 & 148.0 & \underline{148.3} & \underline{8.04} & \underline{8.968} & \underline{1601} \\
1971.5 & 94.4 & 109.6 & 5.49 & 6.081 & 990 \\
1972.5 & 97.6 & 108.8 & 5.33 & 5.955 & 917 \\
1973.5 & 54.1 & 54.0 & 2.95 & 3.235 & 458 \\
1974.5 & 49.2 & 46.7 & 2.69 & 2.824 & 399 \\
1975.5 & 22.5 & 19.9 & 1.19 & 1.252 & 166 \\
1976.5 & 18.4 & 16.7 & 1.08 & 1.122 &170 \\
\hline
\end{tabular}
\end{table}
Let's take the values of $\SN$ three years before the minimum and
calculate their correlations with the values of different years that
are separated by $\Delta$ years from the next maximum. Negative values of $\Delta$
will correspond to the years before the maximum, positive --- after.
Zero is the maximum itself. The minimum will be denoted by $m$, the
maximum by $M$. The parity of the cycle will be determined by the
predicted maximum. We will build linear relations of the following
form using the least squares method (LSM):
\begin{equation}
\SN_{M+\Delta} = a + b\,\SN_{m-3}\,,
\label{eq1}
\end{equation}
and weakly non-linear ones:
\begin{equation}
\SN_{M+\Delta} = a + b\,\SN_{m-3} + c\,\SN_{m-3}^2\,,
\label{eq2}
\end{equation}
and also calculate the coefficients of determination for them $DC\equiv\rho^2$. This
coefficient is preferable because it has a clear meaning: it is the
proportion of dispersion that can be described using the appropriate
regression. The results are shown in Figure~\ref{fi2} separately for even,
odd, and all cycles together. The dashed line shows the upper limit of
the ``acceptable'' level of correlation for the prediction from our
point of view: $DC=0.5$ ($\rho=0.7$).
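The fits~(\ref{eq1})--(\ref{eq2}) and the determination coefficient $DC$ are straightforward to reproduce; the sketch below uses synthetic stand-in data (not the SILSO series), with a true slope chosen near that of Eq.~(\ref{eq3}):

```python
import numpy as np

# Least-squares fits of the linear (Eq. 1) and weakly non-linear (Eq. 2)
# precursor relations, plus the determination coefficient DC = rho^2.
# Synthetic stand-in data (NOT the SILSO record), true slope near Eq. (3).
rng = np.random.default_rng(1)
SN_m3 = np.array([20.0, 35.0, 50.0, 65.0, 80.0, 95.0, 110.0])
SN_M = 50.0 + 1.6 * SN_m3 + rng.normal(0.0, 8.0, SN_m3.size)

lin = np.polyfit(SN_m3, SN_M, 1)       # [b, a] for SN_M = a + b*SN_{m-3}
quad_ = np.polyfit(SN_m3, SN_M, 2)     # [c, b, a] for the form of Eq. (2)

def dc(coeffs):                        # coefficient of determination
    resid = SN_M - np.polyval(coeffs, SN_m3)
    return 1.0 - np.sum(resid**2) / np.sum((SN_M - SN_M.mean())**2)

DC_lin, DC_quad = dc(lin), dc(quad_)
print(lin, DC_lin, DC_quad)
```

As in the paper, the quadratic term improves $DC$ only marginally when the underlying relation is essentially linear.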
The first thing that can be seen in Figure~\ref{fi2}: taking into account the
nonlinearity does not lead to a noticeable improvement in
correlations. But that's not the main thing. It turns out that the
tightness of the relation $\SN_{m-3}$ with $\SN$ in the maximum and the years
closest to it strongly depends on the parity of the predicted cycle.
We can predict an even cycle with a coefficient of determination $DC = 0.882$ (correspondingly, with $\rho = 0.94$), while an odd cycle --- only with $DC = 0.524$ ($\rho = 0.72$).
According to the prediction in the style of the article by \inlinecite{brajsa22},
i.e., if we do not take into account the parity of cycles,
we get for a maximum of Cycle 25: $\SN_M = 129\pm34$, $\rho=0.808$, which is quite consistent with
the values obtained in their article. The corresponding regression is $\SN_M = (71\pm17) + (1.45\pm0.23)\,\SN_{m-3}$; the nonlinear prediction~(\ref{eq2}) gives practically the same values, so taking the possible nonlinearity into account adds nothing. This result is also close to that obtained by \inlinecite{brajsa22}.
Now let's see what the regressions and the prediction will look like
if we take into account the parity of the cycles. For even cycles
regression
\begin{equation}
\SN_M = (50\pm14) + (1.60\pm0.18) \SN_{m-3}\,, \quad \rho = 0.939\,.
\label{eq3}
\end{equation}
If our predicted cycle were even, then its maximum would be $\SN_M=113\pm21$.
Confidence intervals due to the high correlation coefficient are very
small. However, Cycle 25 is odd. We calculate the regression for odd
cycles:
\begin{equation}
\SN_M = (91\pm29) + (1.32\pm0.40) \SN_{m-3}\,, \quad \rho = 0.724
\label{eq4}
\end{equation}
and the prediction itself:
\begin{equation}
\SN_M = 144\pm44\,.
\label{eq5}
\end{equation}
This implies that \inlinecite{brajsa22}, having considered even and odd
cycles together, underestimated the predicted value of Cycle 25. The
difference in regressions for even and odd cycles is shown in
Figure~\ref{fi3}a.
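As a worked example of the odd-cycle regression~(\ref{eq4}): a hypothetical precursor value $\SN_{m-3}=40$ (chosen to be consistent with the quoted prediction, not read from the SILSO record) gives

```python
# Worked example of the odd-cycle regression, Eq. (4): SN_M = 91 + 1.32*SN_{m-3}.
# SN_{m-3} = 40 is a hypothetical precursor value consistent with the quoted
# prediction SN_M = 144 +/- 44; it is not taken from the SILSO record.
a, b = 91.0, 1.32
SN_m3 = 40.0
SN_M = a + b * SN_m3
print(SN_M)        # 143.8
```

which reproduces the quoted central value of $\SN_M \approx 144$ to rounding accuracy.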
Summarizing, according to the effect proposed by \inlinecite{brajsa22},
we can successfully predict the maximum of the next EVEN cycle three
years before the minimum, thereby supplementing the Gnevyshev-Ohl rule
with the relation even -- subsequent odd cycles. The success of the
prediction of an even cycle is guaranteed by a high correlation
coefficient of the equation~(\ref{eq3}).
The precursor of the prediction discussed in this section will be
called MI3E for even cycles and MI3O for odd ones (bearing in mind
that only MI3E corresponds to a reliable prediction).
\section{Cycle precursor associated with the maximum phase}
In the paper by \inlinecite{slonim84}, the idea was expressed to search for
cycle precursors a certain number of years before the maximum. Thus,
the possible predictors turn out to be associated with the maximum
phase, and not with the minimum phase, as in the previous section.
Here we will also consider regressions separately for even, odd cycles
and for all cycles together, choosing for analysis the same time
interval as before, since 1749. Figure~\ref{fi4}a illustrates the changes
in the coefficients of determination for linear relationships of the
type
\begin{equation}
\SN_M = a + b\cdot \SN_{M+\Delta}\,.
\label{eq6}
\end{equation}
Here we bear in mind that $\Delta = 0$ corresponds to the maximum of the
cycle. Precursors $\SN_{M-\Delta}$ correspond to negative $\Delta$ --- i.e., years before
maximum, positive ones indicate relations with post-maximum years,
which are also interesting for the prediction. In addition, we take
into account the weak nonlinearity of the relations. For negative $\Delta$ in
the form
\begin{equation}
\SN_M = a + b\cdot \SN_{M+\Delta} + c\cdot \SN_{M+\Delta}^2\,,
\label{eq7}
\end{equation}
and for positive ones ---
\begin{equation}
\SN_{M+\Delta} = a + b\cdot \SN_M + c\cdot \SN_M^2\,.
\label{eq8}
\end{equation}
We do this specifically for the possible prediction of not only the
maximum values but also the index values in subsequent years, which,
as it turns out, are quite closely related to the maximum. The
determination coefficients for relations of type~(\ref{eq7})--(\ref{eq8}) are shown in
Figure~\ref{fi4}b.
Let's note two circumstances. The first is that, in comparison with $\SN_{m-3}$, a more
``successful'' precursor for the prediction of the maximum is $\SN_{M-7}$: in the
case of a linear form~(\ref{eq6}) for odd cycles $DC=0.849$ ($\rho=0.921$), for even $DC=0.863$ ($\rho=0.929$); in the case of
a nonlinear form~(\ref{eq7}) for odd $DC=0.859$ ($\rho=0.927$), for even $DC=0.903$ ($\rho=0.950$). Secondly, when considering
all cycles together, the correlation of $\SN_{M-7}$ with $\SN_M$ is less than for each
of the parity separately, and this means that their relations are
different. This is clearly seen in Figure~\ref{fi3}b.
When we predict the amplitude of the future cycle, we are, of course,
primarily interested in the amplitude of the maximum, but we are also
interested in neighboring years. We found that the predictor of the
maximum is $\SN_{M-7}$. But how well can this value predict the years $M-1$,
$M+1$, $M+2$, etc.? Figure~\ref{fi5} answers this question: the seventh year
before the maximum is a predictor not only of the maximum but also of
the index in the years near the maximum. This applies to both even and
odd cycles, although the relations between $\SN_{M-7}$ and $\SN_{M+\Delta}$ are different.
The prediction precursor discussed in this section we will call MA7E for even cycles and MA7O for odd ones.
Thus, we have created a ``constructor'' for the prediction of the solar cycle, in this case --- the odd solar Cycle 25.
\section{Prediction of the solar activity cycle 25}
Let's use the results of the previous sections to predict Cycle 25. We
use the MA7O predictor. Assuming different years of maximum onset, we
get the $\SN_M$ values given in the Table~\ref{tab2}.
\begin{table}
\caption{
Predicted $\SN_M$ values depending on the estimated maximum year.
}\label{tab2}
\begin{tabular}{cc}
\hline
Year & $\SN_M$ \\
\hline
2022.5 & $199 \pm 25$ \\
2023.5 & $159 \pm 25$ \\
2024.5 & $133 \pm 25$ \\
\hline
\end{tabular}
\end{table}
Then, using regressions, the coefficients of determination of which
are shown in Figure~\ref{fi5}, we calculate the values of $\SN$ in the years of
Cycle 25, which have prognostic significance, for different maximum
years from Table~\ref{tab2}.
The results are shown in Figure~\ref{fi6}a: light icons --- rhombuses, squares,
and circles --- connected by a dashed line constructed using a global
cubic spline. Comparing the obtained curves with the course of the
monthly average values of $\SN$ observed up to June 2022 (small circles
connected by thin lines), we conclude that 2023.5 is the most
preferable of the three years of the assumed maximum from Table~\ref{tab2}.
However, the monthly average values, although close to the interpolation
curve, still lie somewhat below it. Let's
assume a new maximum date --- 2024.0. Note that the average annual
values can be calculated by averaging $\SN$ not only traditionally from
January to December but also from July to June of the next year.
Let's calculate such annual averages, obtain regressions with
determination coefficients similar to those shown in Figure~\ref{fi5}, and
calculate the values for the dates 2023.0--2028.0 in annual
increments --- see Figure~\ref{fi6}a: black circles connected by a thick
spline curve. It can be seen that by assuming a maximum year of 2024.0
we get a better agreement of the behavior of the predicted and
observed $\SN$.
We now turn to the values indicated in Figure~\ref{fi6}a by crosses. Are there any
general trends in the last years of the cycle (in our case, the odd
one)? Consider the values of $\SN$ in the year of the final minimum of
the odd cycle, $m$, and in the two years before it, $m-1$ and $m-2$, as a function
of their distance $\Delta$ from the initial minimum. It turns out that this
dependence can be described by a parabola $\SN(\Delta) = (327\pm87)-(50\pm18)\Delta+(1.96\pm0.87)\Delta^2$
with a correlation coefficient $\rho=0.811$, i.e., sufficiently high that we can use it to predict
the descending phase of the new cycle (this appears to be a new,
unexpected result). In Figure~\ref{fi6}a the crosses represent the values given by
this parabola for the dates $2019.5+\Delta$, where 2019.5 is the initial minimum of Cycle 25. Note that for even
cycles a similar dependence also holds, although the correlation
coefficient with the parabola is somewhat lower, $\rho=0.768$; it too can
be used for solar cycle prediction in the future.
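The descending-phase parabola above lends itself to a quick numerical check. Below is a minimal sketch that evaluates it using only the central values of the fitted coefficients, ignoring the quoted uncertainties; the function name and the choice of $\Delta$ values are ours, with the initial minimum of Cycle 25 taken as 2019.5 as in the text.

```python
# Central-value parabola for S_N in the final years of an odd cycle,
# as a function of the distance Delta (in years) from the initial minimum:
#   S_N(Delta) = 327 - 50*Delta + 1.96*Delta**2
# The coefficient uncertainties (+-87, +-18, +-0.87) are ignored here.

def sn_descending(delta):
    """Central-value estimate of S_N at distance `delta` (years) from the initial minimum."""
    return 327.0 - 50.0 * delta + 1.96 * delta ** 2

# With the initial minimum of Cycle 25 at 2019.5, the crosses in Figure 6a
# correspond to the dates 2019.5 + Delta:
for delta in (9, 10, 11, 12):
    print(f"year {2019.5 + delta:.1f}: S_N ~ {sn_descending(delta):.1f}")
```

The central parabola reaches its minimum near $\Delta \approx 12.8$ years, consistent with low activity at the final minimum of the cycle.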
So far, our predictions have used confidence ranges corresponding to
one standard deviation (i.e., a 68\% probability of falling within the
interval), as in \inlinecite{brajsa22} and in many other solar cycle
predictions. If we want to achieve a reliability of at least 95\%,
however, we need to set a prediction interval of at least two standard
deviations. We therefore repeat all the previous procedures under this
requirement. The annual mean values of $\SN$, computed in the
traditional way from January to December, are shown with these
confidence bands in Figure~\ref{fi6}b.
Note that our methodology allows us to predict the individual annual
mean values of $\SN$ in a cycle, and not only the amplitude of its
maximum, as is the case for most authors.
Having made a prediction for Cycle 25, it is natural to check
whether the Gnevyshev-Ohl rule is fulfilled for the pair of Cycles
24--25. In Figure~\ref{fi1}a, the corresponding point is circled. As we can
see, our prediction does not contradict this well-known rule.
\section{Results}
The Gnevyshev-Ohl rule (GOR) establishes a relation between adjacent 11-year
cycles in the 22-year ``magnetic'' Hale cycle: ``an even cycle
determines the next odd one,'' and thus they form a pair. In this work,
we found that the behavior of an odd cycle determines the behavior of
the subsequent even one. Namely, the value of $\SN$ in the odd cycle
3 years before its minimum is associated with the value of the maximum of
the subsequent even cycle (correlation coefficient $\rho=0.94$). As with the GOR,
cycles in this sense are linked in pairs, but in the opposite order to the Rule.
This is one of the important results of the work.
Based on this result, we propose to use $\SN_{m-3}$ (Figure~\ref{fi3}a) as a precursor of
the subsequent EVEN cycle during the descending phase of the odd one ---
we call this method MI3E. For the prediction of an odd cycle or a
prediction without parity (as in the article by \opencite{brajsa22}),
this method gives less reliable results.
To predict the ODD cycle, we propose to use the precursor seven
years before its maximum, MA7O --- $\SN_{M-7}$ during the descending phase of
the previous even cycle (Figure~\ref{fi3}b). It turns out that in this case
we can predict the years near the maximum with a high correlation
coefficient ($\rho=0.90{-}0.94$). In addition, 7 years before the maximum it is also
possible to predict an even cycle using the MA7E precursor
(Figure~\ref{fi5}).
Also noteworthy is the similar behavior of $\SN$ for different cycles
near the final minimum of the cycle, depending on the distance from
the initial minimum. Thus, the proposed approaches make it possible to
predict cycles of different parity.
In 2023 the current Cycle 25 should reach a maximum annual mean value
of $\SN = 154$, with a prediction interval of 25 at 68\% confidence and
53 at 95\% confidence. This is higher than
the original official NOAA/NASA/ISES prediction but lower than the
updated prediction of \inlinecite{leamon21} based on the
methodology of \inlinecite{macint20}. In 2024, $\SN$ will be almost as
high as in 2023 --- 147 units --- so on shorter averaging time scales
the maximum will fall at the end of 2023. We note again that the
proposed approach makes it possible to predict the values of $\SN$ in
individual years of the cycle, and not only the amplitude of its
maximum.
We have not discussed in this article the relationship between the
precursors $\SN_{m-3}$ and $\SN_{M-7}$, their relative positions on the descending
phases of cycles, or their physical meaning --- these are tasks for
future studies.
Yury Nagovitsyn thanks the Ministry of Science and Higher Education of
the Russian Federation for financial support for project
075--15--2020--780.
\end{article}
|
Title:
Magnetic field properties in star formation: a review of their analysis methods and interpretation |
Abstract: Linearly polarized emission from dust grains and molecular spectroscopy is an
effective probe of the magnetic field topology in the interstellar medium and
molecular clouds. The longstanding Davis-Chandrasekhar-Fermi (DCF) method and
the recently developed Histogram of Relative Orientations (HRO) analysis and
the polarization-intensity gradient (KTH) method are widely used to assess the
dynamic role of magnetic fields in star formation based on the plane-of-sky
component of field orientations inferred from the observations. We review the
advances and limitations of these methods and summarize their applications to
observations. Numerical tests of the DCF method, including its various
variants, indicate that its largest uncertainty may come from the assumption of
energy equipartition, which should be further calibrated with simulations and
observations. We suggest that the ordered and turbulent magnetic fields of
particular observations are local properties of the considered region. An
analysis of the polarization observations using DCF estimations suggests that
magnetically trans-to-super-critical and averagely trans-to-super-Alfv\'{e}nic
clumps/cores form in sub-critical clouds. High-mass star-forming regions may be
more gravity-dominant than their low-mass counterparts due to higher column
density. The observational HRO studies clearly reveal that the preferential
relative orientation between the magnetic field and density structures changes
from parallel to perpendicular with increasing column densities, which, in
conjunction with simulations, suggests that star formation is ongoing in
trans-to-sub-Alfv\'{e}nic clouds. There is a possible transition back from
perpendicular to random alignment at higher column densities. Results from
observational studies using the KTH method broadly agree with those of the HRO
and DCF studies.
| https://export.arxiv.org/pdf/2208.06492 |
\onecolumn
\firstpage{1}
\title[Magnetic fields in star formation]{Magnetic field properties in star formation: a review of their analysis methods and interpretation}
\author[\firstAuthorLast ]{\Authors} %
\address{} %
\correspondance{} %
\extraAuth{}%
\section{Introduction}
Star formation within molecular clouds, the densest part of the interstellar medium (ISM), is regulated by the complex interplay among gravity, turbulence, magnetic fields, and other factors (e.g., protostellar feedback and feedback from previous generations of stars) at different scales. Magnetic fields interact with the other two major forces (gravity and turbulence) by providing support against gravitational collapse \citep{1987ARA&A..25...23S} and generating anisotropic turbulence \citep{1995ApJ...438..763G}. Observational studies of magnetic fields are crucial to distinguish between strong-field star formation theories, in which magnetically sub-critical clouds slowly form super-critical substructures that subsequently collapse \citep{2006ApJ...646.1043M}, and weak-field star formation theories, where large-scale supersonic turbulent flows form overdense intersecting regions that dynamically collapse \citep{2004RvMP...76..125M}.
Polarized thermal dust emission observations have been the most common way to trace the plane-of-sky (POS) component of magnetic field orientations with the assumption that the shortest axis of irregular dust grains is aligned with magnetic field lines \citep{1949PhRv...75.1605D, 2007MNRAS.378..910L, 2007JQSRT.106..225L}. The Goldreich-Kylafis (GK) effect provides an alternative way to trace the POS field orientation (with a 90$^{\circ}$ ambiguity) with molecular line polarization observations \citep{1981ApJ...243L..75G}. The recently developed Velocity Gradient Technique (VGT) proposed another way to trace the POS field orientation with line observations based on the notion that the gradient of velocity centroids \citep[VCG,][]{2017ApJ...835...41G} or thin velocity channels \citep[VChG,][]{2018ApJ...853...96L} is perpendicular to the magnetic field due to the intrinsic properties of magneto-hydrodynamic (MHD) turbulence \citep{1995ApJ...438..763G}.
Several analysis techniques have been developed to infer the properties of magnetic fields based on their orientations: the Davis-Chandrasekhar-Fermi (DCF) method was proposed by \citet{1951PhRv...81..890D} and \citet{1953ApJ...118..113C} approximately 70 years ago and has been the most widely used method to indirectly derive the magnetic field strength from statistics of field orientations. A newer tool, the polarization-intensity gradient method \citep[hereafter the KTH method,][]{2012ApJ...747...79K}, was proposed about one decade ago and can also be used to assess the significance of magnetic fields based on ideal MHD equations. The Histogram of Relative Orientations (HRO) analysis \citep{2013ApJ...774..128S}, which was proposed right after the KTH method, measures the relative orientation between the magnetic field and density structures and can be used to link the observed magnetic morphology with the physics of simulations. These methods provide information on the magnetic properties in star-forming molecular clouds and allow us to investigate both qualitatively and quantitatively the dynamical role of magnetic fields in the collapse and fragmentation of dense molecular structures.
In this chapter we review the concept and recent developments of these techniques and discuss their limitations. We also summarize the application of these methods to observations of star formation regions and discuss the role of magnetic fields at different spatial scales. In particular, we focus on the relative importance of the magnetic field as compared to gravity and turbulence at different scales of star-forming clouds. In Section \ref{sec:dcf}, we review the DCF method. In Section \ref{sec:hro}, we review the HRO analysis. In Section \ref{sec:KTH}, we review the KTH method. In Section \ref{sec:sum}, we summarize this chapter.
\section{The DCF method}\label{sec:dcf}
In the middle of the 20th century, \citet{1951PhRv...81..890D} and \citet{1953ApJ...118..113C} proposed the DCF method to estimate the mean\footnote{In this paper, the mean field refers to the vector-averaged magnetic field ($B^\mathrm{m}$) and the ordered field refers to the curved large-scale ordered magnetic field ($B^\mathrm{o}$). We also use the term ``underlying field'' ($B^\mathrm{u}$) to refer to either the mean field or the ordered field, since many previous studies did not explicitly distinguish between the two. The ordered field and the mean field are equivalent if the large-scale ordered field lines are straight.} magnetic field strength ($B^\mathrm{m}$) of the ISM in the spiral arm based on the first interstellar magnetic field orientation observations made by \citet{1949Sci...109..165H}. Since then, the method has been improved and adopted by the community to estimate the field strength in star-forming regions. In this section, we present a review of the original DCF method and its modifications.
\subsection{Basic assumptions} \label{sec:assump}
\subsubsection{Energy equipartition} \label{sec:energyeq}
The original DCF method assumes an equipartition between the transverse (i.e., perpendicular to the underlying field $\boldsymbol{B^{\mathrm{u}}}$) turbulent magnetic and kinetic energies (i.e., the Alfv\'{e}n relation, hereafter the DCF53 equipartition assumption):
\begin{equation}\label{eq:alfven42}
\frac{1}{2} \rho \delta v_{\mathrm{\perp}}^2= \frac{(B^{\mathrm{t}}_{\mathrm{\perp}})^2}{2 \mu_0},
\end{equation}
in SI units\footnote{The equations in SI units in this paper can be transformed to Gaussian units by replacing $\mu_0$ with $4\pi$.}, where $B^{\mathrm{t}}_{\mathrm{\perp}}$ is the transverse turbulent magnetic field, $\delta v_{\mathrm{\perp}}$ is the transverse turbulent velocity dispersion, $\mu_0$ is the permeability of vacuum, and $\rho$ is the gas density. The DCF53 assumption is also adopted by the recently proposed Differential Measure Approach \citep[DMA, ][]{2022arXiv220409731L}. In the POS, the DCF53 assumption yields
\begin{equation}\label{eq:alfvenpos}
\frac{1}{2} \rho \delta v_{\mathrm{pos\perp}}^2= \frac{(B^{\mathrm{t}}_{\mathrm{pos\perp}})^2}{2 \mu_0},
\end{equation}
where ``pos'' stands for the POS.
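Inverted for the field strength, the equipartition gives $B^{\mathrm{t}}_{\mathrm{\perp}} = \sqrt{\mu_0 \rho}\,\delta v_{\mathrm{\perp}}$. The short sketch below evaluates this in SI units; the density, mean molecular weight, and velocity dispersion are illustrative assumed values, not taken from the text.

```python
import math

MU0 = 4 * math.pi * 1e-7       # vacuum permeability [H/m]
M_H = 1.6735575e-27            # hydrogen-atom mass [kg]

def turbulent_field(n_h2_cm3, dv_kms, mu=2.33):
    """Transverse turbulent field B_t_perp [T] from the DCF53 equipartition
    (1/2) rho dv^2 = B_t^2 / (2 mu0), i.e. B_t = sqrt(mu0 rho) * dv.
    n_h2_cm3: number density [cm^-3]; dv_kms: turbulent velocity
    dispersion [km/s]; mu: assumed mean molecular weight."""
    rho = mu * M_H * n_h2_cm3 * 1e6          # gas mass density [kg m^-3]
    return math.sqrt(MU0 * rho) * dv_kms * 1e3

# Illustrative clump values (assumed): n = 1e4 cm^-3, dv = 1 km/s
b_t = turbulent_field(1e4, 1.0)
print(f"B_t_perp ~ {b_t * 1e10:.0f} microGauss")   # 1 T = 1e10 muG
```

For these assumed inputs the implied transverse turbulent field is of order tens of microGauss, a typical magnitude for dense clumps.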
Alternatively, \citet{2016JPlPh..82f5301F} assumed an equipartition between the coupling-term magnetic field ($\boldsymbol{B^{\mathrm{t}}_{\mathrm{\|}}} \cdot \boldsymbol{B^{\mathrm{u}}}$, where ``$\|$'' denotes the direction parallel to $\boldsymbol{B^{\mathrm{u}}}$) and the turbulence (hereafter the Fed16 equipartition assumption) when the underlying field is strong. \citet{2021AA...656A.118S} further proposed that only the transverse velocity field is responsible for $\boldsymbol{B^{\mathrm{t}}_{\mathrm{\|}}} \cdot \boldsymbol{B^{\mathrm{u}}}$ and that the transverse velocity field for the POS coupling-term field can be approximated with the line-of-sight (LOS) velocity dispersion $\delta v_{\mathrm{los}}$ (see their Section 4.2). Thus they obtained
\begin{equation}\label{eq:eqcouplingpos1}
\frac{1}{2} \rho \delta v_{\mathrm{los}}^2= \frac{B^{\mathrm{t}}_{\mathrm{pos\|}} B^{\mathrm{u}}_{\mathrm{pos}}}{\mu_0},
\end{equation}
where the POS transverse velocity dispersion is neglected. %
\paragraph{Sub-Alfv\'{e}nic case}
Conventionally, a sub-Alfv\'{e}nic state means that the underlying magnetic energy is greater than the turbulent kinetic energy. It is widely accepted that the DCF53 equipartition assumption is satisfied for pure incompressible sub-Alfv\'{e}nic turbulence due to the magnetic freezing effect, where the perturbed magnetic lines of force oscillate with the same velocity as the turbulent gas in the transverse direction \citep{1942Natur.150..405A}. However, star-forming molecular clouds are highly compressible \citep{2012A&ARv..20...55H}. For compressible sub-Alfv\'{e}nic turbulence, it is still debated whether the DCF53 or the Fed16 equipartition assumption is more accurate \citep[e.g.,][]{2021AA...647A.186S, 2021AA...656A.118S, 2022MNRAS.510.6085L, 2022ApJ...925...30L, 2022arXiv220409731L}.
Observational studies usually adopt the local underlying field within the region of interest instead of the global underlying field at larger scales. In this case, the volume-averaged coupling-term magnetic energy is 0 by definition \citep{1995ApJ...439..779Z, 2022ApJ...925...30L} and should not be used in analyses. Several numerical studies \citep{2016JPlPh..82f5301F, 2020MNRAS.498.1593B, 2021AA...647A.186S, 2022MNRAS.515.5267B} have suggested that the volume-averaged RMS coupling-term magnetic energy fluctuation should be studied instead of the volume-averaged coupling-term magnetic energy. With non-self-gravitating sub-Alfv\'{e}nic simulations, they found that the coupling-term field energy fluctuation is in equipartition with the turbulent kinetic energy within the whole simulation box. However, it is unclear whether the Fed16 equipartition assumption still holds in sub-regions of their simulations. Investigating the local energetics is very important because the local and global properties of MHD turbulence can be very different \citep{1999ApJ...517..700L, 2013SSRv..178..163B}. In small-scale sub-regions below the turbulent injection scale and without significant self-gravity, the local underlying magnetic field is actually part of the turbulent magnetic field at larger scales and the local turbulence is the cascaded turbulence \citep{2016JPlPh..82f5301F}. Within self-gravitating molecular clouds, gravity is comparable to or dominates the magnetic field and turbulence at higher densities \citep{2012ARA&A..50...29C, 2013ApJ...779..185K, 2022ApJ...925...30L}, which has a strong effect on both magnetic fields and turbulence. For example, gravity can compress magnetic field lines and amplify the field strength, and gravitational inflows can accelerate the gas and enhance turbulent motions.
As observations can only probe the magnetic field in part of the diffuse ISM or molecular clouds, it is necessary to test the validity of the Fed16 assumption in sub-regions of simulations with or without self-gravity. Moreover, \citet{2022MNRAS.510.6085L} and \citet{2022arXiv220409731L} pointed out that the physical meaning of the RMS coupling-term energy fluctuation adopted by the Fed16 equipartition assumption is still unclear, which needs to be addressed in the future.
The traditional DCF53 equipartition assumption has been tested by many numerical works \citep[e.g.,][]{2004PhRvE..70c6408H, 2008ApJ...679..537F, 2021AA...656A.118S, 2021ApJ...919...79L, 2022MNRAS.514.1575C}. For non-self-gravitating simulations, \citet{2004PhRvE..70c6408H} found that the DCF53 equipartition is violated throughout the inertial range (i.e., between the turbulence injection scale $L_{inj}$ and the dissipation scale) in initially very sub-Alfv\'{e}nic ($\mathcal{M}_{A0} = 0.05-0.5$) simulations, while \citet{2008ApJ...679..537F} found an exact equipartition between turbulent magnetic and kinetic energies for initially slightly sub-Alfv\'{e}nic (initial Alfv\'{e}nic Mach number $\mathcal{M}_{A0} = 0.7$) models. Another numerical work by \citet{2021AA...656A.118S} with non-self-gravitating simulations found that the DCF53 assumption is approximately fulfilled for initially trans-Alfv\'{e}nic ($\mathcal{M}_{A0} = 0.7-2$) models, but the turbulent magnetic energy is much smaller than the turbulent kinetic energy in initially sub-Alfv\'{e}nic ($\mathcal{M}_{A0} = 0.1-0.5$) models. As these studies adopted the whole-simulation-averaged field as the mean field, it is again unclear whether these relations still hold in sub-regions where the local properties are dominant. \citet{2022arXiv220409731L} suggested that the DCF53 equipartition in sub-Alfv\'{e}nic and non-self-gravitating media can be established in the regime of strong turbulence at small scales ($<L_{inj}\mathcal{M}_{A}^2$). For self-gravitating simulations, a numerical study of clustered star-forming clumps found that the DCF53 energy equipartition assumption is approximately fulfilled in trans-Alfv\'{e}nic ($\mathcal{M}_{A} = 0.7-2.5$) clumps/cores at 1-0.1 pc scales \citep{2021ApJ...919...79L}.
Another numerical study by \citet{2022MNRAS.514.1575C} with self-gravitating and initially trans-Alfv\'{e}nic ($\mathcal{M}_{A0} = 1.04-1.45$) simulations found that the DCF53 equipartition approximately holds in the whole simulation box. \citet{2022MNRAS.514.1575C} further suggested that the DCF53 equipartition breaks down in sub-blocks with insufficient cell numbers (e.g., $<41^3$ cells). It is unclear whether the DCF53 energy equipartition assumption still holds in very sub-Alfv\'{e}nic ($\mathcal{M}_{A} \ll 0.7$) clumps/cores, although the real question may be whether there are very sub-Alfv\'{e}nic clumps/cores in gravity-accelerated gas (see discussions in Sections \ref{sec:dcfobs_BVSturb} and \ref{sec:hrosim}).
In summary, the DCF53 equipartition assumption is valid within trans- or slightly sub-Alfv\'{e}nic self-gravitating molecular clouds, but its validity in very sub-Alfv\'{e}nic self-gravitating regions still needs more investigation. The Fed16 equipartition assumption can be used as an empirical relation when studying the global underlying and turbulent magnetic fields in the diffuse ISM beyond the turbulent injection scale if the ISM is sub-Alfv\'{e}nic, but its physical interpretation, as well as its applicability in parts of the ISM below the turbulent injection scale and within self-gravitating molecular clouds, is still unclear. The equipartition problem has only been investigated theoretically and numerically; we are unaware of any observational attempts to look into this problem within molecular clouds, and there is a lack of observational methods to study the energy equipartition.
\paragraph{Super-Alfv\'{e}nic case}
A super-Alfv\'{e}nic state of turbulence conventionally means that the underlying magnetic energy is smaller than the turbulent kinetic energy. In the super-Alfv\'{e}nic case, the magnetic field is dominated by the turbulent component. Numerical studies have shown that both the turbulent magnetic energy and the RMS coupling-term magnetic energy fluctuation are smaller than the turbulent kinetic energy in super-Alfv\'{e}nic simulations \citep{2008ApJ...679..537F, 2016JPlPh..82f5301F, 2021ApJ...919...79L}. This energy non-equipartition in super-Alfv\'{e}nic cases could lead to an overestimation of the magnetic field strength. \citet{2022arXiv220409731L} suggested that super-Alfv\'{e}nic turbulence transitions to sub-Alfv\'{e}nic turbulence at scales $< L_{inj}\mathcal{M}_{A}^{-3}$ due to the turbulence cascade in the absence of gravity.
\subsubsection{Isotropic turbulence}
The original DCF method assumes an isotropic turbulent velocity dispersion, so that the unobserved $\delta v_{\mathrm{pos\perp}}$ in Equation \ref{eq:alfvenpos} can be replaced by the observable $\delta v_{\mathrm{los}}$, which implies
\begin{equation}\label{eq:alfveniso}
\frac{1}{2} \rho \delta v_{\mathrm{los}}^2= \frac{(B^{\mathrm{t}}_{\mathrm{pos\perp}})^2}{2 \mu_0}.
\end{equation}
On the other hand, the modified DCF method in \citet{2021AA...647A.186S} (hereafter ST21) requires an assumption of an isotropic turbulent magnetic field, so that the $B^{\mathrm{t}}_{\mathrm{pos\|}}$ in Equation \ref{eq:eqcouplingpos1} can be replaced by $B^{\mathrm{t}}_{\mathrm{pos\perp}}$ \citep{2021AA...656A.118S}. This gives
\begin{equation}\label{eq:eqcouplingpos1iso}
B^{\mathrm{u}}_{\mathrm{pos}} \sim \frac{\mu_0 \rho \delta v_{\mathrm{los}}^2}{2 B^{\mathrm{t}}_{\mathrm{pos\perp}}}.
\end{equation}
However, both incompressible and compressible MHD turbulence are anisotropic in the presence of a strong magnetic field \citep{1983JPlPh..29..525S, 1984ApJ...285..109H, 1995ApJ...438..763G}. In particular, the fluctuations of the Alfv\'{e}nic and slow modes are anisotropic, while only the fast mode has isotropic fluctuations \citep{2022arXiv220409731L}. The anisotropic velocity field in low-density non-self-gravitating regions has been confirmed by observations \citep{2008ApJ...680..420H}. \citet{2022arXiv220409731L} suggested that the anisotropy of MHD turbulence is a function of $\mathcal{M}_{A}$ and relative fraction of MHD modes. The anisotropy of turbulence is also scale-dependent, which increases with decreasing scales \citep{1999ApJ...517..700L}. The recently developed DMA does not require the assumption of isotropic turbulence, but analytically considers the anisotropy of MHD turbulence in non-self-gravitating turbulent media. In particular, the DMA has written the POS transverse magnetic field structure function ($\tilde{D}_{yy}$) and the POS velocity centroid structure function ($\tilde{D}^{v}$) as functions of the turbulent field fluctuation and the velocity dispersion, respectively. Both $\tilde{D}_{yy}$ and $\tilde{D}^{v}$ are complicated functions of the mean field inclination angle\footnote{The inclination angle $\gamma$ corresponds to the angle between the 3D mean field and the LOS throughout this paper.} $\gamma$, $\mathcal{M}_{A}$, composition of turbulence modes, and distance displacements. We refer the readers to the original DMA paper \citep{2022arXiv220409731L} for the derivation of $\tilde{D}_{yy}$ and $\tilde{D}^{v}$.
With non-self-gravitating simulations, \citet{2001ApJ...561..800H} and \citet{2021AA...656A.118S} found that the full-cube turbulent magnetic field is approximately isotropic within a factor of 2 in their $\mathcal{M}_{A}$=0.1-14 simulations, but they did not investigate the properties of turbulence in sub-regions at smaller scales, where the anisotropy is expected to be more prominent. Both works found that the turbulent field is more anisotropic for models with smaller $\mathcal{M}_{A}$ values.
Studies of the anisotropy of MHD turbulence in the high-density self-gravitating regime are rarer. Spectroscopic observations toward high-density regions did not find obvious evidence of velocity anisotropy \citep{2012MNRAS.420.1562H}. A recent numerical work by \citet{2021ApJ...919...79L} found that both the turbulent magnetic field and the turbulent velocity dispersion are approximately isotropic within a factor of 2 at 1-0.01 pc scales in their trans-to-super-Alfv\'{e}nic simulations of clustered star-forming regions. They did not find an obvious relation between the anisotropy and the level of initial magnetization in their simulations. This may be due to the local super-Alfv\'{e}nic turbulence at high densities and/or the complex averaging along the LOS of different local anisotropic turbulent fields at various directions. Moreover, \citet{2017ApJ...836...95O} found that the velocity anisotropy due to magnetic fields disappears in high-density and sub-Alfv\'{e}nic (local $\mathcal{M}_{A}=0.54$) regions of their self-gravitating simulations. Thus, the uncertainty introduced by anisotropic MHD turbulence should be a minor contributor to the error budget of the DCF method in star-forming clumps/cores where self-gravity is important.
\subsubsection{Tracing ratios between field components with field orientations}\label{sec:ratiobori}
In the POS, the local total field at position i is a vector sum of the underlying field and the local turbulent field: $\boldsymbol{B^{\mathrm{tot,i}}_{\mathrm{pos}}} = \boldsymbol{B^{\mathrm{u}}_{\mathrm{pos}}} + \boldsymbol{B^{\mathrm{t,i}}_{\mathrm{pos}}}$. Equation \ref{eq:alfveniso} can be rewritten as
\begin{equation}\label{eq:eqequitrans}
B^{\mathrm{u}}_{\mathrm{pos}} = \sqrt{\mu_0 \rho }\frac{\delta v_{\mathrm{los}}}{B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}}
\end{equation}
or
\begin{equation}\label{eq:eqequitranstot}
B^{\mathrm{tot}}_{\mathrm{pos}} = \sqrt{\mu_0 \rho }\frac{\delta v_{\mathrm{los}}}{B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}}
\end{equation}
to derive the POS underlying magnetic field strength $B^{\mathrm{u}}_{\mathrm{pos}}$ or the POS total magnetic field strength $B^{\mathrm{tot}}_{\mathrm{pos}}$. Similarly for the ST21 approach, Equation \ref{eq:eqcouplingpos1iso} can be rewritten as
\begin{equation}\label{eq:eqequitransst21}
B^{\mathrm{u}}_{\mathrm{pos}} = \sqrt{\frac{\mu_0 \rho } {2B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}}}\delta v_{\mathrm{los}}
\end{equation}
to derive the POS underlying magnetic field strength. Both the original DCF method and the ST21 approach have adopted the assumption that the turbulent-to-underlying field strength ratio $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$ or turbulent-to-total field strength ratio $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}$ in the above equations can be estimated from statistics of the POS magnetic field orientations, which is usually done by calculating the dispersion of magnetic field position angles or applying the angular dispersion function method \citep[hereafter the ADF method, ][]{2009ApJ...696..567H, 2009ApJ...706.1504H, 2016ApJ...820...38H}. On the other hand, the DMA assumes that the POS polarization angle structure function $D^{\phi}$ can be expressed as a function of the ratio between the POS transverse and total magnetic field structure functions ($\tilde{D}_{yy}/\tilde{D}_{Bpos}$), where the term $\tilde{D}_{yy}/\tilde{D}_{Bpos}$ can be expressed as a function of the turbulent-to-underlying field strength ratio.
\paragraph{Angular dispersions}\label{sec:angles}
The underlying magnetic field reflects the intrinsic property of an unperturbed magnetic field. There are different approaches to relate the dispersion of magnetic field position angles with $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$. Table \ref{tab:formulaBu} summarises these angular relations and the corresponding DCF formulas for $B^{\mathrm{u}}_{\mathrm{pos}}$.
\begin{table}[tbh]
\tiny
\caption{Angular relations and corresponding formulas to derive $B^{\mathrm{u}}_{\mathrm{pos}}$. \label{tab:formulaBu}}
\begin{tabular}{ccc}
\hline \noalign {\smallskip}
Relation ($B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}\sim$) & Formula ($ \sqrt{\mu_0 \rho }\delta v_{\mathrm{los}} \times) $ & Reference$^a$ \\
\hline \noalign {\smallskip}
$\delta\phi$ & $1/\delta\phi$ & 1,2 \\ %
$\delta (\tan \phi)$ & $1/ \delta (\tan \phi)$ & 3,4 \\%Hei01, Liu21
$\tan \delta \phi$ & $1/\tan \delta \phi$ & 5 \\%Li21
$\delta\phi$ & $(1/\sqrt{2\delta\phi})^b$ & 6 \\ %
\hline \noalign {\smallskip}
\end{tabular}
\normalsize{Notes}\\
\normalsize{$^{a}$ References: (1) \citet{1951PhRv...81..890D}; (2) \citet{1953ApJ...118..113C}; (3) \citet{2001ApJ...561..800H}; (4) \citet{2021ApJ...919...79L}; (5) \citet{2022MNRAS.510.6085L}; (6) \citet{2021AA...656A.118S}. }\\
\normalsize{$^{b}$ This formula adopts Equation \ref{eq:eqequitransst21}, while the rest of the formulas adopt Equation \ref{eq:eqequitrans}.}\\
\end{table}
All the relations listed in Table \ref{tab:formulaBu} have assumed that $B^{\mathrm{t}}_{\mathrm{pos\|}} \ll B^{\mathrm{u}}_{\mathrm{pos}}$, so the $B^{\mathrm{t}}_{\mathrm{pos\|}}$ can be neglected and the dispersion on the field angle \footnote{The field angle considers the direction of the magnetic field and is in the range of -180$\degr$ to 180$\degr$ \citep{2022MNRAS.510.6085L}, while the position angle $\phi$ only considers the orientation of the magnetic field and is in the range of -90$\degr$ to 90$\degr$. Due to the 180$\degr$ ambiguity of dust polarization observations, only the magnetic field position angle is observable.} of the magnetic field can be approximated with the dispersion of the position angle. The angular relation $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}} \sim \delta\phi$ has adopted an additional small angle approximation (i.e., $\delta\phi \sim \tan\delta\phi \sim \delta (\tan \phi) \ll 1$ or $B^{\mathrm{t}}_{\mathrm{pos\perp}} \ll B^{\mathrm{u}}_{\mathrm{pos}}$). \citet{2001ApJ...546..980O} and \citet{2021ApJ...919...79L} found that the angle limit for the small angle approximation is $\delta\phi \lesssim 25 \degr$ in their simulations. \citet{2021ApJ...919...79L} also suggested that $\delta\phi$ can significantly underestimate $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$ for large $\delta\phi$ values. For the relation $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}\sim \delta (\tan \phi)$, the simulations by \citet{2001ApJ...561..800H} and \citet{2021ApJ...919...79L} suggested that $\delta (\tan \phi)$ can show significant scatters due to large values of $\delta (\tan \phi)$ when $\phi \sim 90\degr$. Thus, $\delta (\tan \phi)$ is valid to trace the $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}} $ when $\phi$ is small, and $\delta (\tan \phi)$ reduces to $\delta \phi$ in such cases. 
In addition, \citet{2022MNRAS.514.1575C} found that $\tan \delta \phi$ does not trace $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$ well in their simulations.
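As a numerical illustration of the angular relations above, the sketch below evaluates the classical DCF formula (the $1/\delta\phi$ row of Table \ref{tab:formulaBu}) and the ST21 variant (the $1/\sqrt{2\delta\phi}$ row) side by side. The clump parameters are assumed, illustrative values; note that $\delta\phi$ must be in radians and that the classical formula is trusted only for $\delta\phi \lesssim 25\degr$.

```python
import math

MU0 = 4 * math.pi * 1e-7       # vacuum permeability [H/m]
M_H = 1.6735575e-27            # hydrogen-atom mass [kg]

def _sqrt_mu0_rho_dv(n_h2_cm3, dv_kms, mu=2.33):
    """Common prefactor sqrt(mu0 rho) * dv_los in SI units (mu is an
    assumed mean molecular weight; n in cm^-3, dv in km/s)."""
    rho = mu * M_H * n_h2_cm3 * 1e6          # gas mass density [kg m^-3]
    return math.sqrt(MU0 * rho) * dv_kms * 1e3

def b_pos_dcf(n_h2_cm3, dv_kms, dphi_deg):
    """Classical DCF: B_u_pos = sqrt(mu0 rho) dv_los / dphi, with dphi in
    radians (valid for dphi <~ 25 deg)."""
    return _sqrt_mu0_rho_dv(n_h2_cm3, dv_kms) / math.radians(dphi_deg)

def b_pos_st21(n_h2_cm3, dv_kms, dphi_deg):
    """ST21 variant: B_u_pos = sqrt(mu0 rho) dv_los / sqrt(2 dphi)."""
    return _sqrt_mu0_rho_dv(n_h2_cm3, dv_kms) / math.sqrt(2 * math.radians(dphi_deg))

# Illustrative (assumed) values: n = 1e4 cm^-3, dv_los = 1 km/s, dphi = 10 deg
for f in (b_pos_dcf, b_pos_st21):
    print(f"{f.__name__}: {f(1e4, 1.0, 10.0) * 1e10:.0f} microGauss")
```

For the same inputs the ST21 estimate is lower than the classical one whenever $\delta\phi < 2$ rad, reflecting its weaker (square-root) dependence on the angle dispersion.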
The total magnetic field is the sum of the underlying magnetic field and the turbulent magnetic field. There are also different approaches trying to relate the dispersion of magnetic field position angles with $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}$. Table \ref{tab:formulaBtot} summarises these angular relations and the corresponding DCF formulas for $B^{\mathrm{tot}}_{\mathrm{pos}}$.
\begin{table}[tbh]
\tiny
\caption{Angular relations and corresponding formulas to derive $B^{\mathrm{tot}}_{\mathrm{pos}}$. \label{tab:formulaBtot}}
\begin{tabular}{ccc}
\hline \noalign {\smallskip}
Relation ($B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}\sim$) & Formula ($ \sqrt{\mu_0 \rho }\delta v_{\mathrm{los}} \times) $ & Reference$^a$ \\
\hline \noalign {\smallskip}
... & $(\frac{(1+3\delta (\tan \phi)^2)^{1/2}}{\delta (\tan \phi)})^b$ & 1 \\%Hei01
$\tan \delta \phi$ & $(1/\tan \delta \phi)^c$ & 2 \\%Fal08
$\delta\phi$ & $1/\delta\phi$ & 3 \\ %
$\delta (\sin \phi)$ & $1/ \delta (\sin \phi)$ & 3 \\ %
$\sin(\delta \phi)$ & $1/ \sin(\delta \phi)$ & 4 \\ %
\hline \noalign {\smallskip}
\end{tabular}
\normalsize{Notes}\\
\normalsize{$^{a}$ References: (1) \citet{2001ApJ...561..800H}; (2) \citet{2008ApJ...679..537F}; (3) \citet{2021ApJ...919...79L}; (4) \citet{2022MNRAS.514.1575C}. }\\
\normalsize{$^{b}$ \citet{2001ApJ...561..800H} assumes the underlying magnetic field is along the POS. The formula derives $B^{\mathrm{tot}}_{\mathrm{3D}}$ instead of $B^{\mathrm{tot}}_{\mathrm{pos}}$. }\\
\normalsize{$^{c}$ \citet{2008ApJ...679..537F} neglected the transverse turbulent field in the total field. The formula derives $B^{\mathrm{tot}}_{\mathrm{pos\|}}$ instead of $B^{\mathrm{tot}}_{\mathrm{pos}}$. }\\
\end{table}
Similarly, all the relations listed in Table \ref{tab:formulaBtot} approximate the dispersion of the field angle of the magnetic field with the dispersion of the position angle, which requires $B^{\mathrm{t}}_{\mathrm{pos\|}} < B^{\mathrm{u}}_{\mathrm{pos}}$. The numerical study by \citet{2021ApJ...919...79L} found that $\delta\phi$ correlates with $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$ or $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}$ under the small-angle approximation, but that $\delta\phi$ estimates $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}$ rather than $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$ for large $\delta\phi$ values. \citet{2021ApJ...919...79L} also suggested that $\delta (\sin \phi)$ provides a better estimation of $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}$ than $\delta\phi$. Because of the scatter in $\delta (\tan \phi)$, the formula $\sqrt{\mu_0 \rho }\delta v_{\mathrm{los}} \times \frac{(1+3\delta (\tan \phi)^2)^{1/2}}{\delta (\tan \phi)} $ does not correctly estimate the total magnetic field strength \citep{2001ApJ...561..800H, 2021ApJ...919...79L}. The angular relation $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}\sim \tan \delta \phi$ has not yet been tested with simulations. In addition, \citet{2022MNRAS.514.1575C} found that $\sin \delta \phi$ and $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}$ are well correlated in regions where the polarization percentage is larger than 20\% of its maximum value in the synthetic polarization maps.
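The behaviour of these angle-based estimators can be illustrated with a toy model in which the underlying POS field is uniform and the perpendicular turbulent component is Gaussian (all values illustrative; this is a sketch, not a reproduction of the cited simulations; the parallel turbulent component is neglected, consistent with the assumption above):

```python
import numpy as np

rng = np.random.default_rng(1)
b_u = 1.0                                        # underlying POS field (arbitrary units)
stats = {}
for ratio in (0.3, 1.0):                         # intrinsic B^t_perp / B^u
    b_t = rng.normal(0.0, ratio * b_u, 200_000)  # perpendicular turbulent component
    phi = np.arctan2(b_t, b_u)                   # position angle about the mean-field direction
    # rms total POS field: sqrt(B_u^2 + <B_t^2>), parallel turbulence neglected
    stats[ratio] = (np.std(phi), np.std(np.sin(phi)), ratio / np.hypot(1.0, ratio))
    print(ratio, stats[ratio])                   # (delta_phi, delta_sin_phi, Bt_perp/Btot_pos)
```

For the small ratio both $\delta\phi$ and $\delta(\sin\phi)$ track the true ratio closely; for the trans-Alfv\'{e}nic case both saturate below $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}$, illustrating why the angle-based estimators degrade at large dispersions.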
\paragraph{The ADF method}\label{sec:adf}
Structure functions and correlation functions have been widely used in astrophysical studies. \citet{2008ApJ...679..537F} introduced the structure function into the study of polarization position angles. Later, the ADF method was developed by \citet{2009ApJ...696..567H} to estimate the POS turbulent-to-ordered field strength ratio $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{o}}_{\mathrm{pos}}$ based on the structure function of magnetic field position angles, where $B^{\mathrm{o}}$ is the ordered field. The ADF approach initially developed in \citet{2009ApJ...696..567H} (Hil09) only corrects for the large-scale ordered field structure (see Section \ref{sec:unorder} for more discussion of the ordered field). Later, the ADF technique was extended by \citet{2009ApJ...706.1504H} (Hou09) and \citet{2016ApJ...820...38H} (Hou16) for application to single-dish and interferometric observational data by additionally taking into account the LOS signal integration over multiple turbulent cells, the beam-smoothing effect, and the interferometer filtering effect. The Hou09 and Hou16 variants of the ADF are in the form of the auto-correlation function, which reduces to the structure function in the small-angle limit. In essence, the ADF of the magnetic field orientations is fitted to derive $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{o}}_{\mathrm{pos}}$ or $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}$. The simplest ADF, accounting only for the ordered field structure, has the form \citep{2009ApJ...696..567H, 2009ApJ...706.1504H, 2016ApJ...820...38H}
\begin{equation} \label{eq:adforder}
1 - \langle \cos \lbrack \Delta \Phi (l)\rbrack \rangle \sim a_2' l^2 + (\frac{B^{\mathrm{t}}_{\mathrm{pos\perp}}}{B^{\mathrm{tot}}_{\mathrm{pos}}})^2_{\mathrm{adf}},
\end{equation}
where $\Delta \Phi (l)$ is the POS angular difference of two magnetic field segments separated by a distance $l$, $a_2' l^2$ is the second-order term of the Taylor expansion of the ordered component of the correlation function, $(\frac{B^{\mathrm{t}}_{\mathrm{pos\perp}}}{B^{\mathrm{tot}}_{\mathrm{pos}}})^2_{\mathrm{adf}} = 1/(1+1/(\frac{B^{\mathrm{t}}_{\mathrm{pos\perp}}}{B^{\mathrm{o}}_{\mathrm{pos}}})^2_{\mathrm{adf}})$, and the subscript ``adf'' indicates ADF-derived parameters. The variation of the ordered field is characterised by a scale $l_d$, and it is assumed that the higher-order terms of the Taylor expansion of the ordered field do not contribute significantly to the ADF at $l<l_d$. The ADF method also assumes that the local turbulent correlation scale (see Section \ref{sec:unturb}) is smaller than $l_d$. Because $\Delta \Phi (l)$ is constrained to be within $[-90, 90]$ degrees (i.e., the field angular dispersion is approximated with the position angular dispersion) and $a_2'$ is defined to be positive (i.e., a positive ordered field contribution), the maximum values of the derivable integrated turbulent-to-ordered and -total strength ratios from the simplest ADF are $\sim$0.76 and $\sim$0.6 \citep{2021ApJ...919...79L}, respectively. Owing to space limitations, we refer readers to the original ADF papers for the more complicated forms of ADFs and their detailed derivations. The validity of the ADF method is further discussed in Section \ref{sec:uncer}.
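As a minimal numerical illustration of Equation \ref{eq:adforder}, the sketch below builds an ADF from a synthetic 1D cut of position angles (a linear ordered-field gradient plus delta-correlated Gaussian turbulence; all amplitudes illustrative) and fits the intercept to recover the integrated turbulent-to-total ratio:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 400)                 # 1D cut through the map (arbitrary length units)
phi = 0.3 * x + rng.normal(0.0, 0.2, x.size)   # linear ordered field + turbulent angles [rad]

lags = np.arange(5, 80, 5)                     # pixel separations
adf = np.array([1.0 - np.mean(np.cos(phi[l:] - phi[:-l])) for l in lags])

# fit 1 - <cos(dPhi)> ~ a2' l^2 + (Bt_perp/Btot_pos)^2 at small l
l_phys = lags * (x[1] - x[0])
a2, b = np.polyfit(l_phys ** 2, adf, 1)        # slope a2' and intercept
print(np.sqrt(b))                              # recovered integrated Bt_perp/Btot_pos (~0.2 here)
```

The intercept recovers the input turbulent angular dispersion of $0.2$ rad, while the $a_2' l^2$ term absorbs the ordered gradient; in real data the fit must additionally be restricted to $l<l_d$.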
\paragraph{The DMA}
Combining Equation \ref{eq:alfven42} and the ratio between the polarization angle structure function $D^{\phi}$ and the velocity centroid structure function $\tilde{D}^{v}$, the DMA estimates the field strength as
\begin{equation}\label{eq:eqdma}
B^{\mathrm{u}}_{\mathrm{pos}} = \sqrt{\mu_0 \rho } f \frac{\tilde{D}^{v}}{D^{\phi}},
\end{equation}
where the factor $f$ is a function of $\gamma$, $\mathcal{M}_{A}$, and the composition of turbulence modes. Note that the DMA assumes that the velocity and magnetic field follow the same scaling; therefore, $f$ does not depend on the distance displacement. \citet{2022arXiv220409731L} listed the formulas for $f$ under different physical conditions (see their Table 3). Note that their structure function $D^{\phi}(l) = \frac{1}{2}\langle 1 - \cos \lbrack 2\Delta \Phi (l)\rbrack \rangle$ is different from the one adopted by the ADF method (the left-hand term of Equation \ref{eq:adforder}). \citet{2022arXiv220409731L} claim that $D^{\phi}(l)$ is applicable to cases of large angle fluctuations, while the $1 - \langle \cos \lbrack \Delta \Phi (l)\rbrack \rangle$ adopted by the ADF method is not.
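Both $D^{\phi}$ and $\tilde{D}^{v}$ are second-order structure functions of mapped quantities. A minimal sketch of how such a structure function is computed from a 1D cut (the toy random-walk signal below merely stands in for a centroid-velocity cut and is purely illustrative):

```python
import numpy as np

def structure_function_1d(f, lags):
    """Second-order structure function <(f(x+l) - f(x))^2> along a 1D cut."""
    return np.array([np.mean((f[l:] - f[:-l]) ** 2) for l in lags])

rng = np.random.default_rng(3)
v_c = np.cumsum(rng.normal(0.0, 1.0, 2048))   # toy centroid-velocity cut (random walk)
lags = np.array([1, 2, 4, 8, 16, 32])
D_v = structure_function_1d(v_c, lags)
print(D_v[1:] / D_v[:-1])                     # ~2 per lag doubling for a random walk (D ~ l)
```

In the DMA, the same operation is applied to the position-angle map (with the $\cos$-based definition above) and the ratio of the two structure functions, evaluated at a common $l$, enters Equation \ref{eq:eqdma}.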
\subsection{Uncertainties in the statistics of field orientations}\label{sec:uncer}
As stated in Section \ref{sec:ratiobori}, the turbulent-to-underlying or -total magnetic field strength ratio is assumed to be traced by statistics of magnetic field position angles. Beyond the uncertainty of this assumption itself, various effects can introduce uncertainties into the statistics of position angles. Here we describe these effects and summarize how they are treated in different approaches. Note that the estimation of gas density from dust emission carries uncertainties in the dust-to-gas ratio,
temperature, dust opacity, and source geometry \citep[e.g., ][]{1983QJRAS..24..267H, 1994A&A...291..943O}, whereas statistics of the turbulent velocity field derived from spectroscopic data are affected by the chemical processes and excitation conditions of particular line tracers \citep[e.g.,][]{1993prpl.conf..163V}, density fluctuations \citep[e.g.,][]{2021ApJ...910..161Y, 2022arXiv220413760Y}, and ordered velocity fields due to star-forming activities (e.g., infall, rotation, and outflows). We do not discuss the uncertainties in the gas density and turbulent velocity field in this paper, as they are beyond the scope of this review.
\subsubsection{Ordered field structure}\label{sec:unorder}
The original DCF method was derived assuming the large-scale ordered field lines are straight. For a non-linear large-scale field, the contribution from the ordered field structure can overestimate the angular dispersion that should be only attributed to turbulence. For highly ordered magnetic fields, the underlying field structure can be fitted with simple hourglass models \citep[hereafter the HG technique. e.g.,][]{2002ApJ...566..925L, 2006Sci...313..812G, 2009ApJ...707..921R, 2014ApJ...794L..18Q} or even more complex models \citep[e.g.,][]{2018ApJ...868...51M}.
\citet{2015ApJ...799...74P} (the spatial filtering technique. Hereafter the Pil15 technique) and \citet{2017ApJ...846..122P} (the unsharp masking technique. Hereafter the Pat17 technique) tried to universally derive the ordered field orientation at each position by smoothing the magnetic field position angle among neighboring positions. \citet{2017ApJ...846..122P} tested the Pat17 technique with a set of Monte Carlo simulations and found that this technique does correctly recover the true angular dispersion if the measurement error is small. By varying the smoothing size, the Pil15 and Pat17 techniques can account for the ordered structure at different scales. %
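A minimal sketch of the unsharp-masking idea, assuming a 1D cut of position angles with a sinusoidal ordered component plus Gaussian turbulence (window size and amplitudes are illustrative; real data additionally require care with the $\pm$90$\degr$ angle wrap):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 500)
phi = 0.8 * np.sin(2 * np.pi * x) + rng.normal(0.0, 0.15, x.size)  # ordered + turbulent [rad]

w = 25                                                   # smoothing window: sets the scale
kernel = np.ones(w) / w                                  # treated as "ordered"
phi_ord = np.convolve(phi, kernel, mode="same")          # boxcar estimate of the ordered field
resid = (phi - phi_ord)[w:-w]                            # drop edges where the boxcar is truncated
print(np.std(phi), np.std(resid))                        # raw dispersion is inflated; residual ~0.15
```

The raw dispersion is dominated by the ordered sinusoid, while the residual dispersion recovers the input turbulent value of $0.15$ rad, which is the quantity the DCF formulas require.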
The ADF method analytically takes into account the ordered field structure (see Section \ref{sec:adf} for details) and has been the most widely used method to remove the contribution of the ordered field to the angular dispersion. With a set of Monte Carlo simulations, \citet{2021ApJ...919...79L} found that the ADF method works well in accounting for the ordered field. Figure \ref{fig:dang_btb0_n} shows the overestimation factor of the angular dispersion due to the POS ordered field structure, which is quantified by the ratio $R_o$ between the directly estimated angular dispersion $\delta \phi_{\mathrm{obs}}$ and the ADF-derived integrated (i.e., without corrections for the LOS signal integration; see Section \ref{sec:unlossi}) turbulent-to-ordered magnetic field strength ratio\footnote{The Hil09 variant of ADF directly estimates the $(\frac{B^{\mathrm{t}}_{\mathrm{pos\perp}}}{B^{\mathrm{o}}_{\mathrm{pos}}})_{\mathrm{adf,int}}$. For the Hou09 and Hou16 variants, the $(\frac{B^{\mathrm{t}}_{\mathrm{pos\perp}}}{B^{\mathrm{o}}_{\mathrm{pos}}})_{\mathrm{adf,int}}$ is derived by dividing the estimated $(\frac{B^{\mathrm{t}}_{\mathrm{pos\perp}}}{B^{\mathrm{o}}_{\mathrm{pos}}})_{\mathrm{adf}}$ by a factor of $\sqrt{N_{\mathrm{adf}}}$, where $N_{\mathrm{adf}}$ is the number of turbulent cells contained in the column of dust probed by the telescope beam.} $(\frac{B^{\mathrm{t}}_{\mathrm{pos\perp}}}{B^{\mathrm{o}}_{\mathrm{pos}}})_{\mathrm{adf,int}}$ from a compilation of previous DCF estimations \citep{2022ApJ...925...30L}. Figure \ref{fig:dang_btb0_n} does not show any strong relation between $R_o$ and $n_{\mathrm{H_2}}$, which contradicts the expectation that the ordered field structure is more prominent in higher-density regions where gravity is more dominant \citep[e.g.,][]{2019FrASS...6....3H}. 
However, most of the estimations shown in Figure \ref{fig:dang_btb0_n} were obtained with the simplest Hil09 variant of the ADF, which accounts for fewer effects and gives more uncertain results. A revisit of the literature data with the more refined Hou09/Hou16 approaches should give a more reliable relation between the ordered field contribution and the gas density. There is a group of data points with $R_o<1$ derived from the Hou09 approach, which would imply a negative (i.e., $a_2'< 0$) and therefore unphysical contribution from the ordered field. This is because the original studies for those data points applied the Hou09 variant of the ADF to interferometric data and/or fitted the ADF within an upper limit of $l$ that is too large. With Monte Carlo simulations, \citet{2019ApJ...877...43L} found that sparse sampling of magnetic field detections can generate an artificial trend of decreasing ADF with increasing $l$ at large $l$, which can explain the unphysical $a_2'< 0$ values. The average $R_o$ for $R_o>1$ values is $2.2\pm1.1$.
A deficiency of the original ADF methods (Hil09/Hou09/Hou16) is the lack of a universal way to define the upper limit of $l$ for fitting the $a_2' l^2$ term. Users therefore usually fit the ADF within an arbitrary range of $l$, which can give very different results depending on the adopted $l$ range. \citet{2022arXiv220409731L} suggested that the ordered field contribution to the ADF can be removed with multi-point ADFs \citep{2019ApJ...874...75C}, which avoids fitting the ordered field contribution, but at the expense of increased noise.
It should be noted that the concept of the ``ordered'' field is vague and not well defined: what constitutes the local ordered field depends on the range of scales (i.e., resolution to maximum size) of the region of interest. For example, the simple hourglass-like magnetic field in G31.41 at lower resolution \citep{2009Sci...324.1408G} shows complex poloidal-like structures at higher resolution \citep{2019A&A...630A..54B}. It should also be noted that a non-linearly ordered field structure is not only due to non-turbulent processes such as gravity, shocks, or collisions, but can also result from larger-scale turbulence, where the curvature of the ordered field generated by pure turbulence depends on $\mathcal{M}_{A}$ \citep[e.g.,][]{2020ApJ...898...66Y}. The above-mentioned techniques (except for the HG technique, where the hourglass shapes are often associated with structures under gravitational contraction) remove the contribution from non-turbulent ordered field structures as well as the contribution from the low-spatial-frequency turbulent field.
\subsubsection{Correlation with turbulence}\label{sec:unturb}
It is assumed that the turbulent magnetic field is characterized by a correlation length $l_\delta$ \citep{1991ApJ...373..509M, 2009ApJ...696..567H}. The turbulent magnetic fields within a turbulent cell of length $l_\delta$ are mostly correlated with each other, while the turbulent fields separated by scales larger than $l_\delta$ are mostly independent.
Hou09 and Hou16 assumed a Gaussian form for the turbulent autocorrelation function and included it in the ADF analysis. Figure \ref{fig:lbeampc_ltcpc} shows the relation between $l_\delta$ derived from the ADF method and the resolution $l_{\mathrm{beam}}$ of the observations from the DCF catalogue compiled by \citet{2022ApJ...925...30L}. There is a clear trend for $l_\delta$ and $l_{\mathrm{beam}}$ to be correlated with each other within a factor of 2. At $l_{\mathrm{beam}} \sim 5$ mpc, there is a group of data points with $l_\delta > 2l_{\mathrm{beam}}$, which mostly correspond to the estimations from \citet{2021ApJ...912..159P}.
The smallest observed $l_\delta$ is $l_{\delta,\mathrm{min}}\sim$0.6 mpc, estimated by \citet{2017ApJ...847...92H} in Serpens-SMM1. This scale is consistent with the lower limit of the observed ambipolar diffusion scale\footnote{Note that there is a factor of $\sqrt{3}$ difference between the correlation length of the autocorrelation function in \citet{2009ApJ...706.1504H} and the correlation length of a Kolmogorov-like turbulent power spectrum.} \citep[$\sim$1.8-17 mpc, ][]{2008ApJ...677.1151L, 2010ApJ...720..603H, 2011ApJ...733..109H} of ion-neutral decoupling, although recently \citet{2018ApJ...862...42T} suggested that the previous observational estimates of ambipolar diffusion scales were biased towards small values due to the imperfect method used to analyse the turbulent spectra, and that the true ambipolar diffusion scale may be much larger (e.g., $\sim$0.4 pc in NGC 6334). The estimated $l_{\delta,\mathrm{min}}$ is an order of magnitude smaller than the scale of the observed lower end of Larson's law \citep[$\sim$10 mpc, ][]{2009ApJ...707L.153P, 2013ApJ...779..185K, 2022arXiv220413760Y}.
What can we learn from the correlation between $l_\delta$ and $l_{\mathrm{beam}}$? One may intuitively think that there is an intrinsic turbulent correlation scale, which is overestimated at insufficient beam resolution. However, in this scenario, the observed angular dispersions at larger scales should be smaller than those at smaller scales due to the signal integration across more turbulent cells along the LOS (see Section \ref{sec:unlossi}), which contradicts observational results \citep{2022ApJ...925...30L}. Thus, it is reasonable to think that the magnetic field and turbulence are correlated at different scales instead of only at the smallest scale \citep{2001ApJ...561..800H}.
Alternatively, we propose that the local turbulence confined by the range of scales of the observations (from the size of the considered region, the effective depth along the LOS, or the filtering scale of interferometric observations, down to the beam resolution) is responsible for generating the local turbulent magnetic field at those scales. Note that the local ordered magnetic field partly consists of the turbulent magnetic field at larger scales, and the two are not distinguishable in observations of limited scale ranges; i.e., the contribution of low-spatial-frequency turbulent fields can be removed by the ordered field term in the ADF analysis. In addition, coarser resolutions cannot resolve the high-spatial-frequency turbulence. Thus, the turbulent correlation scale derived from the ADF method actually corresponds to the local turbulent power spectrum with cutoffs at the resolution and the maximum recoverable scale of the particular observations. This local turbulence correlation scale in observations of molecular clouds should be much smaller than the driving scale of interstellar supersonic turbulence \citep[$\sim$100 pc, e.g.,][]{2004ApJ...615L..45H, 2008ApJ...680..362H}.
Density may be another factor affecting the local turbulence. The properties of the local turbulence depend on the gas component probed by the observations. At smaller scales, the higher densities decrease the mean free path of gas particles and thus the turbulent correlation length, which can also explain the scaling relation seen in Figure \ref{fig:lbeampc_ltcpc}.
\subsubsection{LOS signal integration}\label{sec:unlossi}
The contribution of the turbulent field responsible for the observed polarization is the mean of the $N$ independent samples of the turbulent field along the LOS \citep{1990ApJ...362..545Z, 1991ApJ...373..509M}. The integrated turbulent field seen by observations should therefore be $1/\sqrt{N}$ times the intrinsic turbulent field. Note that the measured angular dispersion in polarization maps is an approximation of $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$ or $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{tot}}_{\mathrm{pos}}$. If both the intrinsic and the integrated turbulent fields dominate the underlying field, the observed and intrinsic angular dispersions should both be close to the value expected for a random field, with little difference between them. Thus, the factor $\sqrt{N}$ is only an upper limit on the underestimation factor for the angular dispersion.
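The $1/\sqrt{N}$ averaging can be verified with a short Monte Carlo experiment (illustrative Gaussian turbulent cells):

```python
import numpy as np

rng = np.random.default_rng(5)
sigma_t = 1.0            # intrinsic turbulent field dispersion of a single cell
for N in (1, 4, 16):     # number of independent turbulent cells along the LOS
    # LOS-averaged turbulent field for 200,000 independent sightlines
    b_int = rng.normal(0.0, sigma_t, (200_000, N)).mean(axis=1)
    print(N, np.std(b_int) * np.sqrt(N))   # ~sigma_t for every N: dispersion scales as 1/sqrt(N)
```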
\citet{2022arXiv220409731L} suggested that the underestimation of the angular dispersion depends on the mixture of turbulence modes and the inclination of the mean field. With pure Alfv\'{e}nic simulations and simulations of equal Alfv\'{e}nic and slow modes, they found that the integrated angular dispersion decreases more slowly than $N^{-1/2}$ when the LOS integration depth $L$ is smaller than the turbulence injection scale $L_{inj}$, but faster than $N^{-1/2}$ when $L \gtrsim L_{inj}$. They also suggested that pure Alfv\'{e}nic fluctuations decrease as $N^{-1}$ instead of $N^{-1/2}$ when the mean field is on the POS, and can quickly vanish if the integration depth is greater than the scale of the studied region. \citet{2021ApJ...919...79L} tested the LOS signal integration effect on angular dispersions with self-gravitating simulations. They found that the angular dispersion is only underestimated by a factor of $<$2 at scales $>$0.1 pc, while it can be significantly underestimated at scales $<$0.1 pc. However, both works only investigated the underestimation of the angular dispersion, which does not necessarily reflect the underestimation of the turbulent magnetic field. Future numerical studies should investigate the effect of LOS signal integration on the turbulent magnetic field as well as on the angular dispersion.
The ADF method derives the POS turbulent correlation length by fitting the ADFs and uses this information to derive the number of turbulent cells $N_{\mathrm{adf}}$ along the LOS under the assumption of identical LOS and POS turbulent correlation lengths. Note that this assumption is not satisfied in anisotropic MHD turbulence. \citet{2021ApJ...919...79L} applied the ADF method to simulations and found that a significant fraction of the observed angular dispersions corrected by $\sqrt{N_{\mathrm{adf}}}$ exceeds the value expected for a random field, which they interpreted as $N_{\mathrm{adf}}$ being overestimated by the ADF method. However, as mentioned above, $\sqrt{N_{\mathrm{adf}}}$ is only an upper limit on the underestimation factor for angular dispersions. Thus their results do not necessarily mean that the ADF method inaccurately accounts for the LOS signal integration. More numerical tests of the ADF method are required to assess its treatment of the signal integration effect.
\citet{2016ApJ...821...21C} proposed an alternative approach (hereafter the CY16 technique) to estimate the number of turbulent cells along the LOS
\begin{equation}
N_{\mathrm{cy16}} = (\frac{\delta v_{\mathrm{los}}}{\delta V_c})^2,
\end{equation}
where $\delta V_c$ is the standard deviation of centroid velocities. The CY16 technique was developed to account for the LOS signal integration at scales larger than the injection scale in non-self-gravitating media, and does not naturally extend to the small-scale, high-density regime where self-gravity is important. \citet{2021ApJ...919...79L} tested the CY16 technique with self-gravitating simulations. They found that the observed angular dispersions in synthetic polarization maps corrected by $\sqrt{N_{\mathrm{cy16}}}$ agree with the angular dispersions in simulation grids at scales $>$0.1 pc, but the correction fails at $<$0.1 pc. The failure of the CY16 technique at $<$0.1 pc scales may be related to the complex velocity fields associated with star formation activities. Similar to the removal of ordered magnetic fields, \citet{2022arXiv220409731L} suggested that the contribution from non-turbulent velocity fields to $\delta V_c$ can be removed with multi-point structure functions.
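A toy Monte Carlo check of the CY16 estimator, assuming each sightline crosses $N$ independent Gaussian turbulent cells (illustrative setup): the centroid velocity averages the cell velocities, so its dispersion drops by $\sqrt{N}$ and the ratio squared recovers $N$.

```python
import numpy as np

rng = np.random.default_rng(6)
N_true = 9                                          # turbulent cells along each LOS
v_cells = rng.normal(0.0, 1.0, (100_000, N_true))   # cell velocities; dv_los = 1 by construction
dv_los = np.std(v_cells)                            # dispersion of all LOS velocity samples
dV_c = np.std(v_cells.mean(axis=1))                 # dispersion of centroid (LOS-mean) velocities
print((dv_los / dV_c) ** 2)                         # ~9: recovers N_true
```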
The non-uniform and complex density and magnetic field structures along the LOS tend to reduce the measured angular dispersion \citep[e.g., ][]{1990ApJ...362..545Z}. The distribution of the LOS grain alignment efficiency can also affect the derived angular dispersion \citep[e.g.,][]{2001ApJ...559.1005P}. The reduction factor of angular dispersions due to these effects is highly dependent on the physical conditions of individual sources and cannot be solved universally.
\subsubsection{Observational effects}\label{sec:unobs}
The observed sky signals are limited by the angular resolution and the sampling of the particular observations. Interferometric observations are further affected by the filtering of large-scale signals. %
As discussed in Section \ref{sec:unturb}, the magnetic field is likely perturbed by turbulence of different wavenumbers at different scales. In this case, the loss of turbulent power due to beam smoothing can underestimate the angular dispersion of magnetic field position angles and thus overestimate the magnetic field strength \citep{2001ApJ...561..800H}. \citet{2008ApJ...679..537F} investigated the resolution effect with numerical simulations and suggested that the ratio between the field strength derived at different spatial resolutions ($R_s$) and the intrinsic field strength at infinite resolution follows an empirical relation $(1+C/R_s^{0.5})$, where $C$ is a constant obtained by fitting the relation. It is unclear whether this empirical relation is applicable to observations or other simulations. The ADF method is another approach that attempts to universally solve the beam-smoothing effect, analytically introducing a Gaussian profile to describe the telescope beam. \citet{2021ApJ...919...79L} tested the ADF method with simulations and found that it does correctly account for the beam-smoothing effect. \citet{2021ApJ...919...79L} also suggested that the information on the turbulent magnetic field is not recoverable if the polarization source is not well resolved (i.e., if the size of the polarization detection area is smaller than 2-4 times the beam size).
The minimal separation of antenna pairs in interferometric observations limits the largest spatial scale that the observations are sensitive to. This filtering effect also introduces uncertainties in the estimation of the turbulent magnetic field. The ADF method attempts to analytically solve this problem by modeling the dirty beam of interferometers with a twin Gaussian profile. With their numerical test, \citet{2021ApJ...919...79L} found the ADF method correctly accounts for this large-scale filtering effect as well.
For observations of polarized starlight extinction or low signal-to-noise-ratio dust polarization emission, the polarization detection is sparsely sampled. \citet{2016A&A...596A..93S} has suggested that this sparse sampling effect can introduce jitter-like features in the ADFs and affect the accuracy of the ADF analysis. Although there are no universal solutions to correct for the effect of sparse sampling, the uncertainties of the ADFs due to the pure statistics can be modeled with Monte Carlo analyses \citep[e.g.,][]{2019ApJ...877...43L}. With a simple Monte Carlo test, \citet{2019ApJ...877...43L} found that the ADF of sparsely sampled magnetic field orientations is underestimated at large distance intervals and has larger scatters compared to the ADF of uniformly sampled field orientations. The sparse sampling not only affects the ADF, but can also affect the velocity structure function (VSF). \citet{2021ApJ...907L..40H} found similar jitter-like features on the VSF of sparsely sampled stars and estimated the uncertainty of the VSFs with a Monte Carlo random sampling analysis.
\subsubsection{Projection effects}\label{sec:unproj}
The angular relations in Section \ref{sec:ratiobori} are originally derived in 3D space, where the orientation of the 3D magnetic field is known. Dust polarization observations only trace the POS field orientation; thus, the DCF method can only measure the POS magnetic field strength. There have been different attempts to reconstruct the 3D magnetic field from DCF estimations.
The 3D magnetic field can be derived by including the inclination angle of the 3D field in the DCF equations \citep[e.g.,][]{2004ApJ...616L.111H, 2022arXiv220409731L}. Note that the inclination angle of the 3D field is only meaningful when the underlying field is prominent (i.e., $\mathcal{M}_{A} \lesssim 1$). \citet{2002ApJ...569..803H} proposed a technique to derive the inclination angle from the combination of polarimetric data and ion-to-neutral line width ratios. However, \citet{2011ASPC..449..213H} later suggested that this technique cannot be widely applied due to the degeneracy between the inclination angle and the POS field strength. \citet{2019MNRAS.485.3499C} developed a technique to estimate the inclination angle from the statistical properties of the observed polarization fraction, but this technique is subject to large uncertainties (up to 30$^{\circ}$). On the other hand, \citet{2021ApJ...915...67H} suggested that the field inclination angle can be derived from the anisotropy of the intensity fluctuations in spectroscopic data. However, this approach requires sophisticated multi-parameter fittings of idealized datasets, which limits its application to observations. The applicability of the technique in the self-gravitating regime is also questionable, since gravity-induced motions can significantly affect the pure velocity statistics \citep{2022MNRAS.513.2100H}.
The 3D field strength can also be estimated by combining the POS and LOS components of the magnetic field. This can be done by combining DCF estimations and Zeeman estimations of the same material, where Zeeman observations are the only way to directly derive the LOS magnetic field strength. Recently, \citet{2018A&A...614A.100T} proposed a new method to estimate the LOS magnetic field strength based on Faraday rotation measurements, which can also be used to reconstruct the 3D magnetic field in combination with the POS magnetic field estimations \citep[e.g.,][]{2019A&A...632A..68T}.
In most cases, the information on the inclination angle and the LOS magnetic field is missing, or the measured POS and LOS magnetic fields do not correspond to the same material. One may still obtain an estimate of the 3D field from the POS field based on statistical relations. \citet{2004ApJ...600..279C} suggested that the statistical relation between the 3D and POS underlying magnetic field is $B^{\mathrm{u}}_{\mathrm{3D}} = \frac{4}{\pi}B^{\mathrm{u}}_{\mathrm{pos}}$ for a randomly inclined field between 0 and $\pi/2$. Note that this statistical relation only applies to a collection of DCF estimations where the 3D field orientation is expected to be random, but should not be applied to individual observations. For an isotropic turbulent magnetic field, the relation between the 3D and POS turbulent field
is $B^{\mathrm{t}}_{\mathrm{3D}} = \sqrt{\frac{3}{2}}B^{\mathrm{t}}_{\mathrm{pos}}$. Therefore, the total field has the statistical relation $B^{\mathrm{tot}}_{\mathrm{3D}} = f B^{\mathrm{tot}}_{\mathrm{pos}}$, where the factor $f$ should lie between $4/\pi\approx1.27$ and $\sqrt{3/2}\approx1.22$, depending on whether the underlying or the turbulent field is more dominant. For an anisotropic turbulent field, the statistical relation between the 3D and POS turbulent or total field should depend on the extent of the anisotropy.
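The statistical factors $4/\pi$ and $\sqrt{3/2}$ can be checked with a quick Monte Carlo over randomly oriented fields; the mean POS projection of a random unit vector gives the underlying-field factor, and the rms projection gives the isotropic-turbulence factor:

```python
import numpy as np

rng = np.random.default_rng(7)
v = rng.normal(size=(500_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)   # random 3D unit field orientations

b_pos = np.hypot(v[:, 0], v[:, 1])              # POS (x-y) projected magnitude
print(1.0 / np.mean(b_pos))                     # ~4/pi = 1.27 (underlying field, random inclination)
print(1.0 / np.sqrt(np.mean(b_pos ** 2)))       # ~sqrt(3/2) = 1.22 (isotropic turbulent field, rms)
```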
\subsection{Simulations and correlation factors}\label{sec:cor}
Due to the uncertainties in the assumptions of the DCF method (see Section \ref{sec:assump}) and in the statistics of field orientations (see Section \ref{sec:uncer}), a correction factor is required to correct the magnetic field strengths estimated from the different modified DCF formulas. Several studies \citep{2001ApJ...546..980O, 2001ApJ...559.1005P, 2001ApJ...561..800H, 2008ApJ...679..537F, 2021AA...647A.186S, 2021ApJ...919...79L, 2022MNRAS.514.1575C} have numerically investigated the correction factor $Q_c$ under different physical conditions with 3D ideal compressible MHD simulations. Table \ref{tab:qcf} summarizes these numerically derived correction factors. Recently, \citet{2022arXiv220409731L} attempted to analytically derive the correction factor\footnote{Here we use $f$ for the analytically derived correction factor to distinguish it from the numerically derived correction factor $Q_c$.} $f$ for their proposed DMA formula.
\begin{table}[tbh]
\tiny
\caption{Correction factors and corresponding simulation parameters. \label{tab:qcf}}
\begin{tabular}{cccccccc}
\hline \noalign {\smallskip}
& $\overline{Q_c}$ (range) & Formula ($\sqrt{\mu_0 \rho }\delta v_{\mathrm{los}} \times $) & Size (pc) & $n_{\mathrm{0}}$ or $n$ (cm$^{-3}$) & Gravity & $\mathcal{M}_{A0}$ or $\mathcal{M}_{A}$ $^f$ & Ref.$^g$ \\ %
\hline \noalign {\smallskip}
$B^{\mathrm{u}}_{\mathrm{pos}}$ & 0.5 (0.46$-$0.51)$^a$ & $1/\delta\phi_{\mathrm{obs}}$ &8 & $10^2$ & Yes & 0.7 & 1 \\ %
$B^{\mathrm{u}}_{\mathrm{pos}}$ & 0.4 (0.29$-$0.63) & $1/\delta\phi_{\mathrm{obs}}$ &1$^b$ & $< 10^5$ $^b$ & Yes & $\gtrsim$1 & 2 \\ %
$\sqrt{B^{\mathrm{u}}_{\mathrm{3D}}B^{\mathrm{tot}}_{\mathrm{3D}}}$ & (0.4$-$2.5)$^c$ & $\frac{(1+3\delta (\tan \phi_{\mathrm{obs}})^2)^{1/4}}{\delta (\tan \phi_{\mathrm{obs}})} $ & scale-free & ... & No & 0.8$-$14 & 3 \\ %
$B^{\mathrm{tot}}_{\mathrm{pos\|}}$ & (0.24$-$1.41)$^d$ & $(1/\tan \delta \phi_{\mathrm{obs}})^d$ & scale-free & ... & No & 0.7-2.0 & 4 \\%Fal08. 0-90&
$B^{\mathrm{u}}_{\mathrm{pos}}$ & ... & $1/\sqrt{2\delta\phi_{\mathrm{obs}}}$ & scale-free & ... & No & 0.1-2.0 & 5 \\ %
$B^{\mathrm{u}}_{\mathrm{pos}}$ & 0.28$^a$ & $1/\delta\phi_{\mathrm{obs}}$ & 1$-$0.2$^e$ & $\sim 10^4 - 10^6$ & Yes & $\sim$0.7-2.5 & 6 \\ %
$B^{\mathrm{tot}}_{\mathrm{pos}}$ & 0.62 & $1/\delta\phi_{\mathrm{obs}}$ & 1$-$0.2$^e$ & $\sim 10^4 - 10^6$ & Yes & $\sim$0.7-2.5 & 6 \\
$B^{\mathrm{tot}}_{\mathrm{pos}}$ & 0.53 & $1/\delta(\sin \phi_{\mathrm{obs}})$ & 1$-$0.2$^e$ & $\sim 10^4 - 10^6$ & Yes & $\sim$0.7-2.5 & 6 \\
$B^{\mathrm{tot}}_{\mathrm{pos}}$ & 0.21 & $(\frac{B^{\mathrm{tot}}_{\mathrm{pos}}}{B^{\mathrm{t}}_{\mathrm{pos\perp}}})_{\mathrm{adf,int}}$ & 1$-$0.2$^e$ & $\sim 10^4 - 10^6$ & Yes & $\sim$0.7-2.5 & 6 \\
$B^{\mathrm{tot}}_{\mathrm{pos}}$ & 0.39 & $(\frac{B^{\mathrm{tot}}_{\mathrm{pos}}}{B^{\mathrm{t}}_{\mathrm{pos\perp}}})_{\mathrm{adf}}$ & 1$-$0.2$^e$ & $\sim 10^4 - 10^6$ & Yes & $\sim$0.7-2.5 & 6 \\
\hline \noalign {\smallskip}
\end{tabular}
\normalsize{Notes}\\
\normalsize{$^{a}$ $\delta\phi_{\mathrm{obs}}<25 \degr$.}\\
\normalsize{$^{b}$ The simulations in \citet{2001ApJ...559.1005P} have a box size of 6.25 pc and initial $n_{\mathrm{0}} = 320$ cm$^{-3}$. They selected three 1 pc clumps for study.}\\
\normalsize{$^{c}$ After correction for the energy non-equipartition and with the assumption that the magnetic field is on the POS \citep{2001ApJ...561..800H}.}\\
\normalsize{$^{d}$ The formula in \citet{2008ApJ...679..537F} is to derive the $B^{\mathrm{tot}}_{\mathrm{pos\|}}$, but the correction factors refer to the ratio between the derived $B^{\mathrm{tot}}_{\mathrm{pos\|}}$ and the initial input $B^{\mathrm{u}}_{\mathrm{3D}}$. }\\
\normalsize{$^{e}$ The simulations in \citet{2021ApJ...919...79L} have a box size of 1-2 pc. The correction factors are derived within sub-spheres of different radii. }\\
\normalsize{$^{f}$ All the $\mathcal{M}_{A}$ were derived using the mean-field, which did not consider the ordered field structure. The ordered field contribution should be more significant for self-gravitating models.}\\
\normalsize{$^{g}$ References: (1) \citet{2001ApJ...546..980O}; (2) \citet{2001ApJ...559.1005P}; (3) \citet{2001ApJ...561..800H}; (4) \citet{2008ApJ...679..537F}; (5) \citet{2021AA...656A.118S}; (6) \citet{2021ApJ...919...79L}. For \citet{2001ApJ...559.1005P} and \citet{2021ApJ...919...79L}, the parameters reported correspond to sub-regions of the simulation at the studied time snapshot, while the parameters reported for other references are the initial parameter of the whole simulation box. }\\
\end{table}
For the most widely used formula $B^{\mathrm{u}}_{\mathrm{pos}} \sim \sqrt{\mu_0 \rho }\frac{\delta v_{\mathrm{los}}}{\delta\phi_{\mathrm{obs}}}$, \citet{2001ApJ...546..980O} made the first attempt to quantify its uncertainty and derived a correction factor of $\sim$0.5 for an initially slightly sub-Alfv\'{e}nic ($\mathcal{M}_{A0}$=0.7) model with $n_0 \sim 10^2$ cm$^{-3}$ and a box length of 8 pc. Later, \citet{2001ApJ...559.1005P} found $Q_c \sim 0.4$ for three selected $\sim$1 pc and $n < 10^5$ cm$^{-3}$ clumps in an initially slightly super-Alfv\'{e}nic model with a box length of 6.25 pc. Recently, \citet{2021ApJ...919...79L} expanded the analysis to high-density ($\sim 10^4 - 10^6$ cm$^{-3}$) trans-Alfv\'{e}nic clumps/cores at 1-0.2 pc scales and obtained $Q_c \sim 0.28$ for several strong-field models. \citet{2021ApJ...919...79L} also found $Q_c \ll 0.28$ in regions at $<$0.1 pc scales or with $n > 10^6$ cm$^{-3}$. Both \citet{2001ApJ...546..980O} and \citet{2021ApJ...919...79L} proposed that their correction factors are only valid when $\delta\phi_{\mathrm{obs}}<25 \degr$. Applying the same $\delta\phi_{\mathrm{obs}}<25 \degr$ criterion to the results of \citet{2001ApJ...559.1005P}, their correction factor changes to $Q_c \sim 0.31$. From these three studies, there is a clear decreasing trend of $Q_c$ with increasing density and decreasing scale in self-gravitating regions. This could be due to reduced turbulent correlation scales and more field tangling at higher densities, which lead to a more significant LOS signal averaging effect.
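The simplest estimator discussed above can be sketched in SI units as follows; the default $Q_c = 0.28$ (valid for dense trans-Alfv\'{e}nic clumps with $\delta\phi_{\mathrm{obs}}<25\degr$) and the mean molecular weight of 2.8 per H$_2$ molecule are assumptions, and the function name is ours.

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability [SI]
M_H = 1.6735575e-27           # hydrogen atom mass [kg]
MU_H2 = 2.8                   # mean molecular weight per H2 (assumption)

def dcf_bpos(n_h2_cm3, dv_los_kms, dphi_obs_deg, q_c=0.28):
    """Classical DCF estimate of the POS underlying field strength [Gauss].

    n_h2_cm3     : H2 number density [cm^-3]
    dv_los_kms   : LOS velocity dispersion [km/s]
    dphi_obs_deg : observed polarization-angle dispersion [deg], < ~25 deg
    q_c          : correction factor (0.28 for dense clumps; an assumption
                   under other physical conditions)
    """
    rho = MU_H2 * M_H * n_h2_cm3 * 1e6   # mass density [kg m^-3]
    dv = dv_los_kms * 1e3                # [m/s]
    dphi = math.radians(dphi_obs_deg)    # [rad]
    b_tesla = q_c * math.sqrt(MU0 * rho) * dv / dphi
    return b_tesla * 1e4                 # Tesla -> Gauss
```

For example, $n_{\mathrm{H_2}}=10^5$ cm$^{-3}$, $\delta v_{\mathrm{los}}=0.5$ km s$^{-1}$, and $\delta\phi_{\mathrm{obs}}=10\degr$ give a field strength of order a few hundred $\mu$G.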
\citet{2001ApJ...561..800H} proposed a formula to estimate the geometric mean of $B^{\mathrm{u}}_{\mathrm{3D}}$ and $B^{\mathrm{tot}}_{\mathrm{3D}}$ ($\sqrt{B^{\mathrm{u}}_{\mathrm{3D}}B^{\mathrm{tot}}_{\mathrm{3D}}}$, see Table \ref{tab:qcf}). However, their formula includes $\delta (\tan \phi_{\mathrm{obs}})$, which was found to have a large scatter \citep{2021ApJ...919...79L}. Also, the term $\sqrt{B^{\mathrm{u}}_{\mathrm{3D}}B^{\mathrm{tot}}_{\mathrm{3D}}}$ lacks a clear physical meaning. Thus, the geometric mean formula is not recommended for estimating the magnetic field strength.
\citet{2008ApJ...679..537F} proposed the formula $B^{\mathrm{tot}}_{\mathrm{pos\|}} \sim \sqrt{\mu_0 \rho }\frac{\delta v_{\mathrm{los}}}{\tan \delta \phi_{\mathrm{obs}}}$ and derived a correction factor for the total magnetic field strength for the first time. They compared the estimated $B^{\mathrm{tot}}_{\mathrm{pos\|}}$ with the initial input $B^{\mathrm{u}}_{\mathrm{3D}}$ in their non-self-gravitating simulations and suggested that $B^{\mathrm{tot}}_{\mathrm{pos\|}}$ is slightly larger than $B^{\mathrm{u}}_{\mathrm{3D}}$ when the magnetic field is in the POS. They also found $B^{\mathrm{tot}}_{\mathrm{pos\|}}<B^{\mathrm{u}}_{\mathrm{3D}}$ for large inclination angles. It is unclear how accurate the estimated $B^{\mathrm{tot}}_{\mathrm{pos\|}}$ values are compared to the total field strengths in their simulations. Recently, \citet{2022MNRAS.514.1575C} adopted the same formula $\sqrt{\mu_0 \rho }\frac{\delta v_{\mathrm{los}}}{\tan \delta \phi_{\mathrm{obs}}}$ but suggested that it estimates the underlying field strength rather than the total field strength. They also suggested that $\sqrt{\mu_0 \rho }\frac{\delta v_{\mathrm{los}}}{\sin \delta \phi_{\mathrm{obs}}}$ gives a better estimate of the total field strength. They tested the formulas with self-gravitating and initially trans-Alfv\'{e}nic ($\mathcal{M}_{A0} = 1.04-1.45$) simulations. Instead of using the area-averaged parameters ($\rho$, $\delta v_{\mathrm{los}}$, and $\delta\phi_{\mathrm{obs}}$) in the calculation, \citet{2022MNRAS.514.1575C} suggested that averaging the local pseudo-field strength calculated from the local physical parameters gives a better estimate of the true field strength.
They proposed that the local gas density can be estimated with the velocity fitting method \citep{2014ApJ...794..165S} or the equilibrium method \citep{1978ApJ...220.1051E}, which, in combination with the local velocity dispersion and angular difference measurements, gives field strength estimates accurate to within a factor of 2 at 1-20 pc scales and $n\sim10^{1.5} - 10^4$ cm$^{-3}$ when $\gamma>30\degr$. They also found that the correction factor decreases with increasing density and decreasing scale if the analytic gas density of the simulation is adopted in the field strength estimation, in agreement with the trend found in the numerical studies of the simplest DCF formula (see the discussion above).
\citet{2021ApJ...919...79L} also investigated the correction factor for the total magnetic field strength. They found $Q_c \sim 0.62$ for $B^{\mathrm{tot}}_{\mathrm{pos}} \sim \sqrt{\mu_0 \rho }\frac{\delta v_{\mathrm{los}}}{\delta\phi_{\mathrm{obs}}}$ and $Q_c \sim 0.53$ for $B^{\mathrm{tot}}_{\mathrm{pos}} \sim \sqrt{\mu_0 \rho }\frac{\delta v_{\mathrm{los}}}{\delta (\sin \phi_{\mathrm{obs}})}$. Although $\sqrt{\mu_0 \rho }\frac{\delta v_{\mathrm{los}}}{\delta\phi_{\mathrm{obs}}}$ is often used to derive the underlying field strength, it is more likely correlated with the total field strength when the angular dispersion is large. Note that the correction factors in \citet{2021ApJ...919...79L} only apply at the scales of clumps and cores with densities greater than 10$^4$ cm$^{-3}$ and are not applicable at larger scales (e.g., ISM or cloud scales). Also note that the simulations in \citet{2021ApJ...919...79L} have physical conditions similar to clustered star-forming regions. The correction factor for isolated low-mass star-forming regions could be larger than those reported by \citet{2021ApJ...919...79L} at the same scales due to lower densities. \citet{2021ApJ...919...79L} did not find significant differences among the correction factors for different inclination angles.
\citet{2021AA...647A.186S} and \citet{2021AA...656A.118S} proposed a new formula $B^{\mathrm{u}}_{\mathrm{pos}} = \sqrt{\frac{\mu_0 \rho } {2\delta\phi_{\mathrm{obs}}}} \delta v_{\mathrm{los}}$ and tested it with scale-free non-self-gravitating models. They claimed that their formula does not need a correction factor. However, several assumptions in the derivation of the ST21 formula are approximations (see Section \ref{sec:assump}), and there are also uncertainties in the statistics of magnetic field position angles (see Section \ref{sec:uncer}). Therefore, correction factors should still be required for this method to compensate for these uncertainties. More tests are needed to understand the validity of the ST21 formula under different physical conditions (e.g., in high-density self-gravitating media).
Other than in situations where the ADF method is improperly applied to observations (e.g., obtaining a negative ordered field contribution or only adopting $a_2' l^2$ when $l>l_d$), the uncertainty of the ADF method may mainly come from the maximum derivable values of the integrated turbulent-to-ordered or -total field strength ratio (see Section \ref{sec:adf}), which underestimates the turbulent-to-ordered or -total field strength ratio and overestimates the field strength. \citet{2021ApJ...919...79L} estimated the correction factor for the total field strength derived from the ADF method in trans-Alfv\'{e}nic clumps/cores. They were unable to derive the strength of the non-linearly ordered field in their simulations and thus did not compare the ADF-derived ordered field strength with simulation values.
A recent and important modification of the DCF method is the DMA \citep{2022arXiv220409731L}, which theoretically derives the formula $B^{\mathrm{u}}_{\mathrm{pos}} = \sqrt{\mu_0 \rho } f \frac{\tilde{D}^{v}}{D^{\phi}}$ in the non-self-gravitating regime. The analytical correction factor $f$ is a function of $\gamma$, $\mathcal{M}_{A}$, and the fraction of turbulence modes. \citet{2022arXiv220409731L} listed the asymptotic forms of $f$ and the DMA formula in typical conditions of the ISM and molecular clouds (see their Table 3). They tested the DMA formula with a set of non-self-gravitating simulations and found that the analytically and numerically derived correction factors agree well with each other in typical interstellar conditions. They suggested a pronounced dependence of $f$ on the mean field inclination angle and the fraction of turbulence modes in molecular clouds. However, both parameters are difficult to obtain observationally, which limits the application of the DMA to observational data. A further extension of the DMA to include self-gravity is essential to increase its accuracy in determining magnetic field strengths in self-gravitating molecular clouds.
\subsection{Observational DCF estimations in star-forming regions}\label{sec:dcfobs}
The original DCF method and its modified forms have been widely applied to observations of magnetic fields in star-forming regions to estimate the magnetic field strength. Statistical studies of DCF estimations are of significant value to extend our understanding of the role of magnetic fields in star formation \citep[e.g.,][]{2019FrASS...6...15P, 2021ApJ...917...35M}. Recently, \citet{2022ApJ...925...30L} compiled all the previous DCF estimations published between 2000 and June 2021 from polarized dust emission observations within star-forming molecular clouds. Similarly, \citet{2022arXiv220311179P} made a compilation of all types of DCF measurements published between 2001 and May 2021. Here we briefly summarise the previous observational DCF estimations.
\subsubsection{Comparing magnetic field with gravity}
\paragraph{$B-n$ relation}
During the contraction of molecular clouds, gravity can compress and amplify the magnetic field. The power-law index of the $B-n$ relation ($B \propto n^{j}$) characterises the dynamical importance of the magnetic field during cloud collapse. In the case of extremely weak magnetic fields, where the flux-frozen gas collapses isotropically, the relation $B \propto n^{2/3}$ holds for a self-gravitating cloud core during its temporal evolution \citep{1966MNRAS.133..265M}. In this case, the radial component of the magnetic field also has the 2/3 scaling dependence on the gas density at any time snapshot, whereas the tangential component does not follow this scaling relation. For stronger fields, the density increases faster than the magnetic field due to ambipolar diffusion at higher densities, which results in shallower power-law slopes \citep[e.g., $j\lesssim0.5$, ][]{1999ASIC..540..305M}.
However, the temporal $B-n$ relation of a single cloud is not obtainable observationally due to the long evolutionary time scale. Studies of the spatial $B-n$ relation for a single cloud \citep[e.g.,][]{2015Natur.520..518L} are also rare. Instead, observational $B-n$ studies usually focus on the spatial $B-n$ relation for an ensemble of star-forming regions at different evolution stages and different scales. \citet{2010ApJ...725..466C} made a pioneering study of the spatial $B-n$ relation based on the Bayesian analysis of a collection of Zeeman observations. They found that the magnetic field does not scale with density at $n_{\mathrm{H_2}} < 300$ cm$^{-3}$, but scales with density as $B \propto n^{0.65}$ at $n_{\mathrm{H_2}} > 300$ cm$^{-3}$. \citet{2021ApJ...917...35M} compiled the DCF estimation in 17 dense cores and reported $B \propto n^{0.66}$. With compilations of DCF estimations, \citet{2022ApJ...925...30L} and \citet{2022arXiv220311179P} found a similar trend to the Zeeman results in that the magnetic field does not scale with density at lower densities, but scales with density at higher densities. Due to the large scatters and the uncertainty in correction factors, they did not report the critical density and magnetic field strength for the transition. \citet{2022ApJ...925...30L} reported $B \propto n^{0.57}$ with a simple power-law fit for the high-density regime.
Despite the progress in observational $B-n$ studies, concerns have been raised on whether the $B-n$ relation from a collection of different sources can be compared with model predictions for individual sources \citep{2021Galax...9...41L}. For the Zeeman observations, \citet{2015MNRAS.451.4384T} and \citet{2020ApJ...890..153J} found that adopting observational uncertainties of $n$ other than the factor of 2 adopted by \citet{2010ApJ...725..466C} can affect the fitted slope of the $B-n$ relation, calling into question the validity of the Bayesian analysis in \citet{2010ApJ...725..466C}. \citet{2015MNRAS.451.4384T} further found that the samples collected in \citet{2010ApJ...725..466C} are preferentially non-spherical, which is inconsistent with the $B \propto n^{2/3}$ scaling. The DCF-derived $B - n$ relation is also very uncertain due to the scatter in the DCF estimations and the intrinsic $B \propto n^{0.5}$ dependence of the DCF method. We do not aim to present a detailed review of the $B-n$ relation; we refer readers to \citet{2015MNRAS.451.4384T}, \citet{2019FrASS...6...66C}, \citet{2019FrASS...6....5H}, \citet{2021Galax...9...41L}, and \citet{2022arXiv220311179P} for additional detailed discussions.
\paragraph{Mass-to-flux-ratio to critical value} \label{sec:dcflambda}
The relative importance between the magnetic field and the gravity of individual sources is usually parameterized by the magnetic critical parameter $\lambda$ (i.e., mass-to-flux-ratio in units of the critical value). The critical value of the mass-to-flux-ratio is model-dependent \citep[e.g., ][]{1966MNRAS.132..359S, 1976ApJ...210..326M, 1978PASJ...30..671N}. The magnetic critical parameter for an isothermal disk-like structure is given by \citep{1978PASJ...30..671N, 2004ApJ...600..279C}
\begin{equation}\label{eq:lambdac04}
\lambda = \mu_{\mathrm{H_2}} m_{\mathrm{H}} \sqrt{\mu_0\pi G} \frac{N_{\mathrm{H_2}}}{B} \sim 7.6 \times 10^{-21} \frac{N_{\mathrm{H_2}}/(\mathrm{cm}^{-2})}{B/(\mu G)},
\end{equation}
where $G$ is the gravitational constant, $\mu_{\mathrm{H_2}}$ is the mean molecular weight per hydrogen molecule, and $m_{\mathrm{H}}$ is the atomic mass of hydrogen. The magnetic critical parameter for a spherical structure with a density profile of $n \propto r^{-i}$ is given by \citep{2022ApJ...925...30L}
\begin{equation} \label{eq:lambliu22}
\lambda = \mu_{\mathrm{H_2}} m_{\mathrm{H}} \sqrt{1.5\mu_0\pi G/k_i} \frac{N_{\mathrm{H_2}}}{B},
\end{equation}
where $k_i = (5-2i)/(3-i)$. Equation \ref{eq:lambliu22} reduces to $\lambda \sim 8.7 \times 10^{-21} \frac{N_{\mathrm{H_2}}/(\mathrm{cm}^{-2})}{B/(\mu G)}$ when $i\sim$1.83 \citep{2022ApJ...925...30L}. $\lambda > 1$ indicates that gravity dominates over the magnetic field (i.e., magnetically super-critical), and vice versa. Alternatively, the relative importance of the magnetic field and gravity can also be assessed with the magnetic virial parameter $\alpha_{\mathrm{B}} = 1/\lambda $ or with their energy ratio. The magnetic critical parameter or the magnetic virial parameter can also be expressed as a function of the number density $n_{\mathrm{H_2}}$ and radius $r$ through the relation $N_{\mathrm{H_2}} = 4n_{\mathrm{H_2}}r/3$.
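A minimal sketch of the magnetic critical parameter in the observational units of Equations \ref{eq:lambdac04} and \ref{eq:lambliu22}; the function name and interface are ours.

```python
def magnetic_critical_parameter(n_col_cm2, b_ugauss, coeff=7.6e-21):
    """Mass-to-flux ratio in units of the critical value.

    n_col_cm2 : H2 column density [cm^-2]
    b_ugauss  : magnetic field strength [microGauss]
    coeff     : 7.6e-21 for the disk-like case;
                8.7e-21 for a sphere with density-profile index i ~ 1.83
    """
    return coeff * n_col_cm2 / b_ugauss
```

For example, a cloud at the transition column density $\sim3.4\times10^{21}$ cm$^{-2}$ with a field of a few tens of $\mu$G sits near $\lambda \sim 1$.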
Statistical DCF studies have suggested that while molecular clouds are magnetically sub-critical \citep{2016AA...586A.138P}, the dense molecular cores within clouds are super-critical \citep{2021ApJ...917...35M}. The recent, more complete DCF compilation by \citet{2022ApJ...925...30L} found a clear trend of increasing $\lambda$ with increasing $N_{\mathrm{H_2}}$ (see Figure \ref{fig:lamb_Ncol}), where the average state transitions from sub-critical to super-critical at $\sim3.4 \times 10^{21}$ cm$^{-2}$. This trend appears to agree with the prediction of magnetic-field-controlled star formation theories \citep{2006ApJ...646.1043M}, where magnetically sub-critical molecular clouds gradually form trans-to-super-critical substructures that collapse. The collapse is more dynamical in higher density regions. The dissipation of magnetic flux at higher densities may be due to ambipolar diffusion \citep{1999ASIC..540..305M} or magnetic reconnection \citep{1999ApJ...517..700L}. Mass accumulation along magnetic field lines can also increase the mass-to-flux ratio at higher densities. Despite the general trend seen in Figure \ref{fig:lamb_Ncol}, the samples collected by \citet{2022ApJ...925...30L} are mostly from different regions; future multi-scale studies of the same region would be of great significance. High-mass star-forming regions tend to have higher column densities than low-mass star-forming regions at the same scales \citep{2014prpl.conf..149T}, thus high-mass star formation may be more magnetically super-critical than low-mass star formation within molecular clouds. There is only one DCF estimation for high-mass star-forming clouds at 10 pc scales so far \citep[Orion, ][]{2016AA...586A.138P}. More DCF estimations would help us better understand the dynamical states of massive star formation at cloud scales. 
With better calibrated modified DCF methods (e.g., the extension of the DMA to the self-gravitating regime) and observational constraints for the physical parameters required for the DCF estimations, future magnetic criticality studies in molecular clouds could shed light on constraining the critical density in specific clouds and on comparing the mass-to-flux ratio of sources at different evolutionary stages.
\subsubsection{Comparing magnetic field with turbulence}\label{sec:dcfobs_BVSturb}
The relative importance of the underlying magnetic field and the turbulence in individual sources is usually parameterized by the Alfv\'{e}nic Mach number
\begin{equation}\label{eq:alfven}
\mathcal{M}_{A} = \frac{\delta v_{\mathrm{3D}}\sqrt{\mu_0 \rho }}{B^{\mathrm{u}}_{3D}}.
\end{equation}
If there is an equipartition between the turbulent magnetic and kinetic energies, Equation \ref{eq:alfven} reduces to $\mathcal{M}_{A} = B^{\mathrm{t}}_{\mathrm{3D}}/B^{\mathrm{u}}_{3D}$. The ratio $B^{\mathrm{t}}_{\mathrm{3D}}/B^{\mathrm{u}}_{3D}$ can be derived from the statistics of the observed polarization angles if the mean field inclination angle and the anisotropy of the turbulent field are known \citep{2022arXiv220409731L}. With the relations $B^{\mathrm{u}}_{\mathrm{3D}} = f_u B^{\mathrm{u}}_{\mathrm{pos}}$ and $B^{\mathrm{t}}_{\mathrm{3D}} = f_t B^{\mathrm{t}}_{\mathrm{pos\perp}}$, we have $\mathcal{M}_{A} \sim (f_t/f_u) B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$, where $f_u$ and $f_t$ are factors for the 3D-to-POS conversion. In the small-angle approximation, we obtain $\mathcal{M}_{A} \sim (f_t/f_u) \delta\phi_{\mathrm{obs}}$. The term $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$ can also be derived from the ADF method (see Section \ref{sec:ratiobori}).
The relation between $\mathcal{M}_{A}$ and $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$ or $\delta\phi_{\mathrm{obs}}$ can be regarded as an extension of the DCF formula, thus the correction factors for the DCF formula should also be applied to $\mathcal{M}_{A}$ under the same equipartition assumption, i.e., $\mathcal{M}_{A} \sim (f_t/f_u/Q_c) (B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}})_{\mathrm{adf}}$ or $\mathcal{M}_{A} \sim (f_t/f_u/Q_c) \delta\phi_{\mathrm{obs}}$. Adopting an additional correction factor $f_o$ to account for the ordered field contribution to the angular dispersion, the relation between $\mathcal{M}_{A}$ and $\delta\phi_{\mathrm{obs}}$ becomes $\mathcal{M}_{A} \sim (f_t/f_u/Q_c/f_o) \delta\phi_{\mathrm{obs}}$.
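This chain of correction factors can be sketched numerically; the default values ($f_t=\sqrt{3}$, $f_u=4/\pi$, $Q_c=0.28$, $f_o=2.5$) are statistical choices that may not apply to individual sources, and the function name is ours.

```python
import math

def alfven_mach_from_dispersion(dphi_obs_deg, f_t=math.sqrt(3.0),
                                f_u=4.0 / math.pi, q_c=0.28, f_o=2.5):
    """M_A ~ (f_t / f_u / Q_c / f_o) * dphi_obs (small-angle approximation).

    dphi_obs_deg : observed polarization-angle dispersion [deg]
    f_t, f_u     : statistical 3D-to-POS conversion factors
    q_c          : DCF correction factor
    f_o          : correction for the ordered-field contribution
    All defaults are ensemble-level assumptions, not per-source values.
    """
    return (f_t / f_u / q_c / f_o) * math.radians(dphi_obs_deg)
```

With the defaults, an observed dispersion of $10\degr$ corresponds to $\mathcal{M}_{A}\sim0.3$, i.e., a sub-Alfv\'{e}nic state.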
\citet{2022ApJ...925...30L} compiled the observed angular dispersions from previous DCF studies (see Figure \ref{fig:angdcf_dens}). They suggested that the observed angular dispersion does not provide much information about the Alfv\'{e}nic states of molecular clouds due to the maximum angle limit of a random field ($\delta\phi_{\mathrm{obs}} < 52 \degr$) and the lack of an appropriate DCF correction factor at cloud scales. Without knowledge of the inclination angle and turbulence anisotropy for most observations in the compilation, they adopted the statistical correction factors $f_u\sim4/\pi$ for a randomly distributed 3D mean field orientation \citep{2004ApJ...600..279C}, $f_t\sim\sqrt{3}$ for approximately isotropic turbulent fields in self-gravitating regions \citep{2021ApJ...919...79L}, and $f_o\sim2.5$ based on the numerical work by \citet{2021ApJ...919...79L}. Additionally adopting the $Q_c$ for the different modified DCF approaches reported in \citet{2021ApJ...919...79L}, \citet{2022ApJ...925...30L} found that the average $\mathcal{M}_{A}$ for the substructures in molecular clouds is $\sim$0.9, which indicates that the average state is approximately trans-Alfv\'{e}nic. They also suggested that both sub- and super-Alfv\'{e}nic states exist for the cloud substructures and did not find a strong relation between $\mathcal{M}_{A}$ and $n$. \citet{2022arXiv220311179P} performed a similar study of $\mathcal{M}_{A}$ with their compilation of observed angular dispersions. Unlike \citet{2022ApJ...925...30L}, \citet{2022arXiv220311179P} corrected the estimated $\mathcal{M}_{A}$ with the ratio between the DCF-derived POS field strength and the Zeeman-derived LOS field strength at similar densities, assuming that the Zeeman estimations are accurate references. They found that the turbulence is approximately trans-Alfv\'{e}nic on average and that $\mathcal{M}_{A}$ has no clear dependence on $n$, in agreement with \citet{2022ApJ...925...30L}. 
Note that both \citet{2022ApJ...925...30L} and \citet{2022arXiv220311179P} are statistical studies, where the adopted statistical relations may not apply to individual sources.
As the ADF method removes the contribution from the ordered field, the turbulent-to-ordered field strength ratio derived by the ADF method should be more suitable for the study of the Alfv\'{e}nic state than the directly estimated angular dispersions. However, the applicability of the ADF method to determine the $\mathcal{M}_{A}$ value is limited by the maximum derivable turbulent-to-ordered field strength ratio (see Section \ref{sec:adf}), its uncertainty on the LOS signal integration (see Section \ref{sec:unlossi}), and the lack of appropriate numerically-derived correction factors for these uncertainties (see Section \ref{sec:cor}).
If an alternative assumption of the Fed16 equipartition (see Section \ref{sec:energyeq}) is adopted, the $\mathcal{M}_{A}$ should be correlated with $\sqrt{B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}}$ or $\sqrt{\delta\phi_{\mathrm{obs}}}$ instead of $B^{\mathrm{t}}_{\mathrm{pos\perp}}/B^{\mathrm{u}}_{\mathrm{pos}}$ or $\delta\phi_{\mathrm{obs}}$ \citep{2021AA...656A.118S}. However, the applicability of these alternative relations is limited by the lack of correction factors at different physical conditions.
In summary, the average state of star-forming substructures within molecular clouds may be approximately trans-Alfv\'{e}nic, but the observed angular dispersions do not yield clues on the Alfv\'{e}nic state of molecular clouds themselves. Note that the equipartition assumption (either the DCF53 or the Fed16 assumption), which should be independently confirmed, is a prerequisite for using the angular dispersion to determine the Alfv\'{e}nic Mach number. If the equipartition assumption is not satisfied for some of the sources, the average state should be more super-Alfv\'{e}nic.
\subsubsection{Equilibrium state}
The equilibrium state of a dense structure is usually parameterized by the virial parameter. Neglecting the surface kinetic pressure and the thermal pressure, the total virial parameter of a spherical structure, considering the support from both the magnetic field and the turbulence, is estimated as the ratio of the total virial mass to the gas mass
\begin{equation}\label{eq:virialtotal}
\alpha_{\mathrm{turb+B}} = \frac{M_{\mathrm{turb+B}}}{M}.
\end{equation}
The total virial mass is given by \citep{2020ApJ...895..142L}
\begin{equation}
M_{\mathrm{turb+B}} = \sqrt{M^2_{\mathrm{B}} + (\frac{M_{\mathrm{turb}}}{2})^2} + \frac{M_{\mathrm{turb}}}{2},
\end{equation}
where the magnetic virial mass is estimated with
\begin{equation}
M_{\mathrm{B}} = \frac{\pi r^2 B}{\sqrt{1.5\mu_0\pi G/k_i}},
\end{equation}
and the turbulent virial mass is estimated with
\begin{equation}
M_{\mathrm{turb}} = \frac{3k_i\delta v_{\mathrm{los}}^2r}{G}.
\end{equation}
Alternatively, the equilibrium state can also be studied by comparing $2E_{\mathrm{turb}} + E_{\mathrm{B}}$ with $E_{\mathrm{G}}$, where $E_{\mathrm{turb}}$ is the turbulent kinetic energy, $E_{\mathrm{B}}$ is the magnetic energy, and $E_{\mathrm{G}}$ is the gravitational energy. %
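The chain of virial-mass equations above can be combined into a single SI-unit estimator; the constants, the function interface, and the default density-profile index $i=1.83$ are our illustrative choices.

```python
import math

G = 6.674e-11            # gravitational constant [SI]
MU0 = 4e-7 * math.pi     # vacuum permeability [SI]

def total_virial_parameter(mass_kg, r_m, b_tesla, dv_los_ms, i=1.83):
    """alpha_turb+B = M_turb+B / M following the equations above (SI units).

    mass_kg   : gas mass [kg]
    r_m       : radius [m]
    b_tesla   : magnetic field strength [T]
    dv_los_ms : LOS velocity dispersion [m/s]
    i         : density-profile index (n ~ r^-i); 1.83 is an assumed default
    """
    k_i = (5.0 - 2.0 * i) / (3.0 - i)
    m_b = math.pi * r_m**2 * b_tesla / math.sqrt(1.5 * MU0 * math.pi * G / k_i)
    m_turb = 3.0 * k_i * dv_los_ms**2 * r_m / G
    m_tot = math.hypot(m_b, m_turb / 2.0) + m_turb / 2.0
    return m_tot / mass_kg
```

For example, a 100 $M_{\odot}$ core of radius 0.1 pc with $B=500$ $\mu$G and $\delta v_{\mathrm{los}}=0.5$ km s$^{-1}$ comes out sub-virial ($\alpha_{\mathrm{turb+B}}\sim0.5$) in this sketch.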
Figure \ref{fig:alpha_Ncol} shows the total virial parameter as a function of column density for the dense substructures within molecular clouds based on the DCF compilation by \citet{2022ApJ...925...30L}. Due to the lack of mass estimations, we do not show $\alpha_{\mathrm{turb+B}}$ for the molecular clouds observed by Planck. Since the magnetic field alone can support these clouds (see Section \ref{sec:dcflambda}), they should have $\alpha_{\mathrm{turb+B}}>1$ (i.e., be super-virial). Low-mass star-forming regions with $M \leq M_{\mathrm{crit}} = 870 M_{\odot} (r/\mathrm{pc})^{1.33}$ and high-mass star-forming regions with $M > M_{\mathrm{crit}}$ \citep{2010ApJ...716..433K} are indicated with different colors. In Figure \ref{fig:alpha_Ncol}, the high-mass regions with the highest column densities (e.g., $N_{\mathrm{H_2}}> 10^{24}$ cm$^{-2}$) tend to be trans- or sub-virial, but both super- and sub-virial states exist at lower column densities. The median $\alpha_{\mathrm{turb+B}}$ values in low-mass and high-mass regions are $\sim$1.1 and $\sim$0.66, respectively, suggesting that gravity may be more dominant in high-mass star formation. It may also indicate that high-mass star formation within molecular clouds is more likely to proceed out of equilibrium. It is possible that the magnetic field strength is overestimated for some sources due to the energy non-equipartition, which would imply an even more dynamical mode of massive star formation. In summary, star-forming regions with higher column densities appear to have smaller total virial parameters owing to the more significant role of gravity, but this trend is highly uncertain due to the large scatter.
\section{The HRO analysis}\label{sec:hro}
\subsection{Basics}
\citet{2013ApJ...774..128S} developed the HRO analysis to characterize the relative orientation of the magnetic field with respect to the density structures, which can be used to establish a link between observational results and the physics in simulations. In 3D, the HRO is the relation between the 3D magnetic field orientation and the number density gradient $\nabla n$. In the POS, the HRO is the relation between the POS magnetic field orientation and the column density gradient $\nabla N$. The density gradient is calculated by applying Gaussian derivative kernels to the density structure \citep{2013ApJ...774..128S}. The density gradient at different scales can be studied by varying the size of the derivative kernels. The orientation of the iso-density structure is perpendicular to the direction of the density gradient.
For observational data in the POS, the angle $\phi_{\mathrm{B-N}}$ between the orientation of the iso-column density structure and the POS magnetic field orientation\footnote{Note that there is a 90$\degr$ difference between the $\phi_{\mathrm{B-N}}$ defined in \citet{2013ApJ...774..128S} and in the subsequent HRO papers. In \citet{2013ApJ...774..128S}, $\phi_{\mathrm{B-N}}$ is defined as the angle between the column density gradient and the POS magnetic field orientation.} is given by
\begin{equation}
\phi_{\mathrm{B-N}} = \arctan (\vert \nabla N\times \boldsymbol{E} \vert, \nabla N\cdot\boldsymbol{E}),
\end{equation}
where $\boldsymbol{E}$ is the POS electric field pseudo-vector \citep{2016AA...586A.138P}. The angle $\phi_{\mathrm{B-N}}$ is within $\pm 90\degr$. When $\phi_{\mathrm{B-N}} = 0 \degr$, the magnetic field is parallel to the orientation of the column density structure and is perpendicular to the column density gradient. When $\vert\phi_{\mathrm{B-N}}\vert = 90 \degr$, the magnetic field is perpendicular to the orientation of the column density structure and is parallel to the column density gradient. The preferential relative orientation between the magnetic field and the density structure is characterized by the histogram shape parameter \citep{2016AA...586A.138P, 2017AA...603A..64S}
\begin{equation}
\xi = \frac{A_0 - A_{90}}{A_0 + A_{90}},
\end{equation}
where $A_0$ is the percentage of pixels with $\vert\phi_{\mathrm{B-N}}\vert < 22.5 \degr$ and $A_{90}$ is the percentage of pixels with $67.5 \degr < \vert\phi_{\mathrm{B-N}}\vert < 90 \degr$ in a polarization map. Positive $\xi$ values indicate the column density structure is more likely aligned with the magnetic field, and vice versa. The $\xi$ in 3D can be derived similarly.
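As an illustration, the angle $\phi_{\mathrm{B-N}}$ and the shape parameter $\xi$ can be computed from a gradient map and a POS field-angle map along the following lines. This is a minimal Python sketch of our own; the array names and the folding of headless orientations into $[0\degr, 90\degr]$ are our choices:

```python
import numpy as np

def relative_angle(dN_dx, dN_dy, theta_B):
    """|phi_B-N|: angle between the iso-column-density orientation and the
    POS magnetic field, computed as the angle between grad N and the POS
    'electric field' pseudo-vector E (perpendicular to B in the POS).
    theta_B is the POS field position angle in radians."""
    ex, ey = -np.sin(theta_B), np.cos(theta_B)   # unit E, 90 deg from B
    cross = dN_dx * ey - dN_dy * ex
    dot = dN_dx * ex + dN_dy * ey
    phi = np.arctan2(np.abs(cross), dot)         # in [0, pi]
    return np.minimum(phi, np.pi - phi)          # fold headless orientations

def hro_shape_parameter(phi):
    """xi = (A0 - A90)/(A0 + A90), with A0 the fraction of pixels with
    |phi| < 22.5 deg (parallel) and A90 the fraction with |phi| > 67.5 deg
    (perpendicular)."""
    phi = np.abs(np.asarray(phi))
    A0 = np.mean(phi < np.deg2rad(22.5))
    A90 = np.mean(phi > np.deg2rad(67.5))
    if A0 + A90 == 0:
        return 0.0
    return (A0 - A90) / (A0 + A90)
```

A gradient parallel to the field (structure perpendicular to it) gives $|\phi_{\mathrm{B-N}}| = 90\degr$ and drives $\xi$ negative; a gradient perpendicular to the field gives $|\phi_{\mathrm{B-N}}| = 0\degr$ and drives $\xi$ positive.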
Using the parameter $\xi$ to characterize the relative orientation has some drawbacks. For instance, the derivation of $\xi$ completely ignores angles with $22.5 \degr < \vert\phi_{\mathrm{B-N}}\vert < 67.5 \degr$, and it suffers from the intrinsic deficiencies of angle binning. To overcome these shortcomings, \citet{2018MNRAS.474.1018J} improved the HRO analysis with the projected Rayleigh statistic (PRS), which uses the PRS parameter $Z_x$ instead of $\xi$ to characterize the preferential relative orientation. For a set of $n'$ angles of relative orientation $\{\phi_{B-N,i}\}$, $Z_x$ is estimated with
\begin{equation} \label{eq:prs}
Z_x = \frac{\Sigma^{n'}_i \cos 2\phi_{B-N,i}}{\sqrt{n'/2}}.
\end{equation}
$Z_x>0$ indicates a preferential parallel alignment between the magnetic field and column density structure, and vice versa. \citet{2018MNRAS.474.1018J} suggested that the parameter $Z_x$ is more statistically powerful than the parameter $\xi$, especially when the sample size is small or when the angles $\{\phi_{B-N,i}\}$ are more uniformly distributed. Equation \ref{eq:prs} cannot be directly applied to 3D data; the formula for a 3D PRS parameter still requires further theoretical investigation \citep{2021MNRAS.503.5425B}.
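Equation \ref{eq:prs} is a one-line computation; a minimal sketch (our own) is:

```python
import numpy as np

def prs_zx(phi):
    """Projected Rayleigh statistic Z_x = sum(cos 2 phi) / sqrt(n'/2).
    Z_x > 0: preferentially parallel alignment; Z_x < 0: perpendicular."""
    phi = np.asarray(phi)
    return np.sum(np.cos(2.0 * phi)) / np.sqrt(phi.size / 2.0)
```

Unlike $\xi$, every angle contributes through $\cos 2\phi$, so no pixels are discarded and no binning is involved.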
The VGT group \citep[e.g.,][]{2017ApJ...835...41G, 2018ApJ...853...96L} introduced an alignment measure ($AM$) parameter to study the relative orientation between magnetic field orientations and velocity gradients, where the $AM$ can also be used to study the relative orientation between magnetic fields and density structures. The $AM$ for $\phi_{\mathrm{B-N}}$ can be expressed as
\begin{equation} \label{eq:am}
AM = \langle \cos 2\phi_{B-N} \rangle.
\end{equation}
Similar to $Z_x$, the $AM$ is based on the Rayleigh statistic. The $AM$ lies in the range of $-1$ to $1$ and can be regarded as a normalized version of $Z_x$.
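The $AM$ of Equation \ref{eq:am} can be sketched as follows (our own minimal illustration):

```python
import numpy as np

def alignment_measure(phi):
    """AM = <cos 2 phi>, bounded in [-1, 1].
    AM = 1: perfect parallel alignment; AM = -1: perfect perpendicular."""
    return float(np.mean(np.cos(2.0 * np.asarray(phi))))
```

Since both statistics average $\cos 2\phi$, they are related by $AM = Z_x/\sqrt{2n'}$ for a sample of $n'$ angles, which makes the "normalized $Z_x$" interpretation explicit.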
\subsection{Observations}\label{sec:hroobs}
The relation between the cloud density structures and magnetic field orientations has been extensively studied observationally. For example, \citet{2013MNRAS.436.3707L} found that the orientation of Gould Belt clouds tends to be either parallel or perpendicular to the mean orientation of the cloud magnetic field, which they interpreted as strong fields channeling sub-Alfv\'{e}nic turbulence or guiding gravitational contraction. Toward the same sample, clouds elongated closer to the field orientation were found to have (1) higher star formation rates, which was suggested to be due to their smaller magnetic fluxes and weaker magnetic support against gravitational collapse \citep{2017NatAs...1E.158L}; and (2) shallower mass cumulative function slopes \citep[][]{2020MNRAS.498..850L}, i.e., shallower column density probability distribution functions (N-PDFs), or, in other words, more mass at high densities. In filamentary clouds, there is evidence that the magnetic field is parallel to the low-density striations and perpendicular to the high-density main filament \citep[e.g.,][]{2011ApJ...741...21C, 2016A&A...590A.110C}, which implies that the main filament accretes gas through the striations along the field lines. Beyond the success of those observational studies, the HRO analysis has enabled pixel-by-pixel statistics of the local alignment between the column density structure and the magnetic field. Observational studies using the HRO analysis have focused on this alignment at different column densities.
The first HRO analyses were made with observations from the Planck/HFI at large scales. With a smoothed resolution of 15$\arcmin$, \citet{2016A&A...586A.135P} found that $\xi$ is mostly positive and is anti-correlated with the column density\footnote{A significant amount of the diffuse ISM is in the atomic phase, while molecular clouds are in the molecular phase. Here we use $N_{\mathrm{H_2}}$ to describe the column density in diffuse ISM and molecular clouds for uniformity.} $N_{\mathrm{H_2}}$ over the whole sky at $N_{\mathrm{H_2}}\lesssim 5\times10^{21}$ cm$^{-2}$. The Planck observations toward 10 nearby ($d<450$ pc) clouds \citep{2016AA...586A.138P} with a smoothed resolution of 10$\arcmin$ ($\sim$0.4-1.3 pc) have revealed a prevailing trend of decreasing $\xi$ with increasing $N_{\mathrm{H_2}}$, with $\xi$ being positive at lower $N_{\mathrm{H_2}}$ and becoming negative at higher $N_{\mathrm{H_2}}$ in most clouds except for those with low column densities (e.g., CrA). The transition of $\xi$ from positive to negative values was found to be at $N_{\mathrm{H_2,tr}} \sim 2.5\times10^{21}$ cm$^{-2}$.
Subsequent studies have expanded the HRO analysis to compare the large-scale magnetic field observed with Planck/HFI or BLASTPol with the smaller-scale column density structures revealed by the Herschel Space Observatory. \citet{2016MNRAS.460.1934M} compared the Herschel dust emission structures at a 20$\arcsec$ ($\sim$0.01 pc) resolution with the large-scale magnetic field orientation revealed by Planck polarization maps at a 10$\arcmin$ ($\sim$0.4 pc) resolution and found a trend of decreasing $\xi$ with increasing $N_{\mathrm{H_2}}$ in the nearby cloud L1642, with a transition at $N_{\mathrm{H_2,tr}} \sim 8\times10^{20}$ cm$^{-2}$. \citet{2017AA...603A..64S} found the same decreasing $\xi-N_{\mathrm{H_2}}$ trend in the Vela C molecular complex by comparing the large-scale magnetic field orientation revealed by BLASTPol at a smoothed resolution of 3$\arcmin$ ($\sim$0.61 pc) with the column density structures revealed by Herschel at a smoothed resolution of 35.2$\arcsec$ ($\sim$0.12 pc). The transition column density in Vela C is $N_{\mathrm{H_2,tr}} \sim 6\times10^{21}$ cm$^{-2}$. They also found that the slope of the $\xi-N_{\mathrm{H_2}}$ relation is steeper in sub-regions where the high-column density tails of N-PDFs are flatter. \citet{2019A&A...629A..96S} compared the Herschel column density maps at a resolution of 36$\arcsec$ ($\sim$0.03-0.08 pc) with the large-scale magnetic field from Planck observations for the ten clouds studied by \citet{2016AA...586A.138P} and found that $\xi$ (or $Z_x$) decreases with increasing $N_{\mathrm{H_2}}$ in most clouds, in agreement with the study by \citet{2016AA...586A.138P}. They also found that regions with more negative cloud-averaged $\xi$ (or $Z_x$) tend to have steeper N-PDF tails, but did not find a clear trend between the cloud-averaged $\xi$ (or $Z_x$) and the star formation rate.
In addition, \citet{2019ApJ...878..110F} compared the magnetic field orientation revealed by BLASTPol with the integrated line-intensity structures of different molecular lines observed with Mopra in Vela C. They found that the line emission from low-density tracers is statistically more aligned with the magnetic field, while high-density tracers tend to be perpendicular to the magnetic field. The transition occurs at $n_{\mathrm{H_2,tr}} \sim 10^{3}$ cm$^{-3}$.
At smaller scales, the polarization observations from the SOFIA/HAWC+ or JCMT/POL2 can be used to study the relative orientation between the magnetic field and column density structures within elongated filamentary structures. \citet{2021ApJ...918...39L} compared magnetic field orientations from the HAWC+ observations with the Herschel column density maps in Ophiuchus/L1688 at a smoothed resolution of 36.3$\arcsec$ ($\sim$0.02 pc). They found smaller $\xi$ values at higher $N_{\mathrm{H_2}}$, with $\xi$ being mostly negative except for the first two column density bins. The transition column density in L1688 is $N_{\mathrm{H_2,tr}} \sim 5\times10^{21}$ cm$^{-2}$. They also found that $\xi$ increases from negative to $\sim$0 at higher column densities in the dense core Oph A within L1688, which suggests the magnetic alignment behavior is more complex at higher column densities. \citet{2022MNRAS.510.6085L} compared the inferred magnetic field from HAWC+ with the Herschel column density map in Taurus/B211 and found that the magnetic field is more likely perpendicular to the column density structures in B211. \citet{2022ApJ...926..163K} found negative $\xi$ values in the Serpens Main cloud with JCMT observations at a resolution of 14$\arcsec$ ($\sim$0.03 pc), where they suggested that the first column density bin at $N_{\mathrm{H_2}} \sim 9.3\times10^{21}$ cm$^{-2}$ is approximately the transition column density. In \citet{2022ApJ...926..163K}, $\xi$ decreases from 0 to negative values between $N_{\mathrm{H_2}} \sim 9.3\times10^{21}$ cm$^{-2}$ and $\sim 4.6\times10^{22}$ cm$^{-2}$, a range that mainly corresponds to the filamentary structures. Between $N_{\mathrm{H_2}} \sim 4.6\times10^{22}$ cm$^{-2}$ and $\sim 10^{23}$ cm$^{-2}$, $\xi$ increases back to 0, suggesting that the magnetic fields trend back to parallel to the elongated structures. At $N_{\mathrm{H_2}} > 10^{23}$ cm$^{-2}$, $\xi$ again decreases with increasing $N_{\mathrm{H_2}}$.
This behavior suggests a complex interplay between mass accumulation, hub-filament interaction, and gravitational collapse within filamentary structures.
Interferometric polarization observations are capable of revealing the magnetic field orientation within molecular dense cores. With ALMA observations at a resolution of 0.35$\arcsec$ ($\sim$140 AU), \citet{2017ApJ...842L...9H} found that the magnetic field is slightly more perpendicular than parallel to the column density structure around the class 0 protostellar source Ser-emb 8 in Serpens Main. Using ALMA observations at a resolution of 1.1$\arcsec$ ($\sim$200 AU), \citet{2020ApJ...892..152H} found that the magnetic field tends to be perpendicular to the column density structure in the class 0 protostellar source BHR 71 IRS1. They also found the magnetic field tends to be perpendicular to the column density structure but is parallel to outflow-cavity walls in another class 0 source BHR 71 IRS2, suggesting that the magnetic field is affected by the outflow activity. With ALMA observations at a resolution of 1.2$\arcsec$ ($\sim$0.02 pc) toward the high-mass star-forming region G327.3, \citet{2020ApJ...904..168B} found that the relative orientation between the magnetic field and the dust emission structure changes from perpendicular to a random distribution when it is closer to the dust emission peak (i.e., $\xi$ or $Z_x$ increases with increasing $N_{\mathrm{H_2}}$). This clearly contradicts the general decreasing $\xi-N_{\mathrm{H_2}}$ trend at larger scales found by studies of lower resolution data, and might be related to the rotational properties of potential disks.
In summary, there is a general trend that the relative orientation between the magnetic field and the column density structure changes from parallel to perpendicular with increasing gas column densities. The transition from parallel to perpendicular alignment occurs at $N_{\mathrm{H_2,tr}} \sim 10^{21}-10^{22}$ cm$^{-2}$ and may be at $n_{\mathrm{H_2,tr}} \sim 10^{3}$ cm$^{-3}$, which are comparable to the transition or critical densities of the $B-n$ and $B-N$ relations measured from DCF estimations \citep{2022ApJ...925...30L} or Zeeman observations \citep{2012ARA&A..50...29C}. Within filaments and molecular dense cores, the relative orientation could return to random alignment in some sub-regions due to the impact of the projection effect (see Section \ref{sec:hrosim}), accreting gas flows, outflows, and/or the disk rotation. More observations are needed to better understand the returning to random alignment at high densities.
\subsection{Simulations}\label{sec:hrosim}
As indicated in Section \ref{sec:hroobs}, most clouds show a transition from a preferentially parallel alignment between the magnetic field orientation and density structures at low densities to a preferentially perpendicular alignment at higher densities. Numerical HRO studies have focused on interpreting these opposite alignments at different densities and on understanding the implications of this transition.
\citet{2013ApJ...774..128S}, \citet{2016AA...586A.138P}, and \citet{2017AA...603A..64S} have analysed 3 self-gravitating models with different initial magnetization levels (plasma $\beta_0$=100, 1, and 0.1). The 3 models are initially super-Alfv\'{e}nic ($\mathcal{M}_{A0}$ = 100, 10, and 3.16). At $\sim2/3$ of the flow crossing time\footnote{\citet{2016AA...586A.138P} did not explicitly indicate the time of the snapshot they studied. We consulted their corresponding author to confirm this information.}, the 3 models become super-Alfv\'{e}nic, trans-Alfv\'{e}nic, and sub-Alfv\'{e}nic on average \citep{2016AA...586A.138P}, respectively. The magnetic field in the 3 models is preferentially parallel to the density structures. Only in the trans- and sub-Alfv\'{e}nic models does the relative orientation between the magnetic field and the density structure change from parallel to perpendicular at high densities \citep{2016AA...586A.138P}. \citet{2017AA...603A..64S} found that the transition occurs when the term $A_{23}$ changes its sign. The $A_{23}$ term is a coefficient in the time-varying equation describing the angle between the magnetic field and density gradient, which is a function of the velocity field, magnetic field, and density distribution. They also found that local values of $\mathcal{M}_{A}>1$ and $\nabla \cdot \boldsymbol{v} < 0$ (i.e., converging gas flow) in high-density regions are associated with perpendicular alignment but are not sufficient for the relative orientation to become perpendicular.
\citet{2016ApJ...829...84C} analysed 3 self-gravitating models with different initial magnetization levels ($\mathcal{M}_{A0}$ = 2.2, 4.4, and 8.8). The 3 initially super-Alfv\'{e}nic models become trans-to-sub-Alfv\'{e}nic ($\mathcal{M}_{A}$ = 0.78, 0.81, and 0.84) on average due to shock compression when the most evolved core starts to collapse. They suggested that the change of relative orientation from parallel to perpendicular above a transition density happens when the sub-Alfv\'{e}nic gas is gravitationally accelerated to become locally super-Alfv\'{e}nic again in overdense regions. However, \citet{2017ApJ...836...95O} found that the change of relative orientation can still happen even when the high-density self-gravitating region is sub-Alfv\'{e}nic (local $\mathcal{M}_{A}=0.54$) in their simulations. Hence $\mathcal{M}_{A}>1$ may be sufficient but is not necessary for the transition of relative orientation.
\citet{2020MNRAS.497.4196S} performed a set of self-gravitating colliding flow simulations with $\mathcal{M}_{A0}$ = 0.6-4. At a time snapshot of 19 Myr, the mass-to-flux ratio $\mu$ of the simulations becomes 0.54-4.3, where the critical $\mu$ is 1. They found that only models with $\mu \lesssim$1 show a clear transition from parallel to perpendicular alignment for the relative orientation. Note that their models with $\mu \lesssim$1 also have $\mathcal{M}_{A0} \lesssim 1$. Similar to \citet{2017AA...603A..64S}, they suggested that the observed transition can be well explained by the term $A = A_{1}+A_{23}$, where $A_{1}$ is another coefficient describing the evolution of the angle between the magnetic field and density gradient. They found that $\xi<1$ is associated with local $\mathcal{M}_{A}\gg1$ in high density regions, which agrees with \citet{2017AA...603A..64S}. \citet{2020MNRAS.497.4196S} also performed a set of SILCC-Zoom simulations with different physical conditions, but their outputs are less clear than those of the colliding flow simulations. Therefore, no conclusions were drawn from the SILCC-Zoom simulations.
\citet{2020MNRAS.499.4785K} performed a set of initially supersonic (initial Mach number $\mathcal{M}_{s0} = 7.5$) non-gravitating simulations with different initial magnetization levels ($\beta_0$=10, 1, and 0.01; $\mathcal{M}_{A0}$ = 16.7, 5.3, and 0.5), where the turbulence is driven either solenoidally or compressively. They found that only models with $\beta_0$=0.01 ($\mathcal{M}_{A0}$ = 0.5) and compressive turbulence clearly show the transition from parallel to perpendicular alignment at higher densities, which may suggest that the main driver of the change of relative orientation is the compression of the gas. They also found that the transition point for $\xi \sim 0$ does not perfectly correspond to $A = A_{1}+A_{23} \sim 0$, where $A \sim 0$ happens at slightly lower densities than $\xi \sim 0$. Their analysis suggests that $A = A_{1}+A_{23} > 0$ is not a sufficient condition and self-gravity is not a necessary condition for the transition of relative orientation. The change of relative orientation without self-gravity is also seen in the initially trans-to-sub-Alfv\'{e}nic and very supersonic simulations by \citet{2019ApJ...886...17H}.
\citet{2021MNRAS.507.5641G} also found a transition from parallel alignment to perpendicular alignment at higher densities in a 10 pc region of their large-scale simulation. They suggested that the transition happens when the local mass-to-flux ratio exceeds its critical value and the gravitational force dominates the combination of thermal and magnetic pressures.
\citet{2021MNRAS.503.5425B} studied the relative orientation between the magnetic field and density structure using simulations with a wide range of initial states (sub-Alfv\'{e}nic and super-Alfv\'{e}nic; with and without gravity). In non-gravitating simulations, only initially sub-Alfv\'{e}nic models with the largest initial sonic Mach number ($\mathcal{M}_{s0} = 7$) show signs of perpendicular alignment between the magnetic field and column density structure in 2D at late stages of evolution. In simulations with gravity, the initial distribution of $Z_x$ is similar to that of the simulations without gravity. At later stages, all gravitating sub-Alfv\'{e}nic models show a change of $Z_x$ from positive to negative values at high densities. They concluded that self-gravity may help to create structures perpendicular to the magnetic field.
\citet{2022ApJ...925..196I} analysed 3 clouds in a galactic-scale simulation with gravity. They found that the gas is trans-Alfv\'{e}nic at $n_{\mathrm{H_2}} < 10^{2}$ cm$^{-3}$. At $n_{\mathrm{H_2}} > 10^{2}$ cm$^{-3}$, the gas becomes super-Alfv\'{e}nic, which they suggested to be due to gravitational collapse. The relative orientation changes from predominantly parallel to much more random or perpendicular at even higher densities. That is, they also found that $\mathcal{M}_{A}>1$ at high densities is associated with perpendicular alignment but is not a sufficient condition for the transition of relative orientation, in agreement with \citet{2017AA...603A..64S} and \citet{2020MNRAS.497.4196S}.
Observations can only trace the projected quantities in the POS. The projection effect can make the 2D $\xi-N_{\mathrm{H_2}}$ relation different from the 3D $\xi-n_{\mathrm{H_2}}$ relation. Two parallel vectors in 3D are still parallel when projected in 2D, but two perpendicular vectors in 3D can have any relative orientation when projected in 2D, depending on the viewing angle \citep[e.g.,][]{2016A&A...586A.135P}. \citet{2013ApJ...774..128S} found that the relative orientation is well preserved from 3D to 2D in sub-Alfv\'{e}nic environments, except when the magnetic field orientation is close to the LOS. In contrast, the projected relative orientation does not necessarily reflect the relative orientation in 3D in super-Alfv\'{e}nic environments. Other numerical studies found similar results for the projection effect \citep{2020MNRAS.497.4196S, 2021MNRAS.503.5425B, 2021MNRAS.507.5641G}.
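The asymmetry of the projection effect can be illustrated with a small Monte Carlo experiment of our own (not from the cited works): pairs of 3D orientations that are exactly perpendicular are projected onto the POS (here the $x$-$y$ plane) and their projected relative angle is recomputed.

```python
import numpy as np

rng = np.random.default_rng(42)

def perpendicular_pairs(n):
    """n random 3D unit vectors b, each with a random unit vector g
    perpendicular to it (mimicking B perpendicular to the density gradient)."""
    b = rng.normal(size=(n, 3))
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    r = rng.normal(size=(n, 3))
    g = r - np.sum(r * b, axis=1, keepdims=True) * b  # remove component along b
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    return b, g

def pos_angle(u, v):
    """Headless relative angle of the plane-of-sky (x-y) projections, [0, pi/2]."""
    ang = np.abs(np.arctan2(u[:, 1], u[:, 0]) - np.arctan2(v[:, 1], v[:, 0]))
    ang = np.mod(ang, np.pi)
    return np.minimum(ang, np.pi - ang)

b, g = perpendicular_pairs(20000)
ang = pos_angle(b, g)
# Parallel 3D vectors stay parallel in projection, whereas perpendicular
# 3D pairs spread over all POS angles, only peaking toward 90 degrees.
```

The projected-angle distribution of the perpendicular pairs peaks at $90\degr$ but has a tail extending to $0\degr$, whereas parallel pairs always project to $0\degr$, which is the asymmetry described above.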
The transition density at which the relative orientation changes its sign has also been studied by simulations. There is evidence that models with stronger magnetic fields tend to have smaller transition densities in individual numerical studies \citep{2013ApJ...774..128S, 2016ApJ...829...84C}, but this transition may not be solely dependent on the Alfv\'{e}nic Mach number \citep{2022arXiv220311179P}. It should be noted that different numerical studies do not have a consistent way of measuring the local $\mathcal{M}_{A}$. Each study reports the local $\mathcal{M}_{A}$ averaged over arbitrary scales, which makes it challenging to compare their results.
Table \ref{tab:hro} summarizes the simulations that show the transition of relative orientation from parallel to perpendicular alignment at higher densities. In most simulations \citep{2013ApJ...774..128S, 2016AA...586A.138P, 2017AA...603A..64S, 2016ApJ...829...84C, 2020MNRAS.497.4196S, 2020MNRAS.499.4785K, 2021MNRAS.503.5425B, 2022ApJ...925..196I}, the transition of relative orientation only occurs when the large-scale environment is trans- or sub-Alfv\'{e}nic. For non-self-gravitating simulations, the transition only happens when the initial environment is sub-Alfv\'{e}nic and supersonic \citep{2020MNRAS.499.4785K, 2021MNRAS.503.5425B}. Alternatively, \citet{2020MNRAS.497.4196S} proposed that the transition only occurs in a magnetically sub-critical large-scale environment. It is still unclear what triggers the relative orientation to change from parallel to perpendicular. Plausible reasons include local $\mathcal{M}_{A}>1$\footnote{It is unclear whether these numerical studies have excluded the non-turbulent motions (e.g., infall, outflow, and/or rotation) in the calculation of $\mathcal{M}_{A}$, so the $\mathcal{M}_{A}>1$ at high densities found in these simulations may be arguable.}, $A_{1}+A_{23}>0$, and/or $\nabla \cdot \boldsymbol{v} < 0$ in high-density regions, which are associated with the perpendicular alignment but may not be sufficient conditions for the transition. Alternatively, local $\mu>1$ in high-density regions may also be responsible for the transition, where dominant gravity can help create perpendicular alignment between the magnetic field and density structure.
The analytical explanation for the parallel alignment between the magnetic field and density structure at low densities and the perpendicular alignment at high densities is still unclear. At low densities, gravity is not dominant, so the alignment is mainly due to the interplay between the magnetic field and turbulence. \citet{2013ApJ...774..128S}, \citet{2016A&A...586A.135P}, \citet{2016AA...586A.138P}, and \citet{2017AA...603A..64S} proposed that initially super-Alfv\'{e}nic compressive turbulent flows can stretch the magnetic field and density structures in the same direction due to flux freezing. However, the parallel alignment is also seen in initially sub-Alfv\'{e}nic simulations (see Table \ref{tab:hro}). So an initially super-Alfv\'{e}nic environment might not be a necessary condition for the parallel alignment. Alternatively, \citet{2016ApJ...829...84C} and \citet{2019ApJ...878..110F} suggested that the anisotropic turbulent eddies in sub-Alfv\'{e}nic gas naturally create elongated density structures parallel to the magnetic field. The anisotropic nature of sub-Alfv\'{e}nic turbulence can explain the majority of the low-density gas with local $\mathcal{M}_{A}<1$ and $\xi>0$, but cannot explain the fraction of gas with local $\mathcal{M}_{A}>1$ and $\xi>0$ in some simulations \citep{2017AA...603A..64S, 2020MNRAS.497.4196S, 2022ApJ...925..196I}. At high densities, the situation is more complicated as gravity comes into play. \citet{2013ApJ...774..128S} and \citet{2016AA...586A.138P} suggested that the gas compression along field lines in sub-Alfv\'{e}nic turbulence can create density structures perpendicular to the magnetic field even without gravity. Alternatively, a magnetized gravitational collapse in the presence of a strong magnetic field would naturally cause an elongated density structure that is perpendicular to the magnetic field \citep{1976ApJ...206..753M, 1976ApJ...207..141M}. However, both explanations are inconsistent with the results of some simulations where $\mathcal{M}_{A}>1$ is associated with the perpendicular alignment. Another alternative explanation is that the perpendicular alignment at high densities may be determined by the strong large-scale magnetic field at low densities rather than the small-scale magnetic field at high densities \citep{2016A&A...586A.135P}.
\begin{table}[tbh]
\tiny
\caption{Simulations that show transitions of relative orientation from parallel to perpendicular alignment at higher densities. \label{tab:hro}}
\begin{tabular}{ccccccccc}
\hline \noalign {\smallskip}
Size (pc) & Gravity & $\mu_0$ & $\mathcal{M}_{A0}$ & $\mu$ & $\mathcal{M}_{A}$ & $\xi>0$$^d$ & $\xi<0$$^d$ & Ref.$^e$ \\
\hline \noalign {\smallskip}
4 & Yes & 4.52-14.3 & 3.16-10 & ... & $\lesssim$1 & $A_{23}<0$ & $A_{23}>0$, $\mathcal{M}_{A}>1$, $\nabla \boldsymbol{v} < 0$ & 1,2,3\\ %
1 & Yes & ... & 2.2-8.8 & ... & 0.78-0.84 & $\mathcal{M}_{A}<1$ & $\mathcal{M}_{A}>1$ & 4 \\ %
32$^a$ & Yes & ... & 0.6-0.8 & 0.54-0.72 & ... & $A_{1}+A_{23}<0$ & $A_{1}+A_{23}>0$,$\mathcal{M}_{A}>1$ & 5 \\ %
10 & No & ... & 0.5 & ... & ... & ... & $A_{1}+A_{23}>0$ & 6 \\ %
10$^b$ & Yes & ... & ... & ... & ... & $\mu<1$ & $\mu>1$ & 7 \\ %
10 & No & ... & 0.7 & ... & ... & ... & ... & 8 \\ %
10 & Yes & ... & 0.6 & ... & ... & ... & ... & 8 \\ %
100$^c$ & Yes & ... & 0.6 & ... & ... & ... & $\mathcal{M}_{A}>1$ & 9 \\ %
\hline \noalign {\smallskip}
\end{tabular}
\normalsize{Notes}\\
\normalsize{$^{a}$ The colliding flow simulations in \citet{2020MNRAS.497.4196S} have a box volume of 128 $\times$ 32 $\times$ 32 pc$^3$. They only selected 32$^3$ pc$^3$ regions for study.}\\
\normalsize{$^{b}$ The simulations in \citet{2021MNRAS.507.5641G} have a box volume of 500$^3$ pc$^3$. They only selected 10$^3$ pc$^3$ regions for study.}\\
\normalsize{$^{c}$ The simulations in \citet{2022ApJ...925..196I} have a box volume of 1 $\times$ 1 $\times$ 40 kpc$^3$. They only selected 100$^3$ pc$^3$ regions for study.}\\
\normalsize{$^{d}$ The parameters listed in the two columns correspond to local values in sub-regions of $\xi>0$ or $\xi<0$ at different densities.}\\
\normalsize{$^{e}$ References: (1) \citet{2013ApJ...774..128S}; (2) \citet{2016AA...586A.138P}; (3) \citet{2017AA...603A..64S}; (4) \citet{2016ApJ...829...84C}; (5) \citet{2020MNRAS.497.4196S}; (6) \citet{2020MNRAS.499.4785K}; (7) \citet{2021MNRAS.507.5641G}; (8) \citet{2021MNRAS.503.5425B}; (9) \citet{2022ApJ...925..196I}.}\\
\end{table}
\section{The KTH method}\label{sec:KTH}
\subsection{Basics}
\citet{2012ApJ...747...79K} proposed the KTH method to determine the magnetic field strength with the relative orientations between the magnetic field, emission intensity gradient, and local gravity. This method is based on ideal MHD force equations, where the intensity gradient is assumed to trace the resulting direction of motions in the MHD equation \citep[i.e., the inertial term, ][]{2013ApJ...775...77K}. This method leads to maps of position-dependent magnetic field strengths and magnetic-to-gravitational force ratios.
Under the assumptions of negligible viscosity, infinite conductivity (ideal MHD), isotropic magnetic field pressure, small turbulent-to-ordered field strength ratio, smoothly and slowly varying field strength, stationarity, and that the intensity gradient indicates the resulting direction of motions, \citet{2012ApJ...747...79K} considered the force balance among pressure, gravity, and the curvature term of magnetic field to derive the field strength
\begin{equation}\label{eq:BKTH}
B = \sqrt{\frac{\sin \psi}{\sin \alpha}(\nabla P + \rho \nabla \phi_G) 4\pi R_B},
\end{equation}
where $\psi$ is the relative orientation between the gravity and the intensity gradient, $\alpha$ is the relative orientation between the magnetic field and the intensity structure\footnote{The angle $\alpha$ is equivalent to the $\phi_{\mathrm{B-N}}$ in Section \ref{sec:hro} if the intensity structure perfectly traces the density structure.}, $P$ is the hydrostatic dust pressure, $\phi_G$ is the local gravitational potential, and $R_B$ is the local magnetic field line radius. \citet{2012ApJ...747...79K} then introduced a parameter $\Sigma_B$ to measure the local field significance. The field significance parameter $\Sigma_B$ is given by the reformulation of Equation \ref{eq:BKTH}
\begin{equation}\label{eq:sigmaB}
\Sigma_B = \frac{\sin \psi}{\sin \alpha} = \frac{F_B}{\vert F_G + F_P \vert},
\end{equation}
which quantifies the relative importance of the magnetic force $F_B = B^2/(4\pi R_B)$ compared to the combination of the gravitational and pressure forces $F_G + F_P = \nabla P + \rho \nabla \phi_G$. \citet{2012ApJ...747...79K} further demonstrated analytically that Equations \ref{eq:BKTH} and \ref{eq:sigmaB} do not suffer too much from the geometry and projection effects. If local changes in temperatures and densities are small compared to gravity, the pressure terms ($\nabla P$ and $F_P$) can be omitted from Equations \ref{eq:BKTH} and \ref{eq:sigmaB}. Thus, $\Sigma_B$ can be used to quantitatively indicate whether the magnetic field in a region is strong enough to prevent gravitational collapse ($\Sigma_B>1$) or not ($\Sigma_B<1$). Later, \citet{2012ApJ...747...80K} suggested that the global mass-to-flux ratio normalized to the critical value within a specific region can be estimated with
\begin{equation}\label{eq:lambdaKTH}
\lambda_{\mathrm{KTH}} = \langle \Sigma_B^{-1/2} \rangle \pi^{-1/2},
\end{equation}
where they adopted the magnetic critical mass from \citet{1999ASIC..540..193S} in the derivation of $\lambda_{\mathrm{KTH}}$.
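Equations \ref{eq:sigmaB} and \ref{eq:lambdaKTH} are straightforward to evaluate on maps of the two measured angles. The following is a minimal sketch of our own (the input arrays are assumed to hold $\psi$ and $\alpha$ per map position, with the pressure term neglected as discussed above):

```python
import numpy as np

def field_significance(psi, alpha):
    """Sigma_B = sin(psi)/sin(alpha): magnetic force relative to |F_G + F_P|.
    psi: angle between local gravity and the intensity gradient; alpha: angle
    between the magnetic field and the intensity structure (both in radians).
    Sigma_B > 1 indicates the field can resist gravitational collapse locally."""
    return np.sin(psi) / np.sin(alpha)

def mass_to_flux_kth(psi, alpha):
    """lambda_KTH = <Sigma_B^{-1/2}> / sqrt(pi), averaged over a region."""
    sigma_b = field_significance(np.asarray(psi), np.asarray(alpha))
    return float(np.mean(sigma_b ** -0.5) / np.sqrt(np.pi))
```

In a real application, pixels with $\sin\alpha \to 0$ (the strong-field limit discussed below) would need to be masked before averaging, since $\Sigma_B$ diverges there.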
Due to its series of assumptions, the KTH method is subject to many uncertainties \citep{2012ApJ...747...79K}. In contrast to the DCF method, which requires information on the turbulent field, the KTH method regards the turbulent field as insignificant and only requires the ordered field structure. The contribution from the turbulent field may be removed by averaging several neighboring pixels to derive an averaged local ordered field curvature (e.g., applying the Pil15 or Pat17 technique mentioned in Section \ref{sec:unorder}), but the KTH method may still fail when the magnetic field is dominated by the turbulent component. The KTH method is also not applicable in the extremely strong field case where the matter can move only along the field lines (i.e., $\psi \sim 0$, $\alpha \sim \pi/2$, and $R_B \sim \infty$). In some regions with strong rotation, the effect of rotation can lead to a change in the resulting direction of motion, which may be mitigated by adding a centrifugal term in the MHD equation. If the temperature variation is irregular and significant over a map, the intensity gradient should be replaced by the density gradient in the KTH analysis. The biggest uncertainty of the KTH method may come from the basic assumption that the intensity (or density) gradient traces the resulting motion of the MHD force equation. \citet{2013ApJ...775...77K} investigated this assumption analytically and suggested that its validity relies on the difference between the velocity and density gradient directions. Although the numerical studies of the HRO method can be regarded as a partial test of the KTH method, the KTH method itself has not been fully compared with simulations yet. Further numerical investigations would be of significance for understanding the uncertainty of this method in different physical conditions.
\subsection{Observations}
Most observational studies using the KTH method only partially applied this method and studied the distribution of the intermediate parameters. Here we summarise the observational studies of each parameter.
\subsubsection{Magnetic field versus intensity gradient: $\delta$} \label{sec:KTHdelta}
The angle $\delta = 90\degr - \alpha$ is the relative orientation between the magnetic field and the intensity (or density) gradient. Basically, studying the distribution of $\delta$ is similar to studying the distribution of $\phi_{\mathrm{B-N}}$ introduced by the HRO technique.
Similar to $\phi_{\mathrm{B-N}}$, the interpretation of the value and distribution of $\delta$ is not yet well established. \citet{2013ApJ...775...77K} and \citet{2014ApJ...797...99K} suggested that a bimodal distribution of $\delta$ can be interpreted as a sign of collapse, where the angle $\delta$ measures how efficiently the magnetic field inhibits a collapse. \citet{2013ApJ...775...77K} and \citet{2014ApJ...797...99K} further proposed that the angle $\delta$ can be used as a tracer of the evolutionary stage of star-forming regions. They suggested that $\delta$ is spatially randomly distributed due to the lack of a gravitational center in the early phase (Type I). In a later stage, elongated dust structures appear in star-forming regions. The magnetic field is parallel to the major axis of elongated dust structures that are created by large-scale flows and/or turbulence (Type IIB) or is perpendicular to the major axis when gravity has just started to shape the field (Type IIA). Types IIB and IIA may further evolve into one and the same system (Type III), where a dominant gravity drags the magnetic field into a radial, pinched, or hourglass shape. \citet{2014ApJ...797...99K} analysed a sample of 50 star-forming regions with CSO and SMA observations and found that the mean $\vert \delta \vert$ values for Types IIB, IIA, and III are 51$\degr$ (CSO and SMA), 30$\degr$ (CSO) to 34$\degr$ (SMA), and 30$\degr$ (SMA only), respectively. The categorization of the different $\delta$ types is an empirical characterization. The correspondence between the $\delta$ types and the different evolutionary stages of low-mass and high-mass star formation at different scales is still unclear and warrants further investigation.
Studies of individual star-forming regions have revealed different values and distributions of $\delta$. \citet{2013ApJ...763..135T} found that the magnetic field is less correlated with the dust intensity gradient at larger scales revealed by JCMT or CSO, while the two angles are more correlated at smaller scales observed by SMA. This trend is consistent with the results from the observational HRO studies (Section \ref{sec:hroobs}). With SMA polarization observations toward the massive dense cores in the DR21 filament, \citet{2017ApJ...838..121C} found that the magnetic field and the intensity gradient are misaligned in rotation-like cores and are aligned in non-rotation-like cores, which suggests the magnetic field could be distorted by the rotation. \citet{2013ApJ...772...69G} found random distributions of $\delta$ with an average value of 40$\degr$ in the massive dense core DR21(OH) with SMA observations. With ALMA polarization observations toward the massive clumps in the IRDC G28.34, \cite{2020ApJ...895..142L} found random distributions of $\delta$ and average $\delta$ values of 40$\degr$ and 46$\degr$ in the dense cores in two massive clumps MM1 and MM4, respectively. In another IRDC G14.225, \citet{2020A&A...644A..52A} found that the $\delta$ value in the Hub-N region is mostly small with their CSO observations. More observational studies of $\delta$ are still required to better understand its general distributions at different scales and evolution stages of star formation.
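Since both the magnetic field and the intensity gradient are orientations carrying a 180$\degr$ ambiguity, relative angles such as $\delta$ are conventionally folded into $[0\degr, 90\degr]$ before averaging. A minimal sketch of this folding (the angles below are illustrative only, not data from the cited works):

```python
import numpy as np

def relative_angle(pa1_deg, pa2_deg):
    """Smallest angle between two orientations (each defined modulo
    180 deg), folded into [0, 90] degrees."""
    d = np.abs(np.asarray(pa1_deg) - np.asarray(pa2_deg)) % 180.0
    return np.minimum(d, 180.0 - d)

# illustrative magnetic-field vs. intensity-gradient position angles
pa_B    = np.array([10.0, 100.0, 170.0,  45.0])
pa_grad = np.array([80.0, 120.0,  20.0, 140.0])

delta = relative_angle(pa_B, pa_grad)
print(delta)         # [70. 20. 30. 85.]
print(delta.mean())  # 51.25
```

The folding step matters: a naive difference of $170\degr - 20\degr = 150\degr$ would otherwise be counted as nearly perpendicular rather than $30\degr$.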
\subsubsection{Magnetic field versus local gravity: $\omega$}
The angle $\omega$ measures the relative orientation between the magnetic field and the local gravity. It may be used as a measure of how effectively gravity can shape the magnetic field structure \citep{2012ApJ...747...79K}, but further interpretations of the distribution of $\omega$ are yet to be established. %
There are few observational studies of $\omega$. \citet{2012ApJ...747...79K} found an average $\omega$ of 13$\degr$ in W51 e2, suggesting the magnetic field lines are mainly dragged by gravity in this core. With ALMA polarization observations, \citet{2018ApJ...855...39K} found average $\sin \omega$ values of 0.4, 0.41, and 0.47 (i.e., $\omega \sim$ 23$\degr$, 23$\degr$, and 27$\degr$) in three massive cores W51 e2, W51 e8, and W51N, respectively. The distributions of $\omega$ in the three cores vary, but all show some magnetic channels with $\sin \omega \sim 0$. \citet{2018ApJ...855...39K} suggested that the gas collapse can proceed in free-fall within these magnetic channels without magnetic resistance, which may have interesting implications for the star formation rate. \cite{2020ApJ...895..142L} found average $\omega$ values of 34$\degr$ and 36$\degr$ in the dense cores in massive clumps G28-MM1 and G28-MM4, respectively, which are lower than the average $\delta$ values in the same regions. This shows that the magnetic field is more closely aligned with the gravity direction than with the intensity (or density) gradient, which suggests that $\omega$ may be better suited than $\delta$ (or $\phi_{\mathrm{B-N}}$ of the HRO technique) for studying the correlation between the magnetic field and gravity in high density regions. An interesting direction for future work on $\omega$ would be to study its variation at different densities, in a similar way to the studies of the angle $\phi_{\mathrm{B-N}}$.
\subsubsection{Intensity gradient versus local gravity: $\psi$}
The angle $\psi$ measures the relative orientation between the intensity (or density) gradient and the local gravity. It may indicate how effectively gravity can shape the density structure, but further analytical implications of $\psi$ are still to be investigated.
There are few observational studies of the angle $\psi$. \citet{2012ApJ...747...79K} found an average $\psi$ of 20$\degr$ in W51 e2. \cite{2020ApJ...895..142L} found average $\psi$ values of 30$\degr$, 22$\degr$, and 28$\degr$ in the dense cores in massive clumps G28-MM1, G28-MM4, and G28-MM9, respectively. These observations suggest that the gravity direction is closely aligned with the intensity gradient in high density molecular cores.
\subsubsection{Magnetic force versus gravitational force: $\Sigma_B$}
The field significance parameter $\Sigma_B$ measures the ratio between the magnetic force and the gravitational force when the gas pressure is negligible. The implication of $\Sigma_B$ is clear, but the accuracy of the estimated force ratio has not yet been tested by simulations.
Based on a sample of 50 sources, \citet{2014ApJ...797...99K} found that the different types of star-forming regions categorized by the angle $\delta$ (see Section \ref{sec:KTHdelta}) show clear segregation in $\Sigma_B$ values. Type IIB sources, where the magnetic field is aligned with the clump/core major axis, have an average $\Sigma_B$ of 1.29 (CSO) to 1.49 (SMA), which suggests that Type IIB sources are supported by the magnetic field and do not collapse on average. Type IIA sources, where the magnetic field is perpendicular to the clump/core major axis, have an average $\Sigma_B$ of 0.69 (CSO) to 0.74 (SMA), which suggests that Type IIA sources are collapsing on average. Type III sources at a later stage have a smaller average $\Sigma_B$ value of 0.59 (SMA only), suggesting an even more dynamical collapse.
Individual studies have mostly found $\Sigma_B\lesssim1$ in star-forming dense clumps/cores \citep[e.g., W51 e2, W51A, W51N, DR21(OH), clumps in G34.43, cores in G28.34, and clumps in G14.225, ][]{2012ApJ...747...79K, 2012ApJ...747...80K, 2013ApJ...763..135T, 2013ApJ...772...69G, 2019ApJ...878...10T, 2020ApJ...895..142L, 2020A&A...644A..52A}. Specifically, \citet{2013ApJ...763..135T} found that the lower density structures in W51N revealed by the CSO and JCMT have $\Sigma_B\sim0.71-1.17$, while the higher density structures revealed by the SMA have $\Sigma_B\sim0.5$. \citet{2012ApJ...747...79K} and \citet{2012ApJ...747...80K} found smaller $\Sigma_B$ values in higher column density regions in W51 e2 and W51A. These findings indicate that gravity plays a more dominant role in higher density regions, which agrees with the observational DCF studies (see Section \ref{sec:dcflambda}). On the other hand, \citet{2020ApJ...895..142L} found that the $\Sigma_B$ values are higher in more evolved dense cores in the IRDC G28.34, which might suggest a more dynamical star formation at earlier evolutionary stages.
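The force-ratio bookkeeping above can be sketched numerically. The sketch below assumes the pixel-wise form $\Sigma_B = \sin\psi / \sin\alpha$ of \citet{2012ApJ...747...79K}, with $\Sigma_B > 1$ read as magnetic support and $\Sigma_B < 1$ as collapse; the angles are illustrative, not measured values:

```python
import numpy as np

def sigma_B(psi_deg, alpha_deg):
    """Field significance parameter Sigma_B = sin(psi)/sin(alpha):
    the magnetic-to-gravitational force ratio when gas pressure is
    negligible (form assumed from Koch et al. 2012)."""
    return np.sin(np.radians(psi_deg)) / np.sin(np.radians(alpha_deg))

# illustrative pixels: (psi, alpha) in degrees
pixels = [(20.0, 50.0), (30.0, 25.0)]
for psi, alpha in pixels:
    s = sigma_B(psi, alpha)
    state = "supported (no collapse)" if s > 1 else "collapsing"
    print(f"Sigma_B = {s:.2f} -> {state}")
```

In practice $\Sigma_B$ would be averaged over all pixels of a clump/core map before comparing with the critical value of 1.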
\subsubsection{Magnetic field strength}
The KTH method can be used to estimate the magnetic field strength with Equation \ref{eq:BKTH}. The accuracy of the estimated field strength is still unclear as it has not been tested by simulations yet.
So far, only two observational studies have applied the KTH method to estimate the field strength. \citet{2012ApJ...747...79K} found an average field strength of 7.7 mG in W51 e2. \citet{2013ApJ...769L..15S} estimated a field strength of 3.4 mG with the KTH method in L1157-mm, which is $\sim$2.5 times higher than the DCF estimation of 1.4 mG in the same work.
\subsubsection{Mass-to-flux-ratio to critical value}
The mass-to-flux-ratio to critical value $\lambda_{\mathrm{KTH}}$ can be estimated from the field significance parameter $\Sigma_B$ through Equation \ref{eq:lambdaKTH}. Similarly, the accuracy of $\lambda_{\mathrm{KTH}}$ is yet to be investigated by simulations.
\citet{2012ApJ...747...80K} found larger $\lambda_{\mathrm{KTH}}$ values in higher density regions in W51 e2, which agrees with the trend from DCF estimations (see Section \ref{sec:dcflambda}). Two other observational studies, in DR21(OH) and G14.225N, found that the $\lambda_{\mathrm{KTH}}$ value estimated with the KTH method approximately agrees with the DCF estimations \citep{2013ApJ...772...69G, 2020A&A...644A..52A}.
\section{Summary}\label{sec:sum}
The recent improvement of instrumental sensitivity and the development of new techniques (e.g., the VGT) have led to an increasing number of observations that reveal the POS component of magnetic field orientations in star-forming molecular clouds. In this review, we discuss the developments and limitations of the DCF and KTH methods, which quantify the dynamical importance of magnetic fields in star-forming molecular clouds based on the field orientations, and of the HRO analysis, which characterizes the statistical relation between field orientations and density structures. We also summarize the observational studies using these methods and discuss their implications for star formation.
The original DCF method is based on several assumptions: the total magnetic field is composed of a mean field component and a turbulent field component; the energy equipartition; isotropic turbulence; and the turbulent-to-mean or -total field strength ratio is traced by angular dispersions. The ordered field component is considered instead of the mean field component (e.g., the ADF method) if there are curved ordered magnetic field structures. We suggest that the ordered field and turbulent field of a particular region are local properties and are dependent on the scale range (i.e., the beam resolution to the maximum recoverable scale of interferometric observations or the region size) of the region of interest. There is still a debate on whether there is an equipartition between the turbulent magnetic field and the turbulent kinetic field or between the coupling-term magnetic field and the turbulent kinetic field in the sub-Alfv\'{e}nic regime, while both equipartitions are not satisfied for super-Alfv\'{e}nic turbulence. The energy non-equipartition can be the biggest uncertainty in the DCF method, which should be further investigated with simulations and observations. The uncertainty from anisotropic turbulence may be insignificant for the DCF estimations in self-gravitating regions. The turbulent-to-underlying or -total field strength ratio can be expressed as different forms of angular dispersions, but each has its limitations. The ADF method correctly accounts for the beam-smoothing effect, interferometric filtering effect, and the ordered field structure, but its applicability for quantifying the turbulent field and LOS signal integration may need further numerical investigations. The correction factor for the most widely used formula $B^{\mathrm{u}}_{\mathrm{pos}} \sim \sqrt{\mu_0 \rho }\frac{\delta v_{\mathrm{los}}}{\delta\phi_{\mathrm{obs}}}$ decreases with increasing density. 
The DMA presents an important improvement for the DCF method by analytically accounting for the mean field inclination angle and the turbulence anisotropy in the non-self-gravitating regime. A further extension of the DMA in the self-gravitating regime would be of significance.
A compilation of previous DCF estimations suggests a scenario that magnetically sub-critical low-density clouds gradually form super-critical high-density substructures. The critical column density is around $3.4 \times 10^{21}$ cm$^{-2}$ on average, which needs to be better constrained and may differ in different clouds. The gravity may be more dominant in high-mass star formation than low-mass star formation. The average state of dense substructures within molecular clouds is approximately trans-Alfv\'{e}nic if the energy equipartition assumption is satisfied, or super-Alfv\'{e}nic if the energy equipartition assumption is unsatisfied for some of the sources.
Observational HRO studies mainly focus on the alignment between the magnetic field and density structure. Low-resolution HRO studies have found a general trend of transition from a preferentially parallel alignment at low column densities to a perpendicular alignment at higher column densities. This observational trend agrees with trans-to-sub-Alfv\'{e}nic simulations, which indicates that the star-forming molecular clouds are trans-to-sub-Alfv\'{e}nic. This trans-to-sub-Alfv\'{e}nic state is consistent with the results derived from other techniques \citep[e.g., the VGT, ][]{2019NatAs...3..776H}. The analytical explanation for the transition from parallel to perpendicular alignment is still unclear, but may be related to changes of the local Alfv\'{e}nic Mach number, $A_{1}+A_{23}$ term, mass-to-flux-ratio, and/or $\nabla \boldsymbol{v}$. The transition occurs at $10^{21}- 10^{22}$ cm$^{-2}$, which agrees with the critical column density derived from DCF estimations. But it is unclear whether the two transition column densities are related. High-resolution HRO studies have revealed a possible transition from perpendicular alignment back to random alignment at high column density sub-regions. The reason for this reverse transition is also unclear, but may be related to the impact of accretion gas flows, outflows, disk rotation, and/or the projection effect.
The advantage of the KTH method compared to the DCF method is that it does not require the information on the velocity dispersion. However, the uncertainty of the KTH method is still unknown since it has not been fully tested by simulations. Results from observational KTH studies on the relative alignment between the magnetic field and intensity (density) gradient within dense clumps/cores approximately agree with those of the observational HRO studies. The value and density-varying trend of the mass-to-flux-ratio and the magnetic field strength derived from the KTH method approximately agree with those derived from the DCF estimations.
\section*{Conflict of Interest Statement}
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
\section*{Author Contributions}
J.L., Q.Z. and K.Q. contributed to the outline of the review. J.L. led the writing of the manuscript. Q.Z. and K.Q. read, commented on, and edited the manuscript.
\section*{Funding}
J.L. acknowledges the support from the EAO Fellowship Program under the umbrella of the East Asia Core Observatories Association. K.Q. is supported by National Key R\&D Program of China grant No. 2017YFA0402600. K.Q. acknowledges the support from National Natural Science Foundation of
China (NSFC) through grant Nos. U1731237, 11590781, and 11629302.
\section*{Acknowledgments}
We thank the referees for constructive comments
that improved the clarity of this paper. J.L. thanks Prof. Martin Houde for insightful discussions about the correlation between turbulent magnetic and kinetic fields. J.L. thanks Dr. Heshou Zhang and Dr. Suoqing Ji for helpful discussions on the general concept of MHD turbulence.
\bibliographystyle{frontiersinSCNS_ENG_HUMS} %
\bibliography{astro}
\title{\huge{Birefringence Tomography for Axion Cloud}}
\author{Yifan Chen,$^{a,b}$}
\author{Chunlong Li,$^{a}$}
\author{Yosuke Mizuno,$^{c,d}$}
\author{Jing Shu,$^{e,f}$}
\author{Xiao Xue,$^{g,h}$}
\author{Qiang Yuan,$^{i,j}$}
\author{Yue Zhao,$^k$}
\author{and Zihan Zhou$^l$}
\emailAdd{[email protected]}
\emailAdd{[email protected]}
\emailAdd{[email protected]}
\emailAdd{[email protected]}
\emailAdd{[email protected]}
\emailAdd{[email protected]}
\emailAdd{[email protected]}
\emailAdd{[email protected]}
\affiliation{
{$^a$CAS Key Laboratory of Theoretical Physics, Institute of Theoretical
Physics, Chinese Academy of Sciences, Beijing 100190, China\\
$^b$Niels Bohr International Academy, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen, Denmark\\
$^c$Tsung-Dao Lee Institute and School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai, 200240, China\\
$^d$Institute for Theoretical Physics, Goethe University Frankfurt, Frankfurt am Main, 60438, Germany\\
$^e$School of Physics and State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, China\\
$^f$Center for High Energy Physics, Peking University, Beijing 100871, China\\
$^g$II. Institute of Theoretical Physics, Universit\"{a}t Hamburg, 22761, Hamburg, Germany\\
$^h$Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607, Hamburg, Germany\\
$^i$Key Laboratory of Dark Matter and Space Astronomy, Purple Mountain
Observatory, Chinese Academy of Sciences, Nanjing 210023, China\\
$^j$School of Astronomy and Space Science, University of Science and
Technology of China, Hefei 230026, China\\
$^k$Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112, USA\\
$^l$Department of Physics, Princeton University, Princeton, NJ 08544, USA
}}
\date{\today}
\abstract{
An axion cloud surrounding a supermassive black hole can be naturally produced through the superradiance process. Its existence can be examined via the axion induced birefringence effect, which predicts an oscillation of the electric vector position angle of linearly polarized radiation. Stringent constraints on the existence of the axion in a particular mass window have been obtained based on the recent Event Horizon Telescope measurement of M87$^\star$.
The future Very-Long-Baseline Interferometry (VLBI) observations will be able to measure the vicinity of many supermassive black holes, thus opening the possibility to search for the existence of axions in a wide mass regime.
In this paper, we study how different black hole properties and accretion flows influence the signatures of the axion induced birefringence. We include the impacts of black hole inclination angles, spins, magnetic fields, plasma velocity distributions, and the thickness of the accretion flows.
We pay special attention to characterize the washout effects induced by the finite thickness of the accretion flows and the lensed photons.
Based on this study, we give prospects on how to optimize the axion search using future VLBI observations, such as the next-generation Event Horizon Telescope, to further increase the sensitivity.
}
\section{Introduction}
Taking advantage of the Very Long Baseline Interferometer (VLBI) technology, the Event Horizon Telescope (EHT) opens a new era of probing the physics under extreme conditions near the horizon of a supermassive black hole (SMBH) \cite{Akiyama:2019cqa,Akiyama:2019bqs,Akiyama:2019eap,Akiyama:2019fyp}. This allows us to test general relativity in the strong gravity region around the black hole and to study the accretion flow around it. Beyond constructing the intensity image of the accretion flow of the SMBH M87$^\star$, the EHT recently performed a polarimetric measurement of the radiation from its vicinity with high spatial resolution. From the astrophysical point of view, this helps us to understand the magnetic structure of the accretion flow \cite{EHTP,EHTM}.
Besides the applications to study astrophysics, such horizon-scale measurements also provide us opportunities to test particle physics, especially ultralight bosons. With a proper mass, ultralight bosonic particles can be spontaneously accumulated around a Kerr black hole through the superradiance mechanism \cite{Penrose:1971uk,ZS,Press:1972zz,Damour:1976kh,Zouros:1979iw,Detweiler:1980uk,Strafuss:2004qc,Dolan:2007mj,Brito:2015oca}.
Among various choices of ultralight bosons beyond the standard model, the most promising candidate is the QCD axion or axion-like particles \cite{Peccei:1977hh,Peccei:1977ur,Weinberg:1977ma,Wilczek:1977pj}. They generically appear in theories with extra dimensions \cite{Arvanitaki:2009fg}, and they can be good cold dark matter candidates \cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah}. In \cite{Chen:2019fsq}, the axion induced birefringence effect \cite{Carroll:1989vb,Harari:1992ea} was proposed as a probe of the existence of an axion cloud around a SMBH. If it exists, the coherently oscillating axion field will lead to a periodic variation of the electric vector position angles (EVPAs) of linearly polarized radiation from the accretion flow. Based on this theoretical proposal, the signatures of the axion cloud were further investigated using the recent EHT polarimetric measurement of M87$^\star$ \cite{Chen:2021lvo}, and stringent constraints on the axion parameter space were achieved.
The future VLBI observations, such as the next-generation EHT (ngEHT) \cite{Raymond_2021,Lngeht} and space VLBI \cite{Gurvits:2022wgm}, with more observed frequencies and potentially longer baselines in space, can further increase the spatial resolution and perform detailed measurements on the horizons of a large number of SMBHs \cite{Pesce:2021adg}. Since the axion cloud can only be produced when the axion Compton wavelength is comparable to the black hole size, by observing black holes with various masses, the future {VLBI experiments} open the opportunity to study the existence of the axion in a large mass regime.
Given the potential information on a large landscape of SMBHs with various properties, such as spins, inclination angles, and types of accretion flows, it is necessary to construct the foundation of the axion search at future VLBIs. In this paper, we perform a comprehensive study of the polarimetric signals caused by the axion induced birefringence for various properties of SMBHs.
The layout of the paper is as follows. In Sec.\,\ref{ASMBH}, we review the production of the axion cloud around a Kerr black hole through the superradiance mechanism. In Sec.\,\ref{BRT}, we review the axion-induced birefringence in a curved space-time. We show how to embed the axion-photon coupling into the polarized radiative transfer equations. In Sec.\,\ref{TDRT}, we focus on the thin accretion disk model. With different choices on the inclination angle and the spin of the black hole, we show how ray tracing influences the birefringence signals from the axion cloud. We further define a new observable, the Fourier decomposition of the differential EVPA along the azimuthal direction, that can be generally applied to nearly face-on black holes, such as M87$^\star$.
In Sec.\,\ref{RW}, we consider more realistic accretion disk models, characterized by Radiative Inefficient Accretion Flows (RIAFs). Particularly, we study two washout effects in our signal, the sum of the linear polarization along line of sight through the accretion flow, and the incoherent sum from the lensed photons.
Finally we present the prospects for the future axion search in Sec.\,\ref{Pn}.
{Throughout this study, we work in units where $G = \hbar = c = 1$, and adopt the metric convention $(-,+,+,+)$}.
\section{Axion Cloud from Black Hole Superradiance}\label{ASMBH}
According to the superradiance mechanism, a rapidly spinning black hole can generate an exponentially growing axion cloud, when the axion's reduced Compton wavelength is comparable to the gravitational radius of a Kerr black hole ~\cite{Penrose:1971uk,ZS,Press:1972zz,Damour:1976kh,Zouros:1979iw,Detweiler:1980uk,Strafuss:2004qc,Dolan:2007mj}, for a review see~\cite{Brito:2015oca}.
The reduced Compton wavelength is related to the axion mass as $\lambda_c \equiv 1/\mu$, and the gravitational radius is determined by the black hole mass as $r_g \equiv M$. Specifically, ignoring the axion self-interaction, the Klein–Gordon equation of the axion field in curved spacetime takes the form
\begin{equation}
(\nabla^\mu \nabla_\mu - \mu^2) a = 0.
\label{kge}
\end{equation}
In the following discussion, we take the covariant derivative $\nabla_{\mu}$ in terms of the Kerr metric of rotating black holes, with the mass $M$ and the angular momentum $J$ in Boyer-Lindquist (BL) coordinates $x^{\mu}=[t, r, \theta, \phi]$.
Under the Kerr background, the variables in the solution of Eq.\,(\ref{kge}) are separable \cite{Brill:1972xj,Carter:1968ks}, and we take the ansatz as
\begin{equation}
a(t, \mathbf{r})=e^{-i\omega t+im\phi}R_{nlm}(r)S_{lm}(\theta),
\end{equation}
where $R_{nlm}(r)$ is the radial function and $S_{lm}(\theta)$ is the spheroidal harmonic, which reduces to the spherical harmonic $Y_{lm}$ in the non-rotating limit of the black hole or the non-relativistic limit of the axion cloud. In addition, $\omega_{nlm}$ is the eigen-frequency of the corresponding eigenstate, and the numbers $n$, $l$, and $m$ satisfy $n \geq l+1$, $l \geq 0$, and $l \geq|m|$. One further imposes an ingoing boundary condition at the Kerr black hole's outer horizon and requires the wavefunction to vanish at infinity. This makes the eigen-frequencies $\omega$ generally take a complex form
\begin{equation}
\omega_{nlm}=\omega_{nlm}^r+i\omega_{nlm}^i.
\end{equation}
We first consider small values of the gravitational fine-structure constant $\alpha \equiv r_g/\lambda_c = M\mu$, satisfying $\alpha \ll 0.1$. In this limit, the real part $\omega^r_{nlm}$ and the imaginary part $\omega_{nlm}^i$ can be written as \cite{Ternov:1978gq,Detweiler:1980uk}
\begin{align}
&\omega_{nlm}^r=\mu\left(1-\frac{\alpha^2}{2n^2}+\mathcal{O}(\alpha^4)\right), \\
&\omega_{nlm}^i\propto\alpha^{4l+5}\left(m\Omega_H-\omega_{nlm}^r\right)\left(1+\mathcal{O}(\alpha)\right).
\label{omegai}
\end{align}
The dependence on the numbers $l$ and $m$ is included in the higher-order terms of $\alpha$, whose expressions can be found in \cite{Baumann:2019eav}. Here $\Omega_{\rm H}\equiv a_J/(2r_+)$, with the radius of the outer horizon $r_+\equiv M+M\sqrt{1-a_J^2}$ and the dimensionless spin $a_J \equiv J/M^2$. When the superradiance condition is met,
\begin{equation}
\Omega_{H} > \frac{\omega^r_{nlm}}{m},
\label{src}
\end{equation}
$\omega_{nlm}^i$ becomes positive. This leads to an exponential growth with the timescale as $\tau_{SR}= 1/\omega_{nlm}^i$.
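A quick numerical check of the superradiance condition is straightforward in these units. The sketch below (function names are illustrative) uses only the leading-order $\omega^r_{nlm}$ above with $r_g = M$, ignoring the $\mathcal{O}(\alpha^4)$ terms and the exact eigenfrequencies:

```python
import numpy as np

def omega_r(mu, M, n):
    """Leading-order real frequency, omega_r = mu (1 - alpha^2 / 2n^2)."""
    alpha = M * mu  # gravitational fine-structure constant
    return mu * (1.0 - alpha**2 / (2.0 * n**2))

def is_superradiant(mu, M, a_J, n=2, m=1):
    """Superradiance condition m * Omega_H > omega_r."""
    r_plus = M * (1.0 + np.sqrt(1.0 - a_J**2))  # outer horizon radius
    Omega_H = a_J / (2.0 * r_plus)              # horizon angular velocity
    return m * Omega_H > omega_r(mu, M, n)

M = 1.0  # geometric units, G = c = 1
print(is_superradiant(mu=0.2, M=M, a_J=0.99))  # alpha = 0.2, fast spin: True
print(is_superradiant(mu=0.2, M=M, a_J=0.50))  # alpha = 0.2, slow spin: False
print(is_superradiant(mu=0.1, M=M, a_J=0.50))  # alpha = 0.1, slow spin: True
```

The last two lines illustrate the statement below that $a_J = 0.5$ still satisfies the condition for $\alpha = 0.1$ but not for larger $\alpha$.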
The radial profile of the axion cloud peaks at
\begin{equation}
r_{{\rm max},n} \approx \Big(\frac{n^2}{2\alpha^2}\Big)r_g ~.
\label{scalerealtion}
\end{equation}
This gives a simple scaling between the peak radius $r_{\rm max}$ and the gravitational fine-structure constant $\alpha$.
As for larger values of $\alpha$, one can perform numerical calculations to obtain the solution of the axion field. According to the numerical study in \cite{Dolan:2007mj}, the state with $n=2, l = 1, m = 1$ has the highest superradiant rate. This is the lowest energy state among the ones that satisfy the superradiance condition. The axion wavefunction of this state peaks at the equatorial plane ($\theta=90^\circ$) of the black hole. In Fig.\,\ref{RWFaf}, the radial function of this state, i.e., $R_{211}(r)$, is displayed for $a_J=0.99$. We emphasize that the axion cloud with $\alpha = 0.4$ peaks close to $r \approx 5\,r_g$ \cite{Chen:2019fsq}. This is in good agreement with the result presented in Eq.\,(\ref{scalerealtion}), which was derived in the limit $\alpha \ll 0.1$.
For a larger angular momentum number $l$, $r_\textrm{max}$ becomes larger, and the axion cloud takes a much longer time to build up according to Eq.\,(\ref{omegai}). Thus, in this study we only focus on the state with $l = m = 1$.
Ignoring the $\alpha$'s higher order terms in Eq.\,(\ref{src}), for a fixed azimuthal mode $m$ and black hole spin $a_J$, the superradiance condition imposes an upper limit on $\alpha$
\be \alpha \lesssim \frac{a_J\ m}{2\ \left(1 + \sqrt{1-a_J^2} \right)}.\label{SRC}\ee
Choosing $m = 1$, $\alpha$ can be at most $0.5$ for an extreme Kerr black hole and $0.25$ for $a_J = 0.8$. Once the superradiance condition is satisfied, the axion cloud profile is only slightly influenced by the value of $a_J$ \cite{Amorim:2019hwp}.
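This bound is easy to evaluate; a short sketch reproducing the two quoted benchmarks:

```python
import numpy as np

def alpha_max(a_J, m=1):
    """Upper bound on alpha from the superradiance condition,
    alpha <= a_J * m / (2 * (1 + sqrt(1 - a_J^2)))."""
    return a_J * m / (2.0 * (1.0 + np.sqrt(1.0 - a_J**2)))

print(alpha_max(1.0))  # 0.5  (extreme Kerr)
print(alpha_max(0.8))  # 0.25
```

The bound tightens quickly at low spin, e.g., $a_J = 0.5$ gives $\alpha \lesssim 0.13$, consistent with the spin requirement discussed below.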
In this study, we focus on the axion mass region satisfying $\alpha>0.1$, so that the superradiant timescale is much shorter than the age of the universe, i.e., shorter than $10^9$ years \cite{Dolan:2007mj}. The black hole spin can be as low as $a_J=0.5$ while still satisfying the superradiance condition for $\alpha=0.1$.
As shown in Eq.\,(\ref{SRC}), the specific range of $\alpha$ which satisfies the superradiance condition is sensitive to the black hole spin $a_J$. Though the spin of M87$^\star$ is still uncertain \cite{EHTP}, Refs. \cite{Tamburini:2019vrf,Feng:2017vba} claim M87$^\star$ to be a nearly extreme Kerr black hole. In this study, we take the black hole spin $a_J$ to be 0.99 and 0.8 as two benchmarks, since these might be good representatives of the M87$^\star$ spin.
Finally, one may question whether the axion cloud produced by the superradiance process is stable. For specific astrophysical systems, the stability of the axion cloud is discussed in \cite{Arvanitaki:2010sy}, where several potential perturbations that may destroy the axion cloud are examined. In particular, the presence of accreting matter and the tidal force from a companion star turn out to be negligible. For the parameter region we are interested in, the metric is always dominated by the SMBH. One may be concerned about the possibility of a merger with another SMBH in the past. However, we mainly focus on axion masses that trigger a relatively short superradiance timescale; even if such a drastic merger happened once, the axion cloud should generically have enough time to build up again afterwards. Thus we neglect this possibility in our study. Lastly, the superradiance can be terminated by the axion self-interaction. Indeed, with the growth of the axion cloud, the axion field value in certain regions of the cloud gets close to its decay constant $f_a$. The axion self-interaction, described by $V(a) = \mu^2 f_a^2 \left( 1 - {\cos [ a/f_a ]} \right)$, leads to a correction to the potential energy. Due to
the nontrivial self-interaction, the axion cloud can enter a violent bosenova or a saturating phase \cite{Yoshino:2012kn,Yoshino:2013ofa,Yoshino:2015nsa,Baryakhtar:2020gao,Omiya:2020vji}. Interestingly, both the numerical simulation \cite{Yoshino:2012kn,Yoshino:2013ofa,Yoshino:2015nsa} and the analytic estimation \cite{Baryakhtar:2020gao} indicate that the maximum of the field value $a_{\rm max}$ remains close to $f_a$ in either case, as long as the nonlinear regime is ever reached.
\section{Axion Induced Birefringence and Radiative Transfer}\label{BRT}
In this section, we first review the birefringence effects induced by the axion-photon coupling, based on the geometric optics approximation. The axion field background leads to modified Maxwell equations with different dispersion relations for the left and right circular polarized photons, which consequently causes a variation in electric vector position angles (EVPAs) of linearly polarized photons. Without medium effects, this birefringence effect is achromatic and topological since the shift of the EVPAs only depends on axion field values at emission and observation points \cite{Carroll:1989vb,Harari:1992ea,Plascencia:2017kca,Ivanov:2018byi,Fujita:2018zaj,Liu:2019brz,Fedderke:2019ajk,Caputo:2019tms,Yuan:2020xui}. This property also holds in curved space-time \cite{Schwarz:2020jjh}.
Further we need to properly characterize the axion induced effects when photons propagate in the medium. We show that they can be properly taken into account by a simple modification of the Faraday rotation terms in the covariant radiative transfer equations.
Such additional terms are proportional to the gradient of axion field along the geodesics, which can be easily included into a numerical radiative transfer code like \texttt{IPOLE} \cite{Moscibrodzka:2017lcu, Noble:2007zx}.
\subsection {Axion-Photon Coupling and Birefringence}
\label{APCB}
We start with the photon propagation in a curved space-time with the axion background field, without including medium effects. In this case, the Lagrangian can be written as
\begin{align}
\mathcal{L}=&-\frac{1}{4} F_{\mu \nu} F^{\mu \nu}-\frac{1}{2} g_{a\gamma\gamma} a F_{\mu \nu} \tilde{F}^{\mu \nu}+\frac{1}{2} \nabla^{\mu} a \nabla_{\mu} a-V(a).
\label{overall_la}
\end{align}
Here {$g_{a\gamma\gamma}$ is the axion-photon coupling constant (not to be confused with the spacetime metric tensor)}, $\tilde{F}^{\mu\nu}=\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}/2$ is the dual tensor of the electromagnetic field strength tensor, and $V(a)$ is the axion potential. In the Lorenz gauge $\nabla_\mu A^{\mu}=0$, the equation of motion for the electromagnetic field is
\begin{align}
\nabla_\mu \nabla^{\mu}A^{\nu}-R_{\nu}{}^{\mu}A_{\mu} = -g_{a\gamma\gamma}(\nabla_\mu a)\tilde F^{\mu\nu}.
\label{ame}
\end{align}
With a good accuracy, we follow \cite{Schwarz:2020jjh} and apply the geometric optics approximation, which is valid for photons with frequency much larger than the variation scale of the background metric and the axion field. This allows us to take the ansatz
\begin{align}
A_{\mu}(x)=\bar{A}_{\mu}(x)\exp\left(\frac{i}{\epsilon}S(x)\right),
\label{geoticex}
\end{align}
with the four dimensional wave-vector $k_\mu$ identified as
\begin{align}
k_{\mu}\equiv\frac{1}{\epsilon}\partial_{\mu}S(x).
\end{align}
We take $\epsilon$ as a small number characterizing the geometric optics approximation. Our following calculations will be based on the perturbation on $\epsilon$.
By substituting Eq.\,(\ref{geoticex}) into the Eq.\,(\ref{ame}), we find that the leading order term, i.e., $\mathcal{O}$($1/\epsilon^2$), gives
\begin{align}
k^{\mu}k_{\mu}=0.
\label{sdr}
\end{align}
We require this condition to hold along the path of photons. It indicates that the derivative of $k^{\mu}k_{\mu}$ with respect to the affine parameter equals to zero. This gives $k^{\mu}\nabla_{\mu}k_{\alpha}=0$, which means that photons follow null geodesics.
The next order, i.e., $\mathcal{O}$($1/\epsilon$), expansion in Eq.\,(\ref{ame}) gives
\begin{align}
k^{\mu}\nabla_{\mu}\bar{A}^{\nu}+\frac12\bar{A}^{\nu}\nabla_{\mu}k^{\mu}+g_{a\gamma\gamma}\epsilon^{\mu\nu\rho\sigma}\bar{A}_{\sigma}k_{\rho}\nabla_{\mu}a=0.
\label{ptewos}
\end{align}
The Lorenz gauge $\nabla_{\mu}A^{\mu}=0$ under the geometrical optics approximation becomes $\bar{A}^{\mu}k_{\mu}=0$.
To further simplify the calculation, we introduce the normalised space-like polarization vector $\xi^{\mu}$, and the vector potential can be written as $ \bar{A}^{\mu}=\bar{A} \xi^{\mu}$. The polarization vector satisfies $\xi^{\mu}\xi_{\mu}^*=1$ and $\xi_{\mu}k^{\mu}=0$. In this case, Eq.\,(\ref{ptewos}) can be decomposed into equations of motion for the amplitude $\bar{A}$ and the polarization vector $\xi^{\mu}$ \cite{Schwarz:2020jjh} respectively,
\ba
k^{\mu}\nabla_{\mu}\bar{A}+\frac12\bar{A}\nabla_{\mu}k^{\mu} &=&0,\label{eomI}\\
k^{\mu}\nabla_{\mu}\xi^{\sigma}+g_{a\gamma\gamma}\epsilon^{\mu\nu\rho\sigma}k_{\mu}\xi_{\nu}\nabla_{\rho}a &=& 0.\label{eomxi}
\ea
The equation of motion for $\bar{A}$, i.e., Eq.\,(\ref{eomI}), does not contain the axion field. This means that the axion field does not affect the observed intensity of the light. The first term in Eq.\,(\ref{eomxi}) describes the parallel transport of the polarization vector $\xi^{\mu}$. The second term contains the axion effect, which is the birefringent effect that we are focused on.
In order to see the evolution of the polarization direction, one needs to project Eq.\,(\ref{eomxi}) to the reference frame of an observer. Such a reference frame can be properly characterized by an orthonormal basis of vectors $e^{\mu}_{(a)}$. These base vectors satisfy $e^{\mu}_{(a)} e_{\mu (b)} = \eta_{(a)(b)}$, where $a$ or $b = 0,1,2,3$. Particularly, $e^{\mu}_{(0)}$ is the time-like 4-velocity of the observer, which will be specified later, and $e_{(3)}^{\mu}\equiv(k^{\mu}-\omega e^{\mu}_{(0)})/\omega$ is a space-like vector with $\omega\equiv-k_{\mu} e^{\mu}_{(0)}$. Furthermore, $e^{\mu}_{(1)}$ and $e^{\mu}_{(2)}$ are space-like vectors which span the transverse plane orthogonal to both $e^{\mu}_{(0)}$ and $e^{\mu}_{(3)}$. The residual gauge freedom allows us to set $\xi^{\mu}e^{(0)}_{\mu} = \xi^{\mu}e^{(3)}_{\mu}= 0$ and thus $|\xi^{(1)}|^2 + |\xi^{(2)}|^2 = 1$.
By parallel transporting the basis $e^{\mu}_{(a)}$ with the condition $k^{\mu}\nabla_{\mu}e^{\nu}_{(a)}=0$, we project Eq.\,(\ref{eomxi}) into the vector fields $e^{\mu}_{(a)}$ and obtain
\be
\partial_s\xi^{(j)}+g_{a\gamma\gamma}\partial_s a\epsilon^{(0)(i)(3)(j)}\xi_{(i)}=0.
\label{Aeq}
\ee
Here $s$ is the affine parameter of the photon trajectory, and $i$ or $j$ takes a value of $1$ or $2$. Writing the polarization vectors in the basis of circular polarization, we have $\xi_{L,R} \equiv \xi_{(1)}\pm i \xi_{(2)}$. The Eq.\,(\ref{Aeq}) can be easily solved as
\be
\xi_{L,R}(x^{\mu}_o)=\exp \left(\pm i\Delta\chi\right)\ \xi_{L,R}(x^{\mu}_e), \label{bi-eff}
\ee
where $\Delta\chi\equiv g_{a\gamma\gamma}\left[a(x^{\mu}_o)-a(x^{\mu}_e)\right]$. It only depends on the difference of the axion field values at the emission and the observation points, i.e., $x^{\mu}_e$ and $x^{\mu}_o$, respectively \cite{Carroll:1989vb,Harari:1992ea,Plascencia:2017kca,Ivanov:2018byi,Fujita:2018zaj,Liu:2019brz,Fedderke:2019ajk,Caputo:2019tms,Chen:2019fsq,Yuan:2020xui,Schwarz:2020jjh}. The linear polarization is a superposition of left and right circular polarization, thus $\Delta\chi$ represents the shift of EVPA for the linear polarization.
Interestingly, the ordinary birefringence, i.e., the Faraday rotation in the plasma with a magnetic field, has a nontrivial frequency dependence. On the other hand, the axion-induced birefringence is achromatic, as long as the geometric optics approximation is valid.
\subsection{Radiative Transfer}
The photon propagation nearby a SMBH is properly described by the covariant radiative transfer equations where both the plasma and the curved space-time are taken into account. In this subsection, we follow the formalism developed in \cite{Gammie_2012} and demonstrate how to include the axion effects. Comparing with the photon propagation equation without the medium, i.e., Eq.\,(\ref{ptewos}), the plasma effect leads to additional terms and the corresponding equation can written as
\be
2 i k^{\mu} \nabla_\mu \bar{A}^{\nu} + i \bar{A}^{\nu} \nabla_{\mu} k^{\mu} + 2i g_{a\gamma\gamma} (\nabla_\mu a)\epsilon^{\mu\nu\alpha\beta}k_{\alpha} \bar{A}_{\beta} = \Pi^{\nu}_{\mu} \bar{A}^{\mu} + j^{\nu}.
\label{te}
\ee
Here the first term on the right hand side is the induced current from plasma, with $\Pi^{\sigma}_{\mu}$ being the linear response tensor.
Further $j^{\nu}$ is the external current describing the plasma emission.
To describe the propagation of the incoherent superposition of a large number of electromagnetic waves, one introduces macroscopic polarization tensor $N^{\mu\nu}$ \cite{Gammie_2012}
\begin{align}
N^{\mu \nu} \equiv\left\langle \bar{A}^{\mu} \bar{A}^{* \nu}\right\rangle,\label{Nmn}
\end{align}
where $\left\langle\cdots\right\rangle$ denotes ensemble average. Using Eq.\,(\ref{Nmn}), one can rewrite Eq.\,(\ref{te}) into a compact form
\begin{align}
k^{\mu} \nabla_{\mu} N^{\alpha \beta}=J^{\alpha \beta}+\tilde{H}^{\alpha \beta \kappa \lambda} N_{\kappa \lambda}.
\label{mrt}
\end{align}
Here $\tilde{H}^{\alpha \beta \kappa \lambda}$ is defined as
\begin{align}
\tilde{H}^{\alpha \beta \kappa \lambda} \equiv- i\left(g^{\beta \lambda} \tilde{\Pi}^{\alpha \kappa}-g^{\alpha \kappa} \tilde{\Pi}^{* \beta \lambda}\right),
\label{Pi1}
\end{align}
and the modified response tensor $\tilde{\Pi}^{\nu \mu}$ is
\begin{align}
\tilde{\Pi}^{\nu \mu} \equiv \Pi^{\nu \mu}-2i g_{a\gamma\gamma}\left(\partial_{\lambda} a\right) \epsilon^{\lambda \nu \rho \mu} k_{\rho}.
\label{Pi2}
\end{align}
The emissivity tensor $J^{\alpha \beta}$ is related to the external current as
\be
J^{\alpha \beta}=-i\left(\langle j^{\alpha}\bar{A}^{\beta *}\rangle-\langle j^{\beta *}\bar{A}^{\alpha}\rangle\right).
\ee
In addition to the axion-photon coupling , $\tilde{H}^{\alpha \beta \kappa \lambda}$
in Eq.\,(\ref{mrt}) contains various plasma effects, whose coefficients can be calculated conveniently if one chooses a comoving Cartesian frame with respect to the plasma. In such a frame, $e^{\mu}_{(0)}$ points along the plasma 4-velocity, and we further choose the other three base vectors as the same way described in Sec.\,\ref{APCB}. We now project Eq.\,(\ref{mrt}) using these base vectors, after applying the parallel transport condition $k^{\alpha} \nabla_{\alpha} e_{(a)}^{\nu}=0$, we obtain
\begin{align}
\frac{d N^{(a)(b)}}{d s}=J^{(a)(b)}+\tilde{H}^{(a)(b)(c)(d)} N_{(c)(d)},
\label{prcoratr}
\end{align}
with
\begin{align}
\tilde{H}^{(a)(b)(c)(d)} = H^{(a)(b)(c)(d)} - g_{a\gamma\gamma} k_{(f)}\nabla_{(e)} a\Big[\eta^{(b)(d)} \epsilon^{(e)(a)(f)(c)}+\eta^{(a)(c)} \epsilon^{(e)(b)(f)(d)}\Big].
\label{tildeH}
\end{align}
Here $\tilde H^{(a)(b)(c)(d)}$ contains the ordinary plasma effect, i.e., $H^{(a)(b)(c)(d)}$, and the axion contribution. In this local tangent space, the total intensity and the polarization intensities can be parameterized by $4$ Stokes parameters as
\be I^S \equiv m^S_{(a)(b)} N^{(a)(b)},\ee
where $I^S \equiv (I,Q,U,V)$ are locally Lorentz invariant Stokes parameters. They contain the total intensity $I$, the linear polarization intensities at two different directions, $Q$ and $U$, and circular polarization intensity $V$, respectively. The projection matrix $m^S_{(a)(b)}$ is defined as
\begin{align}
m^{I}\equiv\left(\begin{array}{llll}0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0\end{array}\right),\ \qquad m^{Q}\equiv\left(\begin{array}{cccc}0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0\end{array}\right), \nn\\
m^{U}\equiv\left(\begin{array}{llll}0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0\end{array}\right),\ \qquad m^{V}\equiv\left(\begin{array}{cccc}0 & 0 & 0 & 0 \\ 0 & 0 & -i & 0 \\ 0 & i & 0 & 0 \\ 0 & 0 & 0 & 0\end{array}\right).
\end{align}
Similarly, the four Stokes emissivities $j^S \equiv (j_I,j_Q,j_U,j_V)$ are obtained through
\be j^S \equiv m^S_{(a)(b)} J^{(a)(b)}.\ee
Contracting $\tilde{H}^{(a)(b)(c)(d)}$ with the projection matrices, we define
\begin{align}
M^{S T} \equiv -\frac{1}{2} m_{(a)(b)}^{S*} \tilde{H}^{(a)(b)(c)(d)} m_{(c)(d)}^T.
\label{Minverse}
\end{align}
Splitting the contributions from the plasma effects and the axion-photon coupling, Eq.\,(\ref{Minverse}) can be decomposed as
\be M^{S T}=M_{\rm plasma}^{S T}+M_{\rm axion}^{S T}, \ee
where the first term is exactly the Muller Matrix in the ordinary radiative transfer equations,
\begin{align}
M_{\rm plasma}^{S T} \equiv\left(\begin{array}{cccc}\alpha_{I} & \alpha_{Q} & \alpha_{U} & \alpha_{V} \\ \alpha_{Q} & \alpha_{I} & \rho_{V} & -\rho_{U} \\ \alpha_{U} & -\rho_{V} & \alpha_{I} & \rho_{Q} \\ \alpha_{V} & \rho_{U} & -\rho_{Q} & \alpha_{I}\end{array}\right).
\label{Mueller Matrix}
\end{align}
Here $\alpha_I$, $\alpha_Q$, $\alpha_U$, $\alpha_V$ are the absorption coefficients, while $\rho_V$, $\rho_U$, $\rho_Q$ are the Faraday rotation and conversion coefficients.
{For example, in \texttt{IPOLE} \cite{Moscibrodzka:2017lcu, Noble:2007zx}, the Stokes $U$ is taken to align with the magnetic field so that $j_U = \alpha_U = \rho_U = 0$.}
Further, the axion contribution is simply characterized as
\begin{align}
M_{\rm axion}^{S T}=
\left(\begin{array}{cccc}0 & 0 & 0 & 0 \\ 0 & 0 & -2g_{a\gamma\gamma} \frac{d a}{d s} & 0 \\ 0 & 2g_{a\gamma\gamma} \frac{d a}{d s} & 0 & 0 \\ 0 & 0 & 0 & 0\end{array}\right).
\end{align}
Therefore the modified radiative transfer equation in a local tangent space can be written as
\be
\frac{d}{d s}\left(\begin{array}{l}I \\ Q \\ U \\ V\end{array}\right)=\left(\begin{array}{l}j_{I} \\ j_{Q} \\ j_{U} \\ j_{V}\end{array}\right)-\left(\begin{array}{llll}\alpha_{I} & \alpha_{Q} & \alpha_{U} & \alpha_{V} \\ \alpha_{Q} & \alpha_{I} & \rho^\prime_{V} & \rho_{U} \\ \alpha_{U} & -\rho^\prime_{V} & \alpha_{I} & \rho_{Q} \\ \alpha_{V} & -\rho_{U} & -\rho_{Q} & \alpha_{I}\end{array}\right)\left(\begin{array}{l}I \\ Q \\ U \\ V\end{array}\right),
\label{finalrte}
\ee
with
\be\label{ipole-mod} \rho^\prime_{V} = \rho_{V} - 2g_{a\gamma\gamma} \frac{d a}{d s}.\ee
The axion contributions are simply included by a change in the ordinary Faraday rotation coefficient.
It is clear from Eq.\,(\ref{finalrte}) that $\rho^\prime_V$ plays the role of changing the EVPA, defined as
\begin{align}
\chi \equiv \frac{1}{2} \arg (Q+i U).
\end{align}
In the absence of the emissivities and the plasma effects, Eq.\,(\ref{finalrte})
leads to consistent results as in Eq.\,(\ref{bi-eff}).
\section{Birefringence from Axion Cloud -- Thin Disk and Ray Tracing}\label{TDRT}
The following sections study the birefringent signals induced from the superradiant axion cloud accumulated around the SMBH, with various astrophysical conditions. We only consider the cases in which the radiations are emitted from the accretion flow, rather than the jet.
We first consider the geometrically thin and optically thick disk. Then we will further discuss the RIAF models, whose geometric thickness is an input parameter. Both cases are expected to be explored at horizon scale by the future VLBI observations \cite{Raymond_2021,Lngeht,Gurvits:2022wgm}.
For the thin disk, after photons are emitted, they propagate in the vacuum without the plasma. Consequently, the EVPA shift of the linear polarized photon can be simply described by Eq.\,(\ref{bi-eff}). For the frequency regime that we consider here, a geometrically thin disk is opticially thick, thus the contribution from lensed photons can be safely ignored. For each point on the sky plane, we can trace back along the line of sight, and the emission only comes from the point of its first intersection with the disk.
Neglecting the axion field value near the Earth, the EVPA shift $\Delta\chi$ in Eq.\,(\ref{bi-eff}) becomes
\ba
\Delta \chi (t, \rho, \varphi) = - \frac{b \ c\ R_{211} (r_E) \cos{ [\omega t_E - m \phi_E]}} {2 \pi R_{211} (r_{\textrm{max}})}.\label{deltachi}
\ea
The ratio $R_{211} (r_E) / R_{211} (r_{\textrm{max}})$ is shown in Fig.\,\ref{RWFaf}.
The peak value of the axion cloud is parametrized by
\be b \equiv \frac{a_{\textrm{max}}}{f_a},\ee which can be $\mathcal{O} (1)$ \cite{Yoshino:2012kn,Yoshino:2013ofa,Yoshino:2015nsa,Baryakhtar:2020gao} as mentioned above. $f_a$ is required to be below $10^{16}$ GeV so that the extraction of black bole rotation energy is negligible, thus complementary to black hole spin measurements \cite{Arvanitaki:2010sy,Arvanitaki:2014wva,Brito:2014wla,Davoudiasl:2019nlo,Stott:2020gjj,Unal:2020jiy,Saha:2022hcd} and direct shadow observations \cite{Roy:2019esk,Cunha:2019ikd,Creci:2020mfg,Roy:2021uye,Chen:2022nbb}.
One also defines the fundamental constant
\be c \equiv 2\pi g_{a\gamma\gamma} f_a,\label{defC}\ee
that translates the axion-photon coupling $g_{a\gamma\gamma}$ to a dimensionless quantity in the unit of the decay constant $f_a$. Here $c$ is the fundamental constant that we aim to constrain in our study \cite{Chen:2019fsq,Chen:2021lvo}.
In Eq.\,(\ref{deltachi}), there are two sets of coordinates on the two sides of the equation. First, $(t, \rho, \varphi)$ are the time of observation and the polar coordinates on the sky plane. Further, $(t_E, r_E, \phi_E)$ label the emission time and the polar coordinates of the black hole equatorial plane. These two sets of coordinates are related to each other by ray tracing, following photons' geodesics. Both the inclination angle, $i$, of the Kerr black hole with respect to the sky plane and the magnitude of the black hole spin, $a_J$, have impacts.
The EVPA measurements performed by the EHT are presented as a function of the azimuthal angle on the skype plane \cite{EHTP, EHTM}. Without loss of the generality, we use the following ansatz to parametrize the EVPA variations
\be
\Delta \chi (t, \varphi, \rho) = \mathcal{A} (\varphi, \rho) \cos{ [\omega t \pm \varphi + \delta(\varphi, \rho)]}.
\label{ansatz}
\ee
Here $\mathcal{A}$ and $\delta$ characterize the amplitude and the relative phase of the EVPA oscillation respectively. The $\pm \varphi$ term in the bracket comes from the angular dependence of the axion cloud since $|m|=1$. The sign is for two possibilities of the black hole spin orientation, either opposite to us ( $i > 90^\circ$ ) or towards us ( $i < 90^\circ$ ) respectively. In the following analysis, we normalize the amplitude $\mathcal{A}$ in terms of the maximum value of the axion field
\be g_{a\gamma\gamma} a_{\textrm{max}} \equiv \frac{b\,c}{2\pi},\ee
according to Eq.\,(\ref{deltachi}).
We note that Eq.\,(\ref{ansatz}) has not taken into account of the intrinsic variations of the accretion flows.
To reduce the nontrivial uncertainties from the time-dependent astrophysical background, we introduced differential EVPAs in the time domain \cite{Chen:2021lvo}. In this case, we extract the axion signal by comparing the EVPA observations at two different times $t_i$ and $t_j$
\be \Delta \chi (t_i, \varphi, \rho) - \Delta \chi (t_j, \varphi, \rho) = 2 \sin{[\omega t_{\textrm{int}}/2]}\,
\mathcal{A} (\varphi, \rho)\, \sin{ \left[\omega (t_i+t_j)/2 \pm \varphi + \delta(\varphi, \rho)\right]},\ee
where the interval time between two sequential observations is $t_{\textrm{int}} \equiv t_j - t_i$.
As far as $t_{\textrm{int}}$ is shorter than the timescale of the accretion flow dynamics, the astrophysical uncertainties can be suppressed.
On the other hand, one pays the price for the suppression factor $2 \sin{[\omega t_{\textrm{int}}/2]}$ if the axion oscillation period is much longer than $t_{\textrm{int}}$.
More details about the optimized analysis method will be given in the later discussion.
\subsection{Ray Tracing from Novikov Thorne Thin Disk}
In this subsection, we start from the thin disk model to study the properties of the axion-induced birefringence signals, with different choices on the inclination angle $i$ and the black hole spin $a_J$. Shakura and Sunyaev first developed this kind of model in \cite{Shakura:1972te}, and later it was generalized to a fully general relativistic version by Novikov and Thorne \cite{Novikov:1973kta} (NT model). The NT model is an axisymmetric and stationary solution, with an optically thick and geometrically thin disk on the equatorial plane. All photons we receive come directly from it without the contribution of lensed photons, and one can safely neglect the thickness of the accretion disk. The fluid in the disk has a nearly Keplerian orbit.
The polarization of the radiation in this model is calculated from the electron scatterings in a semi-infinite atmosphere \cite{RTChandra}.
The spectrum of the thin disk is approximately thermal. The geometrically thin disk model can be applicable for some classes of active galactic nuclei (AGN) with mass accretion rate being nearly Eddington mass accretion rate such as quasars, which future VLBI observations have the potential to measure.
We substitute the axion cloud induced birefringence contribution, i.e., given in Eq. (\ref{ipole-mod}), into the radiative transfer code \texttt{IPOLE} \cite{Moscibrodzka:2017lcu, Noble:2007zx} and we specify the NT model as the emission source around the SMBH. The birefringence signals, with various choices of the inclination angle $i$ and the black spin $a_J$, are shown in Fig.\,\ref{inclination_a}.
\clearpage
\clearpage
On the left panel, each plot contains an intensity map. On top of it, the quivers with different colors provide the information about the linear polarization. The length of each quiver is proportional to the intensity of the linear polarization $I_L = \sqrt{Q^2 + U^2}$, and the direction represents the EVPA. The white quiver lines show the EVPA without the axion. One oscillation period of the axion cloud is equally divided into eight segments, and the color of each quiver, from red to purple in the rainbow order, represents the time evolution. As expected, the birefringence signals from the axion cloud behave as a propagating wave along the azimuthal angle $\varphi$ of the sky plane.
On the right panel, we use the ansatz in Eq. (\ref{ansatz}) to fit the relative phase $\delta$ and the amplitude $\mathcal{A}$ of the EVPA oscillation along the $\varphi$ direction. We choose the radial coordinate as $\rho = 5\,r_g, 7\,r_g$ and $10\,r_g$ respectively, for black hole spin $a_J = 0.99$ and $0.8$. The axion mass is taken to satisfy $\alpha = 0.25$.
\subsubsection{Relative Phase of Azimuthal EVPA Oscillation}
The relative phase $\delta(\varphi, \rho)$ can be precisely obtained by reading out the numerical results from \texttt{IPOLE}. However, in the almost face-on scenarios, i.e., $i\simeq 0^\circ$ or $180^\circ$, this may be calculated analytically with a good approximation. In Fig.\,\ref{example}, we show the trajectories of photons from the emission point A on the accretion disk. We assume this point is relatively far from the black hole horizon, so that the frame dragging effects are not important. Under these assumptions, the relative phase $\delta(\varphi, \rho)$ can be written as
\be
\delta(\varphi,\rho) \approx \alpha\ \tan{i}\ \cos{\varphi}\ \rho/r_g \label{phasedelay}.
\ee
This relation can be understood as the time delay of the equatorial plane emission, induced by the inclination angle $i$ \cite{Chen:2021lvo}.
More explicitly, the time delay caused by the travel distance for the light from point $A$ can be approximated as $A'C'$ \cite{Loktev:2021nhk}.
On the other hand, the edge-on scenario with $i = 90^\circ$ is subtle. The observer is located on the equatorial plane. Such a plane is singular in the NT model due to the assumption of the infinitely thin disk. Consequently, the emission from the edge of the accretion disk, which propagates on the equatorial plane, is not included artificially. When the frame dragging effects are negligible, there is no $\varphi$-dependence in the phase component of Eq.\,(\ref{ansatz}) due to the rotation symmetry on the sky plane. This is consistent with the $\delta(\varphi)$ result shown in last pair of Fig.\,\ref{inclination_a}, where the $\varphi$ dependence in $\delta(\varphi)$ approximately cancels the $\varphi$ term in cosine function in Eq.\,(\ref{ansatz}).
\subsubsection{Amplitude of Azimuthal EVPA Oscillation}
We next turn to the amplitude $\mathcal{A}$ of the EVPA oscillation. This quantity is determined by the axion field value at the emission point, shown in Fig.\,\ref{RWFaf}. Under the thin disk approximation, we simply need a map from the sky plane coordinate $(\rho, \varphi)$ to the equatorial plane coordinate $(r_E, \phi_E)$.
For the face-on case with ${i}=0^\circ$ or $180^\circ$, the amplitude $\mathcal{A}$ only depends on the $\rho-r_E$ mapping. This is consistent with the results shown in the first row of Fig.\,\ref{inclination_a}. Particularly, the amplitude at a fixed radius has no $\varphi$ dependence.
For general cases, such as ${i}=30^{\circ}$ and $60^{\circ}$, the rotation symmetry on the sky plane is broken.
However the curve of $\mathcal{A}(\varphi)$ still preserves an approximate reflection symmetry with respect to $\varphi=\pi$. Such a feature can be understood using ray tracing that connects the equatorial plane and the sky plane through geodesics. The small violation of the reflection symmetry is caused by the frame dragging under the Kerr metric.
We calculate the photon geodesics according to the formalism developed in \cite{Gralla:2019ceu,Gelles:2021kti}. This constructs a map between the sky plane coordinate $(\rho, \varphi)$ and the equatorial plane coordinate $(r_E, \phi_E)$. The results are shown in Fig.\,\ref{analytic_A} for black holes with a spin of $a_J=0.99$ and $0.8$.
We show $r_E$ as a function of $\varphi$ at $\rho = 5\,r_g$ and $10\,r_g$ respectively, with different choices of the inclination angle ${i} = 30^\circ$ and $60^\circ$. Given the properties of the axion radial wave functions presented in Fig.\,\ref{RWFaf}, such a coordinate mapping explains the feature in Fig.\,\ref{inclination_a} nicely.
More explicitly, with ${i} = 60^{\circ}$ and $\rho = 10\,r_g$, the curve of $\mathcal{A}(\varphi)$ contains a double peak feature. This is caused by the geometric projection from a circle on the sky plane to an ellipse on the equatorial. As $\rho$ becomes smaller, the gravitational bending of a photon trajectory plays a more critical role. In this case, the mapping between two sets of coordinates becomes more subtle, and the approximation of the reflection symmetry with respect to $\varphi=\pi$ becomes worse.
Particularly, for a given $\rho$, the photon emitted at the point $A$ with the sky plane angle $\varphi$ in Fig.\,\ref{example} experiences more gravitational bending than the photon from the opposite point with $\varphi-\pi$. Consequently, although both the photons from these two points reach the circle with the same $\rho$ on the sky plane, the point $A$ is more close to the black hole horizon. For the benchmark we consider here, this translates to a smaller value of the axion field, according to the radial wave function shown in Fig.\,\ref{RWFaf}. The explains the difference of $\mathcal{A}(\varphi)$ at $\varphi=0^\circ$ and $180^\circ$. Again, we emphasize that a slight asymmetry with respect to $\varphi=\pi$ is caused by the black hole spin.
On the right panel of Fig.\,\ref{inclination_a}, we show the results for both $a_J=0.99$ and $a_J=0.8$. Since the difference between the axion cloud wave functions for these two spin choices is negligible, the main difference of the birefringence signals comes from how spin modifies the geodesics. As shown on the right panel, the larger value of $a_J$ tends to decrease the signal amplitude $\mathcal{A}(\varphi)$. This effect is more pronounced for smaller radius $\rho$ and for $\varphi$ close to $\pi$, which is caused by the fact that photons reaching these regions are emitted at places closer to the black hole, where the black hole spin will have a stronger effect on geodesics.
Now let us consider the edge-on scenario where ${i} = 90^{\circ}$. In the limit of no black hole spin, $\mathcal{A}(\varphi)$ should be a constant, due to the rotation symmetry on the sky plane. The extra features, as shown in the right panel of the last pair in Fig.\,\ref{inclination_a}, are induced by the frame dragging. Such features become weaker when the black hole has a smaller spin or the distance to the black hole is larger.
\subsection{Global Feature: Angular Modes of the Azimuthal EVPA}
Without loss of generality, we focus on the cases with the inclination angle ${i} < 90^\circ$, and the axion cloud occupies the $m=1$ quantum state. In this case, the ansatz of the EVPA shift in Eq.\,(\ref{ansatz}) becomes
\be
\Delta \chi (t, \varphi, \rho) = \frac{\mathcal{A} (\varphi, \rho)}{2} \left( e^{i\omega t} e^{ -i \varphi + i \delta(\varphi, \rho)} + e^{-i \omega t} e^{ i \varphi - i \delta(\varphi, \rho)} \right).
\label{ansatzE}
\ee
The features in the EVPA variation can be nicely captured by performing a Fourier transformation on Eq.\,(\ref{ansatzE}). To demonstrate that, we consider two scenarios. In the first scenario, we assume the observations are long enough to cover the whole period of the axion oscillation. In this case, the time dependence in Eq.\,(\ref{ansatzE}) can be properly extracted and we only need to focus on the angular dependence when we perform the Fourier transform. Let us define $\Delta\chi_{n}^+$ as
\be \Delta\chi_{n}^+ = \frac{1}{4\pi}\int_{0}^{2\pi}\ \mathcal{A} (\varphi, \rho) \ e^{ -i \varphi + i \delta(\varphi, \rho)} \ e^{in\varphi}\ d\varphi. \label{AM}\ee
In Fig.\,\ref{analytic_A_Fourier}, we show the results of $|\Delta\chi_{n}^+|$ for $n = 1, 2, 3$ as a function of the inclination angle ${i}$ in solid lines.
For the face-on case, with a negligible relative phase $\delta(\varphi, \rho)$, only the mode with $n= 1$ is non-zero, as expected. When the inclination angle gradually increases, as shown Eq.\,(\ref{phasedelay}), we have $\delta(\varphi,\rho) \propto \sin {{i}}\ \cos \varphi$ for small $i$. This leads to mixtures among various Fourier modes. Approximately, one can take Eq.\,(\ref{phasedelay}) into Eq.\,(\ref{AM}) and ignore the $\varphi$ dependence in $\mathcal{A} (\varphi)$. This leads to
\begin{equation}
|\Delta\chi_n^+(\rho)| \simeq \frac{1}{2}\ \mathcal{A}(0, \rho)\ J_{n-1}\left(\alpha\ \sin {i} \ \rho/r_g\right),
\label{AF}
\end{equation}
where $J_n(x)$ is the first type Bessel function. Eq.\,(\ref{AF}) gives the dashed lines in Fig.\,\ref{analytic_A_Fourier}, which agree well with the numerical results for nearly face-on cases, e.g., with ${i} < 30^\circ$. We note that, for larger inclination angles ${i} > 30^\circ$, the mixtures among various angular modes become complicated and higher modes are also important. Consequently, the Fourier analysis suggested in Eq.\,(\ref{AM}) becomes less convenient to characterize the axion induced signal. One may simply perform a direct comparison between the ansatz in Eq.\,(\ref{ansatz}) with the data. We also highlight that, for the edge-on case with ${i} \simeq 90^\circ$, the Fourier mode with $n= 0$ is dominant, as consistent with the results shown in Fig.\,\ref{inclination_a}.
\section{Birefringence from Axion Cloud -- RIAF and Washout}\label{RW}
In contrast to the geometrically thin and optically thick disk, such as the one discussed in the previous section, the emission of a RIAF has a larger spatial distribution along the line of sight. Also significant contributions may come from lensed photons that can propagate around the black holes for several times before reaching us \cite{Johannsen:2010ru,
Gralla:2019xty, Johnson:2019ljv, Gralla:2019drh}, thanks to the optically thin disk.
These lensed photons enhance the radiation intensity around $\rho \simeq 5\,r_g$ on the sky plane, forming the observed photon ring feature. Meanwhile, these photons contribute less than 10\% to the total intensity \cite{Johnson:2019ljv}.
The RIAF is usually a good description of a low-luminosity active galactic nucleus (LLAGN), such as Sgr A$^\star$ or M87$^\star$ \cite{Narayan:1996wu,Yuan:2014gma}.
As we will see, the birefringence signals can be influenced by both the geometric thickness of the accretion flow and the lensed photons.
Besides the axion-photon interaction, the polarization state of the photon is significantly influenced by the medium, e.g., through Faraday conversion and rotation. One should therefore use the differential radiative transfer equation, Eq.\,(\ref{finalrte}), to properly describe the axion-induced birefringence.
In this section, we study in detail how the amplitude of the axion-induced EVPA oscillation is influenced in various RIAFs. When the accretion flow is optically thin, photons that reach the Earth at the same time are emitted at different spatial points in the accretion flow and experience different propagation times. Consequently, if there is an axion cloud, the axion oscillation imprints different contributions on the EVPA variation. Adding these contributions together generically suppresses the amplitude of the EVPA oscillation in Eq.\,(\ref{ansatz}).
One should expect a significant washout effect when a sizable fraction of the photons is emitted from a large spatial region along the line of sight, especially when the size of such a region is comparable to the Compton wavelength of the axion, $2\pi \lambda_c$.
This indicates that the washout effect becomes less important for lighter axions, whose Compton wavelengths are longer.
In the following, we discuss two simple cases in which the washout effects can be studied quantitatively. One is constant emission over a continuous, finite length, which mimics the photon emission from the finite thickness of the accretion flow. The other is emission from two widely separated points, which is a good representation of the contribution from lensed photons. The washout effects in realistic accretion flows should be approximately described by a mixture of these two extreme scenarios. For illustration, we provide a schematic diagram in Fig.\,\ref{SBL} demonstrating the two possible origins of the washout effects.
\subsection{Washout From Finite Radiation Length}
For simplicity, we focus on the simple radiative transfer equation with only linearly polarized emissions and axion-induced birefringent terms
\be \frac{d\Big(Q + i\ U\Big)}{ds} = j_Q + i\ j_U - i 2 g_{a\gamma\gamma} \frac{d a}{d s} \Big(Q + i\ U\Big).\label{RTEQU}\ee
Here we neglect the other plasma contributions to the radiative transfer equations since they are not relevant for the washout effects under consideration. The solution to Eq.\,(\ref{RTEQU}) is
\be Q (s_f) + i\ U (s_f) = \int_{s_i}^{s_f} e^{i 2 g_{a\gamma\gamma} \Big(a(s_f) - a(s)\Big)} \Big( j_Q (s) + i\ j_U (s) \Big) ds,\label{QUIaxion}\ee
where $s_i$ and $s_f$ label the initial and final points along the line of sight, respectively. We consider a simplified case in which the linearly polarized emissivities, $j_Q$ and $j_U$, are constant over a finite length, $s_r$, along the line of sight. Without loss of generality, we take $j_U$ to be $0$, so the linearly polarized emission in this case can be written as
\be j_Q^{\textrm{const}} (s) = j_Q^0 \ \Theta \left( \frac{s_r}{2} - |s| \right),\label{jQFL}\ee
where $\Theta$ is the Heaviside step function, so that the emission is non-zero only within the segment, and $s = 0$ corresponds to the middle of the emission segment.
The axion field is taken to be a coherently oscillating background whose amplitude $a_0$ stays constant in the region of emission but approaches zero at the observer's location. In this case, the washout effect on the amplitude $\mathcal{A}$ of Eq.\,(\ref{ansatz}) can be solved explicitly. We show the result in Fig.\,\ref{washout},
as a function of the radiation length $s_r$, normalized to the axion Compton wavelength $2\pi \lambda_c$. As expected, the amplitude approaches zero once $s_r$ becomes comparable to $2\pi\lambda_c$.
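To make the finite-length washout concrete, here is a small numerical sketch (an illustration of the mechanism, not the full figure computation): it averages the axion-induced phase $e^{i\omega s}$, with $\omega = 1/\lambda_c$ and $c=1$, over a constant-emissivity segment, which analytically gives a $|\sin(\pi x)/(\pi x)|$ suppression with $x = s_r/(2\pi\lambda_c)$.

```python
import cmath
import math

def washout_factor(x, samples=4000):
    """Average the axion-induced phase exp(i*omega*s) over a constant-emissivity
    segment (omega = 1/lambda_c, c = 1); x = s_r / (2*pi*lambda_c)."""
    seg = 2.0 * math.pi * x        # segment length in units of lambda_c
    if seg == 0.0:
        return 1.0
    acc = 0.0 + 0.0j
    for k in range(samples):       # midpoint rule over s in [-seg/2, seg/2]
        s = -seg / 2.0 + (k + 0.5) * seg / samples
        acc += cmath.exp(1j * s)
    return abs(acc / samples)      # analytically: |sin(pi*x) / (pi*x)|

print(washout_factor(0.0))            # 1.0 -- point-like source, no washout
print(round(washout_factor(0.5), 3))  # ~ 2/pi: segment of half a Compton wavelength
print(round(washout_factor(1.0), 3))  # ~ 0: full cancellation at s_r = 2*pi*lambda_c
```

The first zero of the suppression at $s_r = 2\pi\lambda_c$ matches the behavior described above.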
In the analytic RIAF model \cite{Pu:2018ute}, there is a dimensionless parameter $H \equiv h/R$, defined as the ratio between the height $h$ and the horizontal scale $R$ of the accretion flow, which characterizes the geometric thickness of the flow. The linearly polarized radiation is proportional to the electron number density, which is exponentially suppressed with respect to the distance from the equatorial plane. For a nearly face-on disk, neglecting the background metric when the emission is far away from the horizon, the emission length in Eq.\,(\ref{jQFL}) can be approximated as
\be s_r \simeq \rho H.\ee
It is reasonable to expect that the washout effect induced by the finite radiation length is not important if the thickness of the accretion flow satisfies
\be \rho H \ll 2\pi \lambda_c.\ee
For example, if we take $\alpha \equiv r_g/\lambda_c = 0.4$ and $\rho \simeq 5\,r_g$, this condition leads to $H \ll 3$, which holds for most kinds of accretion flows. For smaller $\alpha$, the finite-length washout becomes even more negligible.
\subsection{Washout From Lensed Photon}
We next consider the case in which the linearly polarized radiation received on the sky plane comes from two discrete emission points. The spatial separation of these two points leads to a phase difference, arising both from the axion oscillation in time and from the spatial profile of the axion cloud. We assume the linearly polarized emissions at the two points are independent, so the source of emission can be characterized as
\be j_Q (s) + i\ j_U (s) = \sum_{p = 1, 2}\ e^{i 2\chi_p} I_L^p \ \delta (s - s_p).\ee
Here $I_L^p$ is the linear polarization intensity at each emission point, and the EVPA of the emission is $\chi_p$.
Substituting this ansatz into Eq.\,(\ref{QUIaxion}), one gets
\be Q (s_f) + i\ U (s_f) = \sum_{p = 1, 2} e^{- i 2 g_{a\gamma\gamma} a(s_p) + i 2\chi_p} I_L^p.\ee
We take the axion field value at the first point as $a(s_1) = a_0 \cos \left( \omega t \right)$. We further set the axion field amplitude to be the same at these two emission points for simplicity. The axion field at the second emission point can then be parametrized as $a(s_2) = a_0 \cos \left( \omega t + \delta_{12} \right)$, with $\delta_{12}$ being the phase delay between these two points. In this case, the oscillation amplitude of the EVPA can be written as
\be \frac{\mathcal{A}}{g_{a\gamma\gamma} a_0} = \sqrt{\cos^2 \left( \frac{\delta_{12}}{2} \right) + \sin^2 \left( \frac{\delta_{12}}{2} \right) \left| \frac{I_L^1 - I_L^2 e^{i 2 \Delta \chi_{12}}}{I_L^1 + I_L^2 e^{i 2 \Delta \chi_{12}}} \right|^2},\label{EVPAA2E}\ee
with $\Delta \chi_{12} = \chi_2 - \chi_1$.
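Eq.\,(\ref{EVPAA2E}) can be evaluated directly; the snippet below (a sketch with illustrative intensities and phases) confirms the two limits: in-phase emitters suffer no washout, while opposite-phase emitters of equal intensity and aligned EVPAs cancel completely.

```python
import cmath
import math

def two_point_amplitude(il1, il2, delta12, dchi12):
    """EVPA oscillation amplitude A / (g_agg * a0) for two point emitters,
    Eq. (EVPAA2E), with axion phase delay delta12 and EVPA offset dchi12."""
    ratio = (il1 - il2 * cmath.exp(2j * dchi12)) / (il1 + il2 * cmath.exp(2j * dchi12))
    c = math.cos(delta12 / 2.0)
    s = math.sin(delta12 / 2.0)
    return math.sqrt(c * c + s * s * abs(ratio) ** 2)

# In-phase emitters (delta12 = 0): no washout regardless of intensities or EVPAs.
print(two_point_amplitude(1.0, 0.7, 0.0, 0.3))                 # 1.0
# Opposite phase, equal intensity, aligned EVPAs: complete cancellation.
print(round(two_point_amplitude(1.0, 1.0, math.pi, 0.0), 6))   # 0.0
```

Between these limits, unequal intensities or misaligned EVPAs leave a partial residual oscillation, as Eq.\,(\ref{EVPAA2E}) indicates.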
For optically thin RIAFs, some photons are on nearly bound orbits around the SMBH. These photons can propagate around the black hole several times before exiting, and they make a significant contribution to the photon ring observed on the sky plane \cite{Johannsen:2010ru, Gralla:2019xty, Johnson:2019ljv, Gralla:2019drh}. If the emission happens dominantly around the equatorial plane, one obtains a discrete sum of radiation components that differ from each other by the number of times they propagate around the black hole. Since the emission points of both the direct radiation and the lensed photons have comparable radii from the black hole, the axion field values there are comparable. Thus Eq.\,(\ref{EVPAA2E}) serves as a good approximation for studying the EVPA oscillation amplitude on the photon ring.
The relative phase of the axion oscillation $\delta_{12}$ in Eq.\,(\ref{EVPAA2E}) is
\be \delta_{12} = \omega \Delta t - \Delta \phi . \label{d12}\ee
The time delay $\Delta t$ and the azimuthal angle difference $\Delta \phi$ are the critical parameters to characterize the lensed photons, and these quantities can be properly calculated \cite{Gralla:2019drh}.
\subsection{Landscape of Accretion Flows}
Now let us adopt the analytic RIAF \cite{Pu:2018ute} as a benchmark model.
We vary parameters for several aspects, such as the magnetic field structure, velocity distribution, and thickness $H$, in order to see how the birefringence signals are influenced. Three types of magnetic field geometries, including a vertical field, a toroidal field, and a radial field, are considered \cite{EHTM}.
Notice that the EHT observation for M87$^\star$ favors the vertical one \cite{EHTM}.
Velocity distributions are characterized by a Keplerian, a sub-Keplerian, and a free-falling flow, respectively \cite{Pu:2016qak}.
In Fig.\,\ref{figRIAF0.3}, we show several examples of RIAFs. The impacts of the inclination angle and the black hole spin are qualitatively the same as those in the NT thin disk model. For illustration, we fix these quantities at $i = 163^\circ$ and $a_J = 0.99$, motivated by M87$^\star$. We consider the EVPA distribution as a function of the azimuthal angle at a given radius $\rho$ on the sky plane. In addition, motivated by the study in \cite{EHTP}, we also calculate the intensity weighted average (IWA) EVPAs,
\be \langle \chi (\varphi) \rangle \equiv \frac{1}{2} \textrm{arg}\Big{(} \langle Q \times I \rangle + i \langle U \times I \ \rangle \Big{)}.\label{defIEVPA}\ee
Here the IWA region covers the dominant emission on the sky plane \cite{Akiyama:2019bqs}.
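For concreteness, the IWA of Eq.\,(\ref{defIEVPA}) can be implemented in a few lines; this is a sketch over discretized sky-plane pixels with made-up Stokes values, not the actual image pipeline.

```python
import math

def iwa_evpa(stokes_q, stokes_u, stokes_i):
    """Intensity-weighted-average EVPA, Eq. (defIEVPA):
    <chi> = 0.5 * arg(<Q*I> + i <U*I>), averaged over sky-plane pixels."""
    qi = sum(q * i for q, i in zip(stokes_q, stokes_i)) / len(stokes_i)
    ui = sum(u * i for u, i in zip(stokes_u, stokes_i)) / len(stokes_i)
    return 0.5 * math.atan2(ui, qi)

# Pure-Q polarization gives <chi> = 0; pure-U gives <chi> = pi/4.
print(iwa_evpa([1.0, 1.0], [0.0, 0.0], [1.0, 2.0]))            # 0.0
print(round(iwa_evpa([0.0, 0.0], [1.0, 1.0], [1.0, 2.0]), 4))  # 0.7854
```

Weighting by intensity ensures the brightest pixels dominate the averaged EVPA, as in \cite{EHTP}.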
\clearpage
There are several features to be explained. First, let us study the impact of the thickness of the accretion flow. As discussed in \cite{EHTP}, a magnetically arrested disk (MAD) \cite{EHTM} has a strong magnetic field which compresses the thickness parameter of the RIAF, $H$, to $0.05$ in the inner region, and extends it to about $0.3$ in the outer region \cite{Igumenshchev:2003rt, Narayan:2003by, McKinney:2012vh, Tchekhovskoy2015}. We take $H = 0.3$ and $0.05$ for comparison as benchmarks in this study.
As demonstrated in Fig.\,\ref{figRIAF0.3}, the oscillation amplitude, $\mathcal{A}$, for $H = 0.3$ is typically smaller than that for $H = 0.05$ by a simple scaling factor. This is consistent with the washout effect induced by the finite thickness of the accretion flow, discussed previously.
Furthermore, lensed photons also contribute significantly to the washout effects. The washout caused by lensed photons can be reduced by focusing on EVPAs away from the neighbourhood of the black hole, e.g., $\rho \gg 5\,r_g$ for M87$^\star$. In particular, the EVPA variations at $\rho = 10\,r_g$ are shown as the black dashed lines in the right panel of Fig.\,\ref{figRIAF0.3}. On the other hand, when we consider IWA EVPAs, lensed photons may have a substantial impact. In order to disentangle the washout effect of the finite thickness from that of the lensed photons, we artificially remove the lensed photons and recalculate the EVPA variations. This is done by a simple manipulation in \texttt{IPOLE} \cite{Moscibrodzka:2017lcu, Noble:2007zx}. We show the new results in Fig.\,\ref{AEVPA5}.
As we can see, after artificially removing lensed photons, the variations of the IWA EVPA show universal structures in both the relative phase $\delta$ and the amplitude $\mathcal{A}$ for various choices of the accretion flow parameters. This indicates that analyses less dependent on astrophysical modeling can be carried out if future VLBI measurements provide detailed information about the EVPA variations in regions away from the SMBH, where lensed photons are unimportant.
It is also worth mentioning that when we increase the axion Compton wavelength, i.e., decrease $\alpha$ from 0.4 to 0.2, both washout effects become less important. In this case, the amplitudes are mainly determined by the radial wave-function of the axion cloud.
By comparing the $\delta (\varphi)/2\pi$ distributions in Fig.\,\ref{AEVPA5}, we find that most of them are well fit by Eq.\,(\ref{phasedelay}), except for the one with a radial magnetic field. It turns out that this deviation is caused by a significant contribution from lensed photons. In Fig.\,\ref{ILRf}, we compare the linear polarization intensity from lensed photons, $I_L^{\textrm{lp}}$, with the total linear polarization intensity $I_L$. It is clear that, with a radial magnetic field, the lensed photons give a much larger contribution, and they dominate in the region near $\varphi\simeq \pi$, where the largest deviation from Eq.\,(\ref{phasedelay}) appears.
Notice that in more realistic cases, such as the accretion flows described by general relativistic magnetohydrodynamic (GRMHD) simulations, lensed photons are typically less polarized than the ones from direct emissions, due to the magnetic turbulence \cite{Jim_nez_Rosales_2021, Palumbo:2022pzj}. Consequently, our study in this section, based on the analytic RIAF, tends to overestimate the washout effect from lensed photons.
In addition, we also study the effect of the black hole spin $a_J$.
In Fig.\,\ref{AEVPAspin}, we show the comparison of IWA EVPAs with various choices of spins.
We find that, as long as the superradiance can happen, the EVPA variations remain qualitatively the same as the ones of $a_J = 0.99$.
\section{Prospect for future VLBI observations}\label{Pn}
\subsection{Statistics}
\subsubsection{Search for EVPA variations}
In this section, we describe the statistical method for the axion-induced birefringence search. The EVPA data from observations can be parametrized as
\begin{align}
&\chi_{D}=\chi^{\rm astro}_{D}+\chi^a_{D}(\boldsymbol{\vartheta}^a)+n_{D}.\label{chiD}
\end{align}
Here $\chi^{\rm astro}$ is the EVPA variation with an astrophysical origin, and $\chi^{a}(\boldsymbol{\vartheta}^a)$ is the EVPA variation induced by the axion cloud. Further, $\boldsymbol{\vartheta}^a$ represents the axion related parameters, such as its mass and its coupling to photons, and $n_{D}$ is the measurement noise.
The subscript $D$ labels the properties of the observation data, including the time of a measurement, the coordinates on the sky plane and the photon frequency.
For simplicity, we assume the measurement noise follows a Gaussian distribution, thus $n_{D}$ has a probability distribution as
\begin{align}
P(n_{D})= \frac{1}{\sqrt{2\pi}\sigma_{D}}\exp\left[-\frac{1}{2}\frac{n_{D}^2}{\sigma_{D}^2}\right].\label{white_noise}
\end{align}
We note that our following discussion can be easily generalized to include non-diagonal noise correlations.
Given a set of observation data $\chi_D$, the likelihood function can be written as
\begin{equation}
\mathcal{L} \left[ \chi_{D}| {\boldsymbol{\vartheta}}^a; \chi^{\rm astro}_D \right] = \prod_{D} \frac{1}{\sqrt{2\pi}\sigma_{D}}\exp\left[-\frac{\Big(\chi_{D}-\chi^{\rm astro}_D-{\chi}^a_{D}({\boldsymbol{\vartheta}}^a)\Big)^2}{2\sigma_{D}^2}\right].\label{likelihood_original}
\end{equation}
In order to estimate the likelihood, one needs to properly model the astrophysical contribution, $\chi_{D}^{\rm astro}$. The complexity of the accretion flow is the biggest technical obstacle here. Let us now introduce a method to characterize the behavior of $\chi_{D}^{\rm astro}$.
The dynamics of the accretion flow can be modeled by numerical simulations, e.g., based on GRMHD. Many parameters, such as the electron density and temperature, the velocity distribution, and the magnetic field structure and strength, serve as inputs. For a specific SMBH, one can determine the range of these input parameters by comparing the simulation results with observation data, such as the photon ring morphology, the luminosity distribution, etc. We label the input parameters as $\{p_i\}$, and their allowed ranges for describing such a SMBH as $\{\Delta p_i\}$. One can then perform numerical simulations scanning these parameters within the allowed region $\{\Delta p_i\}$. For each choice of $\{p_i\}$, we obtain a distribution of EVPAs, labeled $\chi_{D}^{\rm astro}(\{p_i\})$. This forms an ensemble of EVPAs with various choices of the astrophysical input parameters.
First, let us define the ensemble average of the EVPA for each $D$, which can be written as
\begin{equation}
\chi^0_D=\frac{1}{N_{\rm ens}}\sum_{\{p_i\}}\chi_{D}^{\rm astro}(\{p_i\}),\label{ensemble-ave}
\end{equation}
with $N_{\rm ens}$ as the number of simulations carried out in this ensemble.
In order to characterize the uncertainties from the accretion flow modeling, let us assume that $\chi_{D}^{\rm astro}$ for various choices of $\{p_i\}$ follows a Gaussian distribution. More explicitly, we define
\begin{equation}
M_{DD'} = \frac{1}{N_{\rm ens}}\sum_{\{p_i\}} (\chi_{D}^{\rm astro}(\{p_i\}) - \chi^0_D)(\chi_{D'}^{\rm astro}(\{p_i\}) - \chi^0_{D'}).
\end{equation}
Here we retain the potential correlations in time, space and photon frequency. The Gaussian approximation is exact if two requirements are met. First, the parameters within $\{\Delta p_i\}$ must follow a multivariate normal distribution. Second, $\chi_{D}^{\rm astro}(\{p_i\})$ must respond linearly to all $p_i$ within $\{\Delta p_i\}$, which can be approximately justified by a Taylor expansion. In practice, the validity of the Gaussian approximation should be examined in the GRMHD simulation. If it is not satisfied, a more complicated probability distribution of $\chi_{D}^{\rm astro}(\{p_i\})$ can be introduced numerically, and our analysis method generalizes easily. For now, let us stick with the Gaussian approximation for simplicity.
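The ensemble mean and covariance defined above translate directly into code; this sketch (with a toy two-member ensemble standing in for the GRMHD runs) computes $\chi^0_D$ and $M_{DD'}$ from a list of simulated EVPA maps.

```python
def ensemble_stats(chi_ens):
    """Ensemble mean chi^0_D (Eq. ensemble-ave) and covariance M_DD' from a
    list of simulated EVPA maps, each a list indexed by the data label D."""
    n_ens = len(chi_ens)
    dim = len(chi_ens[0])
    mean = [sum(run[d] for run in chi_ens) / n_ens for d in range(dim)]
    cov = [[sum((run[d] - mean[d]) * (run[e] - mean[e]) for run in chi_ens) / n_ens
            for e in range(dim)] for d in range(dim)]
    return mean, cov

# Toy two-member "ensemble" with two data points D each:
mean, cov = ensemble_stats([[0.1, 0.2], [0.3, 0.4]])
print([round(m, 6) for m in mean])                  # [0.2, 0.3]
print([[round(x, 6) for x in row] for row in cov])  # all entries 0.01
```

In practice each `run` would be the EVPA map from one GRMHD simulation with a parameter choice $\{p_i\}$.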
After obtaining the probability distribution of $\chi_{D}^{\rm astro}$, one can convolve it with the likelihood in Eq.\,(\ref{likelihood_original}) and integrate out $\chi_{D}^{\rm astro}$ as nuisance parameters. The resulting likelihood can be written as
\begin{align}
\mathcal{L}\left[ \chi_{D} | {\boldsymbol{\vartheta}}^a\right] = \frac{1}{\sqrt{|2\pi M'|}}\exp\left[
-\frac{1}{2}\sum_{DD'} \left(\chi_{D} - \chi_{D}^0 - {\chi}_{D}^a({\boldsymbol{\vartheta}}^a)\right) M'^{-1}_{DD'} \left(\chi_{D'} - \chi_{D'}^0 - {\chi}_{D'}^a({\boldsymbol{\vartheta}}^a)\right)\right],\label{likelihoodAG} \end{align}
where $M'_{DD'} \equiv M_{DD'} + \sigma_{D}^2 \delta_{DD'}$.
This provides a viable way to estimate the sensitivity to the axion-related parameters $\boldsymbol{\vartheta}^a$.
In order to perform a back-of-envelope estimation of the sensitivity in the parameter space, let us assume that the uncertainties, for all values of $D$, are uncorrelated with each other and of approximately the same order of magnitude. Under this assumption, one can calculate the typical size of the uncertainty as \begin{equation}
\overline{\sigma}^2 = \frac{1}{{\rm Tr}[M'^{-1}_{DD'}]}.\label{fix_chi0}
\end{equation}
Consequently, the signal-to-noise ratio (SNR) can be estimated by comparing the typical size of the axion-induced birefringence signal with $\overline{\sigma}$. Under this simple approximation, the sensitivity to the axion-photon coupling, i.e., to $c$, scales as $1/\sqrt{N_D}$, where $N_D$ is the total number of data points.
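The scaling can be checked explicitly in the uncorrelated limit, where $M'$ is diagonal and Eq.\,(\ref{fix_chi0}) reduces to inverse-variance weighting (a sketch of this limit only; correlated $M_{DD'}$ would require a full matrix inverse):

```python
import math

def effective_sigma(sigmas):
    """sigma_bar from Eq. (fix_chi0) in the uncorrelated limit M_DD' = 0:
    M' is diagonal with entries sigma_D^2, so Tr[M'^{-1}] = sum_D 1/sigma_D^2."""
    return math.sqrt(1.0 / sum(1.0 / s ** 2 for s in sigmas))

# N identical measurements: sigma_bar = sigma / sqrt(N), i.e. the 1/sqrt(N_D) scaling.
print(round(effective_sigma([0.1] * 25), 6))  # 0.02
```

With $N_D = 25$ measurements of uncertainty $0.1$, the effective uncertainty shrinks by $\sqrt{25} = 5$.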
\subsubsection{Search for differential EVPA variations}
The statistical method considered above requires a systematic study of the accretion flow, e.g., using GRMHD. Let us now consider an alternative analysis using the differential EVPA. This method was introduced in \cite{Chen:2021lvo} to perform an axion search using the EHT observation of M87$^\star$. Although one pays the price of a suppression factor, a comprehensive understanding of the accretion flow dynamics is no longer required.
Recall that the index $D$ labels the observation time, the coordinates on the sky plane and the photon frequency. Let us single out the time information with the index $i$, labeling the remaining information by $d$, i.e., $D\equiv \{i, d\}$. We will compare the EVPA at different times, with fixed coordinates and frequency. Let us define the differential EVPA as
\be \Delta\tilde{\chi}_{i} \equiv \frac{{\chi}_{i+1}}{\sigma_{i+1}} - \frac{{\chi}_{i}}{\sigma_{i}},\label{diffEVPA} \ee
where all the indices $d$ are dropped for convenience.
Let us focus on the axion parameter space where the following condition is met,
\begin{equation}
\left| \Delta\tilde{\chi}_i^{\rm astro} \right| \ll \left| \Delta\tilde{\chi}_i^{a} ({\boldsymbol{\vartheta}}^a) \right|. \label{small_variation_limit}
\end{equation}
This condition implies that the change of the EVPA between two observation times is dominated by the axion birefringence effect rather than an astrophysical origin.
Under this assumption, one can easily calculate the likelihood function for $\Delta\tilde{\chi}_i^a$ as
\begin{align}
\mathcal{L}\left[ \Delta\tilde{\chi}_{i} | {\boldsymbol{\vartheta}}^a \right] = \frac{1}{\sqrt{|2\pi \tilde{M} |}}\exp\left[
-\frac{1}{2}\sum_{ii'} \Big( \Delta\tilde{\chi}_{i} - \Delta\tilde{\chi}_i^a ({\boldsymbol{\vartheta}}^a) \Big) \tilde{M}^{-1}_{ii'} \Big( \Delta\tilde{\chi}_{i'} - \Delta\tilde{\chi}_{i'}^a ({\boldsymbol{\vartheta}}^a) \Big) \right].\label{likelihoodDEVPA} \end{align}
Here $\tilde{M}$ characterizes the measurement noise for the differential EVPA. Using the notation in Eq.\,(\ref{chiD}), \begin{align}\tilde{M}_{ii'}=\Big\langle\left(\frac{n_{i+1}}{\sigma_{i+1}}-\frac{n_{i}}{\sigma_{i}}\right)\left(\frac{n_{i'+1}}{\sigma_{i'+1}}-\frac{n_{i'}}{\sigma_{i'}}\right)\Big\rangle. \end{align}
Assuming the measurement noise is uncorrelated, we obtain $\tilde{M}_{ii} = 2$, $\tilde{M}_{i\,(i\pm1)} = -1$ and 0 for all other matrix elements; the nearest-neighbour entries are negative because adjacent differences share one noise sample with opposite signs.
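A quick Monte Carlo sketch of this covariance (unit $\sigma_i$, uncorrelated Gaussian noise; the sample size is arbitrary) estimates $\tilde{M}_{ii'}$ directly from the definition of the differences:

```python
import random

# Monte Carlo estimate of M~_ii' = <d_i d_i'> with d_i = n_{i+1} - n_i
# and unit-variance, uncorrelated Gaussian noise n_i (all sigma_i = 1).
random.seed(1)
n_times, trials = 4, 200000
acc = [[0.0] * (n_times - 1) for _ in range(n_times - 1)]
for _ in range(trials):
    noise = [random.gauss(0.0, 1.0) for _ in range(n_times)]
    diff = [noise[i + 1] - noise[i] for i in range(n_times - 1)]
    for i in range(n_times - 1):
        for j in range(n_times - 1):
            acc[i][j] += diff[i] * diff[j] / trials
# Approximately tridiagonal: 2 on the diagonal, -1 on the first off-diagonals,
# since adjacent differences share one noise sample with opposite signs.
print([[round(x, 1) for x in row] for row in acc])
```

The shared-sample structure of the differences is what generates the off-diagonal correlation.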
Here we see that the benefit of using the differential EVPA is the removal of the non-trivial dependence on $\chi_i^{\rm astro}$ from the likelihood function. In order to justify the condition in Eq.\,(\ref{small_variation_limit}), one only needs to understand the accretion flow at the order-of-magnitude level. This is much easier to achieve than the comprehensive understanding required by the previous analysis method.
On the other hand, the analysis based on the differential EVPA needs to pay the price of a suppression factor. To demonstrate that, let us use the axion signal ansatz, presented in Eq.\,(\ref{ansatz}), to calculate the differential EVPA.
Assuming $\sigma_{i+1} \simeq \sigma_i$, the axion contribution to the differential EVPA, defined in Eq.\,(\ref{diffEVPA}), can be written as
\begin{equation}
\Delta\tilde{\chi}_i^a = 2{\mathcal{A}}\sin\left[\frac{{\omega}\Delta t}{2}\right]\,\cos\left[\frac{{\omega}(t_i+t_{i+1})}{2}+{\delta}\right]/\sigma_i,
\end{equation}
where $\Delta t \equiv t_{i+1} - t_i$ is the time interval between sequential observations. We thus see that the axion signal in terms of the differential EVPA suffers from a suppression factor of $2 \sin\left[{\omega}\Delta t/2\right]$. This suppression is more severe for smaller axion masses, for which $\omega \Delta t \ll 1$.
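Numerically, the size of this suppression is easy to gauge; the sketch below uses the M87$^\star$ oscillation period $T_a \simeq 6.7\times10^5$ s at $\alpha = 0.3$ from Table\,\ref{TSMBH}, with an illustrative observation cadence.

```python
import math

def diff_suppression(omega, dt):
    """|2*sin(omega*dt/2)|: ratio of the differential-EVPA signal amplitude
    to A/sigma_i, for observation cadence dt."""
    return 2.0 * abs(math.sin(omega * dt / 2.0))

omega = 2.0 * math.pi / 6.7e5                     # M87* axion frequency, alpha = 0.3
print(round(diff_suppression(omega, 1.0e4), 3))   # ~ 0.094 for a 10^4 s cadence
print(diff_suppression(omega, 3.35e5))            # 2.0 at half the axion period
```

Sampling much faster than the axion period suppresses the differential signal linearly in $\omega \Delta t$, while a cadence of half a period is optimal.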
\subsection{Increasing the data sets}
In this subsection, we discuss prospective improvements that can be achieved using EVPA data from future VLBI observations, e.g., the ngEHT \cite{Raymond_2021,Lngeht}. We focus on correlations among the axion-induced birefringence signals, which can potentially increase the sensitivity as well as discriminate against astrophysical backgrounds.
First, we study the axion signal correlations in various frequency bands. The ngEHT can potentially observe at three different frequencies simultaneously, i.e., $86$ GHz, $230$ GHz and $345$ GHz \cite{Raymond_2021,Lngeht}.
Since the axion-induced birefringence is achromatic, the EVPA variations at different frequencies are the same while propagating in the vacuum. After including the plasma effects based on RIAF models, we show the comparison on the IWA EVPA oscillations at these three frequencies in Fig.\,\ref{3f}. There are slight differences, which are caused by the washout effects induced by the finite thickness of the accretion flow and the lensed photons.
Notice that, at $86$ GHz, the accretion flow is optically thicker than at higher frequencies. The contribution from lensed photons is therefore less important, which makes the green solid line (IWA at $86$ GHz) in Fig.\,\ref{3f} less asymmetric with respect to $\varphi=\pi$.
The correlations among EVPA variations at different frequencies appear to be quite strong for this benchmark. The Faraday rotation modification of the EVPA, characterized by $\rho_{V}$ in Eq.\,(\ref{ipole-mod}), depends quadratically on the photon wavelength, while the axion-induced term is universal across frequencies. This provides a powerful way to subtract the astrophysical contributions.
Furthermore, future {VLBI experiments} have the potential to increase the spatial resolution and improve the dynamic range, so that the EVPA variations at different radii from the black hole can be measured.
As mentioned in previous sections, lensed photons contribute significantly to the washout effects. However, since these lensed photons contribute dominantly at small radii, such as $\sim 5.5\, r_g$ for M87$^\star$, EVPA variations in the outer region are almost free from such washout, as demonstrated previously. In addition, for the parameter space of interest in this study, the axion wave-functions generically peak at a larger radius, e.g., $\sim10\,r_g$ on the sky plane. Therefore, correlating the EVPA variations at different radii on the sky plane can be a powerful handle to reduce the lensed photon contamination.
In order to demonstrate the prospects of the axion search at the ngEHT, we perform a back-of-envelope estimation based on several potential improvements to be achieved in future observations of M87$^\star$. The results are shown in Fig.\,\ref{pngEHT}. As a comparison, we also present the existing constraint on the axion parameter space, obtained using the recently published EHT results \cite{Chen:2021lvo}.
The improvements on the sensitivity mainly come from the following aspects:
\begin{itemize}
\item three different frequencies;
\item five different radii between $\rho = 5.5\,r_g$ and $\rho = 9.5\,r_g$;
\item ten times the observation time ($\sim 40$ days span);
\item the axion field values at different radii, according to the axion cloud wavefunction, are taken into consideration;
\item compared with the differential EVPA analysis carried out in \cite{Chen:2021lvo}, we remove the suppression factor $\sin{[\omega t_{\textrm{int}}/2]}$, assuming a better understanding of the accretion disk dynamics.
\end{itemize}
We note that, in this estimation, we assume that the uncertainty in each measurement is at the same order of magnitude as that in \cite{EHTP}, for simplicity.
Here we emphasize that the ngEHT observation of M87$^\star$ can potentially probe $c_{\rm min} \sim \mathcal{O}(1) \alpha_{\rm EM}$. This serves as a well-motivated theoretical benchmark, in which the axion-photon coupling is induced by an $\mathcal{O}(1)$ number of chiral fermions carrying $\mathcal{O}(1)$ units of electric charge.
Finally, future {VLBI experiments} \cite{Gurvits:2022wgm} have the potential to observe more SMBHs at the horizon scale \cite{SMBHVLBI,Pesce:2021adg}. In this case, one can perform an axion search over a broader mass window, potentially covering $10^{-22}$ eV to $10^{-17}$ eV.
In Table\,\ref{TSMBH}, we list some candidate SMBHs from \cite{SMBHVLBI} that can be observed by future {VLBI experiments}. Such observations require the photon rings of these SMBHs to subtend opening angles larger than $2\,\mu$as and to provide sufficient flux in the radio band. In the table, we provide the axion mass range corresponding to $\alpha$ between $0.1$ and $0.5$; the precise window should eventually be determined by the individual spin of each SMBH.
\begin{table}[htb]
\begin{center}
\begin{tabular}{ | c | c | c | c | c | c |}
\hline
SMBH & $M/M_\odot$ & $\theta_{\rm ring} / \mu$as & $\mu/$eV range & $T_a/s$ at $\alpha = 0.3$ \\
\hline
Sgr A$^\star$ &$4.3\times10^6$ & 53 & $3.1 \times 10^{-18}\sim 1.6 \times 10^{-17}$ & $4.4\times 10^2$ \\
M87$^\star$ & $6.5\times10^9$ & 42 & $2.1 \times 10^{-21}\sim 1.0\times 10^{-20}$ & $6.7\times 10^5$ \\
\hline
IC 1459 & $2.8\times10^9$ & 9.2 & $4.9 \times 10^{-21}\sim 2.4\times 10^{-20}$ & $2.8\times 10^5$ \\
NGC 4374 & $1.5\times10^9$ & 9.1 & $8.8 \times 10^{-21}\sim 4.4\times 10^{-20}$ & $1.6\times 10^5$ \\
NGC 4594 & $5.8\times10^8$ & 5.7 & $2.3 \times 10^{-20}\sim 1.2\times 10^{-19}$ & $6.0\times 10^4$ \\
IC 4296 & $1.3\times10^9$ & 2.5 & $9.9 \times 10^{-21}\sim 5.0\times 10^{-20}$ & $1.4\times 10^5$ \\
NGC 3031 & $7.9\times10^7$ & 2.0 & $1.7 \times 10^{-19}\sim 8.4\times 10^{-19}$ & $8.2\times 10^3$ \\
\hline
\end{tabular}
\caption{\footnotesize{Here we provide a list of SMBHs. Two of them, M87$^\star$ and Sgr A$^\star$, have already been measured by the EHT. The rest are potential candidates to be resolved in the future \cite{SMBHVLBI}. We also provide the typical axion mass window, corresponding to $\alpha$ between $0.1$ and $0.5$, as well as the typical axion oscillation timescale $T_a$ for each SMBH. }}
\label{TSMBH}
\end{center}
\end{table}
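The mass windows and periods in Table\,\ref{TSMBH} follow from $\alpha = G M \mu/(\hbar c^3)$ and $T_a = 2\pi\hbar/\mu$; the sketch below reproduces the M87$^\star$ row with rounded physical constants (so the last digit may differ slightly).

```python
import math

# Rounded constants (assumed): G*Msun/c^3 ~ 4.93e-6 s, hbar ~ 6.582e-16 eV*s.
HBAR_EV_S = 6.582e-16
GM_SUN_OVER_C3 = 4.93e-6

def axion_mass_ev(alpha, m_over_msun):
    """mu such that alpha = G*M*mu/(hbar*c^3) = r_g/lambda_c."""
    return alpha * HBAR_EV_S / (GM_SUN_OVER_C3 * m_over_msun)

def period_s(mu_ev):
    """Axion oscillation period T_a = 2*pi*hbar/mu."""
    return 2.0 * math.pi * HBAR_EV_S / mu_ev

m87 = 6.5e9  # M87* mass in solar masses
print("%.1e %.1e" % (axion_mass_ev(0.1, m87), axion_mass_ev(0.5, m87)))  # ~ 2.1e-21, 1.0e-20
print("%.1e" % period_s(axion_mass_ev(0.3, m87)))                        # ~ 6.7e+05
```

The same two functions reproduce the other rows of the table given each black hole mass.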
\section{Conclusion}
The polarimetric measurements of the horizon-scale emission from SMBHs open a new window to probe the existence of ultralight axion fields \cite{Chen:2019fsq,Chen:2021lvo}. An axion cloud can be generated through the superradiance process, and the axion field can potentially reach the highest possible value. Meanwhile, accretion flows generate large amounts of linearly polarized radiation from the neighborhood of rotating black holes, overlapping with the densest region of the axion cloud. Consequently, the EVPAs of these photons oscillate periodically due to the axion-photon coupling. The current and next-generation VLBI polarimetric measurements \cite{EHTP,Lngeht} are powerful ways to search for axion clouds around SMBHs.
Strong gravity and medium effects strongly influence the horizon-scale observations. Both the axion cloud and the accretion flow dynamics can lead to EVPA variations. In our study, we show explicitly how the axion-photon coupling can be embedded into the polarized covariant radiative transfer equations, where both the curved spacetime and plasma effects are taken into consideration. The axion effect can be included by a simple modification of a numerical radiative transfer code, such as \texttt{IPOLE} \cite{Moscibrodzka:2017lcu, Noble:2007zx}.
The mapping from the SMBH coordinates to the sky plane is non-trivial. For a geometrically thin and optically thick disk, such as the NT model, one needs to follow the photon geodesics that connect the sky plane to the surface of the accretion disk. We study in detail how this mapping depends on the black hole spin and the inclination angle. The mapping is then used to generate the amplitude and the relative phase of the axion-induced EVPA signal on the sky plane.
For a more realistic model of the accretion flow, photons observed at each point on the sky plane may have different spatial and temporal origins along the line of sight. The sum of these photons generically suppresses the EVPA oscillation amplitude. We study such washout effects in two simple toy models. One is a constant radiation source along a continuous, finite length, representing the thickness of the accretion flow. The other is radiation from two spatially separated point sources, mimicking the contributions from lensed photons.
The future {VLBI experiments}, such as the next-generation Event Horizon Telescope \cite{Raymond_2021,Lngeht}, will be able to perform better measurements and provide more detailed information about the EVPA variations. The sensitivities of the axion searches can therefore be significantly improved, especially by correlating the EVPA oscillations at different radii and frequencies. In addition, a much larger axion mass window is expected to be explored since more SMBHs will be observed.
\acknowledgments
We are grateful for useful discussions with Richard Brito, Vitor Cardoso, Horng Sheng Chia, Ru-Sen Lu, Alexandru Lupsasca, Elias Most, Chen Sun, George N. Wong, Ziri Younsi and Yunlong Zhang.
This work is supported by the National Key Research and Development Program of China under Grant No. 2020YFC2201501.
Y.C. is supported by the China Postdoctoral Science Foundation under Grant No. 2020T130661, No. 2020M680688, the International Postdoctoral Exchange Fellowship Program, by the National Natural Science Foundation of China
(NSFC) under Grants No. 12047557, by VILLUM FONDEN (grant no. 37766), by the Danish Research Foundation, and under the European Union’s H2020 ERC Advanced Grant “Black holes: gravitational engines of discovery” grant agreement no. Gravitas–101052587.
Y.M. is supported by the ERC Synergy Grant ``BlackHoleCam: Imaging the Event Horizon of Black Holes'' (Grant No. 610058) and the National Natural Science Foundation of China (Grant No. 12273022).
J.S. is supported by the National Natural Science Foundation of China under Grants No. 12025507, No. 12150015, No.12047503; and is supported by the Strategic Priority Research Program and Key Research Program of Frontier Science of the Chinese Academy of Sciences under Grants No. XDB21010200, No. XDB23010000, and No. ZDBS-LY-7003 and CAS project for Young Scientists in Basic Research YSBR-006.
X.X. is supported by Deutsche Forschungsgemeinschaft under Germany’s Excellence Strategy EXC2121 “Quantum Universe” - 390833306.
Q.Y. is supported by the Key Research Program of CAS under Grant No. XDPB15, and by the Program for Innovative Talents and Entrepreneur in Jiangsu.
Y.Z. is supported by U.S. Department of Energy under Award No. DESC0009959.
Y.C. would like to thank the SHAO and TDLI for their kind hospitality.
Y.Z. would like to thank the ITP-CAS for their kind hospitality.
The simulation codes used in this study are a modified version of publicly available code \texttt{IPOLE} \cite{Moscibrodzka:2017lcu,Noble:2007zx}.
|